## Difficulty in understanding a portion of the proof of the white-path theorem in the CLRS text

I was going through the $$\text{DFS}$$ section of Introduction to Algorithms by Cormen et al. and had difficulty understanding the $$\Leftarrow$$ direction of the proof of the white-path theorem. Since the theorem in question depends on two other results, I present those dependencies first, then the theorem itself and the difficulty I face with its proof.

Dependencies:

Theorem 22.7 (Parenthesis theorem) In any depth-first search of a (directed or undirected) graph $$G = (V, E)$$, for any two vertices $$u$$ and $$v$$, exactly one of the following three conditions holds:

• the intervals $$[d[u], f[u]]$$ and $$[d[v], f[v]]$$ are entirely disjoint, and neither $$u$$ nor $$v$$ is a descendant of the other in the depth-first forest,

• the interval $$[d[u], f[u]]$$ is contained entirely within the interval $$[d[v], f[v]]$$, and $$u$$ is a descendant of $$v$$ in a depth-first tree,

• the interval $$[d[v], f[v]]$$ is contained entirely within the interval $$[d[u], f[u]]$$, and $$v$$ is a descendant of $$u$$ in a depth-first tree.

Corollary 22.8 (Nesting of descendants’ intervals) Vertex $$v$$ is a proper descendant of vertex $$u$$ in the depth-first forest for a (directed or undirected) graph $$G$$ if and only if $$d[u] < d[v] < f[v] < f[u]$$.

Theorem 22.9 (White-path theorem)

In a depth-first forest of a (directed or undirected) graph $$G = (V, E)$$, vertex $$v$$ is a descendant of vertex $$u$$ if and only if at the time $$d[u]$$ that the search discovers $$u$$, vertex $$v$$ can be reached from $$u$$ along a path consisting entirely of white vertices.

Proof

$$\Rightarrow$$ : Assume that $$v$$ is a descendant of $$u$$. Let $$w$$ be any vertex on the path between $$u$$ and $$v$$ in the depth-first tree, so that $$w$$ is a descendant of $$u$$. By Corollary 22.8, $$d[u] < d[w]$$, and so $$w$$ is white at time $$d[u]$$.

$$\Leftarrow$$:

1. Suppose that vertex $$v$$ is reachable from $$u$$ along a path of white vertices at time $$d[u]$$, but $$v$$ does not become a descendant of $$u$$ in the depth-first tree.
2. Without loss of generality, assume that every other vertex along the path becomes a descendant of $$u$$. (Otherwise, let $$v$$ be the closest vertex to $$u$$ along the path that doesn’t become a descendant of $$u$$.)
3. Let $$w$$ be the predecessor of $$v$$ in the path, so that $$w$$ is a descendant of $$u$$ ($$w$$ and $$u$$ may in fact be the same vertex) and, by Corollary 22.8, $$f[w] \leq f[u]$$.
4. Note that $$v$$ must be discovered after $$u$$ is discovered, but before $$w$$ is finished.$$^\dagger$$ Therefore, $$d[u] < d[v] < f[w] \leq f[u]$$.
5. Theorem 22.7 then implies that the interval $$[d[v], f[v]]$$ is contained entirely within the interval $$[d[u], f[u]]$$.$$^{\dagger\dagger}$$
6. By Corollary 22.8, $$v$$ must after all be a descendant of $$u$$. $$^\ddagger$$

$$\dagger$$ : It is clear that, since $$u$$ is the first vertex to be discovered, any other vertex (including $$v$$) is discovered after it. In point $$1$$ we assume that $$v$$ does not become a descendant of $$u$$; but the phrase "but before $$w$$ is finished" suggests to me that $$v$$'s discovery results from exploring the edge $$(w,v)$$ (and this exploration ultimately makes $$v$$ a descendant of $$u$$, so the proof should have ended here $$^\star$$).

$$\dagger\dagger$$ : Considering the exact statement of Theorem 22.7, I do not see which fact leads to the implication in step $$5$$.

$$\ddagger$$ : The proof should have ended at $$\star$$, so why continue to step $$6$$?

In short, I am unable to follow the proof of the $$\Leftarrow$$ direction. I suppose the authors are using a proof by contradiction.

(I thought of an alternative inductive proof. Let vertex $$v$$ be reachable from $$u$$ along a path of white vertices at time $$d[u]$$. We apply induction on the vertices in the white path. As a base case, $$u$$ is an improper descendant of itself. For the inductive hypothesis, let all vertices from $$u$$ to $$w$$ be descendants of $$u$$, where $$w$$ is the predecessor of $$v$$ in the white path. The inductive step then follows from the exploration of the edge $$(w,v)$$. But I want to understand the proof in the text.)
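As a sanity aid, I also wrote a small DFS of my own (the adjacency-list representation and vertex names are mine, not CLRS pseudocode) that records the discovery and finishing times, so the interval statements of Theorem 22.7 and Corollary 22.8 can be checked on concrete graphs:

```python
# A small DFS of my own (not CLRS pseudocode) that records discovery
# times d[u] and finishing times f[u], so the parenthesis theorem and
# the nesting corollary can be checked on concrete graphs.
def dfs_timestamps(adj):
    d, f = {}, {}
    time = 0

    def visit(u):
        nonlocal time
        time += 1
        d[u] = time              # u is discovered (turns gray)
        for v in adj[u]:
            if v not in d:       # v is white, so (u, v) is a tree edge
                visit(v)
        time += 1
        f[u] = time              # u is finished (turns black)

    for u in adj:
        if u not in d:
            visit(u)
    return d, f

# On this graph, v and y become descendants of u, and their intervals
# nest inside [d[u], f[u]] exactly as Corollary 22.8 predicts.
d, f = dfs_timestamps({'u': ['v', 'x'], 'v': ['y'], 'x': [], 'y': []})
assert d['u'] < d['v'] < f['v'] < f['u']
```

On every graph I tried, each pair of intervals is either nested or disjoint, never partially overlapping, which matches the trichotomy of Theorem 22.7.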

## Rice’s Theorem for Turing machine with fixed output

So I was supposed to determine, with the help of Rice's theorem, whether the language $$L_{5} = \{w \in \{0,1\}^{*} \mid \forall x \in \{0,1\}^{*},\ M_{w}(w) = x\}$$ is decidable.

First of all, I don't understand why we can use Rice's theorem in the first place: if I were to choose two Turing machines $$M_{w}$$ and $$M_{w'}$$ with $$w \neq w'$$, then I would get $$M_{w}(w) = M_{w'}(w) = x$$. But this could lead to $$w'$$ not being in $$L_{5}$$ while $$w \in L_{5}$$. Or am I misunderstanding something?

Second: the solution says that the language $$L_{5}$$ is decidable, as $$L_{5} = \emptyset$$, because the output is clearly determined for a fixed input. But why is that so? I thought that $$L_{5}$$ is not empty, because there are TMs which output $$x$$ on their own input and some which do not.

## A bit confused about the time complexity of this theorem vs. my situation

In this paper, the polynomiality of the n-fold integer programming problem is discussed.

(My question is very simple and is stated at the end. There is no need to deeply understand the excerpts; they are provided for completeness.)

Excerpt 1:
The n-fold integer programming problem. Fix a $$p \times q$$ integer matrix $$A$$. Given positive integer $$n$$ and integer vectors $$b = (b^0, b^1, \ldots , b^n)$$ and $$c = (c^1, \ldots , c^n)$$, where $$b^0 \in \mathbb{Z}^q$$, and $$b^k \in \mathbb{Z}^p$$ and $$c^k \in \mathbb{Z}^q$$ for $$k = 1, \ldots, n$$, find a nonnegative integer vector $$x = (x^1, \ldots , x^n)$$, where $$x^k \in \mathbb{N}^q$$ for $$k = 1, \ldots, n$$, which minimizes $$cx= \sum_{k=1}^{n}c^k x^k$$ subject to the equations $$\sum_{k=1}^{n} x^k = b^0$$ and $$Ax^k = b^k$$ for $$k = 1, \ldots, n$$.

Excerpt 2:
In this article, we establish the following theorem. Naturally, the input size is $$n$$ plus the bit size of the integer objective vector $$c \in \mathbb{Z}^{nq}$$ and the integer right-hand side vector $$b \in \mathbb{Z}^{q+np}$$.
Theorem 1.1. Fix any integer matrix $$A$$. Then there is a polynomial-time algorithm that, given any $$n$$ and any integer vectors $$b$$ and $$c$$, solves the corresponding n-fold integer programming problem.

My Question:
I have an integer program at hand. For a given integer $$k$$, my $$A$$ (as above) is $$3 \times 2k$$ and my $$n$$ (as above) is $$k$$, and let us assume that the bit size of my integers is $$O(\log k)$$. Is the aforementioned theorem still applicable to my situation, especially since $$q$$ in my case depends on $$k$$?
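For reference, this is how I picture the constraint structure from Excerpt 1 (a sketch of my own, purely to make the block shape concrete; the function name is mine):

```python
# My own illustration of the n-fold constraint structure: the equations
# sum_k x^k = b^0 and A x^k = b^k (k = 1..n) assemble into one block
# matrix [[I I ... I], [diag(A, ..., A)]], where I is the q x q identity.
def n_fold_matrix(A, n):
    p, q = len(A), len(A[0])
    rows = []
    # top block: n horizontal copies of the q x q identity (sum_k x^k = b^0)
    for i in range(q):
        rows.append([1 if j % q == i else 0 for j in range(n * q)])
    # diagonal blocks: A acting on each brick x^k separately (A x^k = b^k)
    for k in range(n):
        for i in range(p):
            row = [0] * (n * q)
            row[k * q:(k + 1) * q] = A[i]
            rows.append(row)
    return rows
```

For $$A = (1\ 1)$$ and $$n = 2$$ this produces a $$4 \times 4$$ matrix: an identity row block on top and $$A$$ repeated along the diagonal.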

## Help understanding a theorem Kleinberg proves related to sequence alignment

This is from Kleinberg’s Algorithm Design text (Theorem 6.14)

Let $$M$$ be an alignment of $$X$$ and $$Y$$. If $$(m, n) \notin M$$, then either the $$m^{\text{th}}$$ position of $$X$$ or the $$n^{\text{th}}$$ position of $$Y$$ is not matched in $$M$$.

The theorem does not state that $$m, n$$ must be the last elements of $$X$$ and $$Y$$ respectively, but earlier, when introducing the topic, he uses $$m, n$$ to denote the last entries of the two strings. Which of these is correct?
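While trying to parse the statement, I brute-forced it for tiny position sets (my own check, not from the book, treating an alignment as a non-crossing matching of positions $$1..m$$ of $$X$$ with positions $$1..n$$ of $$Y$$); at least for these sizes, the claim holds for arbitrary $$m, n$$, not only for the last positions:

```python
from itertools import combinations, product

# Brute-force check (my own, not Kleinberg's): enumerate every alignment
# M -- a set of matched pairs with distinct indices and no crossings --
# and verify that if (m, n) is not in M, then position m of X or
# position n of Y is unmatched in M.
def alignments(m, n):
    pairs = list(product(range(1, m + 1), range(1, n + 1)))
    for r in range(len(pairs) + 1):
        for M in combinations(pairs, r):
            xs = [i for i, _ in M]
            ys = [j for _, j in M]
            if (len(set(xs)) == len(M) and len(set(ys)) == len(M)
                    and all(not (i < i2 and j > j2)
                            for i, j in M for i2, j2 in M)):
                yield set(M)

m, n = 3, 2
for M in alignments(m, n):
    assert ((m, n) in M
            or m not in {i for i, _ in M}
            or n not in {j for _, j in M})
```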

## Consistent theory based on L in which not(A->A) is a theorem

I am working on this problem, in which I have a theory $$T$$ based on a language $$L$$, and the only information we have is that $$T$$ is consistent and that $$\vdash \neg(A \to A)$$. Given this information, how can I know whether this theory is sound, complete, and/or decidable?

My only guess is that I can say that $$T$$ is sound: since $$T$$ is consistent and we can derive $$\neg(A \to A)$$ from the axioms and inference rules, $$\neg(A \to A)$$ is a theorem, and because of that we can assume that $$T$$ is sound (because the premises and conclusions are true).

Thank you!

## Regarding space of linear speed-up theorem

I was reading the proof of the speed-up lemma from these slides (pages 10 to 13), but I could not understand why the plus-two term appears in the new space bound. Would anybody elaborate?

Furthermore, for a Turing machine that uses a linear amount of space, isn't it possible to reduce the amount of space used by a constant factor without the additional constant overhead (i.e., to have only the $$\varepsilon f(n)$$ part as the new space bound)?

Theorem: Suppose TM $$M$$ decides language $$L$$ in space $$f(n)$$. Then for any $$\varepsilon > 0$$, there exists a TM $$M'$$ that decides $$L$$ in space $$\varepsilon f(n) + 2$$.
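My current reading of where the constant might come from, sketched as arithmetic (this is my guess at the standard tape-compression argument, not a transcription of the slides; the function name and the role of the two extra cells are my assumptions):

```python
import math

# Sketch of the usual tape-compression bookkeeping (my assumption about
# what the slides do): pack k = ceil(1/eps) old cells into one cell of
# an enlarged alphabet, and reserve two extra cells, e.g. for end
# markers -- one plausible reading of the "+ 2" in eps*f(n) + 2.
def compressed_space(f_n, eps):
    k = math.ceil(1 / eps)          # old cells packed per new cell
    return math.ceil(f_n / k) + 2   # packed tape plus two extra cells

# Note ceil(f_n / k) can exceed eps * f_n by a fraction of a cell, so
# matching the stated bound exactly depends on the slides' construction.
```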

## Euclidean geometry theorem proving complexity

Euclidean geometry is complete, so the problem of determining whether a statement $$A$$ is provable is computable. Do we know its time complexity?

## Clarification of the proof involving the regularity condition in Master Theorem

I was going through the text Introduction to Algorithms by Cormen et al., where I came across the following statement in the proof of the third case of the master theorem.

(Master theorem) Let $$a \geqslant 1$$ and $$b > 1$$ be constants, let $$f(n)$$ be a function, and let $$T(n)$$ be defined on the nonnegative integers by the recurrence (the recursion divides a problem of size $$n$$ into $$a$$ subproblems of size $$n/b$$ each and takes $$f(n)$$ time for the divide and combine steps)

$$T(n) = aT(n/b) + f(n),$$

where we interpret $$n/b$$ to mean either $$\lceil n/b \rceil$$ or $$\lfloor n/b \rfloor$$. Then $$T(n)$$ has the following asymptotic bounds:

1. If $$f(n) = O(n^{\log_b a - \epsilon})$$ for some constant $$\epsilon > 0$$, then $$T(n) = \Theta(n^{\log_b a})$$.

2. If $$f(n) = \Theta(n^{\log_b a})$$, then $$T(n) = \Theta(n^{\log_b a} \lg n)$$.

3. If $$f(n) = \Omega(n^{\log_b a + \epsilon})$$ for some constant $$\epsilon > 0$$, and if $$a f(n/b) \leqslant c f(n)$$ for some constant $$c < 1$$ and all sufficiently large $$n$$, then $$T(n) = \Theta(f(n))$$.

Now, in the proof of the master theorem with $$n$$ an exact power of $$b$$, the expression for $$T(n)$$ reduces to:

$$T(n) = \Theta(n^{\log_b a}) + \sum_{j=0}^{\log_b n - 1} a^j f(n/b^j)$$

Let us assume,

$$g(n) = \sum_{j=0}^{\log_b n - 1} a^j f(n/b^j)$$

Then, for the proof of the third case of the master theorem, the authors prove that:

If $$a f(n/b) \leqslant c f(n)$$ for some constant $$c < 1$$ and for all $$n \geqslant b$$, then $$g(n) = \Theta(f(n))$$.

They say that since $$a f(n/b) \leqslant c f(n) \implies f(n/b) \leqslant (c/a) f(n)$$, iterating $$j$$ times yields $$f(n/b^j) \leqslant (c/a)^j f(n)$$.

I could not quite get the mathematics used behind iterating $$j$$ times.
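(A toy numerical check I ran, with my own choice $$f(n) = n^2$$, $$a = b = 2$$, $$c = 1/2$$; it confirms the iterated inequality holds, though it does not explain the algebra:)

```python
# Toy check (my own example, not from CLRS): with f(n) = n^2 and
# a = b = 2 we have a*f(n/b) = n^2/2 = c*f(n) for c = 1/2, and the
# one-step inequality, applied once per level, gives
# f(n/b^j) <= (c/a)^j * f(n).
a, b, c = 2, 2, 0.5

def f(n):
    return n * n

n = 1024
for j in range(1, 10):
    assert f(n / b**j) <= (c / a) ** j * f(n)
```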

Moreover, I could not quite get the logic behind the assumption $$n \geqslant b$$ standing in for "sufficiently large $$n$$" (as the third case of the master theorem requires).

Moreover, in the similar proof of the third case of the general master theorem (not assuming $$n$$ to be an exact power of $$b$$), the book instead assumes $$n \geqslant b + b/(b-1)$$ to capture the "sufficiently large $$n$$" condition.

I do not quite understand the significance of this specific value, or why it qualifies as "sufficiently large $$n$$".

(I did not give the details of the second situation, as I feel it is similar to the first.)

## Worked out example of Slepian-Wolf Theorem

Note: I first posted this on Theoretical Computer Science Stack Exchange, but deleted it from there since it seemed to be off-topic.

The Slepian-Wolf theorem states that sequences of outputs from two separate random sources $$A$$ and $$B$$, which have a joint probability distribution defined on them, can be completely recovered when decoded together, provided they are encoded at the following rates:

$$R_A \geq H(A|B)$$

$$R_B \geq H(B|A)$$

$$R_A + R_B \geq H(A,B)$$

Here $$R_X$$ refers to the number of bits required for encoding one symbol of $$X$$, with all logarithms taken to base 2.

Given this, I wanted to try out an example, especially because I find fractional number of bits per symbol slightly confusing to think about.

Consider two sources $$A$$ and $$B$$ that each emit either 0 or 1, following this joint probability distribution:

| A \ B | 0    | 1    |
|-------|------|------|
| 0     | 0.5  | 0    |
| 1     | 0.25 | 0.25 |

We calculate the entropies as follows: $$H(A,B) = 3/2 = 15/10$$, $$H(A|B) \approx 0.69 \approx 7/10$$, and $$H(B|A) = 1/2 = 5/10$$.
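To double-check these numbers, here is a short script of my own (not part of the theorem); it confirms $$H(A,B) = 1.5$$ and $$H(B|A) = 0.5$$, and gives $$H(A|B) \approx 0.69$$, i.e. roughly $$7/10$$:

```python
from math import log2

# Sanity check (my own script) of the entropies for the joint
# distribution above, using the chain rule H(X|Y) = H(X,Y) - H(Y).
p = {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.25, (1, 1): 0.25}

def H(dist):
    # Shannon entropy in bits; terms with zero probability contribute 0.
    return -sum(q * log2(q) for q in dist.values() if q > 0)

# marginal distributions of A and B
pA = {a: sum(q for (x, _), q in p.items() if x == a) for a in (0, 1)}
pB = {b: sum(q for (_, y), q in p.items() if y == b) for b in (0, 1)}

H_AB = H(p)                   # joint entropy H(A,B) = 1.5
H_A_given_B = H_AB - H(pB)    # H(A|B), approximately 0.69
H_B_given_A = H_AB - H(pA)    # H(B|A) = 0.5
```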

Now, assume that a certain sequence of bits that $$A$$ and $$B$$ give out is as follows:

| A | B |
|---|---|
| 0 | 0 |
| 0 | 0 |
| 1 | 0 |
| 1 | 1 |
| 1 | 0 |
| 0 | 0 |
| 0 | 0 |
| 1 | 1 |
| 1 | 0 |
| 0 | 0 |

I should be able to find an encoding that allows $$A$$ to send at least 7 bits, $$B$$ at least 5, and a total of at least 15, such that the sequences can be decoded completely, right?

Unfortunately, I am unable to think of an encoding where they send fewer than 10 bits each.

For example, $$B$$ does not have to send anything when $$A$$ sends 0; however, $$B$$ does not know when $$A$$ sends 0.

I would also like to know whether this is the wrong way to interpret the theorem (perhaps a longer sequence is required), or whether there is another way to see how it works.

## Having trouble understanding a proof of Mahaney’s theorem

I was reading this blog post: https://blog.computationalcomplexity.org/2011/09/mahaneys-theorem.html?m=1, but I am not sure why $$a'$$ cannot be between $$w_i$$ and $$w_j$$ in case 1, and also why $$a'$$ cannot be between $$w_0$$ and $$w_1$$ in case 2. Can anyone give me some explanation?