How to prove that a Turing machine that can move right only a limited number of steps is not equivalent to a normal Turing machine

I need to prove that a Turing machine that can move only $k$ steps on the tape past the last letter of the input word is not equivalent to a normal Turing machine.

My idea is that, given a finite input over a finite alphabet, the limited machine can write only a finite number of “outputs” on the tape, while a normal Turing machine has an infinite tape and so can write infinitely many “outputs”. However, I have no idea how to turn this into a formal proof.

Any help will be appreciated.
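One way to make the finiteness idea precise (a sketch I would suggest, not a canonical proof): on an input of length $n$, a deterministic machine that can never move more than $k$ cells past the input uses at most $n + k$ tape cells, so the number of distinct configurations (state, head position, tape contents) is at most

$$ N(n) = |Q| \cdot (n + k) \cdot |\Gamma|^{\,n+k}, $$

which is finite. Any computation running longer than $N(n)$ steps must repeat a configuration and therefore loops forever, so acceptance for such a machine is decidable: simulate it for at most $N(n)$ steps. Normal Turing machines, in contrast, accept undecidable languages such as $L_u$, so the two models cannot be equivalent.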

Rice's theorem: the proof of the case where the empty language belongs to the property

I was going through the classic text “Introduction to Automata Theory, Languages, and Computation” by Hopcroft, Motwani, and Ullman, where I came across the following proof of Rice's theorem.

$L_u$ => the language accepted by the universal Turing machine

$P$ => a property of the RE languages

$L_P$ => the set containing the codes of the Turing machines that accept the languages in $P$

Theorem (Rice's Theorem): Every nontrivial property of the RE languages is undecidable.

PROOF: Let $P$ be a nontrivial property of the RE languages. Assume to begin that $\phi$, the empty language, is not in $P$; we shall return later to the opposite case. Since $P$ is nontrivial, there must be some nonempty language $L$ that is in $P$. Let $M_L$ be a TM accepting $L$. We shall reduce $L_u$ to $L_P$, thus proving that $L_P$ is undecidable, since $L_u$ is undecidable. The algorithm performing the reduction takes as input a pair $(M, w)$ and produces a TM $M'$. The design of $M'$ is suggested by the figure; $L(M')$ is $\phi$ if $M$ does not accept $w$, and $L(M') = L$ if $M$ accepts $w$.

Construction of M' for the proof of Rice's Theorem

$M'$ is a two-tape TM. One tape is used to simulate $M$ on $w$. Remember that the algorithm performing the reduction is given $M$ and $w$ as input, and can use this input in designing the transitions of $M'$. Thus, the simulation of $M$ on $w$ is “built into” $M'$; the latter TM does not have to read the transitions of $M$ on a tape of its own. The other tape of $M'$ is used to simulate $M_L$ on the input $x$ to $M'$, if necessary. Again, the transitions of $M_L$ are known to the reduction algorithm and may be “built into” the transitions of $M'$. The TM $M'$ is constructed to do the following:

  1. Simulate $M$ on input $w$. Note that $w$ is not the input to $M'$; rather, $M'$ writes $M$ and $w$ onto one of its tapes and simulates the universal TM $U$ on that pair.

  2. If $M$ does not accept $w$, then $M'$ does nothing else. $M'$ never accepts its own input $x$, so $L(M') = \phi$. Since we assume $\phi$ is not in property $P$, that means the code for $M'$ is not in $L_P$.

  3. If $M$ accepts $w$, then $M'$ begins simulating $M_L$ on its own input $x$. Thus, $M'$ will accept exactly the language $L$. Since $L$ is in $P$, the code for $M'$ is in $L_P$.

Constructing $M'$ from $M$ and $w$ can be carried out by an algorithm. Since this algorithm turns $(M, w)$ into an $M'$ that is in $L_P$ if and only if $(M, w)$ is in $L_u$, this algorithm is a reduction of $L_u$ to $L_P$, and proves that the property $P$ is undecidable.
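To see the shape of the reduction concretely, here is a toy sketch. The modeling is entirely my own: “machines” are total C++ predicates, which sidesteps non-termination (the very thing that makes the real problem undecidable), so this only illustrates the construction of $M'$, not the proof itself.

```cpp
#include <functional>
#include <string>

// Toy model (my illustration, not the book's formalism): a "machine" is a
// total predicate on strings. Real TMs may diverge in stage 1; this sketch
// ignores non-termination and only shows the shape of the construction.
using Machine = std::function<bool(const std::string&)>;

// The reduction: from (M, w), and a fixed M_L accepting some L in P,
// produce M' with L(M') = L if M accepts w, and L(M') = {} otherwise.
Machine make_M_prime(Machine M, std::string w, Machine M_L) {
    return [=](const std::string& x) {
        if (!M(w)) return false; // stages 1-2: M' never accepts its input x
        return M_L(x);           // stage 3: M' behaves exactly like M_L
    };
}

// Hypothetical stand-ins for testing: M accepts strings containing 'a',
// M_L accepts strings of even length.
const Machine M_example  = [](const std::string& s) { return s.find('a') != std::string::npos; };
const Machine ML_example = [](const std::string& s) { return s.size() % 2 == 0; };
```

Note how $L(M')$ depends on whether $M$ accepts $w$: with a $w$ that $M$ accepts, $M'$ recognizes exactly $L(M_L)$; otherwise it recognizes the empty language.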

We are not quite done. We need to consider the case where $\phi$ is in $P$. If so, consider the complement property $\overline{P}$, the set of RE languages that do not have property $P$. By the foregoing, $\overline{P}$ is undecidable. However, since every TM accepts an RE language, $\overline{L_P}$, the set of (codes for) Turing machines that do not accept a language in $P$, is the same as $L_{\overline{P}}$, the set of TM's that accept a language in $\overline{P}$. Suppose $L_P$ were decidable. Then so would be $\overline{L_P}$, because the complement of a recursive language is recursive. But $\overline{L_P} = L_{\overline{P}}$, which we have just shown to be undecidable, a contradiction; hence $L_P$ is undecidable.

I could not understand this last paragraph.

Proof of a greedy algorithm used for a variation of the bin-packing problem

We are given an array of weights W (all the weights are positive integers) and we need to put the weights into bins. Each bin can hold a maximum of Max_val (Wi <= Max_val for all 0 <= i < W.size()). The variation is that the ordering of the weights must not be changed (i.e., Wi must be placed in a bin before Wj for all i < j).

For this problem, it seems intuitive that a greedy approach of filling a bin until its capacity is reached, and opening a new bin for further weights, produces the minimum number of bins. I am unable to come up with a formal proof that the greedy solution is optimal. Any hints or guidelines would be great!

PS: Wi represents the ith element of the array W.
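For reference, here is a minimal sketch of that greedy (function and variable names are mine, not from any source):

```cpp
#include <vector>

// Greedy from the question: add weights to the current bin in order; when
// the next weight would overflow max_val, open a new bin for it.
// Assumes every single weight fits in a bin on its own (W[i] <= max_val).
int greedy_bins(const std::vector<int>& W, int max_val) {
    if (W.empty()) return 0;
    int bins = 1, cur = 0;      // one open bin with load cur
    for (int w : W) {
        if (cur + w <= max_val) {
            cur += w;           // fits: keep filling the current bin
        } else {
            ++bins;             // overflow: start a new bin with this weight
            cur = w;
        }
    }
    return bins;
}
```

One standard route to a proof (a suggestion on my part, not from the problem source) is a “greedy stays ahead” induction: show that after processing any prefix of W, the greedy packing is lexicographically minimal in the pair (number of bins used, load of the currently open bin) over all order-preserving packings of that prefix. The minimum bin count for the whole array then follows.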

I want a proof of these

Let $\Sigma$ be an alphabet. (a) Show that every finite set of words over $\Sigma$ is regular. (b) Are regular languages closed under infinite union? That is, if $L_0, L_1, \ldots \subseteq \Sigma^*$ are infinitely many regular languages, is $\bigcup_{n \in \mathbb{N}} L_n$ also regular?

Proof for an algorithm to minimize $\max(a, b, c) - \min(a, b, c)$, $a \in A, b \in B, c \in C$, where A, B, C are arrays in ascending order

Problem Statement

I came across this problem here. For given arrays $A$, $B$ and $C$ arranged in ascending order, we need to minimize the objective function $f(a, b, c) = \max(a, b, c) - \min(a, b, c)$, $a \in A, b \in B, c \in C$.

It can be thought of as a problem to select a number from each of the three arrays such that the numbers are as close to each other as possible (max element is as close to min element as possible).


The editorial solution to the problem is based on a greedy approach running in linear time. Here are the steps, summarized:

  1. The algorithm involves three pointers, one for each array.
  2. Initially, all pointers point to the beginning of the arrays.
  3. Until the end of at least one of the arrays is reached, steps 4 and 5 are repeated.
  4. The element combination formed by the current pointer configuration is checked to see if it yields a new minimum value of the objective function.
  5. The pointer pointing to the least element is incremented to give a new configuration.

This is the C++ code for reference and reproducibility:

    int f(int a, int b, int c) { // objective function
        return max(a, max(b, c)) - min(a, min(b, c));
    }

    int solve(vector<int> &A, vector<int> &B, vector<int> &C) {
        int i = 0, j = 0, k = 0;
        int best = INT_MAX;

        while (i < A.size() && j < B.size() && k < C.size()) {
            int mine = min(A[i], min(B[j], C[k])); // least of the three current elements
            best = min(best, f(A[i], B[j], C[k])); // step 4: evaluate current configuration

            if (A[i] == mine) // step 5: advance the pointer at the minimum
                i++;
            else if (B[j] == mine)
                j++;
            else
                k++;
        }

        return best;
    }
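As an empirical aid (my addition, not part of the editorial), a cubic brute force over all triples is easy to write and compare against the greedy on small random inputs:

```cpp
#include <vector>
#include <algorithm>
#include <climits>

// Exhaustive check of every (a, b, c) triple. Only practical for small
// arrays, but handy for validating the linear-time pointer solution.
int brute_force(const std::vector<int>& A, const std::vector<int>& B,
                const std::vector<int>& C) {
    int best = INT_MAX;
    for (int a : A)
        for (int b : B)
            for (int c : C) {
                int hi = std::max(a, std::max(b, c));
                int lo = std::min(a, std::min(b, c));
                best = std::min(best, hi - lo);
            }
    return best;
}
```

Running both over many random small arrays and asserting equal results is a quick way to gain confidence while hunting for the formal argument.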


While this approach seems reasonable to me (and does work), I cannot convince myself of its correctness. I have made some observations about the nature of the problem and the algorithm, but I cannot seem to arrive at a solid reasoning for why this solution works. Any help towards a proof, or towards a reasoning for why this approach is correct would be greatly appreciated.

I started by thinking along the lines of finding a loop invariant, thinking that the pointers would always point to the best configuration for the subarrays $A[0..i], B[0..j], C[0..k]$. This line of thought is incorrect ($i, j, k$ point to suboptimal configurations as well).

This is what I have come up with so far:

TL;DR: if any element except the minimum element is incremented (replaced by the next element in its array), the objective function will increase or stay the same (unfavourable). If the minimum element is incremented, the objective function might decrease, increase, or stay the same. So the only “hope” of finding a lower objective function is to increment the minimum element in that iteration.

Consider that the elements being pointed to by the pointers are $x, y, z$ such that $x \le y \le z$. $x, y, z$ could belong to any of the three arrays. If the elements following $x, y, z$ in their respective arrays are $x^{+}, y^{+}, z^{+}$, then the solution always increments the pointer pointing to $x$, so that it points to $x^{+}$.

Since $x$ is the minimum element and $z$ is the maximum element, $f(x, y, z) = z - x = f_{old}$.

If we increment $ z$ to $ z^{+}$ :

  • $ f(x, y, z^{+})=z^{+}-x \ge f_{old}$ , as $ z^{+} \ge z$ .

So, $ f_{new}\ge f_{old}$

If we increment $ y$ to $ y^{+}$ :

  • If $y^{+} \le z$, $f(x, y^{+}, z) = z - x = f_{old}$.
  • If $y^{+} > z$, $f(x, y^{+}, z) = y^{+} - x \ge f_{old}$.

So, $ f_{new}\ge f_{old}$

If we increment $ x$ to $ x^{+}$ :

  • If $ x^{+} < y$ , $ f(x^{+}, y, z)=z-x^{+} \le f_{old}$
  • If $ y \le x^{+} \le z$ , $ f(x^{+}, y, z)=z-y \le f_{old}$
  • If $ z<x^{+} \le z+(y-x)$ , $ f(x^{+}, y, z) = x^{+}-y \le z-x$ $ (= f_{old})$
  • If $ x^{+}>z+(y-x)$ , $ f(x^{+}, y, z) = x^{+}-y > z-x$ $ (= f_{old})$

So, $ f_{new}\le f_{old}$ as long as $ x^{+} \le z+(y-x)$ .

I have a hunch that, for the solution to work, in the case where $f_{new} > f_{old}$ (when $x^{+} > z + (y - x)$), it must be impossible to obtain a smaller objective value without incrementing all the pointers; however, I cannot prove this.

Nonetheless, none of these observations convince me that the method is correct (although I know that it is). If someone could make a loop invariant condition for this solution and the configuration of pointers, that would be the most straightforward proof.

Proof that uniform circuit families can efficiently simulate a Turing Machine

Can someone explain (or provide a reference for) how to show that uniform circuit families can efficiently simulate Turing machines? I have only seen them discussed in terms of specific complexity classes (e.g., $\mathbf{P}$ or $\mathbf{NC}$). I would like to see how uniform circuit families are a strong enough model for universal, efficient computation.
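A sketch of the standard construction (the tableau argument, covered for example in Sipser's textbook and in Arora and Barak, so treat the details here as a paraphrase): a TM deciding its input in time $T(n)$ determines a tableau with $T(n)$ rows, row $i$ holding the configuration after $i$ steps, written over the finite alphabet $\Gamma \times (Q \cup \{\bot\})$. Locality of the head means each entry of row $i+1$ depends only on three entries of row $i$:

$$ \mathrm{cell}(i+1, j) = g\big(\mathrm{cell}(i, j-1),\ \mathrm{cell}(i, j),\ \mathrm{cell}(i, j+1)\big), $$

where $g$ is a fixed finite function read off the transition table, computable by a constant-size subcircuit. Stacking $T(n) \times T(n)$ copies of this gadget, plus a final OR over cells containing the accepting state, gives a circuit $C_n$ of size $O(T(n)^2)$ that agrees with the TM on all inputs of length $n$; the layout is so regular that a logspace transducer can print $C_n$ given $1^n$, which is exactly the uniformity requirement.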

Proof of and intuition behind a given observation

Consider the following problem:

Given an undirected tree, answer queries of the following type (the number of queries can be as high as $10^5$):

$ \text{LCA}(r, u, v)$ : Find the Lowest Common Ancestor of vertices $ u$ and $ v$ assuming vertex $ r$ as the root.

Now, in the solution it is given that the answer will always be one of these: $r, u, v, \text{LCA}(r, u), \text{LCA}(r, v), \text{LCA}(u, v)$,

where $\text{LCA}(u, v)$ denotes the Lowest Common Ancestor of vertices $u$ and $v$ when vertex number $1$ is taken as the root.

So I'm looking for a proof of the claim made in the solution.
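While waiting for a proof, the claim can at least be checked empirically. Below is a self-contained sketch (all names are mine; vertices are 0-indexed, so the fixed root is vertex 0 rather than 1) that computes $\text{LCA}(r, u, v)$ naively by re-rooting and verifies that it always lands in the six-candidate set:

```cpp
#include <vector>
#include <queue>
#include <cassert>

struct Tree {
    std::vector<std::vector<int>> adj;
    Tree(int n) : adj(n) {}
    void add_edge(int u, int v) { adj[u].push_back(v); adj[v].push_back(u); }

    // Parents and depths for a given root, via BFS.
    void bfs(int root, std::vector<int>& par, std::vector<int>& dep) const {
        int n = adj.size();
        par.assign(n, -1); dep.assign(n, 0);
        std::vector<bool> seen(n, false);
        std::queue<int> q; q.push(root); seen[root] = true;
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int v : adj[u]) if (!seen[v]) {
                seen[v] = true; par[v] = u; dep[v] = dep[u] + 1; q.push(v);
            }
        }
    }

    // Naive LCA of u and v with the given root: climb the deeper vertex up.
    int lca(int root, int u, int v) const {
        std::vector<int> par, dep;
        bfs(root, par, dep);
        while (dep[u] > dep[v]) u = par[u];
        while (dep[v] > dep[u]) v = par[v];
        while (u != v) { u = par[u]; v = par[v]; }
        return u;
    }
};

// True iff, for every (r, u, v), LCA(r, u, v) is one of
// r, u, v, LCA0(r,u), LCA0(r,v), LCA0(u,v), with LCA0 rooted at vertex 0.
bool claim_holds(const Tree& t) {
    int n = t.adj.size();
    for (int r = 0; r < n; ++r)
        for (int u = 0; u < n; ++u)
            for (int v = 0; v < n; ++v) {
                int ans = t.lca(r, u, v);
                int cand[6] = { r, u, v,
                                t.lca(0, r, u), t.lca(0, r, v), t.lca(0, u, v) };
                bool ok = false;
                for (int c : cand) if (c == ans) ok = true;
                if (!ok) return false;
            }
    return true;
}
```

As a hint toward the actual proof: a stronger known statement is that $\text{LCA}(r, u, v)$ is whichever of $\text{LCA}(u, v)$, $\text{LCA}(r, u)$, $\text{LCA}(r, v)$ is deepest in the fixed-root tree, which the brute force above can also be adapted to check.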