## Turing recognizable and decidable

Question: Let $$S = \{ \langle M \rangle \mid \text{TM } M \text{ on input } 3 \text{ at some point writes symbol “3” on the third cell of its tape} \}$$. Show that $$S$$ is r.e. (Turing recognizable) but not recursive (decidable).

I’m a bit confused about this question; I’m not even sure how to start approaching it. As I study from Sipser’s book, in general when we prove a language undecidable we argue by contradiction: if $$S$$ were decidable, then by some reduction $$A_{TM}$$ would also be decidable, which it is not; thus $$S$$ is undecidable. However, this question also asks to show that $$S$$ is recognizable. Is showing that $$S$$ is undecidable sufficient to conclude that $$S$$ is recognizable? (I’m totally not sure.) Any suggestions?
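For the recognizability half, a direct simulation suffices: a recognizer runs $$M$$ on input 3 and accepts the moment “3” appears on the third cell. A minimal Python sketch, assuming a TM encoded as a dict mapping (state, symbol) to (state, symbol, move) — this encoding is my own convenience, not a standard one:

```python
# Recognizer sketch for S: simulate M on input 3 and accept as soon as
# "3" appears on the third cell (index 2).  If M runs forever without
# doing so, so does the recognizer -- exactly what "recognizable" permits.

def recognizes_S(delta, start, max_steps=None):
    tape = {0: "3"}               # input "3" on the first cell
    state, head, steps = start, 0, 0
    while max_steps is None or steps < max_steps:
        if tape.get(2) == "3":    # third cell holds "3": accept
            return True
        key = (state, tape.get(head, "_"))
        if key not in delta:      # M halted without ever writing "3"
            return False
        state, sym, move = delta[key]
        tape[head] = sym
        head += 1 if move == "R" else -1
        steps += 1
    return False                  # demo cutoff only; a true recognizer never gives up

# Example machine: walks right to cell 2 and writes "3" there.
delta = {
    ("q0", "3"): ("q1", "3", "R"),
    ("q1", "_"): ("q2", "_", "R"),
    ("q2", "_"): ("qh", "3", "R"),
}
```

Note that undecidability alone does not imply recognizability — the complement of $$A_{TM}$$ is undecidable yet unrecognizable — so the two halves of the exercise need separate arguments.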

## simple question about epsilon and estimation turing machines

I am getting really confused by this. For an optimization problem I had to calculate the limit as $$n \rightarrow \infty$$, and it came down to a fairly simple one: $$\lim_{n \rightarrow \infty} \left(3-\frac{7}{n}\right)$$.

Now I used $$3 - \epsilon$$, and I am trying to show that there cannot be any $$\epsilon>0$$ such that the approximation ratio of the algorithm is $$3-\epsilon$$, because there exists a “bigger estimate”. This is the part I am not sure about: what is the correct direction of the inequality, $$3-\frac{7}{n} > 3 - \epsilon$$ or the opposite? I am trying to show that the approximation ratio approaches 3.

I think what I wrote is the correct way, but I am not sure. I would appreciate knowing what is correct in this case. Thanks.
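A quick numeric sanity check (assuming the quantity in question really is $$3-\frac{7}{n}$$): for any $$\epsilon>0$$, every $$n > 7/\epsilon$$ gives $$3-\frac{7}{n} > 3-\epsilon$$, i.e. the first direction of the inequality.

```python
import math

# For any eps > 0, the smallest n with 7/n < eps already satisfies
# 3 - 7/n > 3 - eps, so no eps > 0 can bound the ratio away from 3.
def first_n_exceeding(eps):
    n = math.floor(7 / eps) + 1   # smallest n with 7/n < eps
    assert 3 - 7 / n > 3 - eps
    return n

for eps in (1.0, 0.1, 0.001):
    n = first_n_exceeding(eps)
    print(eps, n, 3 - 7 / n)
```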

## In what sense the computer program (Turing machine) can be considered as the complex system and its IIT Phi can be measured and improved?

I am reading https://global.oup.com/academic/product/a-world-beyond-physics-9780190871338?cc=us&lang=en& about one complex-systems approach to the emergence of life. It concerns an autocatalytic soup of molecules from which life can emerge. Of course, one thinks further: every computer program, every Turing machine, is more or less an autocatalytic soup of interacting software components from which consciousness, mind, and Artificial General Intelligence (https://content.sciendo.com/view/journals/jagi/jagi-overview.xml) can emerge.

Of course, the program (Turing machine) should be of sufficient complexity for such consciousness to emerge, and such complexity/level of consciousness can be measured by the Phi measure of https://en.wikipedia.org/wiki/Integrated_information_theory .

My question is: how is the theory of complex systems applied to software programs (including logic programs, sets of first-order formulas, knowledge bases, functional programs, lambda terms), e.g. to derive directions (modular organization, wealth of functionality, etc.) for evolving programs into programs with a higher level of Phi, with a higher level of autonomy and consciousness? I presume that consciousness is the driving force for a program to exhibit commonsense knowledge and the capability to make generalizations and transfer skills among tasks (all very hot topics in the deep learning community, with almost no theory behind them). All these issues are the grails of current computer science, which is why my question is very applied in nature.

Currently, computer programs and algorithms are developed by trial and error. The development of software systems called cognitive architectures (http://bicasociety.org/cogarch/architectures.php) or cognitive systems (http://www.cogsys.org/journal) is a prominent example of such efforts. But maybe the theory of complex systems can be applied to such programs to determine why they do or do not exhibit the capabilities of consciousness (as determined by IIT or any other academically sound computational theory of consciousness or mind), and what can be done to evolve them into capable systems with higher Phi. We have tried to design and program such features, but no cognitive architecture has achieved a sufficient level of AGI. We can try still harder, but maybe the theory of complex systems can provide some guidance to estimate the weak points and suggest a direction?

Could you suggest some references on applying the theory of complex systems to programs in the widest sense?

## Is it possible to run more than one Turing Machine emulator using only one processor core?

I had this question on a computer architecture exam and can’t find an answer anywhere. Is it possible to run several Turing Machine emulators at once using only one processor core?
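In practice the answer hinges on time-slicing: a single core can interleave the steps of any number of step-wise emulators, the way an OS scheduler interleaves processes. A toy sketch (my own framing) using Python generators as stand-ins for step-wise emulators:

```python
# Round-robin time-slicing of several "emulators" on one thread (one core).
# Each emulator is a generator that yields control after every simulated step.

def counter_machine(name, limit):
    """Stand-in for a TM emulator that runs for `limit` steps."""
    for step in range(limit):
        yield f"{name}: step {step}"

def round_robin(machines):
    trace = []
    while machines:
        still_running = []
        for m in machines:
            try:
                trace.append(next(m))   # advance this emulator by one step
                still_running.append(m)
            except StopIteration:
                pass                    # this emulator has halted
        machines = still_running
    return trace

trace = round_robin([counter_machine("A", 3), counter_machine("B", 2)])
```

The interleaved trace alternates A and B steps until each halts, even though only one step executes at any instant.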

## Space-bounded Probabilistic Turing Machine Always Halts

For example, in the definition of BPL we require that the probabilistic Turing machine halt for every input and every choice of random bits. What is the reason for defining it this way? What would happen if such machines didn’t halt? On the other hand, we don’t require space-bounded non-deterministic Turing machines to halt.

## Why assume Turing machine can compute arbitrary results in Kraft-Chaitin theorem?

The Kraft-Chaitin theorem (aka KC theorem in Downey & Hirschfeldt, Machine Existence Theorem in Nies, or I2 in Chaitin’s 1987 book) says, in part, that given a possibly infinite list of strings $$s_i$$, each paired with a desired program length $$n_i$$, there is a prefix-free Turing machine s.t. there is a program $$p_i$$ of length $$n_i$$ for producing each $$s_i$$, as long as $$\sum_i 2^{-n_i} \leq 1$$. The list of $$\langle s_i, n_i \rangle$$ pairs is required to be computable (D&H, Chaitin) or computably enumerable (Nies).

The proofs in the above textbooks work, it seems, by showing how to choose program strings of the desired lengths while preserving prefix-freeness. The details differ, but that construction is the main part of the proof in each case. The proofs don’t try to explain how to write a machine that can compute $$s_i$$ from $$p_i$$ (of course).

What I’m puzzled about is why we can assume that an infinite list of results can be computed from an infinite list of arbitrarily chosen program strings using a Turing machine that by definition has a finite number of internal states. Of course many infinite sequences of results require only finite internal states; consider a machine that simply adds 1 to any input. But in this case we’re given a countably infinite list of arbitrary results and we match them to arbitrary (except for length and not sharing prefixes) inputs, and it’s assumed that there’s no need to argue that a Turing machine can always be constructed to perform each of those calculations.

(My guess is that the fact that the list of pairs is computable or c.e. has something to do with this, but I am not seeing how that implies an answer to my question.)
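For what it’s worth, the codeword-assignment step the proofs describe can be made concrete. A minimal sketch (my own framing, not taken from any of the textbooks): keep a set of free subtrees of the binary tree, serve each length request from the deepest free subtree that fits, and free the siblings passed on the way down.

```python
# Greedy prefix-free codeword assignment from a list of requested lengths.
# Fails only if the Kraft inequality sum(2**-n) <= 1 is violated.

def kc_assign(lengths):
    free = [""]                     # roots of currently free subtrees
    codes = []
    for n in lengths:
        fits = [s for s in free if len(s) <= n]
        if not fits:
            raise ValueError("Kraft inequality violated")
        s = max(fits, key=len)      # best fit: deepest free subtree
        free.remove(s)
        while len(s) < n:           # descend, freeing the sibling each step
            free.append(s + "1")
            s += "0"
        codes.append(s)
    return codes

codes = kc_assign([1, 2, 3, 3])     # Kraft sum = 1/2 + 1/4 + 1/8 + 1/8 = 1
```

If I understand the construction correctly, this also suggests the answer to the puzzle: the machine of the theorem never stores the infinite table in its finite control. Its one fixed program enumerates the $$\langle s_i, n_i \rangle$$ list (this is where computability/c.e.-ness of the list is used), reruns the assignment above to identify which codeword its input $$p_i$$ is, and outputs the corresponding $$s_i$$.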

## Is it decidable whether a Turing Machine never scans any tape cell more than once when started with a given string

The problem:

Is the set of pairs $$(M,w)$$ such that TM $$M$$, started with input $$w$$, never scans any tape cell more than once, decidable?

How can I easily prove the above to be decidable? I found the following proof confusing:

How is $$l+m$$ an upper bound on the number of steps? I feel we should need at least $$l\times |Q|\times |\Gamma|\times |\{L,R\}|+1$$ steps ($$Q$$ being the set of states, $$\Gamma$$ the tape alphabet, $$l$$ the string length, and $$L$$, $$R$$ the head movement directions).
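The property itself is easy to check by direct simulation: run $$M$$ on $$w$$, record every cell the head enters, and stop at the first revisit. The subtle part is the step bound after which a revisit, if any, must already have occurred; in the sketch below (my own, assuming a dict-encoded transition function) `bound` is just a demonstration parameter, not a claim about the correct value.

```python
# Simulate M on w, tracking visited cells; report the first revisit.

def scans_cell_twice(delta, start, w, bound):
    tape = dict(enumerate(w))
    state, head, visited = start, 0, set()
    for _ in range(bound):
        if head in visited:
            return True               # head re-entered a cell
        visited.add(head)
        key = (state, tape.get(head, "_"))
        if key not in delta:
            return False              # M halted; no cell scanned twice
        state, sym, move = delta[key]
        tape[head] = sym
        head += 1 if move == "R" else -1
    return False                      # no revisit within the bound

# Steps right once, then left: revisits cell 0.
delta_zigzag = {("q0", "a"): ("q1", "a", "R"),
                ("q1", "a"): ("q2", "a", "L")}
```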

## Describe a Turing machine which decides for any two words w, v in {a}* whether or not their lengths have the same parity

Describe formally (by means of a transition function) a Turing machine which decides for any two words w, v in {a}* whether or not their lengths have the same parity (i.e., either both lengths are even or both lengths are odd). You may assume that the input is a word of the form w*v, where * is a new letter.
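One observation shrinks the machine considerably: |w| and |v| have the same parity exactly when |w| + |v| is even, so it suffices to count all the a's modulo 2 in a single left-to-right pass, skipping the separator. A sketch of the transition function (state names are my own choice), with a tiny simulator to check it:

```python
# States q_even / q_odd track the parity of a's seen so far; at the
# blank, accept iff the total count is even (same parity for w and v).
delta = {
    ("q_even", "a"): ("q_odd",    "a", "R"),
    ("q_odd",  "a"): ("q_even",   "a", "R"),
    ("q_even", "*"): ("q_even",   "*", "R"),   # separator: parity unchanged
    ("q_odd",  "*"): ("q_odd",    "*", "R"),
    ("q_even", "_"): ("q_accept", "_", "R"),
    ("q_odd",  "_"): ("q_reject", "_", "R"),
}

def run(tape_str):
    tape = dict(enumerate(tape_str))
    state, head = "q_even", 0
    while state not in ("q_accept", "q_reject"):
        state, sym, move = delta[(state, tape.get(head, "_"))]
        tape[head] = sym
        head += 1 if move == "R" else -1
    return state == "q_accept"
```

For example, `run("aa*aa")` and `run("a*a")` accept (even/even and odd/odd), while `run("a*aa")` rejects.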

## Are Turing-unrecognizable and undecidable languages recognized and decided by hypercomputation?

Are the hypercomputation machines/models that are supposed to be more powerful than Turing machines capable of recognizing and deciding the languages that are not recognizable/decidable by Turing machines?

## Why aren’t distributed computing and/or GPU considered non-deterministic Turing machines if they can run multiple jobs at once?

So we know a nondeterministic Turing machine (NTM) is just a theoretical model of computation. They are used in thought experiments to examine the abilities and limitations of computers, commonly when discussing P vs NP and how NP-complete problems cannot (as far as we know) be solved in polynomial time unless the computation is done on the hypothetical NTM. We also know an NTM uses a set of rules that prescribe more than one action for a given situation; in other words, it attempts many different options simultaneously.

Isn’t this what distributed computing does across commodity hardware: run many different possible calculations in parallel? And the GPU does this within a single machine. Why isn’t this considered an NTM?
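One way to see the difference is to count: an NTM that makes a binary choice at every step has exponentially many live computation paths, while any fixed pool of real processors gives at most a constant-factor speedup. A trivial back-of-the-envelope sketch:

```python
# Branch count of an NTM with a two-way choice per step, versus the
# best-case division of that work across k real cores.

def ntm_branches(t):
    return 2 ** t                   # live paths after t nondeterministic steps

def parallel_time(total_work, cores):
    return total_work / cores       # ideal linear speedup

t = 40
work = ntm_branches(t)              # about 1.1e12 branches after 40 steps
print(work, parallel_time(work, 10_000))
```

Ten thousand cores shave four orders of magnitude off a workload that grows without bound, which is why parallel hardware does not realize nondeterminism.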