Domain of a Type-2 Computable Function

In Weihrauch’s Type-2 computability theory, a string function $f\colon\subseteq \Sigma^{\omega}\rightarrow \Sigma^{\omega}$ (the $\colon\subseteq$ indicates that $f$ may be a partial function) is computable iff there exists a Type-2 Turing machine that realizes $f$ exactly; in particular, the machine must fail to produce an output on inputs that are not in the domain of $f$.

To extend this to functions between arbitrary domains $D_1$ and $D_2$, one specifies representations $\gamma_i\colon\subseteq\Sigma^{\omega}\rightarrow D_i$, and Weihrauch defines $f\colon\subseteq D_1\rightarrow D_2$ to be $(\gamma_1,\gamma_2)$-computable iff there exists a Type-2 computable string function $g$ such that $f(\gamma_1(y)) = \gamma_2(g(y))$ whenever $f(\gamma_1(y))$ is defined.

But $\gamma_2(g(y))$ is allowed to be defined even when $f(\gamma_1(y))$ is not.

Why is it defined this way? It seems more natural to me to require $f\circ\gamma_1 = \gamma_2\circ g$, i.e. that the domain of the realization $g$ match the domain of $f$. With Weihrauch’s definition we have the odd situation that, for a string function $f$, the statements “$f$ is Type-2 computable” and “$f$ is $(\mathrm{id},\mathrm{id})$-computable” (where $\mathrm{id}$ is the identity function) are not equivalent.
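A concrete instance of the gap, using the standard fact that the domain of any Type-2 computable string function is a $G_\delta$ subset of $\Sigma^\omega$ (this sketch and the choice of example are mine, not from the question):

```latex
% Pick any A \subseteq \Sigma^\omega that is NOT a G_\delta set, and let
%   f := \operatorname{id}|_A   (the identity restricted to A).
% The total identity machine g satisfies
%   f(\operatorname{id}(y)) = \operatorname{id}(g(y))
%     \quad\text{whenever the left-hand side is defined,}
% so f is (\operatorname{id},\operatorname{id})-computable.  But
%   \operatorname{dom}(f) = A \text{ is not } G_\delta,
% so f is not Type-2 computable: no machine can diverge exactly on \Sigma^\omega \setminus A.
```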

Reference: Weihrauch, K. (2000), Computable Analysis: An Introduction, Springer.

About computable sets

Let TOT be the set of all Turing machines that halt on all inputs. Find a computable set $B$ of ordered triples such that:

$$\mathrm{TOT} = \{e : (\forall x)(\exists y)\,[(e, x, y) \in B]\}$$

This definition says that TOT is the set of all Turing machines $e$ that halt on all inputs: the “$\forall x$” ranges over all inputs to the machine, and the “$\exists y$” says that $e$ halts within $y$ steps. Here $x$ is a string of 0s and 1s, while $y$ and $e$ are natural numbers ($e$ denotes the Turing machine $T_e$ once we fix a numbering of all Turing machines).
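The intended reading of $B$ can be made concrete with a bounded simulator. Below is a sketch in Python using a toy counter-machine model in place of real Turing machines (the model, `run_bounded`, and `in_B` are my illustrative assumptions, not part of the question). The point is that $(e,x,y)\in B$ means “program $e$, run on input $x$, halts within $y$ steps”, which is decidable because the simulation is cut off after $y$ steps:

```python
# B = {(e, x, y) : program e halts on input x within y steps}.
# Toy stand-in for Turing machines: a two-register counter machine.

def run_bounded(prog, x, max_steps):
    """Run program `prog` on input x for at most max_steps steps.
    Returns True iff it halts within the bound."""
    reg = [x, 0]                      # two registers; input in register 0
    pc = 0
    for _ in range(max_steps):
        if pc >= len(prog):
            return True               # fell off the end: halted
        op, *args = prog[pc]
        if op == "HALT":
            return True
        elif op == "INC":
            reg[args[0]] += 1
            pc += 1
        elif op == "DEC":
            reg[args[0]] = max(0, reg[args[0]] - 1)
            pc += 1
        elif op == "JZ":              # jump to args[1] if register args[0] is zero
            pc = args[1] if reg[args[0]] == 0 else pc + 1
    return False                      # step budget exhausted: (e, x, y) not in B

def in_B(e, x, y):
    """Decide (e, x, y) in B: does program e halt on input x within y steps?"""
    return run_bounded(e, x, y)
```

Then $e \in \mathrm{TOT}$ iff for every input $x$ there is some bound $y$ with `in_B(e, x, y)`; the unbounded quantifiers live outside the computable set $B$.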

I have a very fundamental doubt that kept me from making any progress: how do we construct $B$?

Thank you in advance.

Is this a computable function? Is the reduction correct?

Let $A$ be a set and $K=\{x:\phi_x(x)\downarrow\}$. Let $c$ be a total computable function such that $\phi_{c(x,y,n)}(z)=\begin{cases}\phi_n(z) & \text{if }\phi_x(y)\downarrow\\ \uparrow &\text{otherwise}\end{cases}$

Suppose that $\forall x,y\,\exists a.\ \phi_x(y)\downarrow \Leftrightarrow c(x,y,a)\in A$.

The question is whether the function $f(x)=a$, where $a$ is such that $x\in K \Leftrightarrow c(x,x,a)\in A$, is total computable.

Hence, can I prove $K\leq_m A$ with $x\mapsto c(x,x,f(x))$ as the reduction function?

Is there such a notion as “effectively computable reductions”, or would this not be useful?

Most reductions in NP-hardness proofs I have encountered are effective, in the sense that given an instance of our hard problem, they provide a polynomial-time algorithm that transforms it into an instance of the problem in question. All reductions among the 21 classic problems considered by R. Karp work this way. A very simple example is the reduction from INDEPENDENT_SET to CLIQUE: just build the complement graph of your input.
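The INDEPENDENT_SET-to-CLIQUE reduction just mentioned can be written out in a few lines (a sketch; the encoding of a graph as a vertex count plus an edge set is my choice):

```python
def complement_graph(n, edges):
    """Reduce INDEPENDENT_SET to CLIQUE: S is an independent set of size k
    in G iff S is a clique of size k in the complement of G, so the
    instance (G, k) maps to (complement(G), k)."""
    edges = {tuple(sorted(e)) for e in edges}            # normalize (u, v) with u < v
    return {(u, v) for u in range(n) for v in range(u + 1, n)
            if (u, v) not in edges}
```

Note that the map is not only known to exist: it is itself a trivially computable (even logspace) algorithm, which is the property the question is about.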

But consider the proof of the famous Cook–Levin Theorem that SAT is NP-complete: the reduction starts from a non-deterministic TM and some polynomial, which exists by definition. It is not clear to me how to obtain this polynomial effectively; that is, given a non-deterministic TM that I know runs in polynomial time, it is not clear how to compute that polynomial, and I strongly suspect it is not computable at all.

So solely from the encoding of NP-complete problems as non-deterministic polynomial-time TMs (the polynomial itself is not part of the encoding, as far as I know), I see no way to obtain a reduction effectively; the aforementioned proof just shows that one exists, not how to get it.

Maybe I have misunderstood something, but I have the impression that the reductions usually given are stronger, in the sense that they are indeed computable: given a problem, we can compute our reduction, rather than merely knowing that it exists.

Has this ever been noted? And if so, is there such a notion as an “effectively computable reduction”, or would it be impractical to require reductions to be themselves computable? From a more practical perspective, and also given the way reductions are sometimes introduced (“we have an algorithm to convert one instance to another”), it would be highly desirable to know how to obtain this algorithm/reduction, so it actually seems more natural to demand it. Why is this not done?

Logspace computability of the composition

The bit-graph of $f: \{0,1\}^* \rightarrow \{0,1\}^*$ is the language:

$\text{BIT}_f := \{\langle x,i\rangle : 1\leq i \leq|f(x)| \text{ and the } i\text{-th bit of } f(x) \text{ is } 1\}$

It is said that $f$ is logspace computable if $\text{BIT}_f$ is decidable in space $O(\log n)$. Decidable means that there exists a Turing machine $M$ such that:

  1. if $\langle x,i\rangle \in \text{BIT}_f$ then $M(\langle x,i\rangle) = 1$
  2. if $\langle x,i\rangle \notin \text{BIT}_f$ then $M(\langle x,i\rangle) = 0$

Prove that the composition $(f \circ g)(x)=f(g(x))$ of two logspace computable functions $f,g$ is also a logspace computable function.

Any hint on this exercise? What I have tried so far is playing with the composition of the two Turing machines associated with $f$ and $g$, but I didn’t succeed; it always turns into a case-analysis exercise.
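A hint in code form: the standard trick is to never write down $g(x)$; whenever the machine for $f$ would read a bit of its input, recompute that bit of $g(x)$ from scratch. Below is a toy Python sketch with concrete stand-ins ($g$ reverses its input, $f$ flips every bit; these example functions and the oracle interface are my assumptions, not part of the exercise):

```python
def bit_g(x, i):
    """i-th bit (1-indexed) of g(x), where g reverses its input string."""
    return x[len(x) - i]

def len_g(x):
    return len(x)

def bit_f(input_bit, n, i):
    """i-th bit of f(y) for f = bitwise NOT, where y (of length n) is read
    only through the oracle input_bit(j); f never sees y written down."""
    return '1' if input_bit(i) == '0' else '0'

def bit_fg(x, i):
    """i-th bit of f(g(x)): g(x) is never materialized.  Every bit the
    f-machine asks for is recomputed on demand by a fresh run of g's
    bit-machine; paying time to recompute is what keeps the composed
    machine's work tape down to O(log n) space."""
    oracle = lambda j: bit_g(x, j)    # on-demand bits of g(x)
    return bit_f(oracle, len_g(x), i)
```

The space accounting then becomes: $M_f$'s work tape, $M_g$'s work tape, and a counter holding the index of the input bit currently requested, each $O(\log n)$.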

Applying the Parameter Theorem to show that a function is not computable

Show that $g: \mathbb{N} \to \mathbb{N}$ such that $$g(x)=\begin{cases} 1 & \text{if halt}(2833,x) \\ 0 & \text{otherwise} \end{cases}$$ is not computable.

We know that

$$g(x)=\begin{cases} 1 & \text{if }\Phi_x(2833)\downarrow \\ 0 & \text{if }\Phi_x(2833)\uparrow \end{cases}$$

How can I use the parameter theorem to reduce $g$ to $\text{halt}(x,x)$? I’m very confused.

Is the following function computable?

I’m trying to show that $K_1 \le_1 K$, where $K$ is the diagonal halting set $\{x : \varphi_x(x) \downarrow\}$ and $K_1=\{x: \exists y \; \varphi_x(y) \downarrow\}$. I defined the function $$\psi(x, y)= \begin{cases} 0 & \text{if } x \in K_1 \\ \uparrow & \text{if } x \notin K_1\end{cases}$$ By the Parameter Theorem there is a one-to-one computable function $f$ such that $\varphi_{f(x)}(y) = \psi(x, y)$, and it is easy to show that $x \in K_1 \leftrightarrow f(x) \in K$.

All of the above works if and only if $\psi$ is partial computable, and I think that function is partial computable because $K_1$ is r.e. Is this true? I mean, can I always define a partial computable function $$\psi_A(x, y)= \begin{cases} 0 & \text{if } x \in A \\ \uparrow & \text{if } x \notin A\end{cases}$$ for each r.e. set $A$?
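For the final question, a standard sketch (my phrasing, using only the assumption that $A$ is r.e.):

```latex
% Since A is r.e., A = \operatorname{dom}(\varphi_e) for some index e.  Then
\psi_A(x,y) \;\simeq\; 0 \cdot \varphi_e(x)
  \;=\; \begin{cases} 0 & \text{if } x \in \operatorname{dom}(\varphi_e) = A \\
                      \uparrow & \text{otherwise,} \end{cases}
% which is partial computable: simulate \varphi_e(x) and output 0 if the
% simulation halts (diverging otherwise).
```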

Is the set of decidable languages the same as the set of computable functions?

Is the set of decidable languages the same as the set of computable functions? And for the complement: is the set of undecidable languages the same as the set of uncomputable functions?

I feel like decidable languages only represent the subset of computable functions of type $\{0,1\}^* \to \{0,1\}$, and leave out the functions of type $\{0,1\}^* \to \{0,1\}^*$.
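The distinction being drawn can be made concrete (a toy sketch; both functions are my own examples): a decidable language corresponds to its $\{0,1\}$-valued characteristic function, while a general computable function may output whole strings.

```python
def chi_even_length(w):
    """Characteristic function of the decidable language {w : |w| is even}:
    a computable function of type {0,1}* -> {0,1}."""
    return 1 if len(w) % 2 == 0 else 0

def double(w):
    """A computable function of type {0,1}* -> {0,1}* that is not the
    characteristic function of any language: it outputs whole strings."""
    return w + w
```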

Are non-halting programs not computable?

Are non-halting programs not computable? How are these two sets of programs related: is a non-halting program just a specific example of a program that is not computable, or is it technically computable, just not in finite time (the computation can be defined, it just goes on forever)? I’m just not sure about the “goes on forever” part: does that circle back and make it not computable?