Many-one reductions between the set of true sentences and a particular arithmetical set

Never used this site before, so I'm not sure of the best way to get help. However, I've been looking at many-one reductions in relation to sentences in logic. I wondered whether there is a way to prove the following:

TH(N) = {ϕ : ϕ is a first-order sentence in the language of arithmetic and N |= ϕ}, where N is the standard model of arithmetic with ×, <, +, 0, and the successor function S.

Let γ(x) be some formula with exactly one free variable, namely x, and let X = {n : N |= γ(n)} be the arithmetical subset of the natural numbers defined by γ(x).

Is there a way to show that either of X or TH(N) many-one reduces to the other?
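One direction, at least, can be realized very concretely: the map sending n to the sentence obtained by substituting the numeral S(S(…S(0)…)) for the free variable of γ is computable, and n ∈ X iff the resulting sentence lies in TH(N). A minimal sketch in Python, treating formulas as strings and assuming (hypothetically) that the free variable is written `x` and occurs without risk of capture:

```python
def numeral(n):
    # Build the term S(S(...S(0)...)) with n applications of S.
    term = "0"
    for _ in range(n):
        term = f"S({term})"
    return term

def reduce_to_theory(gamma, n):
    # Naive textual substitution of the numeral for the free variable x.
    # Assumes x occurs only free in gamma and no other symbol is named x.
    return gamma.replace("x", numeral(n))
```

For example, `reduce_to_theory("x + x = S(S(0))", 1)` yields the sentence `S(0) + S(0) = S(S(0))`, which holds in N exactly when 1 belongs to the set defined by that γ.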

Proof of lambda reductions

I am not sure how to approach this question or what exactly it is asking. I need to prove the following reductions:

I need to prove these knowing that the Church numeral is defined by

N = λf.λc. (f (f … (f c)) … )   (N applications of f)

and that:

  • addition: + = λM.λN.λa.λb. ((M a)((N a) b))
  • multiplication: × = λM.λN.λa. (M (N a))
  • exponentiation: ∧ = λM.λN. (N M)
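Those three definitions can be sanity-checked by transcribing them into any language with first-class functions. A sketch in Python, where `church(n)` builds the numeral and `unchurch` reads a numeral back as an integer (the helper names are mine, not part of the exercise):

```python
def church(n):
    # The Church numeral n = λf.λc. f(f(...(f c)...)), n applications of f.
    def numeral(f):
        def apply(c):
            for _ in range(n):
                c = f(c)
            return c
        return apply
    return numeral

def unchurch(N):
    # Decode a Church numeral by applying "successor" to 0.
    return N(lambda k: k + 1)(0)

# The three operators, transcribed literally from the definitions above.
add = lambda M: lambda N: lambda a: lambda b: (M(a))((N(a))(b))
mul = lambda M: lambda N: lambda a: M(N(a))
exp = lambda M: lambda N: N(M)
```

For instance, `unchurch(exp(church(2))(church(3)))` evaluates to 8, matching 2³; this kind of check does not prove the equations, but it makes clear what each combinator is supposed to do before you attempt the β-reduction proofs.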

Is there such a notion as “effectively computable reductions”, or would this not be useful?

Most reductions in NP-hardness proofs I have encountered are effective in the sense that, given an instance of the problem under question, they provide a polynomial-time algorithm that transforms it into an instance of the known hard problem. All reductions among the 21 classic problems considered by R. Karp work this way. A very simple example is the reduction from INDEPENDENT_SET to CLIQUE: just build the complement graph of your input.

But in the proof of the famous Cook–Levin theorem that SAT is NP-complete, the reduction starts from a non-deterministic TM together with some polynomial bounding its running time, which exists by definition. To me it is not clear how to obtain this polynomial effectively: given a non-deterministic TM which I know runs in polynomial time, it is not clear how to compute the polynomial, and I strongly suspect it is not computable at all.

So solely from the encoding of NP problems as non-deterministic polynomial-time TMs (the polynomial itself is not part of the encoding, as far as I know), I see no way to produce the reduction effectively; the aforementioned proof merely shows that one exists, not how to obtain it.

Maybe I have misunderstood something, but I have the impression that the reductions usually given are stronger in the sense that they are indeed computable, i.e., given a problem, we can compute the reduction rather than merely know it exists.

Has this ever been noted? And if so, is there a notion of “effectively computable reduction”, or would it be impractical to require reductions to themselves be computable? From a more practical perspective, and given the way reductions are sometimes introduced (“we have an algorithm to convert one instance into another”), it would be highly desirable to know how to obtain this algorithm/reduction; it actually seems more natural to demand it, so why is that not done?

Reductions from non-decision problems

I want to show that a minimization problem $ Y$ cannot be approximated within a factor of 1.36. To be more specific, the problem $ Y$ is the exemplar distance problem between two genomes. Could I reduce from the minimum vertex cover problem instead of from the decision version of the vertex cover problem? The problem I am having with reducing from the decision version is that a vertex cover of size $ k$ maps to an instance of $ Y$ of size $ \leq ck$ , where $ c$ is a constant. A decision version of problem $ Y$ makes no sense to me, as there will always be a breakpoint distance between two genomes. I tried to research this on the internet, but I only ever find reductions from decision problems. Can we reduce from non-decision problems?

Also, when doing reductions from the vertex cover problem, I can’t assume the given instance $ (G,k)$ is such that $ k$ is the size of the optimal vertex cover, right? $ k$ is just the size of some vertex cover.

What are the requirements for a superset of P to be closed under Karp reductions?

So today in our exercise session on complexity theory we discussed that P, NP, and BPP are closed under Karp reductions. We also figured that the proofs could likely be extended to straightforward generalizations of these three “classes of classes” (nondeterministic, deterministic, probabilistic), such as EXPTIME.

This raised the question for us:

Suppose you have a class of languages $ C\supseteq P$ ; what are necessary (and sufficient?) conditions for $ C$ to be closed under Karp reductions?


Terminology:

  • A language $ A$ is Karp-reducible to another language $ B$ , written $ A\leq_m B$ , iff there exists a deterministic poly-time Turing machine $ K_{A,B}$ such that $ \forall x\in\{0,1\}^*: x\in A\iff K_{A,B}(x)\in B$ holds.
  • A class $ C$ is closed under $ \leq_m$ iff $ \forall A,B\subseteq \{0,1\}^*: B\in C\land A\leq_m B\implies A\in C$ holds.
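As a baseline, $ C = P$ itself satisfies the closure property by composing machines; the standard argument (writing $ p_K$ and $ p_B$ for the respective polynomial time bounds, my notation):

```latex
% If A \le_m B via K running in time p_K, and M_B decides B in time p_B,
% then running M_B on K(x) decides A, since x \in A \iff K(x) \in B.
% Because |K(x)| \le p_K(|x|), the total running time on input x of
% length n is bounded by
\[
  T(n) \;\le\; p_K(n) + p_B\!\bigl(p_K(n)\bigr),
\]
% which is again a polynomial in n, so A \in P.
```

The same composition argument goes through whenever the class's resource bound absorbs a polynomial blow-up of the input, which is one natural candidate condition to formalize.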

Reductions between LIS and LCS

Given an oracle that returns both the length and the subsequence of the Longest Increasing Subsequence of a given input $ A$ of $ n$ elements, $ \text{LIS}(A,n)$ , can one use a polynomial number of calls to it to find the length of the Longest Common Subsequence of two sequences $ A$ and $ B$ ($ \text{LCS}(A,B)$ )? If so, how few calls to the LIS oracle suffice to find the length of $ \text{LCS}(A,B)$ ?

Alternatively, given an oracle returning both the subsequence and the length of the Longest Common Subsequence of two sequences, it’s easy to find $ \text{LIS}(A)$ : given a sequence $ A$ , sort $ A$ into $ A'$ and compute $ \text{LCS}(A,A')$ .
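That direction uses a single oracle call. A sketch in Python, with a standard dynamic program standing in for the LCS oracle; note the caveat that if $ A$ contains duplicates, $ \text{LCS}(A, A')$ returns the longest non-decreasing subsequence, so distinct elements are assumed here:

```python
def lcs_length(A, B):
    # Textbook O(nm) dynamic program, standing in for the LCS oracle.
    n, m = len(A), len(B)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            if A[i] == B[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[n][m]

def lis_via_lcs(A):
    # One oracle call: LIS(A) = LCS(A, sorted(A)), assuming distinct elements.
    return lcs_length(A, sorted(A))
```

For example, `lis_via_lcs([3, 1, 4, 5, 9, 2, 6])` returns 4 (one witness is 1, 4, 5, 9).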

This question is prompted by the fact that there are $ O(n\log n)$ algorithms for LIS but not for LCS. On a separate note, are there any results comparing the hardness of the two problems, or giving a superlinear/quadratic lower bound for LCS?

Intersection of reductions is a reduction

Let $ (R, \mathfrak{m}, K)$ be a local ring, let $ I$ be an ideal of $ R$ , and let $ J_0\supseteq J_1 \supseteq J_2\supseteq \cdots$ all be reductions of $ I$ . Then $ \bigcap_nJ_n$ is also a reduction of $ I$ .

What I have done:

I consider the family $ \{\frac{J_i+\mathfrak{m}I}{\mathfrak{m}I}\}$ . Each member is finite-dimensional as a vector space over $ K$ . Hence there exists a nonnegative integer $ N$ such that $ \frac{J_N+\mathfrak{m}I}{\mathfrak{m}I}=\frac{J_i+\mathfrak{m}I}{\mathfrak{m}I}$ for $ i \geq N$ , and so $ J_N+\mathfrak{m}I=J_i+\mathfrak{m}I$ for $ i \geq N$ .

Shortening the number of reductions to prove NP-Completeness

This question is based on the slides from this pdf:

  • Slide 54, they define the Subset Sum Problem.

  • Slide 65, they define the Partition problem.

  • Slide 74, they talk about the Job Scheduling problem.

My question is: is it possible to cut out the middleman of the Partition problem and reduce Subset Sum to Job Scheduling directly?


To prove that Job Scheduling is in NP, we can use what is already stated, since that part does not depend on Partition.

I’m not sure how to combine the two reductions into one, though.
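Since Karp reductions compose, one mechanical option is to chain the two maps: feed the output of the Subset Sum → Partition transformation into the Partition → Job Scheduling one, and the composite is itself a single polynomial-time reduction. As an illustration of the first step, here is one standard variant of that reduction, which may differ in detail from the version on slide 65 (it assumes nonnegative values and a target no larger than twice their sum):

```python
def subset_sum_to_partition(values, target):
    # One standard Subset-Sum -> Partition reduction: append 2*S - t and
    # S + t, where S = sum(values) and t = target. The new multiset has
    # total 4*S, and the two appended elements together exceed 2*S, so
    # any perfect partition puts them on opposite sides; the side holding
    # 2*S - t then needs exactly t more from the original values. Hence
    # a perfect partition exists iff some subset of `values` sums to t.
    total = sum(values)
    return values + [2 * total - target, total + target]
```

Composing `subset_sum_to_partition` with the Partition → Job Scheduling map from slide 74 would then give the direct Subset Sum → Job Scheduling reduction in one step.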