It is simple to decide whether a number is a power of 2 in $O(n)$ time, because in binary such a number is just a single 1-bit followed by zeros (e.g. $1000$ is a power of 2 in binary).
I haven’t found many other values of $K$ whose powers can be trivially decided in time polynomial in the binary length of the input.
Can we decide whether a number is a power of an arbitrary given $K$ in polynomial time, and in a practical amount of time?
Something non-naive, unlike repeatedly dividing $N$ by $K$ until you reach the smallest value ($2$ when deciding a power of $2$).
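One possible non-naive approach (a sketch, not necessarily the best one; `is_power_of` is my own name): binary-search the candidate exponent $m$ and compare $K^m$ to $N$ using fast exponentiation. Every number involved has polynomially many bits, so the whole search is polynomial in the binary length of the input.

```python
def is_power_of(n, k):
    """Return True iff n == k**m for some integer m >= 0 (assumes n >= 1, k >= 2)."""
    # The exponent m is at most log2(n), i.e. at most n.bit_length().
    lo, hi = 0, n.bit_length()
    while lo <= hi:
        mid = (lo + hi) // 2
        p = k ** mid  # fast exponentiation; the result has O(m * log k) bits
        if p == n:
            return True
        if p < n:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

print(is_power_of(1024, 2))  # True
print(is_power_of(243, 3))   # True (3^5)
print(is_power_of(244, 3))   # False
```

Since $k^m$ is strictly increasing in $m$ for $k \geq 2$, the binary search needs only $O(\log n)$ exponentiations.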
I searched a lot on the internet, including here, but I couldn’t find an explanation that convinced me. The problem is the same as in the title: if A is polynomial-time reducible to B and B is NP-complete, can I say that A is NP-complete too?
Actually, I would say yes, because if I can convert a problem that I don’t know how to solve into one that I do know how to solve, then the target problem must be at least as hard as the problem being reduced. So A should be NP-complete.
However, I then had another thought: I can also convert an easier problem into a harder one, so I could say that A is in NP, but I couldn’t guarantee that it’s NP-complete.
Which idea is correct?
If $X$ is polynomial-time reducible to $Y$ and $X$ is polynomial-time reducible to $Z$, is $Y$ polynomial-time reducible to $Z$?
If $X \leq_p Y$ and $X \leq_p Z$, then $Y \leq_p Z$?
True, false, or we don’t know? Why?
What would the consequences be of finding a quasi-polynomial-time algorithm for 3-SAT?
Would this result in there being a quasi-polynomial-time algorithm for all NP-complete problems?
Function Problem that finds the solution
This means we must exclude the integers $1$ and $N$.
An algorithm that is pseudo-polynomial
```python
N = 10
numbers = []
for a in range(2, N):
    numbers.append(a)
for j in range(len(numbers)):
    if N / numbers[j] in numbers:
        print(N // numbers[j], 'x', numbers[j])
        break
```
Solution verified: $5 \times 2 = N$ and $N = 10$.
The algorithm that solves the Decision Problem
```
if AKS-primality(N) == False:
    OUTPUT YES
```
Since the decision problem is in $P$, must finding a solution also be solvable in polynomial time?
Goldbach’s Conjecture says every even integer $> 2$ can be expressed as the sum of two primes.
Let’s say $N$ is our input and it’s $10$, which is an integer $> 2$ and is not odd.
1. Create a list of the numbers from $1$ to $N$
2. Use a prime-testing algorithm to create a second list containing only the primes
3. Use my 2-SUM solver (allowing a prime to be used twice) to find two primes that sum to $N$:
```python
for j in range(len(list_of_primes)):
    if N - list_of_primes[j] in list_of_primes:
        print('yes')
        break
```
4. Verify the solution efficiently:
```python
if AKS_primality(N - list_of_primes[j]):
    if AKS_primality(list_of_primes[j]):
        print('Solution is correct')
```
```
yes
7 + 3
Solution is correct
```
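Putting steps 1–4 together, here is a self-contained sketch of the procedure (trial division serves as a stand-in for a real polynomial-time primality test such as AKS, and `goldbach_pair` is my own name, not a standard one):

```python
def is_prime(n):
    # Trial division: a simple stand-in for AKS primality testing.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_pair(N):
    # Steps 1-3: list the primes up to N, then solve 2-SUM (repeats allowed).
    primes = [p for p in range(2, N + 1) if is_prime(p)]
    prime_set = set(primes)
    for p in primes:
        if N - p in prime_set:
            return p, N - p
    return None  # for even N > 2 this would contradict the conjecture

p, q = goldbach_pair(10)
# Step 4: verify the solution with the primality test.
assert is_prime(p) and is_prime(q) and p + q == 10
print(p, '+', q)  # prints 3 + 7
```

Note that, like the listing above, this runs in time polynomial in the *value* $N$, i.e. pseudo-polynomial (exponential in the bit-length of the input).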
If the conjecture is true, then the answer will always be Yes. Does that mean it can’t be in co-NP because the answer is always Yes?
Reductions for showing algorithms. The following fact is true: there is a polynomial-time algorithm BIP that, on input a graph $G = (V, E)$, outputs 1 if and only if the graph is bipartite: there is a partition of $V$ into disjoint parts $S$ and $T$ such that every edge $(u, v) \in E$ satisfies either $u \in S$ and $v \in T$, or $u \in T$ and $v \in S$. Use this fact to prove that there is a polynomial-time algorithm computing the following function CLIQUEPARTITION that, on input a graph $G = (V, E)$, outputs 1 if and only if there is a partition of $V$ into two parts $S$ and $T$ such that both $S$ and $T$ are cliques: for every pair of distinct vertices $u, v \in S$, the edge $(u, v)$ is in $E$, and similarly for every pair of distinct vertices $u, v \in T$, the edge $(u, v)$ is in $E$.
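One way the fact can be used (a sketch under my own naming; the BFS 2-coloring below merely stands in for the assumed polynomial-time BIP): $V$ splits into two cliques exactly when the complement of $G$ is bipartite, because the non-edges of $G$ become the complement's edges, and each side of a bipartition of the complement is an independent set there, i.e. a clique in $G$.

```python
from collections import deque

def BIP(vertices, edges):
    # Stand-in bipartiteness test: BFS 2-coloring, polynomial time.
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = {}
    for s in vertices:
        if s in color:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    queue.append(w)
                elif color[w] == color[u]:
                    return 0  # odd cycle found: not bipartite
    return 1

def CLIQUEPARTITION(vertices, edges):
    # V splits into two cliques  <=>  the complement graph is bipartite.
    present = {frozenset(e) for e in edges}
    complement = [(u, v) for i, u in enumerate(vertices)
                  for v in vertices[i + 1:]
                  if frozenset((u, v)) not in present]
    return BIP(vertices, complement)

print(CLIQUEPARTITION(['a', 'b', 'c'], [('a', 'b'), ('b', 'c')]))  # 1
print(CLIQUEPARTITION([0, 1, 2, 3, 4],
                      [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))   # 0 (C5)
```

Building the complement takes $O(|V|^2)$ time, so the whole reduction stays polynomial.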
I have read that minimizing regular expressions is, in general, a PSPACE problem. Is it known whether minimizing regular expressions without the Kleene closure (star, asterisk) is in P?
The language of any such regular expression would be guaranteed to be finite. I suppose an equivalent question is whether the problem of constructing a minimal regular expression from a known finite language is any easier than minimizing an arbitrary regular expression. It seems like this should be the case.
(If the answer is that it is easier and there’s an obvious proof, I’m happy to go attempt it, I just haven’t thought about the problem deeply yet and wanted to see what I’d be getting myself into first.)
I am aware that the vertex cover problem is NP-complete and I have read the reduction from the clique problem. I have written an algorithm that determines the minimum vertex cover of a graph in polynomial time. Could someone explain what’s wrong with my thinking?
I have attached an image of the algorithm and an example.
I have always thought that the ellipsoid algorithm is an algorithm which can be used to solve LP in polynomial-time. However, what confuses me is the dependence on the ratio of volumes of the balls (one contained in the polytope, one containing it). I have tried finding some lecture notes online but none have explained the following problem.
Why is the ratio “small”? Ok, ok, I guess one could get an upper bound on the volume of the bigger ball based on the description length of the problem (is this actually what happens?). However, more problematic is the ball contained in the polytope. What if there is only one feasible solution?
Actually, I have watched a video lecture from MIT about this, and at the end of the lecture the lecturer showed a reduction from feasibility to optimisation in LP by “taking the union of the problem and its dual”. But isn’t this specifically very likely to result in an LP which has only one feasible solution?