Solving Exact2IS using IS

I encountered the following problem:

Exact2IS = {G : G has exactly 2 independent sets}

Assuming that, given a graph G, I can find an independent set, how can I check whether G has exactly 2 independent sets?

(I can check whether a graph contains an independent set in O(1) and also find some independent set in O(1).)

I was thinking of finding some independent set (if there is any) of size k, then removing one vertex from the set and checking whether the graph still has an independent set of size k; after the check I return the vertex to the graph exactly as it was.

The problem is that my idea only checks whether a graph G contains at least 2 independent sets, not exactly 2.
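A minimal sketch of that at-least-2 test, with the O(1) oracle replaced by a brute-force stand-in (so this version is exponential and only for tiny graphs; the names are illustrative):

```python
from itertools import combinations

# Stand-in for the O(1) oracle from the question: brute-force
# enumeration of the independent sets of size k in g = (vertices, edges).
def independent_sets_of_size(g, k):
    vertices, edges = g
    for cand in combinations(vertices, k):
        if not any((u, v) in edges or (v, u) in edges
                   for u, v in combinations(cand, 2)):
            yield set(cand)

def has_at_least_two(g, k):
    """The idea above: find one independent set S of size k, then for
    each v in S check whether the graph minus v still has a size-k set."""
    vertices, edges = g
    first = next(independent_sets_of_size(g, k), None)
    if first is None:
        return False
    for v in first:
        g_minus_v = ([u for u in vertices if u != v],
                     {e for e in edges if v not in e})
        if next(independent_sets_of_size(g_minus_v, k), None) is not None:
            return True  # a second size-k independent set avoids v
    return False
```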

Anyone have an idea how can i check if a graph has exactly 2 independent sets(in polynomial time and given the fact that find and check if there is independent set is O(1))?

Any clues or ideas will be appreciated 🙂 Thanks

Is solving a quadratic equation using Turing machine impossible?

I’ve just started Algorithms at university. There’s a task to write an algorithm for a Turing machine that solves quadratic equations. The task doesn’t specify whether it’s x^2+bx+c or ax^2+bx+c. I’ve searched through a whole bunch of information on the Russian- and English-language Internet.

I did find articles which say it’s not possible because the coefficients A, B, C are real numbers. Please confirm whether that’s true; I may not have it right. I think it’s impossible, but I still don’t know how to prove it.

Thanks in advance!

Logic behind a single-tape NTM solving the TSP in $O({n}^4)$ time at most

I was going through the classic text “Introduction to Automata Theory, Languages, and Computation” by Hopcroft, Ullman, and Motwani, where I came across the claim that a “single-tape NTM can solve the TSP in $ O({n}^4)$ time at most”, where $ n$ is the length of the input given to the Turing machine (an instance of the TSP). The authors assume the following encoding scheme:

The TSP problem could be couched as: “Given this graph $ G$ and limit $ W$ , does $ G$ have a Hamiltonian circuit of weight $ W$ or less?”

[Figure: the example weighted graph]

Let us consider a possible code for the graphs and weight limits that could be the input. The code has five symbols, $ 0$ , $ 1$ , the left and right parentheses, and the comma.

  1. Assign integers $ 1$ through $ m$ to the nodes.

  2. Begin the code with the value of $ m$ in binary and the weight limit $ W$ in binary, separated by a comma.

  3. If there is an edge between nodes $ i$ and $ j$ with weight $ w$ , place $ (i, j, w)$ in the code. The integers $ i$ , $ j$ , and $ w$ are coded in binary. The order of $ i$ and $ j$ within an edge, and the order of the edges within the code are immaterial. Thus, one of the possible codes for the graph of Fig. with limit $ W = 40$ is

$ 100, 101000(1, 10, 1111)(1, 11, 1010)(10, 11, 1100)(10, 100, 10100)(11, 100, 10010)$

The authors move to the claim as follows:

It appears that all ways to solve the TSP involve trying essentially all cycles and computing their total weight. By being clever, we can eliminate some obviously bad choices. But it seems that no matter what we do, we must examine an exponential number of cycles before we can conclude that there is none with the desired weight limit $ W$ , or to find one if we are unlucky in the order in which we consider the cycles.

On the other hand, if we had a nondeterministic computer, we could guess a permutation of the nodes, and compute the total weight for the cycle of nodes in that order. If there were a real computer that was nondeterministic, no branch would use more than $ O(n)$ steps if the input was of length $ n$ . On a multitape $ NTM$ , we can guess a permutation in $ O({n}^2)$ steps and check its total weight in a similar amount of time. Thus, a single-tape $ NTM$ can solve the TSP in $ O({n}^4)$ time at most.

I cannot understand the last part of the above paragraph (the nondeterministic time bounds). Let me focus on the points of my problem.

  1. What do they mean by saying that no branch would use more than $ O(n)$ steps?
  2. On a multitape $ NTM$ , how can we guess a permutation in $ O({n}^2)$ steps?
  3. How do we get the final $ O({n}^4)$ ?

I understand neither the logic nor the computation of the complexity, as very little is written in the text.
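For the “check” half of guess-and-check, here is a sketch of the deterministic work each nondeterministic branch does (plain Python standing in for the Turing machine; the encoding details are simplified):

```python
def check_tour(weights, perm, limit):
    """Deterministically verify one nondeterministic guess: is perm a
    Hamiltonian circuit of total weight <= limit? `weights` maps an
    unordered pair frozenset({i, j}) to that edge's weight."""
    total = 0
    for a, b in zip(perm, perm[1:] + perm[:1]):  # close the cycle
        edge = frozenset((a, b))
        if edge not in weights:
            return False  # the guessed order uses a non-existent edge
        total += weights[edge]
    return total <= limit
```

With the weights from the example code above (in decimal: (1,2,15), (1,3,10), (2,3,12), (2,4,20), (3,4,18)), the branch that guesses the order 1, 2, 4, 3 computes total weight 63 and rejects for $ W = 40$ .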

Solving a maze with DFS vs “wall following”

Consider a problem where a “robot cleaner” is placed in a room modeled as a grid. Each cell in the grid can be empty or blocked, and all accessible cells are connected; that is, every empty cell is reachable by the robot regardless of its starting position.

We are told that the robot cleaner can only take one of four actions:

  • move forward
  • turn left (90 degrees, without moving)
  • turn right (90 degrees, without moving)
  • clean the current cell in the grid

We are asked to design an algorithm for the robot to clean the entire room.


My question is: Can this problem be framed as a maze solving problem? I mention this because a common strategy (if the maze is simply connected) for maze solving is to be a “wall follower” (e.g. always try right, or always try left), and I wonder if wall following would work here.

More generally, why would “wall following” be a good strategy for either problem? Isn’t it enough to do DFS (with backtracking) even if we pick an arbitrary order of directions that are “left to explore” from each grid position? (or explore directions at a given position in any order?)
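For concreteness, the DFS-with-backtracking idea can be sketched like this (the robot API — move, turn, clean — is the hypothetical one from the problem; a tiny grid simulator backs it here so the snippet runs standalone):

```python
# Grid simulator: 0 = empty, 1 = blocked, 2 = cleaned.
# Cleaned cells stay passable.
class Robot:
    DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # up, right, down, left

    def __init__(self, grid, row, col):
        self.grid, self.r, self.c, self.d = grid, row, col, 0

    def move(self):
        dr, dc = Robot.DIRS[self.d]
        nr, nc = self.r + dr, self.c + dc
        if 0 <= nr < len(self.grid) and 0 <= nc < len(self.grid[0]) \
                and self.grid[nr][nc] != 1:
            self.r, self.c = nr, nc
            return True
        return False

    def turn_right(self):
        self.d = (self.d + 1) % 4

    def clean(self):
        self.grid[self.r][self.c] = 2

def clean_room(robot):
    visited = set()

    def go_back():
        # return to the previous cell and restore the heading
        robot.turn_right(); robot.turn_right()
        robot.move()
        robot.turn_right(); robot.turn_right()

    def dfs(cell, d):
        visited.add(cell)
        robot.clean()
        for i in range(4):
            nd = (d + i) % 4  # current heading after i right turns
            dr, dc = Robot.DIRS[nd]
            nxt = (cell[0] + dr, cell[1] + dc)
            if nxt not in visited and robot.move():
                dfs(nxt, nd)
                go_back()
            robot.turn_right()  # after 4 turns the heading is d again

    dfs((robot.r, robot.c), robot.d)
```

Note the order in which directions are tried is arbitrary here: because the visited set plus explicit backtracking does the work, DFS never relies on the room being simply connected, which is exactly the assumption wall following needs.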

I want a solution for this algorithm problem, please

A maximization problem P of n items has a bounding function B. A brute-force/exhaustive-search solution for P has an order of growth of $ n^2 5^n$ . A solution is a vector $ (x_1, x_2, \dots, x_n)$ , where $ x_i \in \{-2, -1, 0, 1, 2\}$ for $ i = 1, \dots, n$ . P is to be solved using Backtracking/Branch and Bound. Answer the following:

1) What is the order of growth of the Backtracking solution?

2) What is the order of growth of the Branch and Bound solution?

3) If all partial solutions are feasible, what is the maximum number of pruned nodes? When does this case happen?

4) If all partial solutions are feasible, what is the minimum number of pruned nodes? When does this case happen?

5) Which is better Backtracking or Branch and Bound? Why?
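The generic backtracking shape the question describes — prune a partial vector when the bound B says it cannot beat the incumbent — might look like this; the objective f and the bound used in the test are illustrative placeholders, not given in the question:

```python
# Each solution component x_i ranges over the 5 values below,
# giving the 5^n factor in the brute-force count.
DOMAIN = (-2, -1, 0, 1, 2)

def backtrack(n, f, bound):
    """Maximize f over vectors (x_1..x_n) with x_i in DOMAIN, pruning a
    partial solution when bound(partial, n) cannot beat the best so far."""
    best = {"value": float("-inf"), "vector": None}

    def extend(partial):
        if len(partial) == n:
            v = f(partial)
            if v > best["value"]:
                best["value"], best["vector"] = v, tuple(partial)
            return
        if bound(partial, n) <= best["value"]:
            return  # prune: no completion can beat the incumbent
        for x in DOMAIN:
            extend(partial + [x])

    extend([])
    return best["value"], best["vector"]
```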

Solving recurrence

How to solve the recursion:

$ T(n) = \begin{cases} T(n/2) + O(1), & \text{if $ n$ is even} \\ T(\lceil n/2 \rceil) + T(\lfloor n/2 \rfloor) + O(1), & \text{if $ n$ is odd} \end{cases} $

I think $ T(n)$ is $ O(\log n)$ . Can somebody show a proof?
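To sanity-check the conjecture, one can tabulate $ T$ numerically, taking the $ O(1)$ term as exactly $ 1$ and $ T(1) = 1$ (both are assumptions, since the question leaves them unspecified):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Assumptions: the O(1) term is exactly 1, base case T(1) = 1.
    if n <= 1:
        return 1
    if n % 2 == 0:
        return T(n // 2) + 1
    return T((n + 1) // 2) + T(n // 2) + 1
```

Values like $ T(8) = 4$ fit a logarithm, but $ T(15) = 13$ and $ T(31) = 19$ grow noticeably faster, so the odd case deserves a closer look before attempting an $ O(\log n)$ proof.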

Different way of solving boolean satisfiability

I know there are commonly used methods for solving the boolean satisfiability problem (such as DPLL, etc.). As far as I know, such methods mostly depend on converting the boolean expression into CNF or DNF form. However, I’m trying to solve the given expressions “as-is”, without conversion to other forms. So, for a boolean expression, I build a tree consisting of many logical gates (AND, OR, NOT, XOR, VAR, CONST) and try to propagate the right values by applying boolean laws.

step 1 – Common boolean laws

In this approach there are some trivial steps: if an AND gate has the value 1, then both inputs must be 1; or if an XOR gate has the value 1 and one input is 0, then the second input must be 1; etc. I call this step “propagation of values”, and it is performed until there are no more values to resolve.
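That “propagation of values” step might be sketched like this (a toy Python version of the idea, not the asker’s actual C++ implementation; only the downward-forcing cases are shown — XOR and the remaining cases need tracking of already-known input values):

```python
def propagate(gate, value, assign):
    """Force `gate` to `value`, recording variable assignments in
    `assign`. Returns False on a conflict (illegal gate state).
    Gates are nested tuples, e.g. ("AND", ("VAR", "x"), ("VAR", "y"))."""
    kind = gate[0]
    if kind == "VAR":
        name = gate[1]
        if name in assign and assign[name] != value:
            return False  # conflict: variable forced to both 0 and 1
        assign[name] = value
        return True
    if kind == "NOT":
        return propagate(gate[1], 1 - value, assign)
    if kind == "AND" and value == 1:
        # AND(...) = 1 forces every input to 1
        return all(propagate(g, 1, assign) for g in gate[1:])
    if kind == "OR" and value == 0:
        # OR(...) = 0 forces every input to 0
        return all(propagate(g, 0, assign) for g in gate[1:])
    return True  # other cases don't force anything by themselves
```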

step 2 – Resolution by conflict

The second step is to assign a value to a random gate that has no value assigned/resolved yet (call it X temporarily), then do “propagation of values” and observe whether any conflicts occur in the expression tree. By a conflict, I mean an illegal state of a gate, such as AND(1,1)->0. If a conflict occurred, this tells us that the value assigned to gate X is illegal, so I can assign the opposite value and go back to the first step. I call this step “resolution by conflict”.

step 3 – Resolution by commons

The third step is to assign the value 0 to a random gate that has no value assigned/resolved yet, perform “propagation of values”, and collect the results (the assigned values of gates in the tree); in other words, take a “snapshot” of the tree after the assignment. Then revert to the state before this step, assign the value 1 to the same gate, perform propagation of values, collect a second “snapshot”, and revert again. Now, if both snapshots are conflict-free, we can look at the values that are the same in both snapshots. These values are independent of the initial gate assignment and valid, so we can safely assign them to their gates. I call this step “resolution by commons”.

I tested these steps (in a C++ program) and they worked pretty well.

Question

So, the question is: what other kinds of steps can be applied in this kind of resolution / BSAT solving? I’m asking because there are still some instances where these steps can’t make further progress (the algorithm “sees” that there is nothing more to do).

If something is unclear, please let me know in the comments. Also, if such an approach is already known, I’d be thankful for references or links to resources on the topic.

Solving NP problems : analogy between the SAT problem and the shortest path problem

In this 2-minute-long video https://www.youtube.com/watch?v=TJ49N6WvT8M (pulled from a free Udacity course on algorithms/theoretical computer science),

whose purpose is to show how a SAT problem can be solved the way a shortest-path problem would be,

I understand that

  • there are n path patterns, one for each of the n boolean variables belonging to an “AND group of clauses”, a “clause” being an “OR group” composed of conditions over some of the n variables.
    example: a SAT problem being “find values of x1, x2, x3 such that clause1 AND clause2 AND clause3 is true”, with clause1 being “x1 OR x2 (is true)”, clause2 being “x1 OR NOT(x3)”, etc.
  • m vertices are added to force the m clauses to be satisfied while keeping a unique “local” shortest path (within a local pattern)

But the thing I don’t understand is: why does each pattern have to have 3(m+1) vertices? Why aren’t 2 diamonds per pattern sufficient?

Thank you for enlightening me.

Solving a modified Travelling Salesman Problem (TSP)

I am trying to solve a modified version of the TSP. In my version, multiple visits to a city are allowed, as long as the path is the shortest; also, only a subset of the cities is compulsory to visit. That is, you can pass through other cities on the way to visiting all the subset cities if that makes the path shorter, but otherwise the other cities can be ignored. For simplicity, the starting city is fixed. I know approximate solutions for the traditional TSP, but I have trouble solving this one. A naive approach would be to try every ordering of the subset cities and check which gives the shortest total path length, but that means trying every permutation of the subset, plus the cost of finding the shortest path between each pair of cities. So, what should I use to solve this problem?
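One standard way to handle this variant (sometimes called the Steiner TSP) is to replace every pairwise distance with the shortest-path distance in the graph, then solve an ordinary TSP over only the compulsory cities. A sketch, with brute force over permutations standing in for whatever TSP method you prefer:

```python
from itertools import permutations

def shortest_tour(n, edges, start, required):
    """Shortest closed walk from `start` visiting all of `required`,
    where multiple visits and pass-through cities are allowed.
    `edges` is a list of undirected (u, v, weight) triples."""
    INF = float("inf")
    # All-pairs shortest paths (Floyd-Warshall); these distances already
    # account for routes through optional cities.
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # Ordinary TSP over the compulsory cities, brute force here.
    rest = [c for c in required if c != start]
    best = INF
    for order in permutations(rest):
        tour = (start,) + order + (start,)
        best = min(best, sum(d[a][b] for a, b in zip(tour, tour[1:])))
    return best
```

For larger compulsory subsets, the permutation loop can be swapped for Held-Karp dynamic programming or any TSP approximation, since after the Floyd-Warshall step the problem is a plain TSP on the subset.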

Solving a peculiar recurence relation

Given recurrence:

$ T(n) = T(n^{\frac{1}{a}}) + 1$ where $ a,b = \omega(1)$ and $ T(b) = 1$

The way I solved it is as follows (using the change-of-variables method, as mentioned in CLRS):

Let $ n = 2^k$

$ T(2^k) = T(2^{\frac{k}{a}}) + 1$

Put $ S(k) = T(2^k)$ which gives $ S(\frac{k}{a}) = T(2^{\frac{k}{a}})$

$ S(k) = S(\frac{k}{a}) + 1$

Now, applying the Master Theorem,

$ S(k) = \Theta(log_2(k))$

$ T(2^k) = \Theta(log_2(k))$

$ T(n) = \Theta(log_2log_2(n))$

I believe my method is incorrect. Can anyone help?

Thanks