Solving a maze with DFS vs “wall following”

Consider a problem where a “robot cleaner” is placed in a room modeled as a grid. Each cell in the grid is either empty or blocked, and all empty cells are connected, meaning the robot can reach every empty cell regardless of its starting position.

We are told that the robot cleaner can only take one of four actions:

  • move forward
  • turn left (90 degrees, without moving)
  • turn right (90 degrees, without moving)
  • clean the current cell in the grid

We are asked to design an algorithm for the robot to clean the entire room.


My question is: can this problem be framed as a maze-solving problem? I mention this because a common maze-solving strategy (if the maze is simply connected) is to be a “wall follower” (e.g. always try right first, or always try left first), and I wonder whether wall following would work here.

More generally, why would “wall following” be a good strategy for either problem? Isn’t it enough to do DFS (with backtracking), exploring the directions still left to try at each grid position in an arbitrary order? (A sketch of such a DFS follows.)
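For concreteness, here is a minimal sketch of that DFS. The `Robot` interface is hypothetical (modeled on the four actions above) and is backed by a tiny grid simulator I made up so the sketch runs; `move()` is assumed to return false, without moving, when the cell ahead is blocked.

```cpp
#include <cstdio>
#include <functional>
#include <set>
#include <utility>
#include <vector>

// Hypothetical robot interface (modeled on the problem statement), backed by
// a small grid simulator so the sketch is runnable.
struct Robot {
    std::vector<std::vector<int>> grid;  // 1 = empty, 0 = blocked
    int r = 0, c = 0, d = 0;             // position, heading (0=up,1=right,2=down,3=left)
    bool move() {                        // returns false if the cell ahead is blocked
        static const int dr[4] = {-1, 0, 1, 0}, dc[4] = {0, 1, 0, -1};
        int nr = r + dr[d], nc = c + dc[d];
        if (nr < 0 || nr >= (int)grid.size() || nc < 0 ||
            nc >= (int)grid[0].size() || grid[nr][nc] == 0)
            return false;
        r = nr; c = nc;
        return true;
    }
    void turnLeft()  { d = (d + 3) % 4; }
    void turnRight() { d = (d + 1) % 4; }
    void clean()     { std::printf("cleaned (%d,%d)\n", r, c); }
};

// DFS with backtracking. The robot only knows coordinates relative to its
// starting cell; the exploration order of the four directions is arbitrary.
void cleanRoom(Robot& robot) {
    std::set<std::pair<int, int>> visited;
    const int dr[4] = {-1, 0, 1, 0}, dc[4] = {0, 1, 0, -1};
    std::function<void(int, int, int)> dfs = [&](int r, int c, int d) {
        visited.insert({r, c});
        robot.clean();
        for (int k = 0; k < 4; ++k) {
            int nd = (d + k) % 4;        // direction the robot currently faces
            int nr = r + dr[nd], nc = c + dc[nd];
            if (!visited.count({nr, nc}) && robot.move()) {
                dfs(nr, nc, nd);
                robot.turnRight(); robot.turnRight();  // backtrack: turn around,
                robot.move();                          // step back,
                robot.turnRight(); robot.turnRight();  // restore heading nd
            }
            robot.turnRight();           // advance to the next candidate direction
        }
        // After four right turns the robot faces direction d again.
    };
    dfs(0, 0, 0);
}

int main() {
    Robot robot;
    robot.grid = {{1, 1, 1}, {1, 0, 1}, {1, 1, 1}};  // a ring of empty cells
    cleanRoom(robot);
}
```

Under the stated connectivity assumption, this visits and cleans every empty cell, and the direction order genuinely does not matter.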

I want a solution for this algorithm, please

A Maximization Problem P of n items has a bounding function B. A brute-force/exhaustive-search solution for P has an order of growth of $ n^2 5^n$ . A solution is a vector $ (x_1, x_2, \ldots, x_n)$ , where $ x_i \in \{-2, -1, 0, 1, 2\}$ for $ i = 1, \ldots, n$ . P is to be solved using Backtracking/Branch and Bound (a generic backtracking skeleton is sketched after the questions). Answer the following: 1) What is the order of growth of the Backtracking solution?

2) What is the order of growth of the Branch and Bound solution?

3) If all partial solutions are feasible, what is the maximum number of pruned nodes? When does this case happen?

4) If all partial solutions are feasible, what is the minimum number of pruned nodes? When does this case happen?

5) Which is better, Backtracking or Branch and Bound? Why?
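For context, here is a generic sketch of the backtracking search over such vectors. The objective f and the bound B below are placeholders I made up for illustration; they are not part of the problem.

```cpp
#include <cstdio>
#include <vector>

// Generic backtracking skeleton for a maximization problem over vectors
// (x_1, ..., x_n) with x_i in {-2, -1, 0, 1, 2}. In the worst case every one
// of the 5^n leaves is visited; pruning with the bound B is what can cut that.
const int CHOICES[5] = {-2, -1, 0, 1, 2};

// Placeholder objective: maximize the sum of the entries.
int f(const std::vector<int>& x) {
    int s = 0;
    for (int v : x) s += v;
    return s;
}

// Placeholder bound: the best value reachable by extending the partial
// vector x (each remaining position contributes at most +2).
int B(const std::vector<int>& x, int n) {
    return f(x) + 2 * (n - (int)x.size());
}

void backtrack(std::vector<int>& x, int n, int& best) {
    if ((int)x.size() == n) {        // complete solution: a leaf
        if (f(x) > best) best = f(x);
        return;
    }
    if (B(x, n) <= best) return;     // prune: the bound cannot beat the incumbent
    for (int v : CHOICES) {
        x.push_back(v);
        backtrack(x, n, best);
        x.pop_back();
    }
}

int main() {
    int n = 5, best = -1000000;
    std::vector<int> x;
    backtrack(x, n, best);
    std::printf("best = %d\n", best);  // prints best = 10 for this toy f
}
```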

Solving recurrence

How do I solve the recurrence:

$ T(n) = \begin{cases} T(n/2) + O(1), & \text{if } n \text{ is even} \\ T(\lceil n/2 \rceil) + T(\lfloor n/2 \rfloor) + O(1), & \text{if } n \text{ is odd} \end{cases}$

I think $ T(n)$ is $ O(\log n)$ . Can somebody show a proof?
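As a sanity check before proving anything, one can evaluate the recurrence exactly. This sketch assumes every O(1) term is exactly 1 and T(1) = 1; the memo table only avoids recomputing repeated arguments and does not change the value of T(n).

```cpp
#include <cstdint>
#include <cstdio>
#include <map>

// Exact evaluator for the recurrence with every O(1) term taken as 1
// and T(1) = 1.
std::map<uint64_t, uint64_t> memo;

uint64_t T(uint64_t n) {
    if (n <= 1) return 1;
    auto it = memo.find(n);
    if (it != memo.end()) return it->second;
    uint64_t r = (n % 2 == 0)
        ? T(n / 2) + 1                    // even case
        : T((n + 1) / 2) + T(n / 2) + 1;  // odd case: ceil + floor
    memo[n] = r;
    return r;
}

int main() {
    // Compare T(n) against log2(n) = 20 for a few values near 2^20.
    for (uint64_t n = (1u << 20) - 1; n <= (1u << 20) + 1; ++n)
        std::printf("T(%llu) = %llu\n",
                    (unsigned long long)n, (unsigned long long)T(n));
}
```

If the printed values grow noticeably faster than log2(n) for the odd-heavy inputs, the conjecture needs revisiting before attempting a proof.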

Different way of solving boolean satisfiability

I know there are commonly used methods for solving the boolean satisfiability problem (such as DPLL). As far as I know, such methods mostly depend on converting the boolean expression into CNF or DNF. However, I’m trying to solve the given expressions “as-is”, without conversion to other forms. So, for a boolean expression, I’m building a tree consisting of many logical gates (AND, OR, NOT, XOR, VAR, CONST) and trying to propagate the right values by applying boolean laws.

step 1 – Common boolean laws

In this approach there are some trivial steps: if an AND gate has the value 1, then both of its inputs must be 1; if an XOR gate has the value 1 and one input is 0, then the other input must be 1; and so on. I call this step “propagation of values”, and it is repeated until there are no more values to resolve.
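As an illustration, here is a minimal sketch of one such propagation rule (the XOR case; the names are mine, not from any particular solver):

```cpp
#include <cstdio>
#include <optional>

// One local propagation rule: for XOR(a, b) = out, whenever two of the three
// values are known, the third is forced. Returns true if it deduced something.
using Val = std::optional<bool>;

bool propagateXor(Val& a, Val& b, Val& out) {
    bool changed = false;
    if (a && b && !out) { out = bool(*a ^ *b); changed = true; }
    if (a && out && !b) { b = bool(*a ^ *out); changed = true; }
    if (b && out && !a) { a = bool(*b ^ *out); changed = true; }
    return changed;
}

int main() {
    Val a, b = false, out = true;              // XOR(a, 0) = 1 forces a = 1
    while (propagateXor(a, b, out)) {}
    std::printf("a = %d\n", a ? int(*a) : -1); // prints a = 1
}
```

A full step 1 would loop rules like this one over every gate in the tree until a fixed point is reached.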

step 2 – Resolution by conflict

The second step is to assign a value to a random gate that does not yet have an assigned/resolved value (call it X temporarily), then do “propagation of values” and observe whether any conflicts occur in the expression tree. By conflict, I mean an illegal state of a gate, such as AND(1,1) -> 0. If a conflict occurs, the value assigned to gate X is illegal, so I can assign the opposite value and go back to the first step. I call this step “resolution by conflict”.

step 3 – Resolution by commons

The third step is to assign the value 0 to a random gate that does not yet have a resolved value, perform “propagation of values”, and collect the results (the assigned values of gates in the tree); in other words, take a “snapshot” of the tree after the assignment. Then go back to the state before this step, assign the value 1 to the same gate, propagate again, collect a second “snapshot”, and restore the original state once more. Now, if both snapshots are conflict-free, we can look at the values that are the same in the first and the second snapshot. These values are independent of the initial gate assignment and valid, so we can safely assign them to their gates. I call this step “resolution by commons”.
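A standalone sketch of this step follows. The `propagate` callback stands in for step 1 and is assumed to return no value on conflict; gates are indexed, an unassigned gate holds no value, and all names are illustrative.

```cpp
#include <functional>
#include <optional>
#include <vector>

// "Resolution by commons" over an indexed gate state.
using State = std::vector<std::optional<bool>>;
using Propagator = std::function<std::optional<State>(State)>;

State resolveByCommons(const State& state, int gate, const Propagator& propagate) {
    State s0 = state, s1 = state;
    s0[gate] = false;                  // snapshot 1: try 0
    s1[gate] = true;                   // snapshot 2: try 1
    auto r0 = propagate(s0);
    auto r1 = propagate(s1);
    if (!r0 && !r1) return state;      // both conflict: unsatisfiable from here
    if (!r0) return *r1;               // 0 conflicts, so 1 is forced (this is step 2)
    if (!r1) return *r0;               // 1 conflicts, so 0 is forced
    State out = state;                 // both conflict-free: keep only the agreements
    for (size_t i = 0; i < out.size(); ++i)
        if ((*r0)[i].has_value() && (*r0)[i] == (*r1)[i])
            out[i] = (*r0)[i];
    return out;
}
```

If exactly one branch conflicts, the step degenerates into “resolution by conflict”; a caller would apply it to each unresolved gate in turn and rerun step 1 after any forced assignment.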

I have tested these steps (in a C++ program) and they worked pretty well.

Question

So, the question is: what other kinds of steps can be applied in this style of resolution / SAT solving? I’m asking because there are still some instances where these steps cannot make further progress (the algorithm “sees” nothing more to do).

If something is unclear, please let me know in the comments. Also, if such an approach is already known, I would be thankful for references or links to resources on the topic.

Solving NP problems: analogy between the SAT problem and the shortest path problem

In this two-minute video https://www.youtube.com/watch?v=TJ49N6WvT8M (taken from a free Udacity course on algorithms / theoretical computer science),

whose purpose is to show how a SAT problem can be solved the way a shortest path problem would be,

I understand that

  • there are n path patterns, one for each of the n boolean variables belonging to an “AND group of clauses”, a “clause” being an “OR group” composed of conditions over some of the n variables.
    Example: a SAT problem might be “find values of x1, x2, x3 such that clause1 AND clause2 AND clause3 is true”, with clause1 being x1 OR x2 (is true), clause2 being x1 OR NOT(x3), etc.
  • m vertices are added to force the m clauses to be satisfied while keeping a unique “local” shortest path (within a local pattern)

But the thing I don’t understand is: why does each pattern have to have 3(m+1) vertices? Why aren’t 2 diamonds per pattern sufficient?

Thank you for enlightening me.

Solving a modified Travelling Salesman Problem (TSP)

I am trying to solve a modified version of the TSP. In my version, multiple visits to a city are allowed as long as the path is the shortest, and only a subset of the cities is compulsory to visit: you can pass through other cities on the way to the compulsory ones if that makes the path shorter, but otherwise they can be ignored. For simplicity, the starting city is fixed. I know approximate solutions for the traditional TSP, but I have trouble solving this one. A naive approach is to try every ordering of the compulsory cities and keep the one with the shortest total path length, but that is factorial in the number of compulsory cities, plus the cost of finding the shortest path for each pair of cities. So, what should I use to solve this problem?
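One standard way to attack this variant is a sketch under my own assumptions (the tour returns to the fixed start, and the example graph is made up): since revisits are allowed, first replace the graph by its metric closure via all-pairs shortest paths, then run Held–Karp on the compulsory cities only.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

const long long INF = 1e18;

int main() {
    // Example: 4 cities, city 0 is the fixed start; compulsory = {0, 2, 3}.
    int n = 4;
    std::vector<std::vector<long long>> d = {
        {0, 1, INF, 4},
        {1, 0, 1, INF},
        {INF, 1, 0, 1},
        {4, INF, 1, 0},
    };
    // Metric closure via Floyd-Warshall: d[i][j] = shortest-path cost.
    for (int k = 0; k < n; ++k)
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                if (d[i][k] < INF && d[k][j] < INF)
                    d[i][j] = std::min(d[i][j], d[i][k] + d[k][j]);

    std::vector<int> comp = {0, 2, 3};  // compulsory cities, comp[0] = start
    int k = (int)comp.size();
    // Held-Karp: dp[S][i] = cheapest walk from comp[0] visiting the set S of
    // compulsory cities and ending at comp[i]. O(2^k * k^2) time overall.
    std::vector<std::vector<long long>> dp(1 << k, std::vector<long long>(k, INF));
    dp[1][0] = 0;
    for (int S = 1; S < (1 << k); ++S)
        for (int i = 0; i < k; ++i) {
            if (dp[S][i] == INF || !(S & (1 << i))) continue;
            for (int j = 0; j < k; ++j)
                if (!(S & (1 << j)))
                    dp[S | (1 << j)][j] = std::min(
                        dp[S | (1 << j)][j], dp[S][i] + d[comp[i]][comp[j]]);
        }
    // Cheapest walk covering all compulsory cities and returning to the start.
    long long best = INF;
    for (int i = 0; i < k; ++i)
        if (dp[(1 << k) - 1][i] < INF)
            best = std::min(best, dp[(1 << k) - 1][i] + d[comp[i]][comp[0]]);
    std::printf("best tour cost: %lld\n", best);
}
```

The key point is that after taking the metric closure, the optional cities disappear from the search entirely; only the compulsory ones enter the exponential part.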

Solving a peculiar recurrence relation

Given the recurrence:

$ T(n) = T(n^{\frac{1}{a}}) + 1$ where $ a,b = \omega(1)$ and $ T(b) = 1$

The way I solved it is like this (using the change-of-variables method, as mentioned in CLRS):

Let $ n = 2^k$

$ T(2^k) = T(2^{\frac{k}{a}}) + 1$

Put $ S(k) = T(2^k)$ which gives $ S(\frac{k}{a}) = T(2^{\frac{k}{a}})$

$ S(k) = S(\frac{k}{a}) + 1$

Now, applying the Master Theorem:

$ S(k) = \Theta(\log_2 k)$

$ T(2^k) = \Theta(\log_2 k)$

$ T(n) = \Theta(\log_2 \log_2 n)$

I believe my method is incorrect. Can anyone help?

Thanks
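For comparison, here is a direct unrolling (a sketch, assuming the iteration stops exactly at the base case $ T(b) = 1$ ): after $ t$ applications the argument is $ n^{1/a^t}$ , and $ n^{1/a^t} = b$ exactly when $ a^t = \log_b n$ , i.e. $ t = \log_a \log_b n$ . This suggests $ T(n) = \Theta(\log_a \log_b n)$ ; the $ \Theta(\log_2 \log_2 n)$ answer treats $ a$ as a constant in the Master Theorem step, which is questionable here because $ a = \omega(1)$ .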

An Exact Method for Solving Small Instances of XORSAT

I have a couple of small instances of XORSAT for which I am to design and implement an exact method. However, there are a few catches. It is guaranteed that an answer always exists, but I need to find the one solution with the minimal (non-zero) number of TRUE variables.

As I understand it, XORSAT can be solved in polynomial time through Gaussian elimination, but I simply cannot see how to incorporate the special condition of this problem (the minimal number of TRUE variables), and I’m wondering if I could get some help on that.

The instances of XORSAT that I have are not completely random: they all consist of a number of variables that, when XOR’d together, always yield FALSE. I.e., all of the clauses are of the form $ x_1 \oplus x_2 \oplus \ldots \oplus x_i = 0$ , which translates into the equation $ x_1 + x_2 + \ldots + x_i = 0 \pmod 2$ .

At first, I tried a greedy method: take the two most repeated variables across all of the clauses, force the value TRUE on them, propagate their values through all of the clauses, and repeat on the remaining unsatisfied clauses. But that was fundamentally flawed.

Any help is tremendously appreciated.
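One exact approach for small instances, as a sketch (the encoding, names, and example system below are all mine): run Gaussian elimination over GF(2) to describe the solution space, then enumerate assignments of the free variables, which is feasible only while their number stays small, and keep the minimum-weight non-zero solution.

```cpp
#include <bitset>
#include <cstdio>
#include <utility>
#include <vector>

const int MAXV = 64;                      // max number of variables (small instances)

int main() {
    int n = 4;                            // number of variables
    // Each row encodes a homogeneous equation x_i1 + x_i2 + ... = 0 (mod 2)
    // as a bitmask over the n variables.
    std::vector<std::bitset<MAXV>> rows = {
        std::bitset<MAXV>(0b0111),        // x0 + x1 + x2 = 0
        std::bitset<MAXV>(0b1100),        // x2 + x3 = 0
    };

    // Gauss-Jordan elimination over GF(2): pivot on each column in turn.
    std::vector<int> pivotOf(n, -1);      // pivotOf[c] = row with pivot in column c
    int r = 0;
    for (int c = 0; c < n && r < (int)rows.size(); ++c) {
        int p = -1;
        for (int i = r; i < (int)rows.size(); ++i)
            if (rows[i][c]) { p = i; break; }
        if (p < 0) continue;              // no pivot: column c is a free variable
        std::swap(rows[r], rows[p]);
        for (int i = 0; i < (int)rows.size(); ++i)
            if (i != r && rows[i][c]) rows[i] ^= rows[r];
        pivotOf[c] = r++;
    }
    std::vector<int> freeVars;
    for (int c = 0; c < n; ++c)
        if (pivotOf[c] < 0) freeVars.push_back(c);

    // Enumerate all non-zero assignments of the free variables (2^f of them,
    // so this only works while f is small); pivot variables are then forced.
    int best = n + 1;
    std::bitset<MAXV> bestSol;
    for (long long m = 1; m < (1LL << freeVars.size()); ++m) {
        std::bitset<MAXV> x;
        for (size_t i = 0; i < freeVars.size(); ++i)
            if (m >> i & 1) x[freeVars[i]] = 1;
        for (int c = 0; c < n; ++c)
            if (pivotOf[c] >= 0) {
                // After Gauss-Jordan, this row relates x_c only to free vars.
                std::bitset<MAXV> row = rows[pivotOf[c]];
                row[c] = 0;
                x[c] = (row & x).count() & 1;
            }
        int w = (int)x.count();
        if (w > 0 && w < best) { best = w; bestSol = x; }
    }
    if (best <= n) std::printf("min weight = %d\n", best);
    else std::printf("only the all-zero solution exists\n");
}
```

Note that finding a minimum-weight solution is the hard part in general; the enumeration above simply exploits the smallness of the instances rather than avoiding that difficulty.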

Solving a problem with induction

The original question is the following:

prove that $ 2 \cdot \sum_{i=0}^{n-1} 3^{i} = 3^n - 1$ for all $ n \geq 1$

I know that I have to prove this by induction and have successfully done the base case. My IH is the following:

Assume $ 2 \cdot \sum_{i=0}^{n-1} 3^{i} = 3^n - 1$ holds for some $ n \geq 1$ ; to prove: $ 2 \cdot \sum_{i=0}^{n} 3^{i} = 3^{n+1} - 1$

then I did this: $ 2 \cdot \sum_{i=0}^{n} 3^{i} = 2 \cdot \sum_{i=0}^{n-1} 3^{i} + 2 \cdot 3^n$

then by my IH: $ 2 \cdot \sum_{i=0}^{n-1} 3^{i} + 2 \cdot 3^n = 3^n - 1 + 2 \cdot 3^n$

After this I struggle to continue. I know I have to find a way to rewrite it to $ 3^{n+1} - 1$ , and I know this is equal to $ 3 \cdot 3^n - 1$ , but I don’t see how to get from the one to the other. Can anyone help?
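For reference, the remaining algebra is a one-liner under the reconstruction above: $ 3^n - 1 + 2 \cdot 3^n = (1 + 2) \cdot 3^n - 1 = 3 \cdot 3^n - 1 = 3^{n+1} - 1$ , which is exactly the target.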