Does DFS have better constants/complexity than Backtracking on a Graph?

I came to know through some examples that DFS and backtracking aren't exactly the same (a misconception I had held for a long time). So now my question is: since backtracking retreats one node at a time, while DFS on a graph may jump back over several nodes at once, is DFS a faster algorithm? If so, is the advantage only in the constants, or also in the asymptotic complexity?

The class of grammars recognizable by backtracking recursive-descent parsers

It is easy to show that there exists a grammar that can be parsed by a recursive-descent parser with backtracking but is not an $\text{LL}(k)$ grammar (consider any grammar with a production featuring two alternatives starting with $k$ occurrences of the same terminal).
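To make the claim concrete, here is a minimal sketch (my own illustration, not from the question) of a backtracking recursive-descent recognizer for the grammar S → aSa | a. Both alternatives begin with arbitrarily many a's, so the grammar is not LL(k) for any k, yet trying the alternatives and backtracking handles it. The recognizer returns every position reachable after matching S, which is one simple way to implement full backtracking; memoizing `parse_S` would tame the worst-case blowup.

```python
def parse_S(s, i):
    """Return the set of positions reachable after matching S starting at i."""
    results = set()
    # Alternative 1: S -> 'a' S 'a'
    if i < len(s) and s[i] == 'a':
        for j in parse_S(s, i + 1):
            if j < len(s) and s[j] == 'a':
                results.add(j + 1)
    # Alternative 2: S -> 'a'  (tried after alternative 1 fails to
    # lead anywhere -- this retry is the backtracking)
    if i < len(s) and s[i] == 'a':
        results.add(i + 1)
    return results

def recognizes(s):
    # the whole input is in the language iff some parse consumes it all
    return len(s) in parse_S(s, 0)

print(recognizes("aaa"))  # odd-length strings of a's are accepted
print(recognizes("aa"))   # even-length strings are rejected
```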

My question is the following: Is there an identifiable strict superset of the $\bigcup_{k \in \mathbb{N}} \text{LL}(k)$ grammars that can be parsed by a backtracking recursive-descent parser, regardless of complexity?

If yes, is the maximal strict superset also identifiable?

Backtracking – How do explicit constraints depend on problem instance in backtracking?

I was reading the chapter on Backtracking in Fundamentals of Computer Algorithms by Horowitz and Sahni, and I came across the following line.

The explicit constraints depend on the particular instance I of the problem being solved.

I don't understand why this statement is true. My understanding is that the set S (to which all the elements of the n-tuple belong) is the same for every instance of the problem, but this statement suggests that it depends on the particular instance.

Backtracking vs Branch-and-Bound

I’m getting a bit confused about the three terms and their differences: Depth-First-Search (DFS), Backtracking, and Branch-and-Bound.

What confuses me:

  • Stack Overflow: Difference between ‘backtracking’ and ‘branch and bound’, Abhishek Dey: “Backtracking is [always] used to find all possible solutions” and “[Branch and Bound] traverse[s] the tree in any manner, DFS or BFS”.
  • Branch-and-Bound uses DFS or BFS, but usually BFS. At the same time, they say that B&B uses a queue, which would mean that BFS is used. So this source seems to be inconsistent with itself.
  • Constrained optimization: “Constraint optimization can be solved by branch and bound algorithms. These are backtracking algorithms […]”

Here is what I think they are. As it is a question about terminology where I already have an idea what the answer could be, I expect sources.

Concrete and a bit smaller questions:

  1. If we use other tree traversals than DFS (e.g. BFS), can it still be Backtracking?
  2. If we use other tree traversals than BFS (e.g. DFS), can it still be B&B?
  3. If we have a constraint satisfaction problem (CSP) and not a constraint optimization problem (COP), can it still be B&B?
  4. If we have a COP and not a CSP, can it still be Backtracking?
  5. Is B&B a special Backtracking algorithm (or vice versa)?

Depth-First Search

Depth-First-Search (DFS) is a way to traverse a graph:

    def dfs(node):
        yield node
        for child in node.children:
            yield from dfs(child)


The following graph would be traversed in the order A, B, D, H, E, C, F, I, G:

        A
       / \
      B   C
     / \ / \
    D  E F  G
    |    |
    H    I

Breadth-First Search

BFS is another way to traverse a graph. For the example graph, the BFS traversal is [A, B, C, D, E, F, G, H, I]
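For concreteness, here is a minimal BFS sketch over the same kind of node structure as the dfs generator above, using a FIFO queue (collections.deque); the Node class is just scaffolding to rebuild the example tree.

```python
from collections import deque

class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def bfs(root):
    queue = deque([root])        # FIFO queue drives level-by-level order
    while queue:
        node = queue.popleft()
        yield node
        queue.extend(node.children)

# the example tree from above
H, I = Node("H"), Node("I")
D, E = Node("D", [H]), Node("E")
F, G = Node("F", [I]), Node("G")
B, C = Node("B", [D, E]), Node("C", [F, G])
A = Node("A", [B, C])

print([n.name for n in bfs(A)])  # ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']
```

Swapping the deque for a stack (append/pop from the same end) turns this into an iterative DFS, which is the whole difference between the two traversals.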


Backtracking

Backtracking is a general concept for solving discrete constraint satisfaction problems (CSPs). It uses DFS. Once it reaches a point where it is clear that the solution cannot be completed, it goes back to the last point where there was a choice. This way it iterates over all potential solutions, sometimes abandoning a branch a bit early.
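As an illustration of "DFS plus abandoning dead branches", here is a minimal backtracking sketch for the classic n-queens CSP (an example of my own, not from any of the quoted sources): extend a partial solution row by row, and back up as soon as a constraint is violated.

```python
def n_queens(n, cols=()):
    """cols[r] is the column of the queen in row r; returns one solution or None."""
    if len(cols) == n:               # complete assignment: a solution
        return cols
    row = len(cols)
    for col in range(n):
        # constraint check: no shared column or diagonal with earlier rows
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(cols)):
            result = n_queens(n, cols + (col,))
            if result is not None:
                return result
    return None                      # dead end: backtrack to the caller

print(n_queens(4))
```

The "backtracking" is just the return from a recursive call that found no extension; the for loop then tries the next choice at the previous level.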


Branch-and-Bound

Branch-and-Bound (B&B) is a concept for solving discrete constrained optimization problems (COPs). These are similar to CSPs, but besides having the constraints they have an optimization criterion. In contrast to backtracking, B&B uses Breadth-First Search.

One part of the name, the bound, refers to the way B&B prunes the space of possible solutions: a heuristic computes a bound on the best objective value attainable within a subtree. If that bound shows the subtree cannot improve on the best solution found so far, the subtree can be discarded.
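To make the bounding step concrete, here is a minimal branch-and-bound sketch for the 0/1 knapsack problem (an illustrative COP of my own choosing, not from the quoted sources). It happens to branch depth-first, which itself bears on questions 1–2 above: the defining feature is the bound, not the traversal order.

```python
def knapsack(values, weights, capacity):
    # sort items by value density so the fractional bound below is valid
    items = sorted(zip(values, weights),
                   key=lambda vw: vw[0] / vw[1], reverse=True)
    best = 0

    def bound(i, value, room):
        # optimistic estimate: fill the remaining room fractionally
        for v, w in items[i:]:
            if w <= room:
                value, room = value + v, room - w
            else:
                return value + v * room / w
        return value

    def branch(i, value, room):
        nonlocal best
        best = max(best, value)
        if i == len(items) or bound(i, value, room) <= best:
            return                              # prune: bound can't beat incumbent
        v, w = items[i]
        if w <= room:
            branch(i + 1, value + v, room - w)  # include item i
        branch(i + 1, value, room)              # exclude item i

    branch(0, 0, capacity)
    return best

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```

Delete the `bound(...) <= best` test and this degenerates into plain backtracking over all subsets, which is exactly the relationship the question is probing.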

Besides that, I don’t see a difference to Backtracking.

How to generate all combinations given an array of elements using backtracking?

Given an array, generate all combinations

For example:

Input: {1,2,3}

Output: {1}, {2}, {3}, {1,2}, {2,1}, {1,3}, {3,1}, {2,3}, {3,2}, {1,2,3}, {1,3,2}, {2,1,3}, {2,3,1}, {3,1,2}, {3,2,1}

I am practicing Backtracking algorithms and I think I understand the general idea of backtracking. You are essentially running a DFS to find the path that satisfies a condition. If you hit a node that fails the condition, exit the current node and start at the previous node.

However, I am having trouble understanding how to implement the traverse part of the implicit tree.

My initial idea is to traverse down the left most path which will give me {1}, {1,2}, {1,2,3}. However, once I backtrack to 1, how do I continue adding the 3 to get {1,3} and {1,3,2} afterwards? Even if I have 2 pointers, I would need it to point to the 2 to eventually get {1,3,2}.

Am I approaching this problem correctly by drawing this implicit tree and trying to code it? Or is there another approach I should take?

I am not looking for code to solve this, rather I am looking for some insight on solving these kinds of questions.
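One way to see the "continue adding the 3" step: after the algorithm pops 2 from the current path, the for loop at that level simply advances to the next unused element, so no extra pointers are needed. A minimal choose/explore/un-choose sketch (my own, assuming the desired output is every ordered arrangement of every non-empty subset, as in the example):

```python
def arrangements(elems):
    out = []
    path, used = [], [False] * len(elems)

    def backtrack():
        if path:
            out.append(list(path))   # record every non-empty partial path
        for i, e in enumerate(elems):
            if not used[i]:
                used[i] = True
                path.append(e)       # choose
                backtrack()          # explore deeper
                path.pop()           # un-choose: this is the backtrack
                used[i] = False

    backtrack()
    return out

print(arrangements([1, 2, 3]))       # 15 results: {1}, {1,2}, {1,2,3}, {1,3}, ...
```

So drawing the implicit tree is the right instinct; the tree never needs to be built explicitly, because the recursion stack plus the `used` array is the tree traversal.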


How to count number of consistency checks in CSP backtracking algorithm?

    function BACKTRACKING-SEARCH(csp) returns a solution, or failure
        return BACKTRACK({}, csp)

    function BACKTRACK(assignment, csp) returns a solution, or failure
        if assignment is complete then return assignment
        var ← SELECT-UNASSIGNED-VARIABLE(VARIABLES[csp], assignment, csp)
        for each value in ORDER-DOMAIN-VALUES(var, assignment, csp) do
            if value is consistent with assignment then
                add {var = value} to assignment
                result ← BACKTRACK(assignment, csp)
                if result ≠ failure then return result
                remove {var = value} and inferences from assignment
        return failure

This is the backtracking algorithm. How do I count the number of consistency checks in this algorithm?
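One way is to increment a counter every time the "value is consistent with assignment" test runs. A minimal Python sketch on a tiny map-coloring CSP (the problem instance is my own illustration, not from the question):

```python
checks = 0   # global counter of consistency checks

def consistent(var, value, assignment, neighbors):
    global checks
    checks += 1   # one consistency check per (var, value) test
    return all(assignment.get(n) != value for n in neighbors[var])

def backtrack(assignment, variables, domains, neighbors):
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment, neighbors):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, neighbors)
            if result is not None:
                return result
            del assignment[var]    # undo and try the next value
    return None

variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
solution = backtrack({}, variables, domains, neighbors)
print(solution, "consistency checks:", checks)
```

A finer-grained metric would count each individual pair comparison inside `consistent` instead of each call; which one is wanted depends on the assignment.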

Is there any advantage of using an Integer Linear Program over Backtracking in a combinatorial optimization problem?


I saw this Gurobi post that uses Integer Linear Programming to solve the traveling salesman problem.

I compared its runtime with a backtracking algorithm written in Python, and the integer linear program turned out to be faster.

Is the integer linear program faster because Gurobi is implemented in C++, or because it uses heuristics, pruning, and other optimizations to improve its runtime?

Backtracking in DFA

Is it true that backtracking is allowed in a deterministic finite automaton (as mentioned in many comparisons between DFA and NDFA)? If so, how is that possible when each transition in a DFA leads to a single state?

Backtracking with big inputs

I'm doing a Magic Square problem, and I'm using the backtracking technique to do it.

The magic square problem asks for an input, the size n of the square. You generate this square and fill the cells with integers so that the sum of every row and every column is exactly the magic number:

    magic number = n * ((n * n) + 1) / 2

You can't use the same number twice: once a number is used, it can't be used again. The numbers you may use are between 1 and the magic number itself.

I have no trouble with a 3×3 square, but a 5×5 square needs a lot of time to compute.

Thinking about it, the algorithm tries every number from 1 to 65 in this case, and checks some validations: the number is not already used, the partial sums of the rows and columns do not exceed 65, and in the last column and last row both sums are exactly 65.

So that takes tons of time.

So the question is: is backtracking really a bad idea when the input is big?
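For comparison, here is a minimal sketch of the standard formulation (my own assumption: cells hold the distinct numbers 1..n², which is what makes the constant n(n²+1)/2 the row/column sum), pruning a partial row or column as soon as it can no longer hit the target. It solves n = 3 instantly, but the branching factor grows so fast that plain backtracking still struggles for larger n, which matches the behavior described above; the usual fix is stronger pruning or a different method, not a faster loop.

```python
def solve(n):
    magic = n * (n * n + 1) // 2
    grid = [[0] * n for _ in range(n)]
    used = [False] * (n * n + 1)

    def ok(r, c):
        row = sum(grid[r])
        col = sum(grid[i][c] for i in range(n))
        # a completed row/column must hit the magic sum exactly;
        # a partial one must stay strictly below it (cells hold >= 1)
        if c == n - 1 and row != magic:
            return False
        if c < n - 1 and row >= magic:
            return False
        if r == n - 1 and col != magic:
            return False
        if r < n - 1 and col >= magic:
            return False
        return True

    def place(k):
        if k == n * n:
            return True
        r, c = divmod(k, n)
        for v in range(1, n * n + 1):
            if not used[v]:
                grid[r][c] = v
                used[v] = True
                if ok(r, c) and place(k + 1):
                    return True
                grid[r][c] = 0      # undo the choice: backtrack
                used[v] = False
        return False

    return grid if place(0) else None

print(solve(3))
```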