Why does Mathematica crash at a certain recursion depth?

If I enter

Block[{$RecursionLimit = 70000}, x = x + 1] 

I get

$RecursionLimit: Recursion depth of 70000 exceeded during evaluation of 1+x. 

But at $RecursionLimit = 80000, Mathematica crashes (i.e. it goes unresponsive for a little while and then clears all variables). Why is this? Is there some limiting factor that I can increase to go even further?
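I can't say with certainty what Mathematica does internally, but the usual explanation is that $RecursionLimit is a soft, interpreter-enforced limit, while the crash comes from exhausting the operating system's hard C stack once the soft limit is raised past it. Python behaves analogously, which makes for a safe demonstration of the soft limit (probing the hard limit the same way would crash the process):

```python
import sys

def depth(n=0):
    # recurse until the interpreter's soft limit trips, then report how deep we got
    try:
        return depth(n + 1)
    except RecursionError:
        return n

sys.setrecursionlimit(2000)   # analogous to $RecursionLimit
print(0 < depth() < 2000)     # True: the soft limit stopped us safely
```

Setting the soft limit far above what the hard stack can hold removes this safety net, which is the usual cause of this kind of crash.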

Arbitrary depth nested for-loops without recursion

Suppose I have an array of n values I want to apply nested for-loops over to an arbitrary depth m.

const array = [1, 2, 3];

// 2-depth for-loop
for (const i of array) {
  for (const j of array) {
    // do the thing
  }
}

// 3-depth for-loop
for (const i of array) {
  for (const j of array) {
    for (const k of array) {
      // do the thing
    }
  }
}

The obvious solution is to use recursion. In JavaScript/TypeScript, a generator lends itself well here. For an example problem, let’s calculate the probability distribution of the sum of rolling m 6-sided dice.

type Reducer<T, TResult> = (current: T, accumulator?: TResult) => TResult;

function* nestForLoopRecursive<T, TResult>(
  array: T[],
  depth: number,
  reduce: Reducer<T, TResult>
): Generator<TResult> {
  for (const value of array) {
    if (depth === 1) {
      yield reduce(value);
    } else {
      for (const next of nestForLoopRecursive(array, depth - 1, reduce)) {
        yield reduce(value, next);
      }
    }
  }
}

function reduceSum(current: number, prev = 0): number {
  return current + prev;
}

const pips = [1, 2, 3, 4, 5, 6];

interface RollDistribution {
  [key: number]: number;
}

function rollMDice(m: number): RollDistribution {
  const results: RollDistribution = {};

  for (const result of nestForLoopRecursive(pips, m, reduceSum)) {
    results[result] = results[result] !== undefined ? results[result] + 1 : 1;
  }

  return results;
}

for (let m = 1; m <= 3; m++) {
  console.log(`Rolling ${m} ${m === 1 ? 'die' : 'dice'}`);
  console.log(rollMDice(m));
  console.log();
}


Rolling 1 die
{ '1': 1, '2': 1, '3': 1, '4': 1, '5': 1, '6': 1 }

Rolling 2 dice
{
  '2': 1,
  '3': 2,
  '4': 3,
  '5': 4,
  '6': 5,
  '7': 6,
  '8': 5,
  '9': 4,
  '10': 3,
  '11': 2,
  '12': 1
}

Rolling 3 dice
{
  '3': 1,
  '4': 3,
  '5': 6,
  '6': 10,
  '7': 15,
  '8': 21,
  '9': 25,
  '10': 27,
  '11': 27,
  '12': 25,
  '13': 21,
  '14': 15,
  '15': 10,
  '16': 6,
  '17': 3,
  '18': 1
}

My understanding is that any recursive function can be rewritten iteratively, though it usually requires some augmentation. (For example, an in-order traversal of a binary tree can be done iteratively if you augment each node with two bits and a parent pointer.)

How can I rewrite nestForLoopRecursive() without using a stack or any other recursive data structure? In particular, is it possible to do this in at most O(n lg(m)) space?

Here’s a CodeSandbox with everything needed written in TypeScript. The code yet to be written starts at line 16. Feel free to answer using whatever language you choose, though, including pseudocode.
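For what it's worth, the claim that the recursion can be eliminated is often illustrated with the "odometer" pattern, sketched here in Python rather than TypeScript: the m loop indices are kept in an explicit array and advanced like a counter in base n. Note that this uses O(m) space for the index array, so it does not by itself meet the O(n lg(m)) target:

```python
from collections import Counter

def nested_sums(values, m):
    # one index per loop level -- an "odometer" advanced like a base-n counter
    idx = [0] * m
    n = len(values)
    while True:
        yield sum(values[i] for i in idx)
        pos = m - 1
        while pos >= 0 and idx[pos] == n - 1:
            idx[pos] = 0          # this digit rolls over
            pos -= 1
        if pos < 0:
            return                # the odometer has wrapped: all tuples visited
        idx[pos] += 1

# distribution of the sum of 2 six-sided dice
print(Counter(nested_sums([1, 2, 3, 4, 5, 6], 2)))
```

The same trick generalizes to any reducer, not just summation, by accumulating along the index array instead of calling sum.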

Time Complexity of Recursive Ray Tracing with respect to Depth

How does the depth of ray tracing affect the run time (i.e., what is its role in the complexity) of the recursive ray tracing algorithm with reflection and refraction?

My reasoning: after each intersection, a ray splits into 2 rays (one reflected and one refracted), so the complexity with respect to depth would be exponential, ~O(2^D) for ray tracing depth D. For an image resolution of M*N, the complexity would be O(M·N·2^D).

Would you confirm these results, or am I missing something?
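As a sanity check on the arithmetic, the ray count is a geometric series: with branching factor 2, a full ray tree of depth D has 2^(D+1) − 1 nodes, which is indeed Θ(2^D). A small sketch (Python, with hypothetical helper names):

```python
def rays_per_pixel(depth, branching=2):
    # total nodes in a full ray tree: 1 + b + b^2 + ... + b^depth
    return sum(branching ** level for level in range(depth + 1))

def total_rays(m, n, depth):
    # one primary ray per pixel, each spawning a full reflection/refraction tree
    return m * n * rays_per_pixel(depth)

print(rays_per_pixel(3))      # 15 rays for depth 3
print(total_rays(2, 2, 1))    # 12 rays for a 2x2 image at depth 1
```

In practice the tree is rarely full (rays miss, or hit opaque surfaces), so 2^D is a worst-case bound rather than a typical cost.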

Iterative Depth First Search for cycle detection on directed graphs

I found this pseudocode on Wikipedia, and it looks very elegant and intuitive:

L ← Empty list that will contain the sorted nodes
while exists nodes without a permanent mark do
    select an unmarked node n
    visit(n)

function visit(node n)
    if n has a permanent mark then
        return
    if n has a temporary mark then
        stop   (not a DAG)

    mark n with a temporary mark

    for each node m with an edge from n to m do
        visit(m)

    remove temporary mark from n
    mark n with a permanent mark
    add n to head of L

I am trying to write an iterative version of this using 3 marks (UNVISITED, VISITED, PROCESSED), but the lack of tail recursion really bothers me.

How should I approach this?
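One common approach (a sketch in Python, not a definitive answer) is to keep an explicit stack of (node, child-iterator) pairs: mark a node VISITED when it is pushed and PROCESSED when its child iterator is exhausted. An edge into a node that is still VISITED is then a back edge, i.e., a cycle:

```python
UNVISITED, VISITED, PROCESSED = 0, 1, 2

def has_cycle(graph):
    # graph: dict mapping each node to a list of its successors
    state = {v: UNVISITED for v in graph}
    for start in graph:
        if state[start] != UNVISITED:
            continue
        state[start] = VISITED
        stack = [(start, iter(graph[start]))]
        while stack:
            node, children = stack[-1]
            child = next(children, None)
            if child is None:
                state[node] = PROCESSED   # all descendants done
                stack.pop()
            elif state[child] == VISITED:
                return True               # back edge: child is still on the stack
            elif state[child] == UNVISITED:
                state[child] = VISITED
                stack.append((child, iter(graph[child])))
            # PROCESSED children are simply skipped
    return False
```

Keeping the iterator on the stack is what replaces the suspended loop of the recursive version; no tail calls are needed. For topological sorting, append each node to a list at the moment it becomes PROCESSED and reverse at the end.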

Cover interval with minimum sum intervals – DP recursion depth problem

I have just found the official solutions online (I had been looking for them for a while, but quickly found them after posting this), and I'm currently trying to understand them. As far as I can tell, the official solution uses a much simpler DP, which only needs an O(N) array.

This year I attended a programming competition, where we had a few interesting problems. I had trouble with some of them, and since the next round approaches, I want to clear them up.

The problem in a nutshell
We are given N weighted intervals and an [A,B] interval. We have to cover the [A,B] interval with the given ones in such a way that we minimize the overall weight sum (and then the number of required intervals). We need to print the sum, the number of intervals, and then the intervals themselves. If we cannot cover [A,B], we need to report that with a special value (-1).

First thoughts
If we sort the intervals by begin time, then we can do a simple 0-1 Knapsack-like DP and solve the problem. Also, if the priorities were swapped (minimize count, THEN sum), a simple greedy would do it.

The limits
Basically, all interval endpoints are in the range 1 to 1,000,000, and N ≤ 100,000. All intervals lie within [A,B].

My approach
I wrote a recursive algorithm in Python, like the 0-1 Knapsack one, that also stored the last selected interval – thus allowing the selection list to be recovered from the DP array later. It was a (current_interval, last_covered_day) -> (cost, last_selected_interval, last_covered_day') style function. I used a dict as the DP array, since a regular array that big would have violated the memory constraints, and filling it fully would also have increased the runtime (at least that's what I thought – and a 1,000,000 × 100,000 array certainly would!). I wrote the function recursively so it would not fill in the entire DP array, making it faster and more memory-efficient.

The problem with this
Simply put, I got RecursionError: maximum recursion depth exceeded on larger datasets – 100k-deep recursion was simply too much. I have since read on GeeksForGeeks that it is possible to increase the limit, but I am still not confident that doing so would be safe. My recursive function is also not tail-call optimizable, so that approach would not work either.

So my questions
Is there a way of solving this problem without DP? If not, is filling in a full table an option with those high limits? Maybe we can come up with a different DP approach that does not use such big tables? Is it safe to just increase the recursion depth limit for these kinds of problems?
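I can't speak for the official solution, but an iterative, array-style DP along the lines hinted at above can be sketched as follows (Python, tiny instance; this ignores the interval-count tie-break and uses a slow O(range) minimum where a real solution would need something faster, such as a segment tree). Here dp[x] is the minimum weight needed to cover [A, x], with dp[A-1] = 0 as the base; intervals are processed sorted by start:

```python
import math

def min_cover(A, B, intervals):
    # intervals: list of (start, end, weight); dp over positions A-1 .. B
    INF = math.inf
    dp = {x: INF for x in range(A - 1, B + 1)}
    dp[A - 1] = 0
    for s, e, w in sorted(intervals):
        # the interval can extend any coverage that already reaches >= s - 1
        best = min(dp[t] for t in range(s - 1, e + 1))
        if best + w < dp[e]:
            dp[e] = best + w
    return dp[B] if dp[B] < INF else -1

# cover [1,6]: picking (1,3) and (4,6) costs 2 + 1 = 3
print(min_cover(1, 6, [(1, 3, 2), (2, 5, 3), (4, 6, 1)]))  # 3
```

Because the state is indexed by position rather than by (interval, day) pairs, there is no deep recursion at all, sidestepping the RecursionError entirely.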

Circuit depth of computing the continued fractions of a rational number

If you want to convert a rational number into its continued fraction, what is the circuit depth of this process, in terms of the total number of bits of input?

I was reading through some notes which mentioned that the work done while computing the continued fraction is basically the same as the work done while computing a GCD. Are their circuit depths similar?
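The observation from the notes can be made concrete: the partial quotients of the continued fraction are exactly the quotients produced by Euclid's algorithm, so the two computations share the same chain of dependent divisions. A sketch in Python:

```python
def continued_fraction(p, q):
    # each Euclid step emits one partial quotient; the remainders are
    # exactly the intermediate values computed by gcd(p, q)
    terms = []
    while q:
        terms.append(p // q)
        p, q = q, p % q
    return terms

print(continued_fraction(415, 93))  # the classic example: [4, 2, 6, 7]
```

Each division depends on the previous remainder, which is why the circuit-depth question for continued fractions reduces to the (long-studied) question of the parallel complexity of GCD.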


Node depth in randomly built binary search tree

It can be proved that randomly built binary search trees of size $n$ have depth $O(\log n)$, and it is clear that level $k$ has at most $2^k$ nodes (the root’s level is 0).

I have an algorithm that traverses every path in the tree (beginning at the root) but stops after traversing $k$ nodes (the parameter $k$ is configurable and independent of the size of the tree, $n$).

For any tree $T$ with $\mathrm{depth}(T) > k$, the algorithm will miss some nodes of the tree. I would like to bound the probability of my algorithm missing a large number of nodes.

Formalizing it: let $T$ be a randomly built binary search tree of $n$ nodes. I would like to calculate the probability of a node having depth larger than $k$, as a function of $n$ and $k$.
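Not an answer to the probability question itself, but a quick Monte Carlo sketch (Python, all helper names hypothetical) for checking a candidate bound empirically: build BSTs from random permutations and count the fraction of nodes with depth greater than $k$:

```python
import random

def node_depths(keys):
    # build a BST by successive insertion; return the depth of each inserted key
    root = None                      # node represented as [key, left, right]
    depths = []
    for k in keys:
        d = 0
        if root is None:
            root = [k, None, None]
        else:
            cur = root
            while True:
                d += 1
                i = 1 if k < cur[0] else 2
                if cur[i] is None:
                    cur[i] = [k, None, None]
                    break
                cur = cur[i]
        depths.append(d)
    return depths

def frac_deeper_than(n, k, trials=200, seed=1):
    # empirical estimate of P(depth > k) for a random n-node BST
    rng = random.Random(seed)
    deep = total = 0
    for _ in range(trials):
        keys = list(range(n))
        rng.shuffle(keys)
        deep += sum(d > k for d in node_depths(keys))
        total += n
    return deep / total
```

Comparing such estimates against, e.g., a Chernoff-style bound for a few (n, k) pairs is a cheap way to sanity-check the analytical answer.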

Making graph acyclic by removing back edges in depth first and breadth traversal

I came across the following points:

  1. Removing all back edges produced by DFS makes the graph acyclic.

  2. For a directed graph, the absence of back edges with respect to a BFS tree implies that the graph is acyclic.

There was no explanation given for the first point, but the explanation for the second point was as follows:

It is true that the absence of back edges with respect to a DFS tree implies that the graph is acyclic. However, the same is not true for a BFS tree. There may be cross edges which go from one branch of the BFS tree to a lower level of another branch of the BFS tree. It is possible to construct a cycle using such cross edges (which decrease the level) and using forward edges (which increase the level).

However, I am unable to follow this explanation. I believe back edges are essential for forming cycles, as can be seen in the image below (black edges are breadth-first tree edges, green is a back edge, red is a cross edge, and the dashed edges form a cycle):


As we can see, removing the back edge disconnects the cycle.

Q1. How can the explanation claim that a cycle can be formed using only cross and tree edges?

Q2. If the second fact is indeed correct, why does the same not apply to the first fact – that is, why can a cycle not be formed without back edges in a depth-first traversal?

Is there any intuitive way to see the validity of these statements and answer my questions?
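To make Q1 concrete, here is a small sketch (Python) of a directed graph that contains a cycle, b → c → b, yet a BFS from s classifies both of those edges as cross edges, because neither endpoint is an ancestor of the other in the BFS tree, so no back edge exists:

```python
from collections import deque

def bfs_classify(graph, root):
    # BFS tree from root; a non-tree edge (u, v) is a back edge iff v is an
    # ancestor of u in the BFS tree, otherwise it is a cross/forward edge
    parent, level = {root: None}, {root: 0}
    queue = deque([root])
    tree = set()
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in level:
                parent[v], level[v] = u, level[u] + 1
                tree.add((u, v))
                queue.append(v)

    def is_ancestor(a, d):
        while d is not None:
            if d == a:
                return True
            d = parent[d]
        return False

    back, other = [], []
    for u in graph:
        for v in graph[u]:
            if (u, v) not in tree:
                (back if is_ancestor(v, u) else other).append((u, v))
    return sorted(tree), back, sorted(other)

# cycle b -> c -> b, yet BFS from s yields no back edges:
g = {'s': ['a', 'b'], 'a': ['c'], 'b': ['c'], 'c': ['b']}
tree, back, other = bfs_classify(g, 's')
print(back)   # []
print(other)  # [('b', 'c'), ('c', 'b')]
```

Here b → c goes down a level and c → b goes up a level, but b is a child of s while c is a child of a, so neither edge points to an ancestor; this is exactly the cross-edge cycle the quoted explanation describes.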

Size, depth, and time of circuits

Can polynomial-depth circuits (with, let’s say, AND, OR, and NOT gates) be simulated in polynomial time? Also, what can we say about decision problems having a polynomial-time algorithm, i.e., in $P$? Do they always have polynomial-sized circuits (but not necessarily circuits of polynomial depth)? What is an equivalent complexity class for polynomial-depth circuits?
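On the first question, one relevant observation is that a sequential simulation visits each gate once, so its running time is linear in the circuit’s size, with depth playing no role (depth matters for parallel time). A minimal gate-list evaluator sketch in Python:

```python
def eval_circuit(inputs, gates):
    # wires 0..len(inputs)-1 are the inputs; gate i produces wire len(inputs)+i
    # each gate: (op, a, b) with a, b indices of earlier wires; NOT ignores b
    wires = list(inputs)
    for op, a, b in gates:
        if op == "AND":
            wires.append(wires[a] and wires[b])
        elif op == "OR":
            wires.append(wires[a] or wires[b])
        else:  # "NOT"
            wires.append(not wires[a])
    return wires[-1]  # one pass over the gate list: time O(size)

# XOR built from AND/OR/NOT: (x OR y) AND NOT (x AND y)
xor_gates = [("OR", 0, 1), ("AND", 0, 1), ("NOT", 3, 3), ("AND", 2, 4)]
print(eval_circuit([True, False], xor_gates))  # True
```

So the interesting case in the question is a polynomial-depth circuit whose size is superpolynomial; for those, this gate-by-gate simulation is no longer polynomial time.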