Time Complexity of Recursive Ray Tracing with respect to Depth

How does the ray tracing depth affect the running time (i.e., what is its role in the complexity) of the recursive ray tracing algorithm with reflection and refraction?

Here is how I calculated it: after each intersection, a ray splits into 2 rays (one reflected and one refracted), so the complexity with respect to depth is exponential, ~O(2^D) for ray tracing depth D. For an image resolution of M×N, the overall complexity would then be O(M·N·2^D).
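To make the branching explicit, here is a minimal Python sketch of the recursion (scene.intersect, hit.reflected_ray, hit.refracted_ray, BLACK and BACKGROUND are made-up names for illustration, and shading is omitted):

    def trace(ray, scene, depth):
        if depth == 0:
            return BLACK
        hit = scene.intersect(ray)
        if hit is None:
            return BACKGROUND
        # Each hit spawns at most two secondary rays, so per pixel the
        # recursion tree is a binary tree of height D, i.e. at most
        # 1 + 2 + ... + 2^(D-1) = 2^D - 1 rays are intersected.
        color = trace(hit.reflected_ray, scene, depth - 1)
        color += trace(hit.refracted_ray, scene, depth - 1)
        return color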

Would you confirm these results, or am I missing something?

Iterative Depth First Search for cycle detection on directed graphs

I found this pseudocode on Wikipedia, and it looks very elegant and intuitive:

    L ← Empty list that will contain the sorted nodes
    while exists nodes without a permanent mark do
        select an unmarked node n
        visit(n)

    function visit(node n)
        if n has a permanent mark then
            return
        if n has a temporary mark then
            stop   (not a DAG)

        mark n with a temporary mark

        for each node m with an edge from n to m do
            visit(m)

        remove temporary mark from n
        mark n with a permanent mark
        add n to head of L

I am trying to write an iterative version of this using 3 marks (UNVISITED, VISITED, PROCESSED), but the lack of tail recursion really bothers me.

How should I approach this?
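For reference, here is one common way to do it in Python: push each node a second time as a "post-visit" marker, so the temporary mark can be upgraded to PROCESSED without any recursion at all. This is a sketch assuming the graph is a dict mapping every node to a list of its successors:

    from enum import Enum

    class Mark(Enum):
        UNVISITED = 0
        VISITED = 1    # temporary mark: node is on the current DFS path
        PROCESSED = 2  # permanent mark: node and its subtree are done

    def has_cycle(graph):
        # graph: dict mapping each node to a list of successors;
        # every node is assumed to appear as a key.
        mark = {n: Mark.UNVISITED for n in graph}
        for start in graph:
            if mark[start] != Mark.UNVISITED:
                continue
            # (node, True) is the pre-visit, (node, False) the post-visit
            # that the recursive version gets for free on function exit.
            stack = [(start, True)]
            while stack:
                node, entering = stack.pop()
                if not entering:
                    mark[node] = Mark.PROCESSED   # leaving: permanent mark
                    continue
                if mark[node] == Mark.PROCESSED:
                    continue
                mark[node] = Mark.VISITED         # entering: temporary mark
                stack.append((node, False))       # schedule the post-visit
                for m in graph[node]:
                    if mark[m] == Mark.VISITED:
                        return True               # edge into the current path
                    if mark[m] == Mark.UNVISITED:
                        stack.append((m, True))
        return False

The same skeleton yields a topological order: append the node to a result list in the post-visit branch and reverse the list at the end.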

Cover interval with minimum sum intervals – DP recursion depth problem

READ ME FIRST:
I have just found the official solutions online (I had been looking for them for a while, but right after posting this I found them), and I'm currently trying to understand them. As far as I can tell, the intended solution uses a much simpler DP, which only needs an O(N) array.

Story:
This year I attended a programming competition that had a few interesting problems. I had trouble with some of them, and since the next round is approaching, I want to clear them up.

The problem in a nutshell
We are given N weighted intervals and an interval [A,B]. We have to cover [A,B] with the given intervals in such a way that we minimize the overall weight sum (and then, as a secondary objective, the number of intervals used). We need to print the sum, the number of intervals, and then the intervals themselves. If we cannot cover [A,B], we need to report that with a special value (-1).

First thoughts
If we sort the intervals by start time, we can do a simple 0-1 Knapsack-like DP and solve the problem. Also, if the priorities were swapped (minimize count first, THEN sum), a simple greedy would do it.

The limits
Basically, all interval starts and ends are in the range 1 to 1,000,000, and N ≤ 100,000. All intervals lie within [A,B].

My approach
I wrote a recursive algorithm in Python, like the 0-1 Knapsack one, that also stored the last selected interval – thus allowing the selection list to be recovered from the DP array later. It was a (current_interval, last_covered_day) -> (cost, last_selected_interval, last_covered_day')-like function. I used a dict as the DP array, since a regular array that big would have violated the memory constraints, and filling it fully would also have increased the runtime (at least that's what I thought – but a 1,000,000 × 100,000 array certainly would have!). I wrote the function recursively so it would not fill in the entire DP array, making it faster and more memory-efficient.

The problem with this
Simply put, I got RecursionError: maximum recursion depth exceeded errors on the larger datasets – a 100k-deep recursion was simply too much. I have since read on GeeksForGeeks that it is possible to increase the limit, but I am still not confident that doing so would be safe. My recursive function is also not tail-call optimizable, so that would not work either.

So my questions
Is there a way of solving this problem without DP? If not, is filling in a full table an option with such high limits? Maybe we can come up with a different DP approach that does not need such big tables? Is it safe to just increase the recursion depth limit for these kinds of problems?
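For what it's worth, one way to sidestep the recursion depth issue entirely is to rephrase the cover as a shortest-path problem and run Dijkstra iteratively. The sketch below ignores the secondary objective (minimizing the interval count) and the reconstruction of the chosen intervals; both could be added with lexicographic (cost, count) distances and parent pointers.

    import heapq
    from collections import defaultdict

    def min_cost_cover(A, B, intervals):
        # intervals: list of (start, end, weight) with integer endpoints.
        # Node x means "days A..x are covered". An interval [s, e] of
        # weight w becomes an edge s-1 -> e, and 0-weight edges x -> x-1
        # express that having covered more days is never worse.
        out = defaultdict(list)
        for s, e, w in intervals:
            out[s - 1].append((e, w))
        INF = float('inf')
        dist = {A - 1: 0}
        pq = [(0, A - 1)]
        while pq:
            d, x = heapq.heappop(pq)
            if d > dist.get(x, INF):
                continue               # stale queue entry
            if x >= B:
                return d               # [A, B] fully covered
            step_down = [(x - 1, 0)] if x > A - 1 else []
            for y, w in out[x] + step_down:
                if d + w < dist.get(y, INF):
                    dist[y] = d + w
                    heapq.heappush(pq, (d + w, y))
        return -1                      # [A, B] cannot be covered

This touches at most B - A + 2 nodes and N + (B - A) edges, so it fits the stated limits comfortably, and the explicit heap replaces the call stack.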

Circuit depth of computing the continued fractions of a rational number

If you want to convert a rational number into its continued fraction, what is the circuit depth of this process, in terms of the total number of bits of input?

I was reading through some notes which mentioned that the work done while computing the continued fraction is essentially the same as the work done while computing a GCD. Are their circuit depths similar?
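The connection is easy to see in code: the coefficients of the continued fraction are exactly the quotients produced by Euclid's algorithm, so both computations perform the same sequence of divisions. A quick Python illustration:

    def continued_fraction(p, q):
        # Coefficients of p/q, one per iteration of Euclid's algorithm.
        coeffs = []
        while q:
            a, r = divmod(p, q)   # a is the next coefficient
            coeffs.append(a)
            p, q = q, r           # identical state update to gcd(p, q)
        return coeffs

    # Example: 355/113 = 3 + 1/(7 + 1/16)
    assert continued_fraction(355, 113) == [3, 7, 16]

So the depth question for continued fractions is essentially the depth question for GCD which, as far as I know, is a long-standing open problem (GCD is not known to be computable in polylogarithmic depth).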


Node depth in randomly built binary search tree

It can be proved that randomly built binary search trees of size $n$ have depth $O(\log n)$, and it is clear that level $k$ has at most $2^{k}$ nodes (the root's level is 0).

I have an algorithm that traverses every path in the tree (beginning at the root) but stops after traversing $k$ nodes ($k$ is a configurable parameter, independent of the size $n$ of the tree).

For any tree $T$ with $depth(T) > k$, the algorithm will miss some nodes of the tree. I would like to bound the probability of my algorithm missing a large number of nodes.

Formalizing it: let $T$ be a randomly built binary search tree with $n$ nodes. I would like to calculate the probability that a node has depth larger than $k$, as a function of $n$ and $k$.
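Not a derivation, but a quick Monte Carlo sketch (all names are made up) for sanity-checking whatever bound one derives as a function of $n$ and $k$; it builds random BSTs by inserting a shuffled key sequence iteratively:

    import random

    def deep_fraction(n, k, trials=100):
        # Estimates P(depth(node) > k) in a randomly built BST of size n.
        deep = total = 0
        for _ in range(trials):
            keys = list(range(n))
            random.shuffle(keys)          # random insertion order
            root = None
            left, right = {}, {}          # child pointers, keyed by parent
            for key in keys:
                if root is None:
                    root, depth = key, 0
                else:
                    node, depth = root, 0
                    while True:
                        child = left if key < node else right
                        if node in child:
                            node, depth = child[node], depth + 1
                        else:
                            child[node] = key
                            depth += 1
                            break
                total += 1
                deep += depth > k
        return deep / total

    print(deep_fraction(1000, 20))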

Making a graph acyclic by removing back edges in depth-first and breadth-first traversal

I came across the following points:

  1. Removing all back edges produced by DFS makes the graph acyclic.

  2. For a directed graph, the absence of back edges with respect to a BFS tree implies that the graph is acyclic.

No explanation was given for the first point, but the explanation for the second point was as follows:

It is true that the absence of back edges with respect to a DFS tree implies that the graph is acyclic. However, the same is not true for a BFS tree. There may be cross edges which go from one branch of the BFS tree to a lower level of another branch of the BFS tree. It is possible to construct a cycle using such cross edges (which decrease the level) and using forward edges (which increase the level).

However, I am unable to follow the explanation. I believe back edges are essential for forming cycles, as can be seen in the image below (black edges are breadth-first tree edges, green is a back edge, red is a cross edge, and the dashed edges form a cycle):

[image: a breadth-first tree whose dashed cycle includes the green back edge and the red cross edge]

As we can see, removing the back edge can break the cycle.

Q1. So how can the solution say that we can form a cycle with only the help of cross and tree edges?

Q2. If the 2nd fact is indeed correct, then why does the same not apply to fact 1? That is, why can we not form a cycle without back edges in a depth-first traversal?

Is there an intuitive way to see the validity of these statements and to answer my questions?
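For Q1, a concrete example may help: in the small graph below, the cycle b → d → c → b contains no BFS back edge at all; with respect to BFS from a, it consists of one tree edge and two cross edges. A quick Python check (names are made up):

    from collections import deque

    # Directed graph with the cycle b -> d -> c -> b.
    graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['b'], 'd': ['c']}

    # Standard BFS from 'a', recording levels and tree parents.
    level, parent = {'a': 0}, {'a': None}
    queue = deque(['a'])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in level:
                level[v], parent[v] = level[u] + 1, u
                queue.append(v)

    def is_ancestor(u, v):
        # True if u is an ancestor of v (or v itself) in the BFS tree.
        while v is not None:
            if v == u:
                return True
            v = parent[v]
        return False

    for u in graph:
        for v in graph[u]:
            kind = ('tree' if parent.get(v) == u
                    else 'back' if is_ancestor(v, u)
                    else 'cross')
            print(f'{u} -> {v}: {kind}, level {level[u]} -> {level[v]}')

Here d → c is a cross edge that decreases the level and c → b is a cross edge within a level, yet together with the tree edge b → d they close a cycle. As for Q2, the asymmetry is that in a DFS the first vertex of a cycle to be discovered is still on the recursion stack (i.e., is an ancestor) when the cycle's closing edge is examined, which forces that edge to be a back edge; BFS gives no such guarantee.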

Size, depth, and time of circuits

Can polynomial-depth circuits (with, let's say, AND, OR, and NOT gates) be simulated in polynomial time? Also, what can we say about decision problems having a polynomial-time algorithm, i.e., in $P$? Do they always have polynomial-sized circuits (but not necessarily circuits of polynomial depth)? What is an equivalent complexity class for polynomial-depth circuits?

Learning web design in depth to a near-professional level, need advice.

I've been running a small web hosting business for years, mostly as a hobby. When my full-time work took hold of my life, let's say about 3 years ago, I stopped any study I was doing of HTML and CSS.

So now, with that being said, I still know some of the raw basics, but with all the information out there I'm a little overwhelmed.

  • So to start off, I'm going to state my hardships, to give people an idea of where I stand. Being a single father of 2 working full time has added a lot of stress…


RGBD alignment without explicit transformation between RGB and depth

I have a set of RGB-D scans of the same scene, with corresponding camera poses, and intrinsics and extrinsics for both the color and depth cameras. The RGB resolution is 1290×960; the depth resolution is 640×480.

The depth is pretty accurate but still noisy, and the same goes for the camera poses. However, the intrinsics and extrinsics are not very accurate: for example, the intrinsics do not include any distortion terms, while the images are definitely distorted. The extrinsics say that both cameras sit at exactly the same point, since they are just identity matrices. That could be a decent approximation, though.

When I try to compute a point cloud from a frame by downsampling the RGB, unprojecting the depth, transforming it with the RGB extrinsics, and projecting it with the RGB intrinsics, the RGB frame looks shifted by a few cm from the depth – for example, an object's border can be colored as background.
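Concretely, the pipeline looks roughly like the following numpy sketch (all names are mine; T_d_to_rgb is the depth-to-RGB extrinsics, currently the identity in my data, which is exactly the suspected source of the shift):

    import numpy as np

    def colorize_depth(depth, rgb, K_d, K_rgb, T_d_to_rgb):
        # depth: (H, W) depth map in meters; rgb: (H2, W2, 3) color image;
        # K_d, K_rgb: 3x3 intrinsics; T_d_to_rgb: 4x4 depth->RGB extrinsics.
        H, W = depth.shape
        v, u = np.mgrid[0:H, 0:W]
        z = depth.ravel()
        # Unproject depth pixels to 3D points in the depth camera frame.
        pts = np.linalg.inv(K_d) @ np.vstack([u.ravel() * z, v.ravel() * z, z])
        # Move the points into the RGB camera frame.
        pts = T_d_to_rgb[:3, :3] @ pts + T_d_to_rgb[:3, 3:4]
        # Project with the RGB intrinsics and sample colors.
        proj = K_rgb @ pts
        valid = (z > 0) & (proj[2] > 0)
        px = np.zeros((2, H * W), dtype=np.int64)
        px[:, valid] = np.round(proj[:2, valid] / proj[2, valid]).astype(np.int64)
        valid &= (0 <= px[0]) & (px[0] < rgb.shape[1]) \
               & (0 <= px[1]) & (px[1] < rgb.shape[0])
        colors = np.zeros((H * W, 3), dtype=rgb.dtype)
        colors[valid] = rgb[px[1, valid], px[0, valid]]
        return pts.T, colors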

One might assume that such artifacts could be eliminated by aligning the depth to the RGB with improved extrinsics. Is there a way to estimate a more accurate transformation between RGB and depth in such a scenario?

I'm familiar with a pretty relevant question, but there the required Rt matrices for both cameras are given.

Given a well-performing monocular depth estimator, one could align the two point clouds (one computed from the depth map, one estimated from RGB) directly and determine the required transformation, as in the sketch below.
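If that route is viable, the alignment step itself could be as simple as a point-to-point ICP, for example with Open3D. This is a sketch under the assumption that the monocular cloud is already metrically scaled (otherwise a scale-aware registration would be needed); depth_pts and mono_pts are hypothetical (N, 3) numpy arrays of points from the previous step:

    import numpy as np
    import open3d as o3d

    # source: cloud unprojected from the depth map;
    # target: cloud estimated from RGB by a monocular network.
    source_cloud = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(depth_pts))
    target_cloud = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(mono_pts))

    result = o3d.pipelines.registration.registration_icp(
        source_cloud, target_cloud,
        max_correspondence_distance=0.05,   # 5 cm search radius
        init=np.eye(4),                     # start from the identity extrinsics
        estimation_method=o3d.pipelines.registration
                             .TransformationEstimationPointToPoint())
    print(result.transformation)            # candidate depth->RGB transform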

Am I missing something, or is there a well-known approach to this problem?