# Complexity of “Fast Poisson Disk Sampling in Arbitrary Dimensions”

I came across the paper *Fast Poisson Disk Sampling in Arbitrary Dimensions*, which gives an algorithm for generating Poisson disk points in $$\mathbb{R}^n$$. The paper claims the algorithm runs in linear time, but I suspect this implicitly assumes the dimension $$n$$ is constant.

My understanding of the analysis is that generating $$N$$ points requires $$2N-1$$ iterations (each iteration either emits a new point or retires a point from the active list), and that each iteration takes constant time. I agree with the $$2N-1$$ iteration count, but I'm not sure about each iteration being constant time.
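For concreteness, here is a minimal 2D sketch of the algorithm as I read it (the candidate-throwing loop over an active list), instrumented to count iterations of the main loop. The function name and structure are my own, not the paper's; the point is that each iteration either adds a point or retires one, which is where $$2N-1$$ comes from.

```python
import math
import random

def bridson_2d(width, height, r, k=30, seed=0):
    """Minimal 2D sketch of the sampler, counting main-loop iterations."""
    rng = random.Random(seed)
    cell = r / math.sqrt(2)  # cell size r/sqrt(n); each cell holds at most one point
    cols, rows = int(width / cell) + 1, int(height / cell) + 1
    grid = [[None] * cols for _ in range(rows)]
    points, active = [], []
    iterations = 0

    def emit(p):
        points.append(p)
        active.append(p)
        grid[int(p[1] / cell)][int(p[0] / cell)] = p

    def far_enough(p):
        # In 2D the cells within distance r fit in a 5x5 window; in
        # dimension n this window would have exponentially many cells.
        gx, gy = int(p[0] / cell), int(p[1] / cell)
        for y in range(max(gy - 2, 0), min(gy + 3, rows)):
            for x in range(max(gx - 2, 0), min(gx + 3, cols)):
                q = grid[y][x]
                if q is not None and math.dist(p, q) < r:
                    return False
        return True

    emit((rng.uniform(0, width), rng.uniform(0, height)))
    while active:
        iterations += 1  # each iteration adds one point or retires one
        p = active[rng.randrange(len(active))]
        for _ in range(k):
            rad = rng.uniform(r, 2 * r)
            ang = rng.uniform(0, 2 * math.pi)
            q = (p[0] + rad * math.cos(ang), p[1] + rad * math.sin(ang))
            if 0 <= q[0] < width and 0 <= q[1] < height and far_enough(q):
                emit(q)
                break
        else:
            active.remove(p)  # all k candidates failed; p is retired for good
    return points, iterations
```

Running this, the iteration count comes out to exactly $$2N-1$$: the first point is emitted before the loop, the remaining $$N-1$$ points are each emitted by one iteration, and each of the $$N$$ points is retired by one iteration.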

In each iteration, they sample a small constant number $$k$$ of candidate points, and then check whether these candidates are near any of the previously generated points, using a background grid to accelerate the search. However, isn't the number of neighboring cells to check exponential in $$n$$? The grid cells have side length $$r/\sqrt{n}$$ so that each cell holds at most one point, which means a neighborhood query has to examine every cell within distance $$r$$. It seems like the runtime of the given algorithm using the background grid should be something like $$k N \exp(n)$$ (or $$k N^2$$ if you just check the other points directly).
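To put a rough number on the cell count: with cells of side $$r/\sqrt{n}$$, the cells within distance $$r$$ of a query point fit in a window of about $$(2\lceil\sqrt{n}\rceil + 1)^n$$ cells. This estimate is mine, not the paper's, but it grows quickly:

```python
import math

# Cells of side r/sqrt(n) hold at most one point each, so a neighbor query
# scans a window of (2*ceil(sqrt(n)) + 1)**n cells, a rough upper bound.
def cells_to_check(n):
    return (2 * math.ceil(math.sqrt(n)) + 1) ** n

for n in (2, 3, 5, 10, 20):
    print(n, cells_to_check(n))
```

Already at $$n = 10$$ this is $$9^{10} \approx 3.5 \times 10^9$$ cells per query, which is hard to square with calling the per-iteration cost "constant" across dimensions.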

Have I misinterpreted the algorithm or the analysis? I get that for a fixed $$n$$ the per-iteration cost is constant (assuming $$k$$ is constant). Is that the standard assumption when saying an algorithm is "linear time"? To me this seems a bit misleading, since the title advertises arbitrary dimensions: at high enough dimension the "constant" is exponential in $$n$$, which means the algorithm is not "fast" in "arbitrary dimensions".

As an afterthought, does it even make sense for $$k$$ to be constant? If you don't increase $$k$$ exponentially with $$n$$, then the "density" of the sample (points per unit volume) will go to zero, although this is more of a "modeling" choice than an analysis choice.
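A crude volume estimate, mine rather than the paper's, suggests why: candidates are drawn from the annulus of radii $$r$$ to $$2r$$ around an active point, whose volume is $$V_n(r)(2^n - 1)$$, while each accepted point "owns" a disjoint ball of radius $$r/2$$ with volume $$V_n(r)/2^n$$. So on the order of $$(2^n - 1) \cdot 2^n$$ mutually admissible candidate positions could coexist in that annulus, and a constant $$k$$ probes a vanishing fraction of them.

```python
# Crude packing estimate: annulus volume V_n(r)*(2**n - 1) divided by the
# volume V_n(r)/2**n of a disjoint r/2-ball around each accepted point.
def candidate_capacity(n):
    return (2 ** n - 1) * 2 ** n

for n in (2, 3, 5, 10):
    print(n, candidate_capacity(n))
```

This is only an upper bound on how many points could fit, not on how many the sampler finds, but it suggests $$k$$ would have to grow exponentially in $$n$$ to keep the density bounded away from zero.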