Analogue of the topology-computability correspondence for computational complexity

There is an interesting correspondence between notions of topology and notions of computability theory, originating in Dana Scott's ingenious idea of identifying computable functions with continuous functions (an idea that can perhaps be traced back to Brouwer).

Complexity theory can be seen as a refinement of computability theory: we are interested not only in whether a problem is solvable but also in how efficiently it is solvable. Taking this view of complexity theory as a refinement of computability theory, I wonder whether there is any research on analogous mathematical structures that provide efficiency-sensitive denotational semantics for programming languages.

The point of topology is that it is distance-blind, which accords with the efficiency-blind approach of computability theory. So it seems to me that an efficiency-sensitive denotational structure should be a topological space equipped with a notion of distance, i.e., a metric space or something like it.
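As a toy illustration of what I mean (my own example; I am not claiming it appears in the literature): on Baire space $\mathbb{N}^{\mathbb{N}}$, the "first disagreement" metric

$$d(f, g) = \begin{cases} 0 & \text{if } f = g, \\ 2^{-\min\{n \,:\, f(n) \neq g(n)\}} & \text{otherwise} \end{cases}$$

induces exactly the product topology, so continuity recovers the usual finite-information notion of computability, while the metric itself records something quantitative that a purely topological semantics forgets.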

Has someone worked this out, or is this a nonsensical question? If it is the latter, please explain why.

Time Complexity: Why $n^n$ grows faster than $n!$

Seeing the title, you will probably be tempted to explain it as

$n! = n \times (n-1) \times (n-2) \times (n-3) \times \cdots \times 1$

whereas

$n^n = \underbrace{n \times n \times n \times \cdots \times n}_{n\ \text{times}}$

But consider one thing: if we take $\log(n!)$, it comes out to be $O(n \log n)$, since by Stirling's approximation $\log(n!) = n \log n - n + O(\log n)$.


On the other hand, if we take $\log(n^n)$, it also comes out to be $O(n \log n)$, since $\log(n^n) = n \log n$. So, asymptotically, aren't they equal?
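To make the puzzle concrete, here is a quick numerical check (my own sketch; Python's `math.lgamma(n + 1)` gives $\ln(n!)$):

```python
import math

# Compare ln(n!) with ln(n^n): the ratio of the logs tends to 1,
# but their difference grows roughly like n, so n^n / n! itself
# still diverges (like e^n, by Stirling's approximation).
for n in (10, 100, 1000, 10000):
    log_fact = math.lgamma(n + 1)   # ln(n!)
    log_pow = n * math.log(n)       # ln(n^n)
    print(n, log_pow / log_fact, log_pow - log_fact)
```

So both logarithms are $\Theta(n \log n)$, which is what makes me ask whether the two functions should count as asymptotically equal.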

Can algorithms of arbitrarily worse complexity be systematically created?

We've all seen this: [image: the hierarchy of time complexities]

Can we get worse?

Part 1: Can mathematical operations of increasing orders of growth be generated, with or without Knuth’s up-arrow notation?

Part 2: If they can, can algorithms of arbitrary complexities be systematically generated?

Part 3: If such algorithms can be generated, what about programs implementing those algorithms?
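For Part 1 at least, here is a minimal sketch (my own, in Python) of the kind of construction I have in mind: Knuth's up-arrow operations generated recursively, plus a deliberately useless algorithm padded to run for that many steps. Whether this counts as "systematic" generation in the intended sense is exactly what I'm asking.

```python
def up_arrow(a, n, b):
    """Knuth's a ↑^n b: one arrow is exponentiation, and each
    extra arrow iterates the operation one level below."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))


def busy(k, n):
    """A trivially correct 'algorithm' padded to take 2 ↑^k n steps,
    so its running time grows as fast as the k-th up-arrow operation."""
    steps = 0
    for _ in range(up_arrow(2, k, n)):
        steps += 1
    return steps
```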

Add an element to an interface without increasing the complexity of the layout

I need to add a text label and a switch to a login interface. This is what I made:

[screenshot: the login interface with the new text and switch added]

The problem is that I think it looks a bit out of place, given the layout and colors chosen for the interface. How would you improve it? Would it be a good solution to enclose it in a rectangular shape and move it to the center of the screen?

Time Complexity for Nearest Neighbor Searches in kd-trees

Nearest neighbor searches in kd-trees run in logarithmic time, as shown by Friedman et al. However, I have some difficulty fully understanding the proof. The relevant passage reads:

In order to calculate the average number of buckets examined by the k-d tree searching algorithm described above, it is necessary to calculate the average number of buckets overlapped by the region $S_m(X_q)$.

$S_m(X_q)$ is the smallest ball centered at $X_q$ that exactly contains the $m$ points closest to $X_q$.

I don't get why only the regions overlapping $S_m(X_q)$ are examined. Consider the following example, where we want to find the black point closest to the orange point $X_q$. Here $S_m(X_q)$ is the green circle, so according to the proof the algorithm should search only the two lower buckets.

[figure: a kd-tree partition into four buckets, with the orange query point $X_q$, the green circle $S_m(X_q)$, black data points, and a blue circle through the first candidate point]

However, the search algorithm will first find, as a candidate solution, the black point in the lower-right region. It will then also search the regions intersecting the blue circle (the ball around $X_q$ through that candidate), in particular the upper-right region.

So, isn't it too restrictive to count only the buckets that intersect the green circle?
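For reference, here is a hedged sketch of the backtracking search as I understand it (my own Python, not Friedman et al.'s code; `point`, `axis`, `left`, and `right` are hypothetical node fields). Note that the pruning test uses the current best distance, i.e., the blue circle, which is exactly what makes me doubt the green-circle accounting:

```python
import math

def nn_search(node, query, best=None):
    """Backtracking nearest-neighbour search in a kd-tree.
    best is a (distance, point) pair for the current candidate."""
    if node is None:
        return best
    d = math.dist(query, node.point)
    if best is None or d < best[0]:
        best = (d, node.point)
    # Descend first into the child containing the query point.
    diff = query[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff <= 0 else (node.right, node.left)
    best = nn_search(near, query, best)
    # Backtrack into the far child only if the ball around the query
    # through the current best candidate crosses the splitting plane.
    if abs(diff) < best[0]:
        best = nn_search(far, query, best)
    return best
```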

Upper bound for runtime complexity of LOOP programs

Recently I learned about LOOP programs, which always terminate and have the same computational power as primitive recursive functions. Furthermore, primitive recursive functions can (as far as I understand) compute anything that does not grow faster than the Ackermann function $Ack(n)$.

Does this imply that the runtime complexity of LOOP programs is bounded above by $O(Ack(n))$? And are there functions similar to the Ackermann function that cannot be computed by primitive recursive functions but grow more slowly than $Ack(n)$?

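For context, here is how I picture LOOP programs, as a Python sketch (a bounded `for` loop stands in for `LOOP x DO ... END`, with the iteration count fixed on loop entry). Each additional nesting level climbs one level of growth, while the Ackermann function diagonalizes over all fixed nesting depths:

```python
def add(x, y):
    for _ in range(y):      # nesting depth 1: addition
        x += 1
    return x

def mul(x, y):
    r = 0
    for _ in range(y):      # nesting depth 2: multiplication
        r = add(r, x)
    return r

def power(x, y):
    r = 1
    for _ in range(y):      # nesting depth 3: exponentiation
        r = mul(r, x)
    return r

# No single LOOP program has unbounded nesting depth, and Ack(n, n)
# eventually outgrows what any fixed nesting depth can compute.
```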