$O(1)$ time, $O(1)$ state random access Brownian motion?

I would like to generate discrete samples $0 = B(0), B(1), \ldots, B(T)$ of a Brownian motion $B : [0,T] \to \mathbb{R}^d$. It is possible to get $O(\log T)$ time random access into a consistent sequence of samples $B(k)$ by constructing the path in tree fashion. We choose $B(T) = \sqrt{T} Z_0$ where $Z_0$ is a standard Gaussian, then let $B(T/2) = B(T)/2 + \frac{\sqrt{T}}{2}Z_1$ where $Z_1$ is an independent standard Gaussian, construct $B$ for the intervals $[0,T/2]$ and $[T/2,T]$ independently conditional on $B(0), B(T/2), B(T)$, and so on recursively.

Since any particular $B(k)$ constructed in this fashion depends on only $O(\log T)$ Gaussian random values, this gives us $O(\log T)$ time random access into a consistently sampled sequence of Brownian motion, as long as we have random-access random numbers. The only state we need is the random seed. As long as we use the same seed, evaluations of $B(k)$ for different $k$ will be consistent, in that their joint statistics will be the same as if we had sampled the Brownian motion sequentially.
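
Here is a minimal Python sketch of this tree construction, assuming $T$ is a power of two and $d = 1$; the function names and the (seed, label) keying scheme for getting random-access Gaussians are my own choices, not part of the question:

```python
import numpy as np

def _gauss(seed: int, label: int) -> float:
    # Deterministic standard normal keyed by (seed, label): re-seeding a
    # fresh generator per label is a simple (if not the fastest) way to
    # get random-access random numbers. Hypothetical scheme; any keyed
    # PRNG would do.
    return np.random.default_rng([seed, label]).standard_normal()

def brownian(k: int, T: int, seed: int) -> float:
    """Return B(k) for an integer 0 <= k <= T (T a power of two) in
    O(log T) time, consistently across calls sharing the same seed."""
    if k == 0:
        return 0.0
    b_lo, b_hi = 0.0, np.sqrt(T) * _gauss(seed, 0)  # B(0) = 0, B(T) = sqrt(T) Z_0
    lo, hi, node = 0, T, 1                          # heap-style interval ids
    while k != lo and k != hi:
        mid = (lo + hi) // 2
        # Brownian bridge: conditional on B(lo) and B(hi), the midpoint is
        # Gaussian with mean (B(lo) + B(hi)) / 2 and variance (hi - lo) / 4.
        b_mid = 0.5 * (b_lo + b_hi) + np.sqrt((hi - lo) / 4) * _gauss(seed, node)
        if k <= mid:
            hi, b_hi, node = mid, b_mid, 2 * node
        else:
            lo, b_lo, node = mid, b_mid, 2 * node + 1
    return b_lo if k == lo else b_hi
```

Two calls with the same seed but different $k$ share the Gaussians along the common prefix of their tree paths, which is exactly what makes the joint statistics consistent.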

Question: Is it possible to do better than $O(\log T)$, while preserving the $O(1)$ state requirement (only the seed must be stored)?

My sense is no, and that it would be easy to prove if I found the right formalization of the question, but I don’t know how to do that.

When will the binary search algorithm be $O(1)$?

I had an exam today, and there was a question that says:

> Based on the definition of the binary search algorithm, discuss when the algorithm’s running time will be $O(1)$ and when it will be $O(\log n)$.

I’m not sure about my answer, but I took a guess and said that it can be $O(1)$ when the element I am searching for is exactly in the middle of the array, and it will be $O(\log n)$ when the element is at the end of the array.
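
For concreteness, here is a standard iterative binary search (my own sketch, not from the exam) with both cases marked in comments:

```python
def binary_search(a, target):
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:   # best case: hit on the very first probe -> O(1)
            return mid
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1                  # worst case: ~log2(n) halvings -> O(log n)
```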

I know my answer might be trivial, but how can I answer a question like this?

Does the fact that the amortized cost of each extract-max operation is $O(1)$ mean that the entire sequence can be processed in $O(n)$ time?

Consider an ordinary binary max-heap data structure with $n$ elements that supports insert and extract-max in $O(\log n)$ worst-case time.

Q) Consider a sequence of $n$ extract-max operations performed on a heap $H$ that initially contains $n$ elements. Does the fact that the amortized cost of each extract-max operation is $O(1)$ mean that the entire sequence can be processed in $O(n)$ time? Justify your answer.

Ans: No, a sequence of $n$ EXTRACT-MAX operations will cost $O(n \log n)$. This is because in a heap all the leaves lie on at most two adjacent levels, and the number of leaves is about $n/2 = O(n)$. Each call to EXTRACT-MAX removes the maximum element from the root, replaces it with a leaf, and sifts that replacement down, which takes about $\log n$ time since the leaves sit at depth roughly $\log n$. Doing this for all $O(n)$ elements gives a total time complexity of $O(n \log n)$.
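
As a sanity check, here is a small sketch of the operation sequence in question, using Python’s heapq (a min-heap, so keys are negated to simulate a max-heap):

```python
import heapq

def extract_all_max(items):
    h = [-x for x in items]   # negate keys: heapq is a min-heap
    heapq.heapify(h)          # building the heap is only O(n)...
    out = []
    while h:
        # ...but each pop sifts the root down ~log(len(h)) levels,
        # so the n pops together cost Theta(n log n).
        out.append(-heapq.heappop(h))
    return out
```

Note that heapify followed by $n$ extract-max operations is exactly heapsort, so if the whole sequence ran in $o(n \log n)$ time it would beat the comparison-sorting lower bound; in the worst case the amortized cost of extract-max is $\Theta(\log n)$, not $O(1)$.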

Does this make sense?

Terminology for the worst-case $O(N)$ complexity of an insert that is $O(1)$ after amortisation

Normally, when discussing amortisation and worst-case complexity, amortisation smooths out the worst-case scenarios, and the big-O describes the average cost per operation (the way it’s used in interviews nowadays).

For example, when inserting an element occasionally requires reallocating the array at twice the size and copying it over, insertion is still said to be $O(1)$, not $O(n)$; the worst-case performance is rarely mentioned. Compare this with sorting algorithms, where the worst case could be $O(n^2)$ whereas the average case could be $O(n\log{}n)$, and both are always specified and discussed.
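
A minimal Python sketch of the doubling scheme just described (class and member names are mine):

```python
class DynArray:
    """Doubling array: append is O(1) amortised but O(n) in the worst case."""
    def __init__(self):
        self._buf, self._n = [None], 0

    def append(self, x):
        if self._n == len(self._buf):            # buffer full:
            new = [None] * (2 * len(self._buf))  # reallocate at twice the size
            new[:self._n] = self._buf            # ...and copy O(n) elements now
            self._buf = new
        self._buf[self._n] = x                   # the common case is plain O(1)
        self._n += 1
```

Across $n$ appends the total copying work is $1 + 2 + 4 + \dots + n/2 < n$, which is exactly where the amortised $O(1)$ per insert comes from.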

  • What is the terminology to express this $O(n)$-level complexity for the $O(1)$ insertion that occasionally requires reallocation with a copy of $O(n)$ elements?

  • What is the alternative terminology for expressing an optimisation approach with delayed copying, where the worst-case complexity remains constant-time, provided that allocating a new chunk of memory is itself constant-time? In other words, where you keep both arrays and take a lazy-copy approach (see the sketch after this list).
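
A sketch of that lazy-copy variant, under the stated assumption that allocating a new chunk is $O(1)$ (again, all names are mine); each append migrates at most two old elements, so no single append ever copies the whole array:

```python
class LazyDynArray:
    """De-amortised doubling array: every append is worst-case O(1),
    assuming allocation itself is O(1)."""
    def __init__(self):
        self._old, self._new = [], [None]
        self._n = 0        # elements logically stored
        self._moved = 0    # elements migrated from _old into _new

    def append(self, x):
        if self._n == len(self._new):
            # Grow: the full previous buffer becomes _old; migration restarts.
            self._old, self._new = self._new, [None] * (2 * len(self._new))
            self._moved = 0
        # Migrate a constant number of old elements per append; the old
        # buffer is fully drained before the next growth is due.
        for _ in range(2):
            if self._moved < len(self._old):
                self._new[self._moved] = self._old[self._moved]
                self._moved += 1
        self._new[self._n] = x
        self._n += 1

    def __getitem__(self, i):
        # Already-migrated (or never-old) slots live in _new; the rest
        # are still in _old.
        if i < self._moved or i >= len(self._old):
            return self._new[i]
        return self._old[i]
```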

Specifically, both of the above could be described as amortisation, but I’m having trouble finding the precise terminology for this nuanced problem: how do you distinguish between these two scenarios when describing amortisation?