Enumerating points on the integer lattice, within a sphere, sorted by angle, in O(1) space

Inspired by this StackOverflow question: https://stackoverflow.com/questions/63346135

(it was not clearly presented, and got closed)

Let’s say I wanted to enumerate all the 3D points on the integer lattice, within a sphere, in order of the angle between the vector to the point and the up vector (say z).

Could I do this in O(1) space efficiently?

All I can find is:

Remember the last point emitted (initialized to (0,0,0)); this is the only state, so O(1) memory.
while true:
    init best dot product to 0
    go through all 3D points (three nested for loops over the radius range):
        if this point has a better dot product than the best so far,
        but less than the last emitted point's:
            keep this point as best
    if the best dot product is still 0, exit
    pick the best point as the current point (this is where the listing occurs)
    update the last point to the best point

Not only is this extremely slow, it also needs integer math for the dot product and length so that numerical precision doesn’t break ties between symmetric points, and it would need a further tweak to guarantee that symmetric points are listed in a known order.
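For the integer-math part, here is a minimal sketch (not from the original question) of an exact "angle to +z" comparator: it compares cos = z/|v| by cross-multiplying and squaring, so only integer products are used, and it falls back to a lexicographic order on ties so symmetric points are listed deterministically. The tie-break rule and the BigInt note are assumptions for illustration.

// Exact comparison of the angle to +z for two integer lattice points a, b ([x, y, z]).
// Integer arithmetic only; for large radii switch to BigInt to avoid overflow
// in the squared products. Assumes neither point is the origin.
function compareByAngleToZ(a, b) {
  const lenSqA = a[0] * a[0] + a[1] * a[1] + a[2] * a[2];
  const lenSqB = b[0] * b[0] + b[1] * b[1] + b[2] * b[2];
  const az = a[2], bz = b[2];

  // Different hemispheres: the point with non-negative z has the smaller angle.
  if (az >= 0 && bz < 0) return -1;
  if (az < 0 && bz >= 0) return 1;

  // Same hemisphere: cosA > cosB  <=>  az * |b| > bz * |a|.
  // Squaring keeps the direction when az, bz >= 0 and flips it when both are negative.
  const lhs = az * az * lenSqB;
  const rhs = bz * bz * lenSqA;
  if (lhs !== rhs) {
    const aSmallerAngle = az >= 0 ? lhs > rhs : lhs < rhs;
    return aSmallerAngle ? -1 : 1;
  }

  // Equal angles: break the tie lexicographically so symmetric points
  // always appear in a known order.
  if (a[0] !== b[0]) return a[0] - b[0];
  if (a[1] !== b[1]) return a[1] - b[1];
  return a[2] - b[2];
}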

Are there any good algorithms that would apply here?

What factors of the integer dataset being sorted can I change, in order to compare two sorting algorithms?

I am comparing two comparison-based sorting algorithms built on binary data structures: tree sort and heap sort. I am measuring the time both algorithms take to sort integer datasets of increasing size. However, I am wondering whether there are other variables in the integer dataset itself that I could modify, for example its standard deviation, that would benefit the comparison.
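As one way to frame the experiment (a sketch, not from the question), the dataset can be varied along a few axes that tend to matter for comparison sorts: value spread, duplicate rate, and how pre-sorted the input already is. The parameter names and defaults below are made up for illustration.

// Hypothetical benchmark-data generator: size plus three knobs
// (value spread, duplicates, pre-sortedness).
function makeDataset(size, { stdDev = 1000, duplicateRate = 0, sortedFraction = 0 } = {}) {
  // Roughly normal values via the Box-Muller transform, scaled by stdDev.
  const gaussian = () => {
    const u = 1 - Math.random();
    const v = Math.random();
    return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
  };
  const data = Array.from({ length: size }, () => Math.round(gaussian() * stdDev));

  // Inject duplicates by copying earlier values.
  for (let i = 1; i < size; i++) {
    if (Math.random() < duplicateRate) {
      data[i] = data[Math.floor(Math.random() * i)];
    }
  }

  // Pre-sort a prefix so "how sorted the input already is" can be controlled.
  const prefix = Math.floor(size * sortedFraction);
  const head = data.slice(0, prefix).sort((a, b) => a - b);
  return head.concat(data.slice(prefix));
}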

Theoretical lower bound of finding number of occurrences of a target integer in a sorted array


Given a sorted array of integers and a target integer, let’s say we want to find the number of occurrences of the target integer.

It is well known that binary search gives time complexity $O(\lg n)$, where $n$ is the size of the array. For example, given an array $[1,2,3,3,4,5]$ and a target $3$, the algorithm should return $2$, since there are two copies of $3$ in the array.
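For concreteness, here is a sketch of the $O(\lg n)$ approach the question has in mind: two binary searches find the first index holding a value >= target and the first index holding a value > target, and the count is their difference.

// Count occurrences of target in a sorted array with two binary searches.
function countOccurrences(arr, target) {
  const lowerBound = (pred) => {
    let lo = 0, hi = arr.length;   // invariant: first index satisfying pred lies in [lo, hi]
    while (lo < hi) {
      const mid = (lo + hi) >> 1;
      if (pred(arr[mid])) hi = mid; else lo = mid + 1;
    }
    return lo;
  };
  const first = lowerBound((x) => x >= target);     // first index with arr[i] >= target
  const afterLast = lowerBound((x) => x > target);  // first index with arr[i] > target
  return afterLast - first;
}

// countOccurrences([1, 2, 3, 3, 4, 5], 3) === 2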

Question: Is there a faster algorithm with time complexity less than $O(\lg n)$? Otherwise, is there a proof that $O(\lg n)$ is the theoretical lower bound?

How to uniformly sample a sorted simplex

I am looking for an algorithm to uniformly generate a descending array of N random numbers, such that the sum of the N numbers is 1 and all numbers lie between 0 and 1. For example, for N=3, the random point (x, y, z) should satisfy:

x + y + z = 1
0 <= x <= 1
0 <= y <= 1
0 <= z <= 1
x >= y >= z

My guess is that all I have to do is uniformly sample a simplex (Uniform sampling from a simplex) and then sort the elements, but I’m not sure whether the resulting sampling algorithm is uniform.

Also, rejection sampling is not ideal for me, because I’ll use this in high dimensions.
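As a sketch of the idea in the question (assuming the standard exponential-spacings method for uniform simplex sampling): draw N exponential variates, normalize them to sum to 1, then sort descending. Whether the sorted result is still uniform on the ordered simplex is exactly what is being asked.

// Uniform point on the simplex via normalized Exponential(1) variates,
// then sorted into descending order.
function sampleSortedSimplex(n) {
  const e = Array.from({ length: n }, () => -Math.log(1 - Math.random()));
  const sum = e.reduce((a, b) => a + b, 0);
  return e.map((x) => x / sum).sort((a, b) => b - a);
}

// Example: sampleSortedSimplex(3) might return [0.62, 0.27, 0.11]; the entries sum to 1.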

Thanks!

Time Complexity of a Naive Solution to Merge K Sorted Arrays

There is a leetcode question about merging k sorted arrays. I would like to be able to explain the time complexity of the following naive solution:

function mergexsSortedArrays(xs) {
  if (xs === null || xs.length === 0) {
    return [];
  }

  let l1 = xs[0];

  for (let i = 1; i < xs.length; i++) {
    let l2 = xs[i];
    l1 = merge(l1, l2);
  }

  return l1;
}

/* This is simply for completeness; the relevant code is above */
function merge(l1, l2) {
  const ans = [];

  let l1HeadIdx = 0;
  let l2HeadIdx = 0;

  while (l1HeadIdx < l1.length && l2HeadIdx < l2.length) {
    if (l1[l1HeadIdx] < l2[l2HeadIdx]) {
      ans.push(l1[l1HeadIdx]);
      l1HeadIdx++;
    } else {
      ans.push(l2[l2HeadIdx]);
      l2HeadIdx++;
    }
  }

  while (l1HeadIdx < l1.length) {
    ans.push(l1[l1HeadIdx]);
    l1HeadIdx++;
  }

  while (l2HeadIdx < l2.length) {
    ans.push(l2[l2HeadIdx]);
    l2HeadIdx++;
  }

  return ans;
}

Let’s say that k is the number of arrays in the input (the length of xs). To simplify the math, we will assume that each sorted array has length n.

Within the for loop, we are running the merge algorithm. On the first iteration, l1 has length n and l2 has length n, so the merge does 2n work. On the second iteration, l1 has length 2n and l2 has length n, so the merge does 3n work. In general, on iteration i we merge a list of length i * n with a list of length n, doing (i + 1) * n work. Summing over the k - 1 iterations, the total work done in the for loop is 2n + 3n + 4n + ... + kn = n * (2 + 3 + ... + k). The inner sum has the closed form k * (k + 1) / 2 - 1, which is O(k^2), and multiplying by n gives a final time complexity of O(n * k^2).
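The same bookkeeping written as a single sum (restating the paragraph above, nothing new): $\sum_{i=1}^{k-1} (i+1)\,n = n\left(\frac{k(k+1)}{2} - 1\right) = O(nk^2)$.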

Is this correct? Or have I gone off the rails?

Linear algorithm to measure how sorted an array is

I’ve just attended an algorithms course, in which I saw many sorting algorithms that perform better or worse depending on how sorted the elements of an array already are. The typical examples are quicksort, which takes $O(n^2)$ time on an already sorted array, and mergesort, which operates in linear time on sorted arrays. Conversely, quicksort performs better when we are dealing with an array sorted from the highest to the lowest value.

My question is whether there is a way to measure in linear time how sorted an array is, and then decide which algorithm is better to use.
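One linear-time measure (a sketch, not a standard named algorithm) is to count how many adjacent pairs are out of order, which is the same as counting the ascending runs the array splits into: 0 means already sorted, n - 1 means strictly decreasing. The cut-off used to pick a sort below is an arbitrary illustration.

// Count adjacent out-of-order pairs in O(n).
function countDescents(arr) {
  let descents = 0;
  for (let i = 1; i < arr.length; i++) {
    if (arr[i] < arr[i - 1]) descents++;
  }
  return descents;
}

function chooseSort(arr) {
  // Hypothetical policy: nearly sorted input favors an adaptive sort
  // (insertion sort / natural mergesort); otherwise a general O(n log n) sort.
  return countDescents(arr) < arr.length / 8 ? "adaptive sort" : "general-purpose sort";
}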

Sorting almost sorted array

I encountered this question, but I couldn’t solve it within the complexity they solved it in:

Suppose I have an array in which the first and last $\sqrt{n}$ elements contain $\frac{n}{5}$ swapped pairs, and the middle $n - 2\sqrt{n}$ elements are sorted. What is the complexity of sorting this array?

They claim in the answer that sorting an array with $I$ swapped pairs is $O(n\log{\frac{n}{I}})$. Why?

How can merging two sorted arrays of N items require at least 2N - 1 comparisons in every case?

The HW question, on page 362 of Data Structures and Algorithms in C++: Fourth Edition by Mark Allen Weiss, reads as follows:

Prove that merging two sorted arrays of N items requires at least 2 * N - 1 comparisons. You must show that if two elements in the merged lists are consecutive and from different lists, then they must be compared.

If we are trying to find the lower bound, then wouldn’t the number of comparisons be N, not 2 * N - 1? If the largest element in some array A is smaller than the smallest element in some array B, then at most you would only have to do N comparisons: after all of A‘s elements are placed in the merged array, the remaining elements of B can simply be appended to the merged array. For example, merging A = [1, 2, 3] with B = [4, 5, 6] takes only 3 comparisons.

The lower bound has to be something that is true for every possible N and every possible combination of elements in both arrays, right? 2 * N - 1 seems more like an upper bound, since there is no way there can be more comparisons than that.

Note: I am not asking for an answer to the HW question itself, as I know that is discouraged. I am confused about the implied assertion the question makes about the lower bound.

Pagination on grouped / multi sorted lists

I’m currently working on a design for data-heavy lists. User research showed that users would group / multi-sort those lists. This would result in a list with small header rows in between for every sort parameter.

So far so good. I’m now having a hard time because those lists are paginated. I would now also need to show the group header rows across all pages, which might take the user out of context…

Any thoughts on how to solve the problem?

Cobo