Is there an algorithm to determine which face of an n-dimensional hypercube is closest to a given point in \$O(n\log(n))\$?

Given a point in N-dimensional space, I’d like to determine which face of an N-dimensional hypercube of edge length 1 the point is closest to.

In the 2-dimensional case it’s fairly trivial: you simply split the square along its diagonals:

```
if (x < y) then
    if (x + y < 0) then
        // Side 1
    else
        // Side 2
else
    if (x + y < 0) then
        // Side 3
    else
        // Side 4
```

In 3 dimensions this becomes more complex: each face creates a ‘volume’ of points that are closest to it, in the shape of a square-based pyramid.

Visualisation of the 6 planes that form the 6 pyramids

Of course, given a point, it’s possible to determine which side of each of the 6 planes it lands on, and from that information you can determine which face of the cube is closest. However, this involves running 6 separate checks.
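For the 3-D case, assuming the cube is centered at the origin, those six pyramidal regions are exactly the regions where one coordinate has the largest magnitude, so the six plane checks collapse into a single pass over the coordinates. A minimal sketch under that centering assumption:

```python
def closest_face(p):
    # Sketch: for a cube of edge 1 centered at the origin, the closest
    # face lies on the axis whose coordinate has the largest magnitude;
    # the sign of that coordinate picks which of the two opposite faces.
    axis = max(range(len(p)), key=lambda i: abs(p[i]))
    sign = 1 if p[axis] >= 0 else -1
    return axis, sign
```

For example, `closest_face((0.1, -0.9, 0.3))` returns `(1, -1)`: the face on the negative y side.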

Moving this into higher dimensions, a similar algorithm can be run on hypercubes; however, as the number of faces of an n-cube is $$2^{n-2}{n \choose 2}$$, this quickly becomes computationally very expensive.

However, theoretically a perfect algorithm could cut the search space in half with every check, discarding half the faces each time.

This would give this hypothetical algorithm a runtime of $$O(\log_2(2^{n-2}{n \choose 2}))$$ which, if my rate-of-growth calculations worked out, simplifies to $$O(n\log(n))$$.

Is my logic correct here? Can/does such an algorithm exist?

What PHB weapon would be closest to a Patta (Gauntlet Sword)?

I found an image of various historical weapons and one caught my eye: a Patta from India, dating from between the 15th and 18th centuries. It is between 10 and 44 inches in length and is a slashing weapon. Based on its construction it would obviously be a one-handed weapon and would not have the option of being versatile. I’m thinking it would be closest to a shortsword, but I’m not sure. What weapon from the PHB would this be closest to for determining damage die and weight? Would this also count as an exotic weapon for things like the gladiator background?

Closest non-Homebrew class to Potions Master / Salve-Maker?

I’ve been searching for a character build that focuses on potion crafting. To be specific, a character that can craft potions as an (in-combat) action, and whose potions have better effects; for instance, a Healing Potion wouldn’t have to be consumed: it could be thrown at an ally and take effect as usual (e.g. 2d4+2 HP, though maybe −1 HP from the bottle hitting the character).

But of course without Homebrew, that’s pretty much just a preference. I’ve seen the Salve Master here.

Do you think there’s something close to what I’m trying to build? (A merchant-like character with a knack for concoctions and potions.) It doesn’t have to be a separate class. I’m also open to build suggestions that stick to just the 5e PHB.

Approach for algorithm to find closest 3-D object in a list of many similar objects to a given test case

Let’s say I have a list of many (tens of thousands to millions of) objects, each with a given number of 3-D vertices (my current implementation uses 8 vertices each, but this number can be reduced if it yields a very significant increase in performance). These vertices are currently stored as floats in the range 0–255, but this range can also be changed if need be, assuming it does not reduce accuracy too drastically. I can also store these objects in any data structure that would benefit the algorithm.

I am given another such object, also with the same number (8) of 3-D vertices, but in general it must be assumed that none of its vertices coincide with any vertices of the stored objects.

With all of this in mind, I need an algorithm that returns the object from the list that is optimally close to the test object (close in the usual Euclidean-distance sense). By optimally close, I mean it does not have to be the global optimum if relaxing that greatly improves performance, although if there is a fast algorithm that always returns the global optimum I would love to hear it.
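One baseline, assuming each object is simply its 8 vertices in a fixed order: flatten every object into a 24-dimensional vector and take the minimum squared Euclidean distance. The function names here are illustrative, not from any particular library:

```python
def flatten(obj):
    # obj: sequence of 8 (x, y, z) vertices -> one 24-dimensional vector
    return [c for v in obj for c in v]

def closest_object(candidates, query):
    # Brute force: O(m) distance evaluations per query. For millions of
    # objects, the usual next step is an index (k-d tree, ball tree, or
    # locality-sensitive hashing) built over these flattened vectors,
    # trading exactness for speed.
    qf = flatten(query)
    def d2(obj):
        return sum((a - b) ** 2 for a, b in zip(flatten(obj), qf))
    return min(candidates, key=d2)
```

Note that this treats vertex order as fixed; if two objects should match under a permutation of their vertices, the distance has to be defined differently.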

A reversible algorithm to find the closest match between 2 arrays

So I’m looking for a bit of an abstract algorithm and I’d appreciate any references to read up on. This is a bit tough to explain but I’ll try my best.

Suppose we have 2 `int` arrays-

A -> `{1, 7, 10}`

B -> `{3, 11, 57}`

I’d like to sort the second array in such a way that, for each index of the two arrays, the corresponding element pairs have the least deviation in value. For the example arrays A and B, we’d need to sort B like so => `{3, 57, 11}`.

The catch is that it needs to be reversible. So if I was given the modified B and the original A, I’d need to figure out the original B. So if I was given `{3, 57, 11}` and `{1, 7, 10}` I’d need to get `{3, 11, 57}`.

I’m really questioning the possibility of such an algorithm, but if there are any algorithm experts here, please link me to resources I can study to get as close as I can, or better yet, provide an explanation in the comments.

Constraints

• Both arrays are of the same length
• The first array must remain unchanged; only the second should be sorted accordingly
• There may not be enough suitable elements in the second array to provide a perfect/good match for every index; in this case the fewest sacrifices is preferred. For instance, if we had A -> `{1, 54, 72}` and B -> `{2, 73, 100}`, the preferable sort of the second array would be `{2, 100, 73}`, not `{2, 73, 100}`. This way we completely sacrifice the last element in favor of a good match for the second element. This is preferable as long as the number of sacrifices is kept to a minimum.
• The `int` values will be in range `[0, 256)`. That is, 0 to 256, including 0 but not including 256
• Of course, once again, the algorithm must be reversible.

Also, my original use case requires arrays of tuples (e.g. `{(2, 3, 7), (128, 132, 92)}`), so I guess if I do find an algorithm I’ll need to do it twice, but that’s OK. As you may have already guessed, this is for pixel-data manipulation.
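One sketch that reproduces both examples above: greedily pair the globally closest (A, B) values first, so the leftovers absorb the “sacrifices”. For the inverse, only the multiset of B survives the rearrangement, so B can be recovered only up to a canonical (sorted) order, which is exactly what the `{3, 57, 11}` -> `{3, 11, 57}` example asks for. This is an illustrative sketch, not a guaranteed fit for every input:

```python
def match(a, b):
    # Greedily pair the globally closest (a[i], b[j]) first, then repeat
    # on the remainder; poorly matched elements are left for last.
    out = [None] * len(a)
    left_a = list(range(len(a)))
    left_b = list(b)
    while left_a:
        i, j = min(((i, j) for i in left_a for j in range(len(left_b))),
                   key=lambda ij: abs(a[ij[0]] - left_b[ij[1]]))
        out[i] = left_b.pop(j)
        left_a.remove(i)
    return out

def unmatch(a, b_matched):
    # Inverse: the multiset of B is preserved, so the original B is
    # recoverable only in a canonical (sorted) order.
    return sorted(b_matched)
```

With A = `[1, 7, 10]` this sorts B = `[3, 11, 57]` into `[3, 57, 11]`, and `unmatch` maps `[3, 57, 11]` back to `[3, 11, 57]`, matching the example; if the original B were not already sorted, its exact order would be lost.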

Find closest points in a polygon

I have a 2D polygon defined by a list of $$n$$ points: $$A$$, $$B$$, $$C$$… These points are sorted in clockwise order. Example:

I would like to find the most performant algorithm to detect all points which are close together (distance < $$x$$). In the above example, these are $$(H, I)$$ and $$(B, F)$$. I could compare each point with all other points and check their distance => complexity $$O(n^2)$$. Is it possible to do better, knowing that the points are already sorted in clockwise order?
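One common improvement, independent of the clockwise ordering, is a plane sweep: sort the points by x (O(n log n)) and, for each point, only scan forward while the x-gap stays below the threshold, since any pair further apart in x cannot be within the threshold distance. A sketch (coordinates and threshold are illustrative):

```python
def close_pairs(points, threshold):
    # points: list of (x, y). Returns index pairs closer than threshold.
    order = sorted(range(len(points)), key=lambda i: points[i][0])
    t2 = threshold * threshold
    out = []
    for a in range(len(order)):
        i = order[a]
        for b in range(a + 1, len(order)):
            j = order[b]
            # Once the x-gap reaches the threshold, no later point
            # (in x-sorted order) can be close enough: stop scanning.
            if points[j][0] - points[i][0] >= threshold:
                break
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            if dx * dx + dy * dy < t2:
                out.append(tuple(sorted((i, j))))
    return out
```

The worst case is still O(n²) when many points share nearly the same x, but it is typically close to O(n log n); for a fixed threshold, bucketing points into a uniform grid of cell size x gives expected O(n).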

Dijkstra’s algorithm chooses the closest unvisited node to visit next, what would happen if we didn’t do this?

Why is it necessary to always choose the closest vertex to the source to visit next?

Say instead of picking the closest vertex I pick the furthest. Can someone give an example of a graph where this would give the wrong answer?
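Here is a concrete counterexample, sketched as a label-setting loop where a node’s distance is frozen the moment it is visited (the assumption Dijkstra’s correctness rests on): with edges A->B (1), A->C (10), B->C (1), furthest-first freezes C at 10 before the shortcut through B is discovered.

```python
INF = float('inf')

def label_setting(graph, src, pick_closest=True):
    # graph: {node: [(neighbor, weight), ...]}. A node's distance is
    # fixed when it is visited and never updated afterwards.
    tentative = {v: INF for v in graph}
    tentative[src] = 0
    final = {}
    while True:
        reachable = [v for v in graph if v not in final and tentative[v] < INF]
        if not reachable:
            return final
        choose = min if pick_closest else max
        u = choose(reachable, key=lambda v: tentative[v])
        final[u] = tentative[u]
        for w, cost in graph[u]:
            if w not in final and final[u] + cost < tentative[w]:
                tentative[w] = final[u] + cost

g = {'A': [('B', 1), ('C', 10)], 'B': [('C', 1)], 'C': []}
```

Closest-first visits B (distance 1) before C and relaxes B->C, giving the correct distance 2 to C; furthest-first visits C first and locks in 10.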

Finding the closest point in a set in o(n) time

Suppose I have a set S of points in 3-D space. Given a point P, is it possible to construct a data structure A such that the closest point to P in A can be found in o(n) time? My assumption would be to use a k-d tree or an octree, which I think would take O(log n) per query. Any help would be greatly appreciated!
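For intuition, a minimal 3-D k-d tree sketch: build in O(n log n); nearest-neighbor queries take O(log n) on average, though the worst case can degrade toward O(n). The base of the logarithm (e.g. 8 for an octree) only changes the constant factor.

```python
def dist2(a, b):
    # squared Euclidean distance (no sqrt needed for comparisons)
    return sum((u - v) ** 2 for u, v in zip(a, b))

def build(points, depth=0):
    # Split on axes x, y, z cyclically; the median point becomes the node.
    if not points:
        return None
    axis = depth % 3
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build(points[:mid], depth + 1),
            build(points[mid + 1:], depth + 1),
            axis)

def nearest(node, q, best=None):
    if node is None:
        return best
    point, left, right, axis = node
    if best is None or dist2(point, q) < dist2(best, q):
        best = point
    diff = q[axis] - point[axis]
    near, far = (left, right) if diff < 0 else (right, left)
    best = nearest(near, q, best)
    if diff * diff < dist2(best, q):  # query ball crosses the split plane
        best = nearest(far, q, best)
    return best
```

In practice a library implementation (e.g. `scipy.spatial.KDTree` in Python) handles balancing and bulk queries for you.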

Linearithmic solution to finding closest pairs in an array of N elements

I am reading Algorithms 4ed by Sedgewick and Wayne. I came across this algorithm design question that asks the following:

Write a program that given an array of N integers, finds a closest pair: two values whose difference is no greater than the difference of any other pair (in absolute value). The running time of the program should be linearithmic in the worst case.

I wrote an implementation of this algorithm in JavaScript and ran a few tests. So far, it looks like the algorithm is correct and also linearithmic. But I am not very good at proving the correctness of algorithms or analyzing their (amortized) time complexity. It would be great if anyone can help me answer the following:

1. Is the algorithm (added later) correct?
2. Is the amortized time complexity linearithmic (i.e., N*lgN)?

The algorithm is given below:

```javascript
function binarySearch(key, start, arr) {
    let a = arr[start-1]; // start >= 1
    let b = arr[start];
    let lo = start+1;
    let hi = arr.length-1;
    let getMid = () => Math.floor((lo+hi)/2);
    let getDiff = (a, b) => Math.abs(a-b);

    let mid;
    let diff;
    let ldiff = getDiff(a, b);
    let hidx = start;
    while(lo < hi) {
        mid = getMid();
        diff = getDiff(a, arr[mid]);
        if(diff < ldiff) {
            ldiff = diff;
            hidx = mid;
            hi = mid-1;
        }
        else {
            lo = mid+1;
        }
    }
    return { highIndex: hidx, leastDiff: ldiff };
}

function closestPair(arr) {
    // returns the nearest, closest pair
    let ldiff = null;
    let hidx;
    let lidx;
    for(let i=0; i<arr.length-1; ++i) {
        let { highIndex, leastDiff } = binarySearch(ldiff, i+1, arr);
        console.log(`hi=${highIndex} low=${i} ld=${leastDiff}`);
        if(ldiff === null || leastDiff < ldiff) {
            ldiff = leastDiff;
            hidx = highIndex;
            lidx = i;
        }
    }
    if(ldiff !== null && lidx !== null && hidx !== null) {
        return { leastDiff: ldiff, lowIndex: lidx, highIndex: hidx };
    }
    else null;
}
```

To test the algorithm, I used the following test setup:

```javascript
if(!module.parent) {
    let arrs = [
        [1, 7, 13, 5, 19, 27, 20, 39, 40], // 2; 19-20/39-40
        [3, 2, 10, 6, 9, 5],
        [-2, 9, 5, 25, 13, -10, -25]
    ];
    for(let arr of arrs) {
        let s = arr.sort(ascComparator);
        console.log(`sorted arr: ${s.toString()}`);
        let cp = closestPair(s);
        console.log(cp);
    }
}
```

The console output from running the tests was as follows:

```
sorted arr: 1,5,7,13,19,20,27,39,40
hi=1 low=0 ld=4
hi=2 low=1 ld=2
hi=3 low=2 ld=6
hi=4 low=3 ld=6
hi=5 low=4 ld=1
hi=6 low=5 ld=7
hi=7 low=6 ld=12
hi=8 low=7 ld=1
{ leastDiff: 1, lowIndex: 4, highIndex: 5 }
sorted arr: 2,3,5,6,9,10
hi=1 low=0 ld=1
hi=2 low=1 ld=2
hi=3 low=2 ld=1
hi=4 low=3 ld=3
hi=5 low=4 ld=1
{ leastDiff: 1, lowIndex: 0, highIndex: 1 }
sorted arr: -25,-10,-2,5,9,13,25
hi=1 low=0 ld=15
hi=2 low=1 ld=8
hi=3 low=2 ld=7
hi=4 low=3 ld=4
hi=5 low=4 ld=4
hi=6 low=5 ld=12
{ leastDiff: 4, lowIndex: 3, highIndex: 4 }
```
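For comparison, the textbook linearithmic approach needs no binary search at all: after sorting, the pair with the smallest absolute difference must be adjacent, so a single O(N) scan of the sorted array suffices. A sketch (in Python rather than the JavaScript above):

```python
def closest_pair(arr):
    # Sort: O(N log N). In sorted order the minimum absolute difference
    # is always between neighbors, so one linear scan finds it.
    s = sorted(arr)
    i = min(range(len(s) - 1), key=lambda k: s[k + 1] - s[k])
    return s[i], s[i + 1]
```

On the first test array this returns (19, 20), agreeing with the output above.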

K Closest Points to Origin C# solution deemed too slow by Leetcode.com

LeetCode challenge

I know I could simply use the built-in .Sort or .OrderBy to solve this question in one line. However, for interviews I need to show the full workings of my solution.

I am using a min-heap to calculate and store the distances from the origin. Then I simply return the K number of distances.

My code fails with a “Time limit exceeded” error, indicating it is too slow. However, it does return the correct answer.

Is there a more efficient way of using a min-heap? Or should my code be using a completely different algorithm and data structure?

```csharp
public class Solution {
    public int[][] KClosest(int[][] points, int K)
    {
        var lists = new int[K][];

        Heap head = new Heap();

        for (int i = 0; i < points.Length; i++)
        {
            int a = points[i][0];
            int b = points[i][1];
            var testMin = a * a + b * b;
            var newNode = new Node
            {
                val = testMin,
                index = points[i]
            };
            head.Add(newNode);
        }

        for (int i = 0; i < K; i++)
        {
            lists[i] = head.Pop().index;
        }

        return lists;
    }
}

class Heap {
    public Node head;
    public Node tail;

    public void Add(Node newNode)
    {
        if (head == null)
        {
            head = newNode;
            tail = newNode;
        }
        else
        {
            newNode.parent = tail;
            tail.next = newNode;
            tail = tail.next;
        }
        HeapifyUp();
    }

    void HeapifyUp()
    {
        Node curr = tail;
        while (curr.parent != null)
        {
            if (curr.val < curr.parent.val)
            {
                var tVal = curr.val;
                var tIndex = curr.index;

                curr.val = curr.parent.val;
                curr.index = curr.parent.index;

                curr.parent.val = tVal;
                curr.parent.index = tIndex;
            }
            curr = curr.parent;
        }
    }

    public Node Pop()
    {
        var tempHead = new Node
        {
            index = head.index,
            val = head.val
        };

        // Set head value to tail value
        head.index = tail.index;
        head.val = tail.val;

        // Dereference tail
        if (tail.parent != null)
        {
            tail.parent.next = null;
            tail = tail.parent;
        }
        else
        {
            tail = null;
            head = null;
        }

        HeapifyDown();

        return tempHead;
    }

    void HeapifyDown()
    {
        Node curr = head;
        while (curr != null && curr.next != null)
        {
            if (curr.val > curr.next.val)
            {
                var tVal = curr.val;
                var tIndex = curr.index;

                curr.val = curr.next.val;
                curr.index = curr.next.index;

                curr.next.val = tVal;
                curr.next.index = tIndex;
            }
            curr = curr.next;
        }
    }
}

class Node {
    public int[] index;
    public int val;
    public Node next;
    public Node parent;
}
```
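One observation: the `Heap` above is a doubly linked list, and `HeapifyUp`/`HeapifyDown` walk the whole chain, so each insert costs O(n) and the total is O(n²), which is the likely cause of the time limit. As a point of comparison for the intended complexity, here is a Python sketch of the same task using a real binary heap:

```python
import heapq

def k_closest(points, k):
    # Squared distance avoids sqrt; heapq.nsmallest keeps a heap of
    # size k, giving O(n log k) overall.
    return heapq.nsmallest(k, points, key=lambda p: p[0] * p[0] + p[1] * p[1])
```

In C# the analogous structure is an array-backed binary heap (or, on recent .NET, `PriorityQueue<TElement, TPriority>`), where sift-up/sift-down move along parent/child indices in O(log n) rather than scanning every node.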