Finding an algorithm to check pizza topping positions (2D)

Using Unity 3D, I am creating a 2D pizza game, and I am stuck finding an algorithm for the following problem: detecting whether the pizza is half-and-half, taking into consideration that the user can place the toppings in all the rotations shown in the picture, plus many more possible distributions.

I used Physics2D.OverlapAreaAll to get the positions of the ingredients on the pizza. I then tried computing the sumX and sumY of the coordinates of all toppings of type A, and the sumX and sumY of all toppings of type B, then adding A.sumX + B.sumX and A.sumY + B.sumY: if the two totals are between 0 and 1, then A and B are on opposite sides. However, the bad distribution of toppings in the second picture is also accepted by my algorithm. The toppings must be spread out like in the first picture.

I need some easier way to detect the correct distribution of ingredients, maybe using collisions or something similar.

if (sumX > -ErrLvl && sumX < ErrLvl && sumY > -ErrLvl && sumY < ErrLvl)
{
    Debug.Log("APPROVED HALF-HALF PIZZA");
}
else
    Debug.Log("BAD HALF-HALF PIZZA");

Correct distribution

Bad distribution
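One idea (not the only one): instead of comparing coordinate sums, test for an actual separating line through the pizza center, and additionally require each topping type to be spread out along that line; the spread requirement is what rejects the clustered "bad distribution" case. A Python sketch of this idea (the same vector math ports to Unity's `Vector2`; `min_spread` is an assumed tuning threshold, e.g. roughly the pizza radius):

```python
import math

def halves_ok(center, tops_a, tops_b, min_spread):
    """Hypothetical half-and-half check:
    1) the line through the center perpendicular to the axis joining the
       two topping centroids must separate all A toppings from all B ones;
    2) each topping set must be spread out along that dividing line
       (this rejects the 'everything clustered in one spot' case)."""
    cx, cy = center
    axc = (sum(x for x, _ in tops_a) / len(tops_a) - cx,
           sum(y for _, y in tops_a) / len(tops_a) - cy)
    bxc = (sum(x for x, _ in tops_b) / len(tops_b) - cx,
           sum(y for _, y in tops_b) / len(tops_b) - cy)
    dx, dy = axc[0] - bxc[0], axc[1] - bxc[1]  # axis from B's centroid toward A's
    norm = math.hypot(dx, dy)
    if norm == 0:
        return False
    side = lambda p: (p[0] - cx) * dx + (p[1] - cy) * dy
    # every A topping strictly on the A side, every B topping on the B side
    if any(side(p) <= 0 for p in tops_a) or any(side(p) >= 0 for p in tops_b):
        return False
    ux, uy = -dy / norm, dx / norm  # direction of the dividing line itself
    def spread(points):
        t = [(x - cx) * ux + (y - cy) * uy for x, y in points]
        return max(t) - min(t)
    return spread(tops_a) >= min_spread and spread(tops_b) >= min_spread
```

The separating-line test alone would still accept two tight clusters on opposite sides; the `spread` check is the part that forces each half to actually be covered.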

Why the decision-tree method for a lower bound on finding the minimum doesn’t work

(Motivated by this question. Also I suspect that my question is a bit too broad)

We know the $ \Omega(n \log n)$ lower bound for sorting: we can build a decision tree where each inner node is a comparison and each leaf is a permutation. Since there are $ n!$ leaves, the minimum tree height is $ \Omega(\log (n!)) = \Omega (n \log n)$ .

However, this doesn’t work for the following problem: find the minimum of an array. For this problem, the results (the leaves) are just the indices of the minimum element. There are $ n$ of them, so the reasoning above gives only an $ \Omega(\log n)$ lower bound, which is obviously an understatement.

My question: why does this method work for sorting but not for finding the minimum? Is there some deeper intuition, or does it "just happen" that we were "lucky" that sorting has so many possible answers?

I guess the lower bound from the decision tree makes perfect sense: we really can ask yes/no questions so that only $ O(\log n)$ answers are needed; namely, we can binary-search for the desired index. So the bound is tight if arbitrary yes/no questions are allowed, but my question still remains.

Finding the combinatorial solutions of series and parallel nodes

I have n nodes, and I want to find the number of distinct ways in which these nodes can be combined in series and parallel, and also to enumerate all the solutions. For example, for n=3 there are 19 possible combinations.

 0 (0, 1, 2)
 1 (0, 2, 1)
 2 (1, 2, 0)
 3 (1, 0, 2)
 4 (2, 0, 1)
 5 (2, 1, 0)
 6 [0, 1, 2]
 7 [0, (1, 2)]
 8 [0, (2, 1)]
 9 (0, [1, 2])
10 ([1, 2], 0)
11 [1, (0, 2)]
12 [1, (2, 0)]
13 (1, [0, 2])
14 ([0, 2], 1)
15 [2, (0, 1)]
16 [2, (1, 0)]
17 (2, [0, 1])
18 ([0, 1], 2)

In the notation above, a series combination is denoted by (..) and a parallel combination by [..]. Duplicates are removed; for example, [0, 1, 2] is the same as [1, 2, 0], since everything happens in parallel and so the order does not matter there.

Can you give me an algorithm for this, or if any such algorithm already exists, then point me to it?

(I tried googling for a solution, but did not hit any relevant answer, maybe I was entering the wrong keywords.)

Note: for a series-only solution the answer is easy: it is n!, and enumerating the solutions is also easy. But when parallelism (and especially duplicate removal) is added to the problem, it gets very complex.
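One way to enumerate without ever producing duplicates: build the expressions recursively with the invariant that a series node never has a series child, and a parallel node never has a parallel child (nested same-type combinations are flattened away). A Python sketch of this idea, using `('S', …)` for series and `('P', …)` for parallel (my own illustration, not a reference implementation):

```python
from itertools import combinations, product

def set_partitions(items):
    """All unordered partitions of `items` into non-empty blocks."""
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def ordered_partitions(items):
    """All ordered partitions of `items` into non-empty blocks."""
    if not items:
        yield []
        return
    for r in range(1, len(items) + 1):
        for block in combinations(items, r):
            rest = [x for x in items if x not in block]
            for tail in ordered_partitions(rest):
                yield [list(block)] + tail

def arrangements(nodes, parent=None):
    """All series/parallel arrangements of the labeled nodes.
    A series node never gets a series child (and likewise for parallel),
    so each flattened expression is generated exactly once."""
    if len(nodes) == 1:
        yield nodes[0]
        return
    if parent != 'S':  # series: order of blocks matters
        for blocks in ordered_partitions(nodes):
            if len(blocks) >= 2:
                for kids in product(*(list(arrangements(b, 'S')) for b in blocks)):
                    yield ('S', kids)
    if parent != 'P':  # parallel: blocks are an unordered partition
        for blocks in set_partitions(nodes):
            if len(blocks) >= 2:
                for kids in product(*(list(arrangements(b, 'P')) for b in blocks)):
                    yield ('P', kids)
```

For `[0, 1, 2]` this produces exactly the 19 arrangements listed above (6 pure series orderings, 1 all-parallel, 6 of the form [x, (y, z)], and 6 of the form (x, [y, z]) / ([y, z], x)), and 3 arrangements for two nodes.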

Finding all partitions of a grid into k connected components

I am working on floor planning on small orthogonal grids. I want to partition a given $ m \times n$ grid into $ k$ (where $ k \leq nm$ , but usually $ k \ll nm$ ) connected components in all possible ways, so that I can compute a fitness value for each solution and pick the best one. So far, I do the fitness evaluation at the end of the algorithm, with no branch-and-bound or other kind of early termination, since the fitness computation requires a complete solution.

My current approach to listing all possible grid partitions into connected components is quite straightforward, and I am wondering what optimizations could be added to avoid listing duplicate partitions. There must be a better way than what I have right now. I know the problem is NP-hard, but I would at least like to push my algorithm from brute force to a smarter, more efficient approach.


For better visualization and description, I will reformulate the task as an equivalent one: paint the grid cells using $ k$ colors so that each color forms a single connected component (with respect to the 4-neighborhood) and, of course, the whole grid is completely painted.

My approach so far:

  1. Generate all seed scenarios. A seed scenario is a partial solution in which each color is applied to exactly one cell; the remaining cells are still empty.
  2. Collect all possible solutions for each seed scenario by expanding the color regions in a DFS manner.
  3. Filter out duplicate solutions with the help of a hash table.

Seed scenarios

I generate the seed scenarios as permutations of $ k$ unique colors and $ mn-k$ void elements (without repetition among the voids). Hence, the total number is $ (nm)!/(mn-k)!$ . For example, for a $ 1 \times 4$ grid and colors $ \{0, 1\}$ , with void denoted as $ \square$ , the seed scenarios are:

  • $ [0 1 \square \square]$
  • $ [0 \square 1 \square]$
  • $ [0 \square \square 1]$
  • $ [1 0 \square \square]$
  • $ [1 \square 0 \square]$
  • $ [1 \square \square 0]$
  • $ [\square 0 1 \square]$
  • $ [\square 0 \square 1]$
  • $ [\square 1 0 \square]$
  • $ [\square 1 \square 0]$
  • $ [\square \square 0 1]$
  • $ [\square \square 1 0]$
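For what it's worth, this seed-generation step can be written directly with `itertools.permutations`, where `None` plays the role of $ \square$ (a sketch, not necessarily how you would structure it in your code base):

```python
from itertools import permutations

def seed_scenarios(num_cells, k):
    """Each scenario places the k colors on k distinct cells; the remaining
    cells stay void (None). Count: num_cells! / (num_cells - k)!."""
    for cells in permutations(range(num_cells), k):
        grid = [None] * num_cells
        for color, cell in enumerate(cells):
            grid[cell] = color  # color i sits at the i-th chosen cell
        yield grid
```

For a 1×4 grid and two colors this yields exactly the 12 scenarios listed above.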

Seed growth / multicolor flood-fill

I assume the painting is performed in a fixed order of the colors. A seed scenario always starts with the first color set as the current one. New solutions are then generated either by switching to the next color or by painting empty cells with the current color.

//PSEUDOCODE
buffer.push(seed_scenario with current_color := 0);
while (buffer not empty)
{
    partial_solution := buffer.pop();
    if (partial_solution.available_cells.count == 0)
        result.add(partial_solution);
    else
    {
        buffer.push(partial_solution.nextColor()); // copy solution and increment color
        buffer.pushAll(partial_solution.expand()); // kind of flood fill, produces new solutions
    }
}

partial_solution.expand() generates a number of new partial solutions. All of them have one additional cell colored by the current color. It examines the current region boundary and tries to paint each neighboring cell by the current color, if the cell is still void.

partial_solution.nextColor() duplicates the current partial solution but increments the current painting color.

This simple seed growth enumerates all possible solutions for the seed setup. However, different seed scenarios can produce identical solutions, and indeed many duplicates are produced. So far I do not know how to avoid that, so I had to add the third step, which filters duplicates so that the result contains only distinct solutions.


I assume there should be a way to get rid of the duplicates, since that is where the efficiency suffers most. Is it possible to merge the seed generation with the painting stage? I started to think about some sort of dynamic programming, but I have no clear idea yet. In 1D it would be much easier, but the 4-connectivity of a 2D grid makes the problem much harder. I tried searching for solutions or publications but didn’t find anything useful; maybe I am using the wrong keywords. Any suggestions about my approach or pointers to literature are very much appreciated!
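As a correctness baseline (and to show that duplicates can be avoided entirely, albeit naively), one can enumerate colorings directly instead of growing seeds: each labeled coloring is visited exactly once, so no hash-table filtering is needed. A Python sketch, exponential in $ mn$ and only practical for tiny grids, but useful to validate a smarter enumeration against:

```python
from itertools import product

def neighbors(cell, m, n):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < m and 0 <= c + dc < n:
            yield (r + dr, c + dc)

def is_connected(cells, m, n):
    """DFS check that `cells` form one 4-connected component."""
    cells = set(cells)
    stack, seen = [next(iter(cells))], set()
    while stack:
        cur = stack.pop()
        if cur in seen:
            continue
        seen.add(cur)
        stack.extend(v for v in neighbors(cur, m, n) if v in cells)
    return seen == cells

def all_partitions(m, n, k):
    """Every k-coloring is generated exactly once, then filtered for
    connectivity -- duplicate-free by construction."""
    cells = [(r, c) for r in range(m) for c in range(n)]
    for colors in product(range(k), repeat=m * n):
        groups = {}
        for cell, col in zip(cells, colors):
            groups.setdefault(col, []).append(cell)
        if len(groups) == k and all(is_connected(g, m, n) for g in groups.values()):
            yield dict(zip(cells, colors))
```

For a $ 2 \times 2$ grid and $ k = 2$ this yields 12 labeled partitions (4 single-cell/L-tromino splits with 2 labelings each, plus 4 domino splits), which a seed-growth enumeration with dedup should reproduce exactly.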


I found Grid Puzzle Split Algorithm, but not sure if the answers can be adapted to my problem.

Finding the middle point of the “most populated” area in a set of points?

I’m working on a game-related application, and I’m trying to find the middle point of the most populated area in my map.


Positions (format [x, y]):

[48, 49]
[51, 50]
[49, 50]
[51, 49]
[49, 48]
[130, 150]
[129, 148]

Expected output: [50, 50], or something close enough like [49, 51] or [51, 50].

To create this algorithm, I have access to all entity positions (X/Y). I tried creating a position using the X average and Y average, but it is not what I am looking for: with the example values the output would have been [75, 75] or something like that, and not [50, 50] as expected.

Here is an example image:
Red dot: Entities
Green dot: Position i’m looking for
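One simple approach that matches the example: pick the point with the most neighbours within some radius, then return the centroid of just that neighbourhood, so the far-away outliers never contribute. A Python sketch (a crude mode-seeking step; `radius` is an assumed tuning knob, roughly the size of a "populated area"):

```python
import math

def densest_center(points, radius):
    """Return the centroid of the densest neighbourhood:
    1) for each point, count the points within `radius` of it;
    2) take the point with the highest count;
    3) average the positions of its neighbourhood."""
    def near(p):
        return [q for q in points if math.dist(p, q) <= radius]
    best = max(points, key=lambda p: len(near(p)))
    cluster = near(best)
    return (sum(x for x, _ in cluster) / len(cluster),
            sum(y for _, y in cluster) / len(cluster))
```

This is O(n²) per query; for many entities, a grid of buckets or a proper clustering method (e.g. mean shift or DBSCAN-style density clustering) would scale better, but the idea is the same.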

Thanks for reading and for your help!

Finding Smallest Frontier for Graphs of bounded “width”

Let $ G$ be a graph and $ X=x_1,x_2,…,x_n$ a permutation/ordering of the vertex set of $ G$ . We let $ S_i = \{x_j:j\le i\}$ , and let $ F_i$ be the number of vertices $ v\in S_i$ that are adjacent to some vertex $ u(v) \not\in S_i$ . We finally define $ F$ to be the list of values $ F_i$ sorted from largest to smallest; e.g., if $ F_1=2,F_2=1,F_3=6, F_4=2$ we would have $ F = 6,2,2,1$ (we caution that in reality $ F_{i+1}-F_i\le 1$ , so the sequence featured in the example could not occur).
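To make the definition concrete, here is a direct (quadratic) computation of $ F$ for a given ordering, as a Python sketch with the graph given as an adjacency dict:

```python
def frontier_profile(adj, order):
    """F_i = number of vertices among the first i of `order` that still
    have a neighbour outside the prefix; returns all F_i sorted
    from largest to smallest."""
    F = []
    for i in range(1, len(order) + 1):
        prefix = set(order[:i])
        F.append(sum(1 for v in prefix
                     if any(u not in prefix for u in adj[v])))
    return sorted(F, reverse=True)
```

For the path 1–2–3–4 taken in order, each prefix has exactly one frontier vertex until the end, giving $ F = 1,1,1,0$.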

In general, finding $ X$ such that $ F$ is lexicographically minimal is a task which I’d assume is NP-Hard.

However, let $ \mathcal{G}_{k,t}$ denote the family of graphs $ G$ whose vertex set can be partitioned into $ t$ parts $ V_1,\dots,V_t$ such that $ |V_i| \le k$ for all $ i$ , and such that $ |a-b|\ge 2$ implies there is no edge $ (u,v)$ in $ G$ with $ u\in V_a$ and $ v\in V_b$ .

For fixed $ k$ , and given $ G\in \mathcal{G}_{k,t}$ , is there an algorithm that finds $ X$ such that $ F$ is lexicographically minimal, whose worst-case running time is polynomial in $ t$ ?

Finding the twiddle factors for FFT algorithm

I am trying to calculate the twiddle factors for the FFT and iFFT algorithms, but I am unsure whether I have calculated them correctly. I was hoping someone could tell me where I have gone wrong, as I currently get the wrong output from my FFT and I believe the twiddle factors might be the reason.

This is my code (in C#) to calculate them:

For _N = 4 and _passes = log(_N)/log(2) = 2

// twiddle factor buffer creation
_twiddlesR = new Vector2[_N * _passes]; // inverse FFT twiddles
_twiddlesF = new Vector2[_N * _passes]; // forward FFT twiddles

for (int stage = 0; stage < _passes; stage++)
{
    int span = (int)Math.Pow(2, stage); // 2^n

    for (int k = 0; k < _N; k++) // for each index in series
    {
        int arrIndex = stage * _N + k; // get index for 1D array

        // not 100% sure if this is correct for theta ???
        float a = pi2 * k / Math.Pow(2, stage + 1);

        // inverse FFT has exp(i * 2 * pi * k / N)
        Vector2 twiddle = new Vector2(Math.Cos(a), Math.Sin(a));

        // forward FFT has exp(-i * 2 * pi * k / N), which is the conjugate
        Vector2 twiddleConj = twiddle.ComplexConjugate();

        /* this ternary checks if the k index is top wing or bottom wing;
           the bottom wing requires -T, the top wing requires +T */
        float coefficient = k % Math.Pow(2, stage + 1) < span ? 1 : -1;

        _twiddlesR[arrIndex] = coefficient * twiddle;
        _twiddlesF[arrIndex] = coefficient * twiddleConj;
    }
}

My debug data:

For inverse FFT twiddles:

First pass:
1 + 0i, 1 + 0i, 1 + 0i, 1 + 0i

Second pass:
1 + 0i, 0 + i, 1 + 0i, 0 + i

For forward FFT twiddles:

First pass:
1 + 0i, 1 + 0i, 1 + 0i, 1 + 0i

Second pass:
1 + 0i, 0 - i, 1 + 0i, 0 - i

I am not convinced I have it right, but I am unsure what I have got wrong. I am hoping someone with a better understanding of this algorithm can spot my math error.
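For cross-checking, here is what I believe the usual radix-2 DIT twiddle indexing looks like (an assumption to compare your buffers against, not a claim about where the bug is): the butterfly at index `k` of a stage uses `W_M^(k mod span)` with `M = 2^(stage+1)`, `span = 2^stage`, and the forward transform uses `exp(-i*2*pi*j/M)`:

```python
import cmath

def stage_twiddles(N, stage, inverse=False):
    """Reference per-stage twiddles for a radix-2 DIT FFT of size N.
    Forward: exp(-2*pi*i*j/M); inverse: the conjugate (positive exponent),
    with j = k mod span so the index wraps within each butterfly group."""
    sign = 1 if inverse else -1
    span = 2 ** stage
    M = 2 * span
    return [cmath.exp(sign * 2j * cmath.pi * (k % span) / M) for k in range(N)]
```

For N = 4 this gives first-stage twiddles of all 1, and second-stage forward twiddles 1, -i, 1, -i; comparing these against the `_twiddlesF`/`_twiddlesR` buffers (remembering the separate ±1 wing coefficient in the code above) should show whether the theta term is off.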

Finding an MST with one adding and removing vertex operation

I am facing the following problem: given an undirected complete Euclidean weighted graph $ G(V, E)$ and its MST $ T$ , I need to remove an arbitrary vertex $ v_i \in V(G)$ and, given a vertex $ v_j \notin V(G)$ , calculate the MST of $ G^{‘}((V(G) \backslash \{v_i\})\cup\{v_j\}, (E\backslash\{(v_i, v_k): v_k \in V(G)\})\cup\{(v_k, v_j): v_k \in V(G^{‘})\})$ , i.e., the graph $ G$ with the vertex $ v_j$ (and its respective edges) and without the vertex $ v_i$ (and its respective edges). To solve this, we could apply a well-known MST algorithm such as Prim’s, Kruskal’s, or Borůvka’s. Nevertheless, doing so would not use the already existing MST $ T$ ; we would compute a whole new MST from scratch. So I would like to know whether there is any way to reuse the existing MST $ T$ .
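For the insertion half of the problem, the cycle property guarantees that the new MST uses only the old MST edges plus the edges incident to the new vertex, so one Kruskal pass over $ O(n)$ candidate edges suffices; deletion is harder, since replacement edges must be sought across the cuts between the components of $ T - v_i$ . A Python sketch of the insertion step only (my own illustration):

```python
import math

def kruskal(labels, edges):
    """Kruskal with union-find (path halving); edges are (weight, u, v)."""
    parent = {v: v for v in labels}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

def insert_vertex(points, mst_edges, new_id, new_pt):
    """MST after adding a vertex: by the cycle property, only the old MST
    edges and the edges incident to the new vertex are candidates, so this
    runs Kruskal on just 2n-1 edges instead of the full complete graph."""
    cand = list(mst_edges)
    for pid, p in points.items():
        cand.append((math.dist(p, new_pt), pid, new_id))
    return kruskal(list(points) + [new_id], cand)
```

Sorting the ~2n candidate edges makes the update O(n log n), versus rebuilding the MST of the complete graph from its O(n²) edges.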

There are two similar questions to this: here (with edges, considering only their removal), and here (with vertices, considering only their addition).

Finding $l$ subsets such that their intersection has at most $k$ elements: NP-complete or in P?

I have a set $ M$ , subsets $ L_1,…,L_m \subseteq M$ , and natural numbers $ k,l\leq m$ .

The problem is:

Are there $ l$ distinct indices $ 1\leq i_1,…,i_l\leq m$ such that

$ \hspace{5cm}\left|\bigcap_{j=1}^{l} L_{i_{j}}\right| \leq k$
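For concreteness, the brute-force decision procedure looks like this (exponential in $ l$ ; it also shows the problem is in NP, since the $ l$ indices serve as a witness):

```python
from itertools import combinations

def has_small_intersection(sets, l, k):
    """Try every choice of l distinct indices and test whether the
    intersection of the chosen sets has at most k elements.
    C(m, l) candidates, so exponential in l in the worst case."""
    return any(len(set.intersection(*(sets[i] for i in idx))) <= k
               for idx in combinations(range(len(sets)), l))
```

For example, with $ L_1=\{1,2,3\}, L_2=\{2,3,4\}, L_3=\{3,4,5\}$ , the pair $ L_1, L_3$ intersects in $ \{3\}$ , so the answer is yes for $ l=2, k=1$ but no for $ l=2, k=0$ .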

Now my question is whether this problem is $ NP$ -complete or not. What puzzles me are the two parameters $ l$ and $ k$ , because the NP-complete problems I looked at that are conceptually close to it (set cover, vertex cover) each have only one such parameter, which also appears in this problem.

I then tried to write a polynomial-time algorithm that looks at which of the sets $ L_1,…,L_m$ share more than $ k$ elements with other sets, but even if all sets shared more than $ k$ elements with the others, this would not mean that their common intersection has more than $ k$ elements…

This question comes close, but there the number of subsets to use is unrestricted and the size of the intersection must be exactly $ k$ ; still, maybe it is useful anyway.

Can somebody enlighten me further?

Fastest Algorithm for finding All Pairs Shortest Paths on Sparse Non-Negative Graph

As discussed here, Johnson’s algorithm can be used to solve the APSP problem in $ O(V^2\log V + VE)$ instead of the $ O(V^3)$ of Floyd–Warshall. However, Johnson’s algorithm does quite a bit of work to handle the case where some weights are negative. Is there a faster way to compute this, given that all weights are positive?
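If all weights are non-negative, Johnson's Bellman–Ford reweighting step can simply be skipped: run Dijkstra from every source. With a Fibonacci heap each run costs $ O(E + V \log V)$ , giving the same $ O(V^2 \log V + VE)$ total; the binary-heap version sketched below costs $ O((E+V)\log V)$ per source, which is usually fine for sparse graphs:

```python
import heapq

def all_pairs_dijkstra(adj):
    """APSP for non-negative weights: Dijkstra from every source.
    adj maps each vertex to a list of (neighbour, weight) pairs;
    unreachable vertices are simply absent from the result dicts."""
    def dijkstra(src):
        dist = {src: 0}
        pq = [(0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float('inf')):
                continue  # stale queue entry
            for v, w in adj[u]:
                nd = d + w
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        return dist
    return {u: dijkstra(u) for u in adj}
```

This is essentially Johnson's algorithm minus the reweighting, so nothing is lost by the simplification when the weights are already non-negative.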