Efficient Data Structure for Closest Euclidean Distance

The question is inspired by the following UVa problem: https://onlinejudge.org/index.php?option=onlinejudge&Itemid=99999999&category=18&page=show_problem&problem=1628.

A network of autonomous, battery-powered, data-acquisition stations has been installed to monitor the climate in the Amazon region. An order-dispatch station can initiate transmission of instructions to the control stations so that they change their current parameters. To avoid overloading the battery, each station (including the order-dispatch station) can only transmit to two other stations. The recipients of a station are the two closest stations. In case of a tie, the first criterion is to choose the westernmost (leftmost on the map), and the second criterion is to choose the southernmost (lowest on the map). You are commissioned by the Amazon State Government to write a program that decides whether, given the location of each station, messages can reach all stations.

The naive algorithm would, of course, build a graph with stations as vertices and compute the edges of a given vertex by searching through all other vertices for the two closest. Then we could simply run DFS/BFS. This takes $ O(V^2)$ time to construct the graph (which does pass the test cases). My question, though, is whether we can build the graph faster with an appropriate data structure. Specifically, given an arbitrary query point $ p$ and a set of points $ S$ , can we organize the points of $ S$ in such a way that we can quickly find the two points of $ S$ closest to $ p$ (say, in $ O(\log V)$ time)?
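For reference, a k-d tree answers exactly this kind of query in roughly $ O(\log V)$ time per point, giving about $ O(V \log V)$ for the whole graph construction. Below is a minimal sketch assuming SciPy's `KDTree`; note that `query` breaks distance ties arbitrarily, so the problem's westernmost/southernmost tie-breaking would still need a manual post-pass.

```python
# Sketch: build the "two closest stations" adjacency with a k-d tree
# instead of the O(V^2) all-pairs scan. Assumes SciPy is available.
from scipy.spatial import KDTree

def two_nearest_neighbors(points):
    """For each point, indices of its two closest *other* points."""
    tree = KDTree(points)
    # k=3 because the nearest hit is always the query point itself.
    _, idx = tree.query(points, k=3)
    return idx[:, 1:]  # drop the self-match in column 0

stations = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
print(two_nearest_neighbors(stations))
```

Running DFS/BFS from the dispatch station over the resulting directed edges then answers reachability as before.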

Compute Structure Tensor

As far as I know, the structure tensor is:

$ M = \sum_{(x,y) \in W} \begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix}$ , summed (or Gaussian-weighted) over a window $ W$ around each pixel.

How do I compute $ I_x^2, I_y^2, I_x I_y$ ? I used the Hadamard (elementwise) product to compute $ I_x^2, I_y^2, I_x I_y$ , but with the Hadamard product alone I get $ \det(M) = 0$ . I have been stuck here for too long. Can anyone explain this to me? Thank you.
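The elementwise products are the right start; the catch is that, at each single pixel, the matrix is the rank-1 outer product of the gradient with itself, so its determinant is identically zero: $ I_x^2 \cdot I_y^2 - (I_x I_y)^2 = 0$ . The determinant only becomes informative after the window summation/smoothing step. A minimal sketch, assuming SciPy:

```python
# Structure tensor with the windowing step that makes det(M) nonzero.
# Without the Gaussian smoothing, every per-pixel matrix is rank 1.
import numpy as np
from scipy.ndimage import sobel, gaussian_filter

def structure_tensor(img, sigma=1.5):
    img = np.asarray(img, dtype=float)
    Ix = sobel(img, axis=1)                 # horizontal derivative
    Iy = sobel(img, axis=0)                 # vertical derivative
    Ixx = gaussian_filter(Ix * Ix, sigma)   # windowed <I_x^2>
    Iyy = gaussian_filter(Iy * Iy, sigma)   # windowed <I_y^2>
    Ixy = gaussian_filter(Ix * Iy, sigma)   # windowed <I_x I_y>
    det = Ixx * Iyy - Ixy * Ixy             # nonzero where gradients vary
    trace = Ixx + Iyy
    return det, trace
```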

How to decide for a database structure for a financial accounting app (keeping in mind scaling)?

We are building a financial accounting application for users to manage one or more companies. The user can be an accountant with any number of companies under them, or a single company itself. We are trying to understand how the database for such an application should be designed.

Functionality:

  1. The ability of an accountant to see all open invoices across all the companies he is handling.

  2. The ability to archive datasets of companies when they leave us.

  3. The ability to fetch data from multiple companies under one accountant to generate reports.

Database structure:

There are three possible database structures but we need to know which one best suits us:

  1. Have a parent database that holds all accounts and company information. Every company gets its own database to handle and store all transactions.

  2. Have a single DB to hold all users and company profile data, and every individual company gets its own set of tables to store transactions.

  3. Have a single DB that holds all the transaction data of all companies in a single table called transactions.

We are trying to understand which DB architecture suits us best. I have MySQL/MariaDB in mind (solely because the data is all relational), but if you think other databases would be better, I would definitely like to know more about them.
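For illustration only, here is a minimal sketch of option 3: a single shared `transactions` table keyed by company. All table and column names are hypothetical, and it uses Python's built-in `sqlite3` purely so the snippet is self-contained; the same DDL carries over to MySQL/MariaDB.

```python
# Hypothetical single-database schema (option 3): one shared transactions
# table, with company_id as the tenant key and an index that keeps
# cross-company accountant reports cheap.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE companies (
    company_id    INTEGER PRIMARY KEY,
    accountant_id INTEGER NOT NULL,          -- owning accountant
    name          TEXT NOT NULL,
    archived      INTEGER NOT NULL DEFAULT 0 -- requirement 2: soft archive
);
CREATE TABLE transactions (
    txn_id     INTEGER PRIMARY KEY,
    company_id INTEGER NOT NULL REFERENCES companies(company_id),
    amount     NUMERIC NOT NULL,
    status     TEXT NOT NULL                 -- e.g. 'open', 'paid'
);
CREATE INDEX idx_txn_company ON transactions(company_id);
""")

# Requirement 1: all open invoices across every company of one accountant.
rows = conn.execute("""
    SELECT t.* FROM transactions t
    JOIN companies c ON c.company_id = t.company_id
    WHERE c.accountant_id = ? AND t.status = 'open'
""", (42,)).fetchall()
```

The trade-off is the usual one: options 1 and 2 isolate tenants (easy archiving, harder cross-company reporting), while option 3 makes requirements 1 and 3 trivial SQL but puts every tenant in one blast radius.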

Which is the best URL structure for Targeting Multiple city keywords (keyword also included in domain name)?

Suppose I have a keyword in my domain name (exampleseo.com, where SEO is the keyword) and I want to rank my website for the following keywords: “SEO Company in Delhi / SEO Agency in Delhi / SEO Service in Delhi / SEO Consultancy in Delhi”.

Here, Delhi changes to other city names like Mumbai, Kolkata, Jaipur, Noida, etc. I have multiple city options on my website.

Which is the best URL structure?

  1. exampleseo.com/location/delhi

  2. exampleseo.com/delhi

  3. exampleseo.com/seo-delhi

  4. exampleseo.com/delhi-seo

  5. exampleseo.com/seo-company-delhi

  6. exampleseo.com/seo-agency-delhi

  7. exampleseo.com/seo-service-delhi

  8. exampleseo.com/seo-consultancy-delhi

Constructing a data structure supporting prioritized key lookup

So, this is more or less a shot in the dark, as I am feeling stuck. Maybe some of you have an idea that helps.

Here is the problem description (pseudo formal):

I want to have a structure $ T = \{ \hat{x_1}, \hat{x_2}, … \}$ with $ \hat{x_i} = (p_i, k_i, v_{k_i})$ .

$ p_i \in \mathbb{N}$ can be interpreted as an associated priority. They can be considered unique.

$ k_i \in \mathcal{K}$ a key index; $ d := |\mathcal{K}|$ is not required to be negligibly small, though generally $ d \ll |T|$ .

$ v_{k_i} \in \mathcal{V}^{(k_i)}$ a partial key over some universe associated with the given key index. As a complication, this key may not be hashable; the only requirement is that it is totally ordered.

Considering an array $ y = [v_i], i \in \mathcal{K}$ , the structure should be capable of supporting the following operations efficiently:

$ lookup(T, y) \rightarrow \underset{\hat{x}_i \in T \,:\, y[k_i] = v_{k_i}}{\operatorname{arg\,max}}\; p_i$ , i.e. the node $ \hat{x}_j$ with a matching partial key and the highest associated priority.

$ succ(T, y) \rightarrow lookup(T \backslash \{\hat{x_j}\}, y)$ , i.e. the successor (in terms of priority) of a node $ \hat{x_j}$ matching $ y$

Ultimately I would also like efficient insertion and deletion (by priority).

(For insertion: the selection of $ k_i$ for each node is another point of research and can be chosen at the time of insertion. It is possible that this key index is subject to change if that helps the overall structure. It is also not required that all key indices in $ \mathcal{K}$ be supported for each node, i.e. one might need to insert a node which is only queryable by a single $ k_i$ .)

The key indices are causing me a bit of a headache. My research so far includes standard priority search trees and dynamic variations thereof. I also found a very interesting, though very theoretical, paper which takes care (kind of) of the increased dimensionality caused by the d-dimensional keys.

I know that I can construct d-dimensional binary search trees with a query complexity of $ O(d + n \log n)$ . These should definitely yield a gain, though I have not managed to map the priority problem onto them (Ref).

But I guess the complexity can be reduced even further if we consider the fact that each node only stores a partial key anyway.

My approach so far is somewhat naive: it simply creates a hash map for each key index in $ \mathcal{K}$ , queries each hash map upon lookup, then aggregates all the results and sorts them by priority. This works fine for hashable keys, but a fallback structure has to be used whenever they are not (I am using binary search trees). A sketch of this layered approach follows.
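To make the layered approach concrete, here is a minimal sketch, with the query array $ y$ modeled as a dict from key index to partial key. Hashable partial keys are assumed; for keys that are only totally ordered, the inner `dict`s would become balanced BSTs or `bisect`-managed sorted lists. `PriorityKeyLookup` and its method names are hypothetical:

```python
# One map per key index; each bucket keeps (priority, node) pairs sorted
# ascending, so the best candidate per bucket sits at the end. A lookup
# then probes at most d buckets. Priorities are unique, per the question,
# so tuple comparison in insort never reaches the node.
import bisect

class PriorityKeyLookup:
    def __init__(self):
        self.index = {}   # key index k -> {partial key v -> sorted bucket}

    def insert(self, priority, k, v, node):
        bucket = self.index.setdefault(k, {}).setdefault(v, [])
        bisect.insort(bucket, (priority, node))

    def lookup(self, y, skip=frozenset()):
        """Highest-priority node x_i with y[k_i] == v_{k_i}, else None."""
        best = None
        for k, v in y.items():
            for p, node in reversed(self.index.get(k, {}).get(v, [])):
                if p not in skip:
                    if best is None or p > best[0]:
                        best = (p, node)
                    break   # everything earlier in this bucket ranks lower
        return best

    def succ(self, y, best_priority):
        """Next-best match, i.e. lookup over T minus the previous winner."""
        return self.lookup(y, skip=frozenset({best_priority}))
```

With hashable keys this makes a lookup $ O(d)$ dictionary probes; beating that, and avoiding the BST fallback for unhashable keys, is exactly the open part of the question.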

Data structure for finding nearest rectangles from current rectangle

For my project I can have 0–500 squares. The diameters of the squares differ by at most a factor of 3. A square can move at most 3 square-widths per second. About one square is added and about one square is deleted every second.

Each square gets the positions of the 5 closest squares around it. What data structure should I be using for this? I am also going to be doing collision detection between the squares. One candidate is sketched below.
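Given the small counts (at most 500), the constant churn, and the bounded size ratio, a uniform grid (spatial hash) is a common fit: rebuilding it every frame is cheap for a few hundred entries, and both the 5-nearest query and broad-phase collision checks only touch nearby cells. A rough sketch, with the cell size as a tunable assumption (roughly the largest square's diameter):

```python
# Uniform-grid sketch: bucket square centers by cell, then answer
# approximate 5-nearest queries by widening the cell window. A strict
# k-NN would expand one extra ring after finding 5 candidates; this
# version keeps the sketch short.
import math
from collections import defaultdict

def build_grid(centers, cell):
    grid = defaultdict(list)
    for i, (x, y) in enumerate(centers):
        grid[(int(x // cell), int(y // cell))].append(i)
    return dict(grid)

def five_nearest(centers, grid, cell, i):
    cx, cy = (int(c // cell) for c in centers[i])
    found, r = [], 1
    while len(found) < 5 and r < 64:   # r < 64 guards near-empty scenes
        found = [j
                 for dx in range(-r, r + 1)
                 for dy in range(-r, r + 1)
                 for j in grid.get((cx + dx, cy + dy), ())
                 if j != i]
        r += 1
    found.sort(key=lambda j: math.dist(centers[i], centers[j]))
    return found[:5]
```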

How does the structure of a Luk-Vuillemin multiplier differ from a Wallace or Dadda multiplier?

I’ve read that Luk-Vuillemin multipliers are similar to Wallace multipliers but partition their inputs in a different way. How exactly does this partitioning work, how does it change the structure of the multiplier, and why does it result in a more regular structure that is preferred for fabrication?

Data structure to query intersection of a line and a set of line segments

We want to pre-process a set $ S$ of $ n$ line segments into a data structure such that we can answer the following query: given a query line $ l$ , report how many line segments in $ S$ it intersects.

It is required that the query be answered in $ O(\log{n})$ time. The data structure itself may take up to $ O(n^{2})$ storage and be built in $ O(n^{2}\log{n})$ time. It is suggested that this should be done in the dual plane.

I understand that the question may require me to count the number of double wedges that a query point lies in, but I cannot think of an efficient data structure that reports such information. Any suggestions?

This question is basically a homework question from the textbook Computational Geometry by de Berg et al. (Question 8.15). I apologize if this question is not exciting to you.
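For anyone puzzling over where the double wedges come from, here is the duality argument in code form, as a brute-force $ O(n)$ checker rather than the required $ O(\log n)$ structure. Under the standard transform, a point $ p = (p_x, p_y)$ dualizes to the line $ p^*: y = p_x x - p_y$ and a non-vertical query line $ l: y = mx + b$ dualizes to the point $ l^* = (m, -b)$ ; then $ l$ intersects segment $ pq$ exactly when $ l^*$ lies between $ p^*$ and $ q^*$ , i.e. inside their double wedge:

```python
# "p above l" in the primal is exactly "l* above p*" in the dual, so a
# line cuts segment pq iff its dual point separates the dual lines p*
# and q*. Boundary cases (line through an endpoint) are glossed over.
def dual_above(lstar, p):
    """Is l* = (m, -b) on or above the dual line p*: y = p[0]*x - p[1]?"""
    m_star, y_star = lstar
    return y_star >= p[0] * m_star - p[1]

def count_intersections(m, b, segments):
    """Segments intersected by y = m*x + b (brute force, for intuition)."""
    lstar = (m, -b)
    return sum(dual_above(lstar, p) != dual_above(lstar, q)
               for p, q in segments)

print(count_intersections(0.0, 0.5, [((0, 0), (1, 1)), ((0, 2), (1, 3))]))
# y = 0.5 crosses the first segment only -> 1
```

The intended $ O(\log n)$ solution presumably preprocesses the arrangement of the $ 2n$ dual lines (size $ O(n^2)$ ), stores for each face how many double wedges contain it, and answers a query by point location.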

Data structure for equivalence classes

Let $ E$ be an equivalence relation defined over a set $ S$ . Access to $ E$ is only via queries of the form $ M(s_1,s_2)$ , which returns $ 1$ if $ s_1$ and $ s_2$ are in the same class and $ 0$ otherwise. Computing $ M$ is expensive (say, $ O(n^2)$ ).

I am looking for an efficient data structure $ D$ that supports queries of the form “given $ s$ , does $ D$ contain an element $ s'$ in the same equivalence class as $ s$ ?”
A naive approach is to search element by element in $ D$ and test each with $ M$ , but is there a better solution?
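One easy improvement is to store only one representative per known class: a query then costs at most one oracle call per class rather than per stored element. Since $ M$ is the only access to $ E$ , it is hard to beat scanning the representatives unless some cheap class invariant (a canonical form, or a hash/order compatible with $ E$ ) is available to bucket them. The class and method names below are hypothetical:

```python
# Keep one witness per equivalence class; membership queries call the
# expensive oracle M once per known class in the worst case, instead of
# once per stored element.
class ClassDirectory:
    def __init__(self, M):
        self.M = M        # expensive oracle: M(a, b) == 1 iff same class
        self.reps = []    # exactly one representative per known class

    def find(self, s):
        """Return a stored element equivalent to s, or None."""
        for r in self.reps:
            if self.M(r, s):
                return r
        return None

    def add(self, s):
        if self.find(s) is None:
            self.reps.append(s)   # s opens a new class
```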