## Simple spatial grid for particle system

I am going to simulate a particle system. The particles are infinitesimal points that apply forces on their neighbors, so I need a fast way of making proximity checks. They will have a maximum check radius but no minimum, and they will be almost evenly spaced.

The simulation will be very similar to this: https://youtu.be/SFf3pcE08NM

I thought a spatial grid would be the easiest approach for it, and I need it to be as cost-efficient as possible, no matter how ugly it gets.

Is there any optimization I can make on this code?

Is there a way to compute the optimal cell size from the average distance between particles?

```python
_empty_set = set()


class SpatialHash:
    def __init__(self, cell_size=0.1):
        self.cells = {}
        self.bucket_map = {}
        self.cell_size = cell_size

    def key(self, co):
        return (int(co[0] / self.cell_size),
                int(co[1] / self.cell_size),
                int(co[2] / self.cell_size))

    def add_item(self, item):
        k = self.key(item.co)
        if k in self.cells:
            c = self.cells[k]
        else:
            c = set()
            self.cells[k] = c
        c.add(item)
        self.bucket_map[item] = c

    def remove_item(self, item):
        self.bucket_map[item].remove(item)

    def update_item(self, item):
        self.bucket_map[item].remove(item)
        self.add_item(item)

    def check_sphere(self, co, radius, exclude=()):
        r_sqr = radius * radius
        for x in range(int((co[0] - radius) / self.cell_size),
                       int((co[0] + radius) / self.cell_size) + 1):
            for y in range(int((co[1] - radius) / self.cell_size),
                           int((co[1] + radius) / self.cell_size) + 1):
                for z in range(int((co[2] - radius) / self.cell_size),
                               int((co[2] + radius) / self.cell_size) + 1):
                    for item in self.cells.get((x, y, z), _empty_set):
                        if item not in exclude and (item.co - co).length_squared <= r_sqr:
                            yield item
```
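On the cell-size question, one common heuristic (my own sketch, not from the code above): if the interaction radius is known, set the cell size equal to it, so a sphere query spans at most two or three cells per axis. Otherwise, for roughly evenly spaced particles, the mean spacing is about the cube root of volume per particle. The function name here is hypothetical:

```python
# Heuristic sketch (an assumption, not a definitive rule): for roughly evenly
# spaced particles, mean spacing is about (volume / count) ** (1/3).
# If an interaction radius exists, cell_size == radius keeps each sphere
# query down to a small, constant number of cells.

def estimate_cell_size(positions, interaction_radius=None):
    """Estimate a grid cell size from particle positions.

    positions: iterable of (x, y, z) tuples.
    Prefer the interaction radius when it is known.
    """
    if interaction_radius is not None:
        return interaction_radius
    xs, ys, zs = zip(*positions)
    # Bounding-box volume; `or 1.0` guards against a degenerate flat axis.
    volume = ((max(xs) - min(xs) or 1.0)
              * (max(ys) - min(ys) or 1.0)
              * (max(zs) - min(zs) or 1.0))
    # Mean spacing for an approximately uniform distribution.
    return (volume / len(positions)) ** (1.0 / 3.0)
```

With cells much smaller than the radius, the triple loop in `check_sphere` visits many near-empty buckets; with cells much larger, each bucket holds many particles that fail the distance test, so the radius (or mean spacing) is a reasonable middle ground.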

## What is the “spatial information” in convolutional neural network

Deep learning research papers often claim that deeper layers of a CNN have good “semantic information” but poor “spatial information”. What exactly is this spatial information? Is it some set of activations in the deeper layers?
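Informally, “spatial information” refers to how precisely an activation can be mapped back to a location in the input image. Each stride-2 convolution or pooling stage roughly halves the feature map, so deep layers describe *what* is present but only coarsely *where*. A small sketch of that arithmetic (my own illustration, not from any particular paper):

```python
# Illustrative sketch: spatial resolution of a feature map after repeated
# stride-2 downsampling, using the common "same"-padding rounding rule.

def feature_map_size(input_size, num_stride2_stages):
    """Side length of a square feature map after stride-2 stages."""
    size = input_size
    for _ in range(num_stride2_stages):
        size = (size + 1) // 2  # ceil(size / 2), i.e. 'same' padding, stride 2
    return size

# A 224x224 input after 5 stride-2 stages (a typical backbone) leaves a
# 7x7 map: each activation summarizes a large input region, which is the
# loss of spatial information the papers refer to.
```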

## spatial element count data structure eg: R-Tree, with filtering

Background: The application is a web map application with a LeafletJS front-end and a NodeJS back-end with a Postgres database. The application is to be used on both desktops and smartphones/tablets in the field. The map should visualise ~30K polygons, whose number grows by ~3K per year. The data set has insertions and deletions less than once every 12 hours but has queries constantly. At all zooms, a representation of the entire spatial data set should be available. At lower zooms a cluster or heatmap representation is ideal. At high zooms the individual polygons should be visible. The main issue is that the data set count represented by clustering must also be filterable by membership in a finite number of sets, each with a finite number of options (year, type of survey, etc.).

A naive implementation would be to calculate the centre/ centroid/ pole-of-inaccessibility of each polygon along with its set options and send these to a standard Leaflet clustering library on the client side to visualise when below a threshold zoom level, and to send the polygons themselves when above a zoom level. The client-controlled filter would iterate through each layer in the cluster or polygon set.

It seems to me that a better cluster would be to build a R-Tree server-side and at each node level include the total child count, then on the client-side each cluster is represented as this child count at the centre of its node’s bounding box. Above the threshold zoom, polygons for that area are also stored in a client-side R-Tree to avoid querying the database for areas that have been traversed more than once.

(A) Is this a sensible data structure and method of representing clusters?

(B) How can it be extended to compute child count of subsets at different levels of zoom, such as calculating the count of every exclusive set at each level? (eg: count of elements in years x1 to x2 and survey type a,b,c, not d)
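The idea in (B) can be sketched by augmenting each tree node with a small count table keyed by the categorical attributes, so a filtered cluster count is a sum over matching keys with no need to visit leaves. Everything below is a hypothetical illustration (the class and attribute tuple are mine, not part of any library):

```python
# Sketch: each node keeps a Counter over (year, survey_type) tuples covering
# all of its descendants; a filtered count is then a sum over matching keys.
from collections import Counter

class CountNode:
    def __init__(self, children=None, items=None):
        self.children = children or []    # child CountNode objects
        self.counts = Counter()           # (year, survey_type) -> descendant count
        for item in (items or []):        # item: (year, survey_type, geometry, ...)
            self.counts[item[:2]] += 1
        for child in self.children:       # aggregate child tables upward
            self.counts += child.counts

    def filtered_count(self, years, survey_types):
        """Count descendants whose year and survey type pass the filter."""
        return sum(n for (year, stype), n in self.counts.items()
                   if year in years and stype in survey_types)
```

Because the number of set options is finite and small, each node's table stays bounded, and rebuilding the tables on the twice-daily insert/delete cycle is cheap compared with the constant query load.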

## Is retrospective inference on a spatial Bayesian network NP-hard?

Cooper (1990) showed that probabilistic inference on general Bayesian networks is NP-hard. However, for many “spatial” Bayesian networks (where nodes represent small regions of spacetime), we can reasonably assume that a node is only influenced by other nodes in its local vicinity. If I directly observe the states of the nodes in a particular region of this spatial Bayesian network, then I can compute all future node states implied by this observation in linear time. But can I also efficiently compute all past states that are retroactively implied by this observation?

Here is a precise, CS-friendly version of this question:

Consider a network with nodes arranged in rows: $$1$$ node in the first row, $$1+d$$ nodes in the second, $$1+2d$$ nodes in the third, and so on, for $$m$$ rows. (This gives a total of $$n = m + (m-1)md/2$$ nodes in the network.) Each node takes a state of either 0 or 1. Additionally, each node $$v$$ has up to $$1+d$$ “parent” nodes and, if $$v$$ is the $$k$$th node in its row, all of these parents must lie among the $$k$$th through $$(k+d)$$th nodes in the subsequent row (when there is such a row). (This is the locality constraint.)

For each node $$v$$, there is a table $$T_v$$ that lists all possible combinations of states of the parents of $$v$$ and, for each combination, assigns a state of 0 or 1 to $$v$$. (We avoid probabilities for simplicity.)

Let $$d\text{-LBAYES}$$ denote the decision problem of determining whether there is an assignment of states to the nodes of the network that assigns state 1 to the node in the first row and is consistent with all $$T_v$$.
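To pin down the semantics, here is an exponential-time reference checker for tiny instances (all names are mine, not standard; it brute-forces assignments rather than exploiting the locality structure):

```python
# Brute-force checker for small d-LBAYES-style instances. Nodes are integers
# 0..n-1, with node 0 the single first-row node. `parents[v]` lists the
# parents of v; `table[v]` maps a tuple of parent states to v's forced state.
# Nodes absent from `table` (e.g. last-row nodes) are unconstrained.
from itertools import product

def lbayes_satisfiable(n, parents, table):
    """True iff some 0/1 assignment sets node 0 to 1 and agrees with every table."""
    for states in product((0, 1), repeat=n):
        if states[0] != 1:
            continue
        if all(states[v] == table[v][tuple(states[p] for p in parents[v])]
               for v in table):
            return True
    return False

# Example: node 0 is the XOR of its two parents in the next row.
# Satisfiable, since (1, 0, 1) works; forcing node 0 to always be 0 is not.
```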

Are there $$d$$ for which $$d\text{-LBAYES}$$ is NP-hard?

It seems unlikely that I am the first person to contemplate this. Is a direct proof possible (e.g., by a polynomial-time reduction from a known NP-complete problem)? Or, if not, is there relevant literature?

## Temporal and spatial sympathy when casting Postcognition

The Time 2 spell Postcognition allows a mage to view the past. However there are some confusing factors (emphasis mine):

She can review the past of her current location or any moment in her own past, or that of an object, with flawless clarity. To focus this sense on something or someplace other than the mage’s current physical location, the mage must also use Space 2. Without the use of Space 2, she can do this only for an exact spot in which she was or is. [..] and casting dice pool is modified by temporal sympathy

Scenario: You find a stolen painting in a crime lord’s collection. You know when it was stolen and touching the object you try to view the moment it was taken from the museum to identify the thief.

• If you have never been to the museum and only heard of the theft, you will need Space 2, that is clear. However if you have visited it once (Encountered level sympathy), is this requirement lifted?
• If you need Space 2, are you penalized for both spatial and temporal sympathy (that of the object, here both Intimate, giving you -4)?

## Python 2: Most efficient way to find spatial order from a list of tuples

I have a circle-growth algorithm (line-growth with closed links) where new points are added between existing points at each iteration.

The linkage information of each point is stored as a tuple in a list. That list is updated iteratively.

QUESTIONS:

• What would be the most efficient way to return the spatial order of these points as a list?

• Do I need to compute the whole order at each iteration, or is there a way to cumulatively insert the new points in an orderly manner into that list?

All I could come up with is the following:

```python
tuples = [(1, 4), (2, 5), (3, 6), (1, 6), (0, 7), (3, 7), (0, 8), (2, 8), (5, 9), (4, 9)]

starting_tuple = [e for e in tuples if e[0] == 0][0]
## note: 'starting_tuple' could be either (0, 7) or (0, 8), starting direction doesn't matter

order = [starting_tuple[0], starting_tuple[1]]  ## order will always start from point 0

idx = tuples.index(starting_tuple)  ## index of the starting tuple


def findNext():
    global idx
    for i, e in enumerate(tuples):
        if order[-1] in e and i != idx:
            ind = e.index(order[-1])
            c = 0 if ind == 1 else 1
            order.append(e[c])
            idx = tuples.index(e)


for i in range(len(tuples) / 2):
    findNext()

print order
```

It is working, but it is neither elegant (non-pythonic) nor efficient. It seems to me that a recursive algorithm may be more suitable, but unfortunately I don’t know how to implement such a solution.

Also, please note that I’m using Python 2 and only have access to pure-Python packages (no numpy).
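One pure-Python direction (a sketch, assuming each point appears in exactly two link tuples, i.e. the links form a single closed loop) is to build an adjacency dict once and walk the cycle, which is O(n) instead of rescanning the tuple list for every step:

```python
# Sketch of an O(n) cycle walk (pure Python, no third-party packages).
# Assumes every point occurs in exactly two link tuples (one closed loop).
from collections import defaultdict

def spatial_order(links, start=0):
    neighbors = defaultdict(list)          # point -> its two linked points
    for a, b in links:
        neighbors[a].append(b)
        neighbors[b].append(a)
    order = [start, neighbors[start][0]]   # either direction works
    while len(order) < len(neighbors):
        a, b = neighbors[order[-1]]
        # Step to whichever neighbor we did not just come from.
        order.append(b if a == order[-2] else a)
    return order
```

This also suggests an answer to the incremental question: when a new point is inserted between two linked points, only those two adjacency entries change, so the dict can be patched in place rather than rebuilt, and the new point spliced into the order list at the matching position.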

This question has also been posted on SO.

## Spatial index search that returns sorted results

I need to perform a spatial index search while also sorting the results by one of the dimensions.

Backstory:

I have a map with lots of 2D objects and when the user interacts with the map I need to filter only those objects that are inside the screen rectangle and at a visible distance (not bigger than the whole screen, not smaller than a single pixel). To do this, I want to use a 3D R-tree library.

I do not understand the math but I found a library that perfectly suits my needs, so far so good.

But things get complicated. Sometimes I need to draw labels for map objects that are so small they are not even visible. The user will see the label and zoom to the object. (This is not a regular map.) So I filter all objects that are inside the screen rectangle, are smaller than the whole screen, and are bigger than 0. So far so good. To draw only labels that do not overlap, I will use another spatial index tree (2D). But I need to draw labels for the biggest objects first, drawing labels for smaller objects only if there is space on the screen. That means I need the results of the 3D spatial search sorted by the Z distance (or Z size).

The number of results from the spatial search can be huge, so simply calling a sort function is not feasible. I could sort all the objects by Z size during initialization, but then I wouldn’t know which are inside the screen rectangle. I need to somehow combine the efficiency of the R-tree with the ability to get sorted results.

Is there a variant of R-tree that will return search (intersection) results sorted by one of the dimensions, efficiently?
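One standard technique that fits this shape is a best-first (priority-queue) traversal, the same idea used for incremental nearest-neighbor search on R-trees: internal nodes are pushed with the best key any descendant could have, so results pop out lazily in sorted order without materializing or sorting the full result set. The node layout below is hypothetical, not from any particular library:

```python
# Sketch: best-first traversal yielding entries largest-z-first, lazily.
import heapq
from itertools import count

class Entry:                      # leaf object: 2D bounding box + z size
    def __init__(self, bbox, z_size):
        self.bbox, self.z_size = bbox, z_size

class Node:                       # internal node: bbox, children, max child z
    def __init__(self, bbox, children):
        self.bbox, self.children = bbox, children
        self.max_z = max(c.z_size if isinstance(c, Entry) else c.max_z
                         for c in children)

def intersects(a, b):             # rectangles as (minx, miny, maxx, maxy)
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def query_sorted_by_z(root, rect):
    """Yield entries intersecting rect, largest z_size first."""
    tie = count()                 # tie-breaker so heapq never compares nodes
    heap = [(-root.max_z, next(tie), root)]
    while heap:
        _, _, item = heapq.heappop(heap)
        if isinstance(item, Entry):
            if intersects(item.bbox, rect):
                yield item        # entry keys are exact, so output is sorted
        elif intersects(item.bbox, rect):
            for c in item.children:
                key = -c.z_size if isinstance(c, Entry) else -c.max_z
                heapq.heappush(heap, (key, next(tie), c))
```

The output order is correct because a node's key (its subtree's maximum z) is an upper bound on every descendant's key, so no larger entry can still be buried when a given entry is popped. The traversal stops as soon as the screen runs out of label space, which is exactly the "huge result set" case.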

## How to Fix (when needed) Scanner “Large Area Spatial Crosstalk”

What is it?

“Large Area Spatial Crosstalk” is the terminology used by the IEC standard for scanners, IEC 61966-8. It refers to the effect of nearby colors shifting the scanned colors towards the neighboring colors.

What problems does it cause?

The effect can produce large differences in the colors of scanned photos, which sometimes makes it impossible to scan a photo and get a colorimetrically accurate image file. These shifts often go unnoticed in complex photos, but some images are quite susceptible. A scanned picture of a hot air balloon against a bright sky will tend to lighten the brighter sky more than the balloon.

Accurate reproduction is highly desired for, inter alia, precision duplication and retaining historical images.

Here’s a scanned image from a moderately high-end Epson V850 scanner. Its large, shifted L* values are labeled underneath each circle. The three circular patches on the left were printed the same as the three on the right: the lighter ones were RGB(240,240,240), while the darker ones were RGB(118,118,118). The circles surrounded by white have scanned L* values about 6 higher than the same circles on the right. This shift is from “Large Area Spatial Crosstalk.”

This problem is caused by light reflecting off nearby parts of a document, getting scattered around, and bouncing off the translucent, illuminating white rails on either side of the horizontal aperture through which the CCD scan is captured. It’s as if the lux levels in the white surround area on the right were boosted over 20%, the increase in light needed to raise L* from 85.7 to 92.7.

Question: Are there any reflective desktop scanners that have addressed this issue in their design, and/or is there any software that corrects (or aims to correct) this defect?