I have a spatial graph-like structure. The structure consists of vertices in 3D space and connecting edges. Are there any algorithms available that would identify the rotational symmetries of such structures? In particular, I'm interested in all the rotational axes about which I can rotate the structure to overlap itself, as well as the degree of each rotation. For example, an equilateral triangle (3 vertices + 3 edges) would have one rotational axis perpendicular to its plane, with a degree of 3, and 3 others in its plane, each with a degree of 2. The closest I could find are molecular packages that identify the symmetry groups of molecular structures. I would prefer not to go that far, as I'm only interested in rotational symmetries and not a full group-theoretic description of the structure.
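For scale, here is the kind of brute-force check I could write myself: centre the vertices on the centroid, enumerate candidate axes (towards vertices, edge midpoints, and normals of vertex-pair planes), and test whether rotating by 2*pi/k maps both vertices and edges onto themselves. This is a rough numpy sketch (all names are mine, and duplicate axes are not removed); I am hoping for something more principled:

    import numpy as np
    from itertools import combinations

    def rotation_matrix(axis, theta):
        # Rodrigues' formula: rotation by theta about the unit vector 'axis'.
        a = axis / np.linalg.norm(axis)
        K = np.array([[0, -a[2], a[1]],
                      [a[2], 0, -a[0]],
                      [-a[1], a[0], 0]])
        return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

    def rotational_symmetries(verts, edges, max_order=12, tol=1e-6):
        # Returns (axis, k) pairs such that rotation by 2*pi/k about 'axis'
        # through the centroid maps the graph onto itself.
        V = np.asarray(verts, dtype=float)
        V = V - V.mean(axis=0)  # every symmetry axis passes through the centroid
        E = {frozenset(e) for e in edges}
        # Candidate axes: centroid->vertex, centroid->edge-midpoint, and
        # normals of vertex-pair planes (the last covers the triangle's C3).
        cands = list(V)
        cands += [(V[i] + V[j]) / 2 for i, j in map(tuple, E)]
        cands += [np.cross(V[i], V[j]) for i, j in combinations(range(len(V)), 2)]
        found = []
        for axis in cands:
            norm = np.linalg.norm(axis)
            if norm < tol:
                continue
            axis = axis / norm
            for k in range(2, max_order + 1):
                W = V @ rotation_matrix(axis, 2 * np.pi / k).T
                # match each rotated vertex to the nearest original vertex
                perm = [int(np.argmin(np.linalg.norm(V - w, axis=1))) for w in W]
                if (len(set(perm)) == len(perm)
                        and all(np.linalg.norm(V[perm[i]] - W[i]) < tol
                                for i in range(len(V)))
                        and all(frozenset((perm[i], perm[j])) in E
                                for i, j in map(tuple, E))):
                    found.append((axis, k))
        return found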
Calculating spatial “orderedness” at each position in a binary image: entropy? lacunarity?
Let's say I have an image where all pixel values are either 0 or 1. What I'd like to do is generate a new image with the same dimensions where each pixel represents how "ordered" the area around the corresponding pixel in the original image is. In particular, I'm looking for "spatial" order: whether or not there is some regularity or pattern in that local area. This could then be used to segment an image into regions of relative order and regions of relative disorder.
For example, my first two sample images (not shown here) are both highly ordered. A third probably has varying levels of order within the image but is overall disordered. Finally, a fourth has areas of order (bottom left and, to some extent, top right) and disorder (the rest of the image).
I've considered taking some general measure of entropy (like Shannon's image entropy) and applying it with a moving window across the image, but my understanding is that most measures of entropy do not capture much about the spatial aspects of the image. I've also come across the concept of "lacunarity", which looks promising (it's been used to segment, e.g., anthropogenic structures from natural landscapes on the basis of homogeneity), but I'm having a hard time understanding how it works and thus whether it's truly appropriate. Could either of these concepts be made to work for what I'm asking, or is there something else I haven't considered?
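For reference, this is the moving-window entropy I had in mind (a minimal numpy sketch). It also illustrates my concern: the window entropy depends only on the fraction of 1-pixels, so a perfectly regular pattern and random noise at the same density score identically:

    import numpy as np

    def local_entropy(img, w=8):
        # Shannon entropy of the pixel-value distribution in a (2w+1)^2
        # window around each pixel. Note it only sees the *proportion*
        # of 1s in the window, not their arrangement.
        padded = np.pad(img.astype(float), w, mode="edge")
        out = np.zeros(img.shape, dtype=float)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                p = padded[i:i + 2 * w + 1, j:j + 2 * w + 1].mean()
                if 0.0 < p < 1.0:
                    out[i, j] = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
        return out

    checkerboard = np.indices((64, 64)).sum(axis=0) % 2
    noise = (np.random.rand(64, 64) > 0.5).astype(int)
    # both have p ~= 0.5 everywhere, so both score ~1 bit: no spatial signal
    print(local_entropy(checkerboard).mean(), local_entropy(noise).mean())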
Best way to recreate a GrubHub-like spatial database
So I am currently working on a project very similar to GrubHub, UberEats, PostMates, etc. Without getting into too much detail about the project idea, let's say I am just recreating one of these aforementioned platforms, since we share the same problem: how to structure a database that supports efficient geographic-proximity queries while also taking into account other input parameters such as food types, specific food items, etc.
Currently, I am using Firebase's Cloud Firestore NoSQL database to create "restaurant" objects and store them under a UID. Obviously, this solution doesn't scale: if, say, there were a million objects, I would be stuck fetching all of them, calculating the distance between the user and each restaurant's location, and only then filtering on the other input parameters I mentioned.
I've never worked with spatial/geographic databases, so I'm looking for recommendations on how best to store each restaurant object using a single value as a "key", as well as recommendations on how to filter the results by "fuzzy" input parameters. By fuzzy I mean, for example, typing in "tamales" and having the database know to search not only for tamales the food item but also for related results like Mexican food in general.
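From what I have been able to gather, one candidate for such a single-value key is a geohash (this is my own assumption about what an answer might look like, not something Firestore gives you natively): interleave the bits of latitude and longitude and encode them in base32, so that nearby restaurants share a key prefix and a proximity query becomes a prefix range scan. A minimal sketch:

    _BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

    def geohash(lat, lng, precision=9):
        # Alternately bisect the longitude and latitude intervals; each
        # comparison yields one bit, and every 5 bits become one base32
        # character. Nearby points share a common prefix.
        lat_lo, lat_hi = -90.0, 90.0
        lng_lo, lng_hi = -180.0, 180.0
        even, bit_count, ch, out = True, 0, 0, []
        while len(out) < precision:
            if even:  # longitude bit
                mid = (lng_lo + lng_hi) / 2
                if lng > mid:
                    ch = (ch << 1) | 1
                    lng_lo = mid
                else:
                    ch <<= 1
                    lng_hi = mid
            else:     # latitude bit
                mid = (lat_lo + lat_hi) / 2
                if lat > mid:
                    ch = (ch << 1) | 1
                    lat_lo = mid
                else:
                    ch <<= 1
                    lat_hi = mid
            even = not even
            bit_count += 1
            if bit_count == 5:
                out.append(_BASE32[ch])
                bit_count, ch = 0, 0
        return "".join(out)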
Simple spatial grid for particle system
I am going to simulate a particle system. The particles are infinitesimal points that apply forces to their neighbors, so I need a fast way of making proximity checks. They will have a maximum check radius but no minimum, and will be almost evenly spaced.
The simulation will be very similar to this: https://youtu.be/SFf3pcE08NM
I thought a spatial grid would be the easiest approach, and I need it to be as cost-efficient as possible, no matter how ugly the code gets.
Is there any optimization I can make on this code?
Is there a way to compute the optimal cell size from the average distance between particles?
    _empty_set = set()

    class SpatialHash:
        def __init__(self, cell_size=0.1):
            self.cells = {}        # cell key -> set of items in that cell
            self.bucket_map = {}   # item -> the cell set it currently lives in
            self.cell_size = cell_size

        def key(self, co):
            return (int(co[0] / self.cell_size),
                    int(co[1] / self.cell_size),
                    int(co[2] / self.cell_size))

        def add_item(self, item):
            k = self.key(item.co)
            if k in self.cells:
                c = self.cells[k]
            else:
                c = set()
                self.cells[k] = c
            c.add(item)
            self.bucket_map[item] = c

        def remove_item(self, item):
            self.bucket_map[item].remove(item)

        def update_item(self, item):
            self.bucket_map[item].remove(item)
            self.add_item(item)

        def check_sphere(self, co, radius, exclude=()):
            # Scan every cell overlapping the bounding box of the sphere,
            # then do an exact squared-distance test per item.
            r_sqr = radius * radius
            for x in range(int((co[0] - radius) / self.cell_size),
                           int((co[0] + radius) / self.cell_size) + 1):
                for y in range(int((co[1] - radius) / self.cell_size),
                               int((co[1] + radius) / self.cell_size) + 1):
                    for z in range(int((co[2] - radius) / self.cell_size),
                                   int((co[2] + radius) / self.cell_size) + 1):
                        for item in self.cells.get((x, y, z), _empty_set):
                            if item not in exclude and (item.co - co).length_squared <= r_sqr:
                                yield item
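For anyone trying to run this outside Blender: item.co is assumed to behave like a mathutils.Vector (supporting subtraction and .length_squared), so here is a hypothetical minimal stand-in plus a usage example. The cell size of 0.2 is arbitrary; a common rule of thumb for cell lists is to make the cell size roughly equal to the maximum check radius, so a query touches at most three cells per axis:

    # Hypothetical stand-in for mathutils.Vector, just enough for check_sphere.
    class Vec3(tuple):
        def __sub__(self, other):
            return Vec3(a - b for a, b in zip(self, other))
        @property
        def length_squared(self):
            return sum(a * a for a in self)

    class Particle:
        def __init__(self, x, y, z):
            self.co = Vec3((x, y, z))

    grid = SpatialHash(cell_size=0.2)  # rule of thumb: cell size ~= check radius
    particles = [Particle(i * 0.1, 0.0, 0.0) for i in range(10)]
    for p in particles:
        grid.add_item(p)
    neighbours = list(grid.check_sphere(particles[0].co, radius=0.2,
                                        exclude=(particles[0],)))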
What is the "spatial information" in a convolutional neural network?
Deep learning research papers often claim that deeper layers of a CNN have good "semantic information" but poor "spatial information". What exactly is this spatial information? Is it some activations in the deeper layers?
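For concreteness, here is how I picture the feature-map shapes shrinking (a hypothetical PyTorch sketch with arbitrary layer sizes); my current reading is that "spatial information" refers to the H x W grid of the feature map, which each pooling step coarsens:

    import torch
    import torch.nn as nn

    # Toy network: each max-pool halves the H x W grid, so an activation in
    # a deep layer corresponds to a large image region (what is present is
    # encoded well, but where it is becomes coarse).
    net = nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    )
    x = torch.randn(1, 3, 224, 224)
    for layer in net:
        x = layer(x)
        print(type(layer).__name__, tuple(x.shape))
    # spatial resolution: 224x224 -> 112x112 -> 56x56 -> 28x28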
Spatial element count data structure (e.g. R-tree) with filtering
Background: The application is a web map application with a LeafletJS frontend and a NodeJS backend with a Postgres database. The application is to be used on both desktops and smartphones/tablets in the field. The map should visualise ~30K polygons, whose number grows by ~3K per year. The data set has insertions and deletions less than once every 12 hours but is queried constantly. At all zooms, a representation of the entire spatial data set should be available: at lower zooms a cluster or heatmap representation is ideal, while at high zooms the individual polygons should be visible. The main issue is that the counts represented by the clusters must also be filterable by a finite number of sets, each with a finite number of membership options (year, type of survey, etc.).
A naive implementation would be to calculate the centre/centroid/pole-of-inaccessibility of each polygon along with its set options, send these to a standard Leaflet clustering library on the client side to visualise below a threshold zoom level, and send the polygons themselves above that zoom level. The client-controlled filter would iterate through each layer in the cluster or polygon set.
It seems to me that a better approach would be to build an R-tree server-side and include the total child count at each node level; on the client side, each cluster is then represented by this child count placed at the centre of its node's bounding box. Above the threshold zoom, polygons for that area are also stored in a client-side R-tree to avoid querying the database for areas that have been traversed more than once.
(A) Is this a sensible data structure and method of representing clusters?
(B) How can it be extended to compute the child counts of subsets at different zoom levels, such as the count of every exclusive set at each level (e.g. the count of elements in years x1 to x2 and survey types a, b, c, but not d)?
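To make (B) concrete, here is the kind of structure I have in mind, sketched with a fixed-depth quadtree rather than an R-tree for brevity (all names hypothetical): every node keeps a Counter keyed by each element's attribute tuple, so the count of any exclusive subset at any level is a sum over matching keys at that node:

    from collections import Counter

    class QuadNode:
        def __init__(self, x0, y0, x1, y1, depth):
            self.bounds = (x0, y0, x1, y1)
            self.counts = Counter()  # (year, survey_type) -> element count
            self.children = None
            if depth > 0:
                mx, my = (x0 + x1) / 2, (y0 + y1) / 2
                self.children = [QuadNode(x0, y0, mx, my, depth - 1),
                                 QuadNode(mx, y0, x1, my, depth - 1),
                                 QuadNode(x0, my, mx, y1, depth - 1),
                                 QuadNode(mx, my, x1, y1, depth - 1)]

        def insert(self, x, y, attrs):
            # Points on the outer max edge are ignored in this sketch.
            self.counts[attrs] += 1
            if self.children:
                for c in self.children:
                    x0, y0, x1, y1 = c.bounds
                    if x0 <= x < x1 and y0 <= y < y1:
                        c.insert(x, y, attrs)
                        return

        def filtered_count(self, predicate):
            # Count of elements under this node whose attribute tuple
            # passes the client-side filter.
            return sum(n for attrs, n in self.counts.items() if predicate(attrs))

    # hypothetical usage: count for years 2019-2022 and survey types a, b, c
    root = QuadNode(0.0, 0.0, 100.0, 100.0, depth=4)
    root.insert(12.5, 40.2, (2021, "a"))
    n = root.filtered_count(lambda a: 2019 <= a[0] <= 2022 and a[1] in {"a", "b", "c"})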
Is retrospective inference on a spatial Bayesian network NP-hard?
Cooper (1990) showed that probabilistic inference on general Bayesian networks is NP-hard. However, for many "spatial" Bayesian networks (where nodes represent small regions of spacetime), we can reasonably assume that a node is only influenced by other nodes in its local vicinity. If I directly observe the states of the nodes in a particular region of this spatial Bayesian network, then I can compute all future node states implied by this observation in linear time. But can I also efficiently compute all past states that are retroactively implied by this observation?
Here is a precise, CS-friendly version of this question:
Consider a network with nodes arranged in rows: $1$ node in the first row, $1+d$ nodes in the second, $1+2d$ nodes in the third, and so on, for $m$ rows. (This gives a total of $n = m + m(m-1)d/2$ nodes in the network.) Each node takes a state of either 0 or 1. Additionally, each node $v$ has up to $1+d$ "parent" nodes and, if $v$ is the $k$th node in its row, all of these parents must lie among the $k$th through $(k+d)$th nodes in the subsequent row (when there is such a row). (This is the locality constraint.)
For each node $v$, there is a table $T_v$ that lists all possible combinations of states of the parents of $v$ and, for each combination, assigns a state of 0 or 1 to $v$. (We avoid probabilities for simplicity.)
Let $d\text{-LBAYES}$ denote the decision problem of determining whether there is an assignment of states to the nodes of the network that assigns state 1 to the node in the first row and is consistent with all of the tables $T_v$.
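To make the problem statement concrete, here is a hypothetical brute-force checker for small instances (exponential time, of course, so it says nothing about hardness): rows lists node ids row by row, parents[v] gives the parents of v in the subsequent row, and table[v] maps each tuple of parent states to the state it forces on v.

    from itertools import product

    def lbayes_satisfiable(rows, parents, table):
        # Enumerate all 0/1 assignments; accept if the first-row node is 1
        # and every node with parents matches its table entry.
        nodes = [v for row in rows for v in row]
        for states in product((0, 1), repeat=len(nodes)):
            s = dict(zip(nodes, states))
            if s[rows[0][0]] != 1:
                continue
            if all(s[v] == table[v][tuple(s[p] for p in parents[v])]
                   for v in nodes if parents[v]):
                return True
        return False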
Are there $d$ for which $d\text{-LBAYES}$ is NP-hard?
It seems unlikely that I am the first person to contemplate this. Is a direct proof possible (e.g., by a polynomial-time reduction from a known NP-complete problem)? Or, if not, is there relevant literature?
Temporal and spatial sympathy when casting Postcognition
The Time 2 spell Postcognition allows a mage to view the past. However, there are some confusing factors (emphasis mine):
She can review the past of her current location or any moment in her own past, or that of an object, with flawless clarity. To focus this sense on something or someplace other than the mage’s current physical location, the mage must also use Space 2. Without the use of Space 2, she can do this only for an exact spot in which she was or is. [..] and casting dice pool is modified by temporal sympathy
Scenario: You find a stolen painting in a crime lord's collection. You know when it was stolen, and touching the object, you try to view the moment it was taken from the museum to identify the thief.
- If you have never been to the museum and only heard of the theft, you will need Space 2; that much is clear. However, if you have visited it once (Encountered-level sympathy), is this requirement lifted?
- If you need Space 2, are you penalized for both spatial and temporal sympathy (that of the object, here both Intimate, giving you -4)?
- If you must use your own temporal sympathy to the target location, does having access to the actual object help you at all?
Python 2: Most efficient way to find spatial order from a list of tuples
I have a circle-growth algorithm (line-growth with closed links) where new points are added between existing points at each iteration.
The linkage information of each point is stored as a tuple in a list. That list is updated iteratively.
QUESTIONS:

- What would be the most efficient way to return the spatial order of these points as a list?

- Do I need to compute the whole order at each iteration, or is there a way to cumulatively insert the new points into that list in an orderly manner?
All I could come up with is the following:
    tuples = [(1, 4), (2, 5), (3, 6), (1, 6), (0, 7),
              (3, 7), (0, 8), (2, 8), (5, 9), (4, 9)]

    ## note: 'starting_tuple' could be either (0, 7) or (0, 8),
    ## starting direction doesn't matter
    starting_tuple = [e for e in tuples if e[0] == 0][0]

    order = [starting_tuple[0], starting_tuple[1]]  ## order will always start from point 0
    idx = tuples.index(starting_tuple)              ## index of the starting tuple

    def findNext():
        global idx
        for i, e in enumerate(tuples):
            if order[-1] in e and i != idx:
                ind = e.index(order[-1])
                c = 0 if ind == 1 else 1
                order.append(e[c])
                idx = tuples.index(e)

    for i in range(len(tuples)/2):
        findNext()

    print order
It is working, but it is neither elegant (not Pythonic) nor efficient. It seems to me that a recursive algorithm may be more suitable, but unfortunately I don't know how to implement such a solution.
Also, please note that I'm using Python 2 and can only use pure-Python packages (no numpy).
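For reference, here is the direction I suspect an answer might take (a sketch only, untested at scale): build a point-to-neighbours dict once, then walk the loop. Since every point sits in exactly two tuples, following the neighbour that is not the previous point traverses the circle in O(n), and a new point inserted between a and b only requires updating the dict entries for a, b, and the new point.

    from collections import defaultdict

    def spatial_order(tuples, start=0):
        # point -> its two linked neighbours
        neighbours = defaultdict(list)
        for a, b in tuples:
            neighbours[a].append(b)
            neighbours[b].append(a)
        order = [start, neighbours[start][0]]  ## either direction works
        while len(order) < len(neighbours):
            a, b = neighbours[order[-1]]
            order.append(b if a == order[-2] else a)
        return order

    print spatial_order([(1, 4), (2, 5), (3, 6), (1, 6), (0, 7),
                         (3, 7), (0, 8), (2, 8), (5, 9), (4, 9)])
    ## [0, 7, 3, 6, 1, 4, 9, 5, 2, 8]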
This question has also been posted on SO.
Spatial index search that returns sorted results
I need to perform a spatial index search while also sorting the results by one of the dimensions.
Backstory:
I have a map with lots of 2D objects, and when the user interacts with the map I need to filter only those objects that are inside the screen rectangle and at a visible distance (not bigger than the whole screen, not smaller than a single pixel). To do this, I want to use a 3D R-tree library.
I do not understand the math, but I found a library that perfectly suits my needs; so far so good.
But things get complicated. Sometimes I need to draw labels for map objects that are so small they are not even visible. The user will see the label and zoom in on the object. (This is not a regular map.) So I filter all objects that are inside the screen rectangle, smaller than the whole screen, and bigger than zero; so far so good. To draw only labels that do not overlap, I will use another (2D) spatial index tree. But I need to draw labels for the big objects first, drawing labels for smaller objects only if there is space left on the screen. That means I need the results of the 3D spatial search sorted by the Z distance (or Z size).
The number of results from the spatial search can be huge, so simply calling a sort function is not feasible. I could sort all the objects by Z size during initialization, but then I wouldn't know which are inside the screen rectangle. I need to somehow combine the efficiency of the R-tree with the ability to get sorted results.
Is there a variant of the R-tree that will efficiently return search (intersection) results sorted by one of the dimensions?
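To illustrate the behaviour I am after, here is a sketch of a best-first traversal (the same idea as incremental nearest-neighbour search, but keyed on Z instead of distance). It assumes every tree node stores the maximum Z of its subtree; a max-heap over nodes and objects then yields intersecting objects in decreasing Z order without sorting the full result set. All names here are hypothetical:

    import heapq

    class Node(object):
        def __init__(self, entries):
            self.entries = entries  # child nodes, or map objects at a leaf
            self.max_z = max(e.max_z for e in entries)

    class Obj(object):
        def __init__(self, name, z):
            self.name, self.max_z = name, z

    def query_sorted_by_z(root, intersects):
        # Pop whichever entry currently has the largest max_z. Yielding an
        # object is safe: everything still on the heap has max_z no larger
        # than its Z, so objects come out in decreasing Z order.
        heap = [(-root.max_z, id(root), root)]
        while heap:
            _, _, e = heapq.heappop(heap)
            if isinstance(e, Obj):
                yield e
            else:
                for c in e.entries:
                    if intersects(c):  # bounding-box test, stubbed out here
                        heapq.heappush(heap, (-c.max_z, id(c), c))

    leaf1 = Node([Obj("big", 9.0), Obj("small", 1.0)])
    leaf2 = Node([Obj("mid", 4.0)])
    root = Node([leaf1, leaf2])
    print([o.name for o in query_sorted_by_z(root, lambda e: True)])
    # ['big', 'mid', 'small']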