Migrate MySQL spatial data to PostgreSQL (PostGIS)

We have a web system that uses MySQL spatial databases. We want to migrate from MySQL to PostgreSQL. Our database uses the geometry and point data types, has 21 tables, and is about 1.6 GB in size.

I’ve been looking for ways to do this and have found some tools that help with migration. However, most of them, such as https://github.com/philipsoutham/py-mysql2pgsql, do not support spatial data.

I also took a look at https://gis.stackexchange.com/questions/104081/how-to-migrate-spatial-tables-from-mssql-to-postgis. I have only just seen that post and haven’t tried it yet, so I don’t know whether it would work for MySQL. Furthermore, I’d like to do this using database tools rather than QGIS or ArcGIS.
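In case it’s useful context, here is the kind of table-by-table approach I’ve been sketching in Python with pandas, SQLAlchemy, Shapely and GeoPandas (plus the pymysql/psycopg2 drivers and GeoAlchemy2). The table name parcels, the column list, the geometry column geom, the SRID 4326 and both connection strings are placeholders rather than our real schema, and I haven’t run this against the full 1.6 GB yet:

# Sketch: copy one MySQL spatial table into PostGIS by round-tripping through WKT.
# Table/column names, SRID and connection strings are placeholders.
import pandas as pd
import geopandas as gpd
from shapely import wkt
from sqlalchemy import create_engine

mysql = create_engine("mysql+pymysql://user:password@localhost/sourcedb")
postgis = create_engine("postgresql+psycopg2://user:password@localhost/targetdb")

# Read the non-spatial columns plus the geometry as WKT text.
df = pd.read_sql("SELECT id, name, ST_AsText(geom) AS wkt FROM parcels", mysql)

# Rebuild the geometries with Shapely and wrap everything in a GeoDataFrame.
gdf = gpd.GeoDataFrame(
    df.drop(columns=["wkt"]),
    geometry=df["wkt"].apply(wkt.loads),
    crs="EPSG:4326",  # assumes the MySQL data is stored with SRID 4326
)

# to_postgis() requires GeoAlchemy2 and creates a proper PostGIS geometry column.
gdf.to_postgis("parcels", postgis, if_exists="replace", index=False)

It would presumably need chunking for the larger tables, but it keeps everything in database/scripting tooling and avoids QGIS/ArcGIS entirely.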

Spatial mapping with real-world texturing on HoloLens 2

For a project I’m trying to texture the HoloLens 2 spatial mapping with photographs.

I found a pretty good project which does exactly what I want; the problem is that it is a Unity 5.6 / HoloToolkit project, and I use Unity 2018.4 / MRTK 2.6.

So, I’m trying to port the project.

This is the original work

And this is what I’ve done so far

For now, my problem is the texture, which the project seems to merge each time I take a photo. I don’t know whether the problem comes from the shader, the code which passes the Texture2DArray to the shader, or the merge step when I take the photo.

Spatial Query very slow with radius

I posted this before at Query very slow when using spatial with radius, which has a lot of detail about my problem, but I don’t think I included enough data, so I am trying again here with the database and the query I am having trouble tuning.

Download the database and attach it [SQL Server 2019] (I promise no viruses, it’s just a .bak file in a zip); I also scrubbed it of info I don’t want out there 🙂 https://1drv.ms/u/s!AuxopE3ug8yWm-QTvSxyiHGIrAlXow?e=R7m20G

I shrank the database so it’s smaller to download, so you must run the following script to rebuild all indexes:

EXECUTE sp_msForEachTable 'SET QUOTED_IDENTIFIER ON; ALTER INDEX ALL ON ? REBUILD;' 

Run query (Non-Indexed View)

DECLARE @p3 sys.geography
SET @p3 = convert(sys.geography, 0xE6100000010C010000209DE44540FFFFFF3F77CA53C0)

SELECT l.*
FROM GridListings l
WHERE l.Location.STDistance(@p3) < 15000
ORDER BY createddate
OFFSET 0 ROWS FETCH NEXT 21 ROWS ONLY

Or Indexed View

DECLARE @p3 sys.geography
SET @p3 = convert(sys.geography, 0xE6100000010C010000209DE44540FFFFFF3F77CA53C0)

SELECT l.*
FROM GridListingsIndexed l
WHERE l.Location.STDistance(@p3) < 15000
ORDER BY createddate
OFFSET 0 ROWS FETCH NEXT 21 ROWS ONLY

What I am looking for (sorry if it is too much, but I am really desperate for help: a lot of my queries are timing out in my app, some taking between 6 and 50 seconds, on a server with 32 GB RAM and 6 vCores under Hyper-V; the server also does other things, but I think there is enough horsepower):

  1. I use the view above, which already filters out expired listings, and then filter the listings down further against it, but right now it is slow with the expirydate filter in the view and the radius filter applied against the view.

  2. Look through my indexes and propose better ones, plus overall improvement suggestions.

  3. If all else fails, I might have to resort to separating my expired and non-expired listings into separate tables, but that becomes a maintenance nightmare.

MySQL performance issue with ST_Contains not using spatial index

We are having what seems to be a fairly large MySQL performance issue trying to run a fairly simple UPDATE statement. We have a houses table (1.8 million rows) with a lat/long geometry point column (geo), and a table (6 thousand rows) with a list of schools and a boundary geometry polygon column (boundary). We have spatial indexes on both. With the update we are trying to set, on the houses table, the ID of the school whose boundary contains the house’s point. The update takes 1 hour and 47 minutes to update 1.6 million records. In other systems I have used in my past experience, something like that would take just a few minutes. Any recommendations?

I have posted this same question on the GIS SE site as well, as it is very much both a GIS and a DBA question.

CREATE TABLE houses (
  ID int PRIMARY KEY NOT NULL,
  Latitude float DEFAULT NULL,
  Longitude float DEFAULT NULL,
  geo point GENERATED ALWAYS AS (st_srid(point(ifnull(`Longitude`, 0), ifnull(`Latitude`, 0)), 4326)) STORED NOT NULL,
  SPATIAL INDEX spidx_houses(geo)
) ENGINE = INNODB, CHARACTER SET utf8mb4, COLLATE utf8mb4_0900_ai_ci;

CREATE TABLE schoolBound (
  ID int PRIMARY KEY NOT NULL,
  BOUNDARY GEOMETRY NOT NULL,
  reference VARCHAR(200) DEFAULT NULL,
  type bigint DEFAULT NULL,
  INDEX idx_reference(reference),
  INDEX idx_type(type),
  SPATIAL INDEX spidx_schoolBound(BOUNDARY)
) ENGINE = INNODB, CHARACTER SET utf8mb4, COLLATE utf8mb4_0900_ai_ci;
-- type 4 means it's an elementary school
UPDATE houses hs
  INNER JOIN schoolBound AS sb
    ON ST_Contains(sb.boundary, hs.geo) AND sb.type = 4
SET hs.elementary_nces_code = sb.reference

The EXPLAIN output seems to show that it is not going to use the spatial index on schoolBound.

+----+-------------+-------+------------+------+---------------+------+---------+------+---------+----------+------------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key  | key_len | ref  | rows    | filtered | Extra                                          |
+----+-------------+-------+------------+------+---------------+------+---------+------+---------+----------+------------------------------------------------+
|  1 | SIMPLE      | sb    | NULL       | ALL  | NULL          | NULL | NULL    | NULL |    6078 |    10.00 | Using where                                    |
|  1 | UPDATE      | hs    | NULL       | ALL  | spidx_houses  | NULL | NULL    | NULL | 1856567 |   100.00 | Range checked for each record (index map: 0x4) |
+----+-------------+-------+------------+------+---------------+------+---------+------+---------+----------+------------------------------------------------+
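One workaround I have been considering, though I don’t know yet whether it is sensible, is to drive the update from a small script one school boundary at a time, in the hope that each single-polygon UPDATE can use the spatial index on houses.geo. A rough sketch, assuming the mysql-connector-python driver and placeholder connection details:

# Sketch: run the ST_Contains update one school polygon at a time.
# Assumes mysql-connector-python; connection details are placeholders.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="user", password="password", database="gisdb"
)
cur = conn.cursor()

# type 4 means elementary school boundaries
cur.execute("SELECT ID FROM schoolBound WHERE type = 4")
school_ids = [row[0] for row in cur.fetchall()]

for school_id in school_ids:
    cur.execute(
        """
        UPDATE houses hs
          INNER JOIN schoolBound sb
            ON sb.ID = %s AND ST_Contains(sb.boundary, hs.geo)
        SET hs.elementary_nces_code = sb.reference
        """,
        (school_id,),
    )
    conn.commit()  # commit per boundary to keep each transaction small

cur.close()
conn.close()

I would still prefer to understand why the single UPDATE ... JOIN cannot use spidx_schoolBound directly.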

Detecting rotational symmetries of spatial structures

I have a spatial graph-like structure. The structure consists of vertices in 3D space and connecting edges. Are there any algorithms available that would identify the rotational symmetries of these structures? In particular, I’m interested in all the rotational axes about which I can rotate the structure so that it overlaps itself, as well as the degree of each rotation. For example, an equilateral triangle (3 vertices + 3 edges) would have one rotational axis perpendicular to its plane, with a degree of 3, and three others in its plane, each with a degree of 2. The closest I could find are molecular packages that identify the symmetry groups of molecular structures. I would prefer not to go that far, as I’m only interested in rotational symmetries and not an entire group-theoretic description of the structures.
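To make the question more concrete, the core test I imagine such an algorithm performing is: given a candidate axis and an order n, rotate the vertex set by 2π/n about that axis (through the centroid) and check whether it lands back on itself. Finding the candidate axes in the first place is the part I’m missing; the sketch below (NumPy/SciPy, vertices only, edges ignored, arbitrary tolerance) only shows that verification step:

# Sketch: test whether rotating a vertex set by 2*pi/n about a given axis
# maps it onto itself (within a tolerance). Edges are ignored here.
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation


def is_rotational_symmetry(points, axis, order, tol=1e-6):
    points = np.asarray(points, dtype=float)
    center = points.mean(axis=0)          # rotate about the centroid
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)

    rot = Rotation.from_rotvec(2 * np.pi / order * axis)
    rotated = rot.apply(points - center) + center

    # Every rotated vertex must land (within tol) on some original vertex.
    tree = cKDTree(points)
    dist, _ = tree.query(rotated)
    return bool(np.all(dist < tol))


# Equilateral triangle in the xy-plane: 3-fold axis along z.
tri = [(1.0, 0.0, 0.0),
       (-0.5, np.sqrt(3) / 2, 0.0),
       (-0.5, -np.sqrt(3) / 2, 0.0)]
print(is_rotational_symmetry(tri, axis=(0, 0, 1), order=3))  # True
print(is_rotational_symmetry(tri, axis=(1, 0, 0), order=2))  # True (in-plane C2 axis)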

Calculating spatial “orderedness” at each position in a binary image: entropy? lacunarity?

Let’s say I have an image where all pixel values are either 0 or 1. What I’d like to do is to be able to generate a new image with the same dimensions where each pixel represents how “ordered” the area around the corresponding pixel in the original image is. In particular, I’m looking for “spatial” order: whether or not there is some regularity or pattern in that local area. This could then be used to segment an image into regions of relative order and regions of relative disorder.

For example:

ordered1 and ordered2

are both highly ordered. On the other hand, disordered

probably has varying levels of order within the image but is overall disordered. Finally, an image like

mixed

has areas of order (bottom left and to some extent top right) and disorder (rest of the image).

I’ve considered taking some general measure of entropy (like Shannon’s image entropy) and applying it with a moving window across the image, but my understanding is that most measures of entropy do not capture much about the spatial aspects of the image. I’ve also come across the concept of “lacunarity”, which looks promising (it’s been used to segment, e.g., anthropogenic structures from natural landscapes on the basis of homogeneity), but I’m having a hard time understanding how it works and thus whether it’s truly appropriate. Could either of these concepts be made to work for what I’m asking, or is there something else I haven’t considered?
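For reference, my current understanding of the gliding-box version of lacunarity is: for a box size r, slide an r×r box over the image, record the “mass” M (number of 1-pixels) at each box position, and compute Λ(r) = mean(M²) / mean(M)², i.e. 1 plus the squared coefficient of variation of the box masses. A rough NumPy/SciPy sketch of that understanding (the box size 7 and the test patterns are arbitrary), in case someone can confirm whether this is the quantity actually used for segmentation:

# Sketch: gliding-box lacunarity of a binary image for a single box size r.
import numpy as np
from scipy.ndimage import uniform_filter


def lacunarity(binary_image, r):
    img = np.asarray(binary_image, dtype=float)
    # Mean pixel value of every r x r box, scaled up to the box "mass" (count of ones).
    mass = uniform_filter(img, size=r, mode="constant") * (r * r)
    # Keep only boxes that lie fully inside the image (drop the zero-padded border).
    h = r // 2
    mass = mass[h:img.shape[0] - h, h:img.shape[1] - h]
    return (mass ** 2).mean() / mass.mean() ** 2


rng = np.random.default_rng(0)
checker = np.indices((64, 64)).sum(axis=0) % 2      # perfectly regular pattern
noise = (rng.random((64, 64)) > 0.5).astype(int)    # random pattern with the same density
print(lacunarity(checker, r=7), lacunarity(noise, r=7))  # the regular pattern scores lower

Applied in a moving window (one Λ value per neighborhood instead of one per image), this would give the per-pixel map I’m after, if lacunarity is indeed the right measure.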

Best way to recreate a GrubHub-like spatial database

So I am currently working on a project very similar to GrubHub, UberEats, Postmates, etc. Without getting into too much detail about the project idea, let’s just say I am recreating one of these platforms, since we share the same problem: how to structure a database that can be queried efficiently by geographic proximity while also taking other input parameters into account, such as food types, specific food items, etc.

Currently, I am using Firebase’s Cloud Firestore NoSQL database to create “restaurant” objects and store them under a UID. Obviously, this solution doesn’t scale: if there were, say, a million objects, I would be stuck fetching all of them and then calculating the distance between the user and each restaurant’s location. And after filtering through that, I would still have to query on the other input parameters I mentioned before.

I’ve never worked with spatial/geographic databases, so I’m looking for recommendations on how best to store each restaurant object using a single value as a “key”, as well as recommendations on how to then filter with “fuzzy” input parameters. By fuzzy I mean, for example, typing in “tamales” and the database knowing to search not only for tamales the food item but also for related results like Mexican food in general.
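From what I’ve read so far, one candidate for that single “key” value is a geohash: nearby points share a common string prefix, so a proximity query becomes a string range query that Firestore can index. A small sketch of the idea, assuming the third-party pygeohash package (the coordinates and precisions are just examples); the fuzzy matching on food types would still need something separate, like a full-text search service:

# Sketch: encode restaurant locations as geohash strings so that
# "nearby" becomes a string prefix/range query. Uses the pygeohash package.
import pygeohash as pgh

restaurants = [
    {"name": "Taqueria Uno", "lat": 34.0522, "lng": -118.2437},
    {"name": "Taqueria Dos", "lat": 34.0510, "lng": -118.2400},
    {"name": "Far Away Diner", "lat": 40.7128, "lng": -74.0060},
]

# Store a precision-9 geohash (a cell of a few meters) alongside each document.
for r in restaurants:
    r["geohash"] = pgh.encode(r["lat"], r["lng"], precision=9)

# Query: everything whose geohash starts with the user's 5-character prefix
# (roughly a ~5 km cell). In Firestore this maps to a range query:
# where("geohash", ">=", prefix) and where("geohash", "<", prefix + "~").
user_prefix = pgh.encode(34.0522, -118.2437, precision=5)
nearby = [r for r in restaurants if r["geohash"].startswith(user_prefix)]
print(nearby)  # the two taquerias, not the diner

The prefix length controls the size of the search cell, and points near a cell boundary would also need the neighboring cells’ prefixes to be queried, which is the part I’m least sure about.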

Simple spatial grid for particle system

I am going to simulate a particle system. The particles are infinitesimal points which apply forces to their neighbors, and I need a fast way of making proximity checks. They have a maximum check radius but no minimum, and they will be almost evenly spaced.

The simulation will be very similar to this: https://youtu.be/SFf3pcE08NM

I thought a spatial grid would be the easiest approach for this, and I need it to be as cost-efficient as possible, no matter how ugly it gets.

Is there any optimization I can make on this code?

Is there a way to compute the optimal cell size from the average distance between particles?

_empty_set = set()


class SpatialHash:
    def __init__(self, cell_size=0.1):
        self.cells = {}       # (i, j, k) cell key -> set of items in that cell
        self.bucket_map = {}  # item -> the cell set it currently lives in
        self.cell_size = cell_size

    def key(self, co):
        # Map a coordinate (vector-like, e.g. mathutils.Vector) to its integer cell key.
        return int(co[0] / self.cell_size), int(co[1] / self.cell_size), int(co[2] / self.cell_size)

    def add_item(self, item):
        k = self.key(item.co)
        if k in self.cells:
            c = self.cells[k]
        else:
            c = set()
            self.cells[k] = c  # bug fix: was `self.cell_size[k] = c`
        c.add(item)
        self.bucket_map[item] = c

    def remove_item(self, item):
        # Also drop the bucket_map entry so it doesn't keep a stale reference.
        self.bucket_map.pop(item).remove(item)

    def update_item(self, item):
        # Re-file the item after its coordinates have changed.
        self.remove_item(item)
        self.add_item(item)

    def check_sphere(self, co, radius, exclude=()):
        # Yield every stored item within `radius` of `co`, visiting only the
        # grid cells overlapped by the query sphere's bounding box.
        r_sqr = radius * radius
        for x in range(int((co[0] - radius) / self.cell_size),
                       int((co[0] + radius) / self.cell_size) + 1):
            for y in range(int((co[1] - radius) / self.cell_size),
                           int((co[1] + radius) / self.cell_size) + 1):
                for z in range(int((co[2] - radius) / self.cell_size),
                               int((co[2] + radius) / self.cell_size) + 1):
                    for item in self.cells.get((x, y, z), _empty_set):
                        if item not in exclude and (item.co - co).length_squared <= r_sqr:
                            yield item
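For context, this is roughly how I drive the class above. The tiny Vec and Particle classes are just stand-ins for whatever the real particles use (in Blender, item.co would be a mathutils.Vector, which has the same subtraction and length_squared API), and for now I simply set cell_size equal to the check radius so a query never visits more than a 3×3×3 block of cells:

# Usage sketch: Vec/Particle are illustrative stand-ins for the real particle type.
import random


class Vec(tuple):
    # Minimal vector: supports subtraction and .length_squared like mathutils.Vector.
    def __sub__(self, other):
        return Vec(a - b for a, b in zip(self, other))

    @property
    def length_squared(self):
        return sum(a * a for a in self)


class Particle:
    def __init__(self, co):
        self.co = Vec(co)


CHECK_RADIUS = 0.1

# Heuristic: cell size equal to the check radius, so each query touches at most 3x3x3 cells.
grid = SpatialHash(cell_size=CHECK_RADIUS)
particles = [Particle((random.random(), random.random(), random.random()))
             for _ in range(1000)]
for p in particles:
    grid.add_item(p)

p0 = particles[0]
neighbors = list(grid.check_sphere(p0.co, CHECK_RADIUS, exclude=(p0,)))
print(len(neighbors))

That cell-size choice feels reasonable but is essentially a guess, which is why I’m asking whether it can be derived more rigorously from the average particle spacing.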