Could a slow website cause visitors from a Facebook ads campaign to bounce?

I have a website with traffic problems. The website sells a product in Spanish.

I recently created a Facebook ads campaign, and it got 96 visits. But Google Analytics only registered 9 visits. I think my website could be slow and people are bouncing. Could this be the case?

I am using WordPress and the ProfitBuilder plugin to create the page.

Spatial Query very slow with radius

I posted this before at Query very slow when using spatial with radius, which has a lot of detail about my problem, but I don't think I included enough data, so I am trying again here with the database and the query I am having trouble tuning.

Download the database and attach it [SQL Server 2019] (I promise there are no viruses, it's just a .bak file in a zip); I also scrubbed it of info I don't want out there 🙂 https://1drv.ms/u/s!AuxopE3ug8yWm-QTvSxyiHGIrAlXow?e=R7m20G

I shrank the database so it's smaller to download, so you must run the following script to rebuild all the indexes:

EXECUTE sp_msForEachTable 'SET QUOTED_IDENTIFIER ON; ALTER INDEX ALL ON ? REBUILD;' 

Run query (Non-Indexed View)

DECLARE @p3 sys.geography
SET @p3 = convert(sys.geography, 0xE6100000010C010000209DE44540FFFFFF3F77CA53C0)

SELECT l.*
FROM GridListings l
WHERE l.Location.STDistance(@p3) < 15000
ORDER BY createddate
OFFSET 0 ROWS FETCH NEXT 21 ROWS ONLY

Or Indexed View

DECLARE @p3 sys.geography
SET @p3 = convert(sys.geography, 0xE6100000010C010000209DE44540FFFFFF3F77CA53C0)

SELECT l.*
FROM GridListingsIndexed l
WHERE l.Location.STDistance(@p3) < 15000
ORDER BY createddate
OFFSET 0 ROWS FETCH NEXT 21 ROWS ONLY

What I am looking for (I am sorry if it is too much, but I am really desperate for help, as a lot of the queries in my app are timing out, some taking between 6 and 50 seconds on a server with 32 GB RAM and 6 vCores (Hyper-V); the server also does other things, but I think there is enough horsepower):

  1. I use the view above, which already filters out expired listings, and then use that view to filter the listings down further, but right now it is slow with the expirydate filter in the view and the radius applied against the view.

  2. Look through my indexes and propose better ones, along with overall improvement suggestions (a starting-point sketch follows this list).

  3. If all else fails, I might have to resort to separating my expired and non-expired listings into separate tables, but that becomes a maintenance nightmare.
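For item 2, one starting point might be a spatial index on the Location column of the base table behind the view, queried with the view's expiry filter inlined. This is only a sketch under assumptions: dbo.Listings stands in for the real base table, the exact expirydate condition is guessed from the description above, and a spatial index requires the table to have a clustered primary key.

-- Sketch only: dbo.Listings is a placeholder for the base table behind GridListings
CREATE SPATIAL INDEX IX_Listings_Location
    ON dbo.Listings (Location)
    USING GEOGRAPHY_AUTO_GRID;

DECLARE @p3 sys.geography = convert(sys.geography, 0xE6100000010C010000209DE44540FFFFFF3F77CA53C0);

SELECT l.*
FROM dbo.Listings l
WHERE l.expirydate > GETUTCDATE()              -- the non-expired filter the view applies, inlined (condition assumed)
  AND l.Location.STDistance(@p3) < 15000       -- sargable form that a spatial index can support
ORDER BY l.createddate
OFFSET 0 ROWS FETCH NEXT 21 ROWS ONLY;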

Multiple aggregations in select slow on postgres

I have a table with the columns id, antenna_id, latitude, and longitude. There are two composite indexes, on (antenna_id, latitude) and (antenna_id, longitude). When I do a max(latitude) for specific antenna id(s), the speed is acceptable, but doing a min and max for both latitude and longitude at the same time is very slow.
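For reference, the two indexes presumably look something like the following (the first name appears in the plan below; the second is a guess):

CREATE INDEX idx_packets_antenna_id_latitude  ON packets (antenna_id, latitude);
CREATE INDEX idx_packets_antenna_id_longitude ON packets (antenna_id, longitude);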

SELECT version()

PostgreSQL 12.3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 9.2.0) 9.2.0, 64-bit

Query

SELECT max(latitude) FROM packets WHERE antenna_id IN (1,2)

Explain

Finalize Aggregate  (cost=439588.11..439588.12 rows=1 width=32)
  ->  Gather  (cost=439588.00..439588.11 rows=1 width=32)
        Workers Planned: 1
        ->  Partial Aggregate  (cost=438588.00..438588.01 rows=1 width=32)
              ->  Parallel Index Only Scan using idx_packets_antenna_id_latitude on packets  (cost=0.57..430103.40 rows=3393839 width=7)
                    Index Cond: (antenna_id = ANY ('{1,2}'::integer[]))
JIT:
  Functions: 5
  Options: Inlining false, Optimization false, Expressions true, Deforming true

Duration

[2021-03-06 12:14:38] 1 row retrieved starting from 1 in 4 s 438 ms (execution: 4 s 400 ms, fetching: 38 ms)
[2021-03-06 12:14:51] 1 row retrieved starting from 1 in 2 s 590 ms (execution: 2 s 560 ms, fetching: 30 ms)

The explain looks almost identical for max(longitude), min(latitude) and min(longitude) on their own. Speed is acceptable.

But when I combine the queries

SELECT max(latitude), max(longitude), min(latitude), min(longitude) FROM packets WHERE antenna_id IN (1,2)

Duration

[2021-03-06 09:28:30] 1 row retrieved starting from 1 in 5 m 35 s 907 ms (execution: 5 m 35 s 869 ms, fetching: 38 ms)

Explain

Finalize Aggregate  (cost=3677020.18..3677020.19 rows=1 width=128)
  ->  Gather  (cost=3677020.06..3677020.17 rows=1 width=128)
        Workers Planned: 1
        ->  Partial Aggregate  (cost=3676020.06..3676020.07 rows=1 width=128)
              ->  Parallel Seq Scan on packets  (cost=0.00..3642080.76 rows=3393930 width=14)
                    Filter: (antenna_id = ANY ('{1,2}'::integer[]))
JIT:
  Functions: 7
  Options: Inlining true, Optimization true, Expressions true, Deforming true

Question

Why does the second query, which computes min and max on both the latitude and longitude fields, not use the indexes? And how can I rewrite the query so that it is faster?
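One common workaround, sketched here under the assumption that the table and indexes are as described above (not verified against this data), is to split the four aggregates into separate scalar subqueries so that each one can use its own index-only scan:

SELECT
    (SELECT max(latitude)  FROM packets WHERE antenna_id IN (1, 2)) AS max_latitude,
    (SELECT max(longitude) FROM packets WHERE antenna_id IN (1, 2)) AS max_longitude,
    (SELECT min(latitude)  FROM packets WHERE antenna_id IN (1, 2)) AS min_latitude,
    (SELECT min(longitude) FROM packets WHERE antenna_id IN (1, 2)) AS min_longitude;

Each subquery should then be planned like the single-aggregate query above, at the cost of scanning the relevant index four times.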

How to avoid alienation by expected but slow mood shift, and still keep players out of spoilers?

I run an MLP campaign. By design, at the start of the campaign what the citizens (PCs) know about the world is true, but it's not the whole and complete truth, and many issues of the past are either not widely known or reframed to appear less severe than they are. The campaign revolves around them figuring out How Things Really Are, and becoming the ones who keep the surface level of the Utopia running.

And here's the question. MLP makes people think that they know how things really are. So, over time a player may decide that it's too dark, or otherwise too much in conflict with their own vision.

If this were another campaign, we could compare our visions for compatibility beforehand, to make sure that it works.

But this campaign is meant to include perspective shifts; I have a few players that are prone to ‘bleeding’ (and know that!) and/or prefer to stay out of spoilers. The ‘actual state of the world’ has/will have a ‘darker past’; this Utopia is based on a few questionable decisions, and is not as stable as it appears at first. I am afraid of alienating these players, or being met by a reaction of "You asked us to play in the Utopia, and then the mood became totally different". Basically, "I was creating my character for another sort of game, one that you initially described to me; and now it’s a different game, one that I don’t actually like".

How can I reduce this risk of alienation, yet still keep the mood of mystery and the (classic urban-fantasy) ‘this is deeper than you thought’ feel, without spoilers?

Slow inserts in MariaDB ColumnStore

I have just started researching the feasibility of using MariaDB's ColumnStore for OLAP, and I find that inserts are very slow. This is MariaDB 10.5 on a Debian 10 system, just an elderly desktop with 8 GB RAM. This is the table, a trigger, and the timings:

MariaDB [test]> show create table analytics_test\G
*************************** 1. row ***************************
       Table: analytics_test
Create Table: CREATE TABLE `analytics_test` (
  `id` int(11) DEFAULT NULL,
  `str` varchar(50) DEFAULT NULL
) ENGINE=Columnstore DEFAULT CHARSET=utf8mb4
1 row in set (0.000 sec)

MariaDB [test]> show create trigger test_trg\G
*************************** 1. row ***************************
               Trigger: test_trg
              sql_mode: STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
SQL Original Statement: CREATE DEFINER=`root`@`localhost` trigger test_trg before insert on analytics_test for each row
begin
  set new.str=concat('Value: ',new.id);
end
  character_set_client: utf8
  collation_connection: utf8_general_ci
    Database Collation: utf8mb4_general_ci
               Created: 2021-01-07 11:14:28.81
1 row in set (0.000 sec)

MariaDB [test]> insert into analytics_test set id=1;
Query OK, 1 row affected (0.817 sec)

MariaDB [test]> insert into analytics_test set id=2;
Query OK, 1 row affected (0.560 sec)

MariaDB [test]> insert into analytics_test set id=3;
Query OK, 1 row affected (0.611 sec)

MariaDB [test]> select * from analytics_test;
+------+----------+
| id   | str      |
+------+----------+
|    1 | Value: 1 |
|    2 | Value: 2 |
|    3 | Value: 3 |
+------+----------+
3 rows in set (0.085 sec)

I think 0.5 sec for a simple insert is very slow – compare to the same table in InnoDB:

MariaDB [test]> show create table test\G
*************************** 1. row ***************************
       Table: test
Create Table: CREATE TABLE `test` (
  `id` int(11) DEFAULT NULL,
  `str` varchar(50) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4
1 row in set (0.000 sec)

MariaDB [test]> show create trigger test_trg1\G
*************************** 1. row ***************************
               Trigger: test_trg1
              sql_mode: STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
SQL Original Statement: CREATE DEFINER=`root`@`localhost` trigger test_trg1 before insert on test for each row
begin
  set new.str=concat('Value: ',new.id);
end
  character_set_client: utf8
  collation_connection: utf8_general_ci
    Database Collation: utf8mb4_general_ci
               Created: 2021-01-07 11:45:07.85
1 row in set (0.000 sec)

MariaDB [test]> insert into test set id=1;
Query OK, 1 row affected (0.010 sec)

Is there anything I need to do in order to make inserts perform better?
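One thing that may be worth checking (an assumption on my part: ColumnStore is geared toward bulk loading rather than row-by-row DML, so per-statement overhead tends to dominate single-row inserts) is whether batching rows changes the picture, for example:

-- Many rows in one statement instead of one statement per row
INSERT INTO analytics_test (id) VALUES (1), (2), (3), (4), (5);

-- Or bulk-load from a file (path is a placeholder; requires LOCAL INFILE to be enabled)
LOAD DATA LOCAL INFILE '/tmp/analytics_test.csv'
INTO TABLE analytics_test
FIELDS TERMINATED BY ','
(id);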

GSA works very slowly

Hi guys,
I have my own site list. I gathered it with GSA, then deleted the bad domains, and now I have about 1,200 domains. I want to use them with GSA. I clicked "import target URLs from file" and chose my list, but GSA doesn't use this list and doesn't work. What can I do? Please help me. I am trying to build good backlinks to my tier 2.

MariaDB (MySQL) slow query when primary key range combined with fulltext index

I have a table, described below, with two columns – an integer primary key and a title text column – currently holding circa 3 million records. As seen in the metadata below, there's a BTREE index on the integer primary key column and a FULLTEXT index on the title column.

MariaDB [ttsdata]> describe records;
+------------------+---------------------+------+-----+---------------------+-------------------------------+
| Field            | Type                | Null | Key | Default             | Extra                         |
+------------------+---------------------+------+-----+---------------------+-------------------------------+
| id               | int(15) unsigned    | NO   | PRI | NULL                | auto_increment                |
| title            | varchar(2000)       | YES  | MUL |                     |                               |
+------------------+---------------------+------+-----+---------------------+-------------------------------+

MariaDB [ttsdata]> show index from records;
+---------+------------+-------------------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table   | Non_unique | Key_name                | Seq_in_index | Column_name      | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+---------+------------+-------------------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| records |          0 | PRIMARY                 |            1 | id               | A         |     2798873 |     NULL | NULL   |      | BTREE      |         |               |
| records |          1 | title                   |            1 | title            | NULL      |           1 |     NULL | NULL   | YES  | FULLTEXT   |         |               |
+---------+------------+-------------------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+

I’d like to run the following query:

SELECT SQL_NO_CACHE * FROM records WHERE
  id > 1928177 AND
  MATCH (title) AGAINST ('+flower' IN BOOLEAN MODE)
LIMIT 200

This query takes more than 5 seconds to execute. When I remove either the range part or the fulltext part, the query executes in circa 100 ms in both cases. Below is an analysis of the individual queries, the last one being the one I want to use.

I'm new to MySQL and database administration in general. I've posted the EXPLAIN output, but I have no idea how to draw conclusions from it. I assume the query is slow because the range filtering happens on the data set obtained from the fulltext query.

So my question is: How can I make the query fast?

The 1928177 magic number is something that just happens to be needed.

Query 1

SELECT SQL_NO_CACHE * FROM records WHERE id > 1928177 LIMIT 200 
MariaDB [ttsdata]> explain SELECT SQL_NO_CACHE * FROM records WHERE id > 1928177 LIMIT 200;
+------+-------------+---------+-------+---------------+---------+---------+------+--------+-----------------------+
| id   | select_type | table   | type  | possible_keys | key     | key_len | ref  | rows   | Extra                 |
+------+-------------+---------+-------+---------------+---------+---------+------+--------+-----------------------+
|    1 | SIMPLE      | records | range | PRIMARY       | PRIMARY | 4       | NULL | 227183 | Using index condition |
+------+-------------+---------+-------+---------------+---------+---------+------+--------+-----------------------+
1 row in set (0.005 sec)

MariaDB [ttsdata]> SELECT SQL_NO_CACHE * FROM records WHERE id > 1928177 LIMIT 200;
...
200 rows in set (0.108 sec)

Time: 0.108 sec

Query 2

SELECT SQL_NO_CACHE * FROM records WHERE MATCH (title) AGAINST ('+flower' IN BOOLEAN MODE) LIMIT 200 
MariaDB [ttsdata]> explain SELECT SQL_NO_CACHE * FROM records WHERE MATCH (title) AGAINST ('+flower' IN BOOLEAN MODE) LIMIT 200;
+------+-------------+---------+----------+---------------+-------+---------+------+------+-------------+
| id   | select_type | table   | type     | possible_keys | key   | key_len | ref  | rows | Extra       |
+------+-------------+---------+----------+---------------+-------+---------+------+------+-------------+
|    1 | SIMPLE      | records | fulltext | title         | title | 0       |      | 1    | Using where |
+------+-------------+---------+----------+---------------+-------+---------+------+------+-------------+
1 row in set (0.007 sec)

MariaDB [ttsdata]> SELECT SQL_NO_CACHE * FROM records WHERE MATCH (title) AGAINST ('+flower' IN BOOLEAN MODE) LIMIT 200;
...
200 rows in set (0.138 sec)

Time: 0.138 sec

Query 3

SELECT SQL_NO_CACHE * FROM records WHERE
  id > 1928177 AND
  MATCH (title) AGAINST ('+flower' IN BOOLEAN MODE)
LIMIT 200
MariaDB [ttsdata]> explain SELECT SQL_NO_CACHE * FROM records WHERE id > 1928177 AND MATCH (title) AGAINST ('+flower' IN BOOLEAN MODE) LIMIT 200;
+------+-------------+---------+----------+---------------+-------+---------+------+------+-------------+
| id   | select_type | table   | type     | possible_keys | key   | key_len | ref  | rows | Extra       |
+------+-------------+---------+----------+---------------+-------+---------+------+------+-------------+
|    1 | SIMPLE      | records | fulltext | PRIMARY,title | title | 0       |      | 1    | Using where |
+------+-------------+---------+----------+---------------+-------+---------+------+------+-------------+
1 row in set (0.005 sec)

MariaDB [ttsdata]> SELECT SQL_NO_CACHE * FROM records WHERE id > 1928177 AND MATCH (title) AGAINST ('+flower' IN BOOLEAN MODE) LIMIT 200;
...
200 rows in set (5.627 sec)

Time: 5.627 sec
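A possible rewrite (just a sketch, not verified on this data) is to run the fulltext search in a derived table first and apply the id range afterwards, so the range condition is not pushed into the fulltext lookup:

SELECT SQL_NO_CACHE t.*
FROM (
    SELECT * FROM records
    WHERE MATCH (title) AGAINST ('+flower' IN BOOLEAN MODE)
) AS t
WHERE t.id > 1928177
LIMIT 200

The caveat is that the derived table may still materialize every '+flower' match, so this only helps if the number of fulltext matches is modest.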

Optimizing a slow For loop

I know that this is a "beginner question", and, in fact, I am a beginner. I want to improve my code, as it takes far too much time to run. I have already read some other discussions, like here, but I am struggling to translate the simplifications with Table or Do into my example.

The For loop I want to improve is the following:

zh = 0.4;
list = {};
eqdiff = phi''[x] + 2*phi'[x]/x + (2*(zg + zh*Exp[-zh*x/dh])/x + 1)*phi[x] == 0;
For[i = 0, i < 10000, i++,
  With[{zg = -0.6, dh = i*10^-2},
    nsol = Block[{eps = $MachineEpsilon},
      NDSolve[{eqdiff, phi[eps] == 1, phi'[eps] == -(zg + zh)}, phi, {x, eps, 20000},
        WorkingPrecision -> MachinePrecision, AccuracyGoal -> 15, PrecisionGoal -> 8, MaxSteps -> Infinity]]];
  AppendTo[list, 1/Evaluate[(15000*phi[15000])^2 + ((15000 - Pi/2)*phi[15000 - Pi/2])^2 /. nsol[[1]]]];
]

Clearly, this code, written this way, is highly inefficient. Also, I need to do more of these runs, with different values of zg inside With, and make some plots out of the resulting lists.

Can anyone help me with this noob question? Thanks a lot!

PostgreSQL: query slow, planner estimates 0.01-0.1 of actual results

I will preface this by saying I'm not well-versed in SQL at all. I mainly work with ORMs, and this recent headache has brought me to dive into the world of queries, planners, etc.

A very common query is behaving weirdly on my website. I have tried various techniques to solve it, but nothing is really helping, except narrowing down the release_date range from 30 days to 7 days. However, from my understanding, the tables in question aren't very big, and PostgreSQL should be able to satisfy this query in acceptable time.

Some statistics:

core_releasegroup row count: 3,240,568

core_artist row count: 287,699

core_subscription row count: 1,803,960

Relationships:

Each ReleaseGroup has a many-to-many relationship with Artist, and each Artist has a many-to-many relationship with UserProfile through Subscription. I'm using Django, which auto-creates indexes on foreign keys, etc.

Here’s the query:

SELECT "core_releasegroup"."id", "core_releasegroup"."title", "core_releasegroup"."type", "core_releasegroup"."release_date", "core_releasegroup"."applemusic_id", "core_releasegroup"."applemusic_image", "core_releasegroup"."geo_apple_music_link", "core_releasegroup"."amazon_aff_link", "core_releasegroup"."is_explicit", "core_releasegroup"."spotify_id", "core_releasegroup"."spotify_link"  FROM "core_releasegroup"  INNER JOIN "core_artist_release_groups"  ON ("core_releasegroup"."id" = "core_artist_release_groups"."releasegroup_id")  WHERE ("core_artist_release_groups"."artist_id"  IN  (SELECT U0."artist_id" FROM "core_subscription" U0 WHERE U0."profile_id" = 1)  AND "core_releasegroup"."type" IN ('Album', 'Single', 'EP', 'Live', 'Compilation', 'Remix', 'Other')  AND "core_releasegroup"."release_date" BETWEEN '2020-08-20'::date AND '2020-10-20'::date); 

Here’s the table schema:

CREATE TABLE public.core_releasegroup (
    id integer NOT NULL,
    created_date timestamp with time zone NOT NULL,
    modified_date timestamp with time zone NOT NULL,
    title character varying(560) NOT NULL,
    type character varying(30) NOT NULL,
    release_date date,
    applemusic_id character varying(512),
    applemusic_image character varying(512),
    applemusic_link character varying(512),
    spotify_id character varying(512),
    spotify_image character varying(512),
    spotify_link character varying(512),
    is_explicit boolean NOT NULL,
    spotify_last_refresh timestamp with time zone,
    spotify_next_refresh timestamp with time zone,
    geo_apple_music_link character varying(512),
    amazon_aff_link character varying(620)
);

I have indexes on type, release_date, and applemusic_id.

Here's the PostgreSQL execution plan (notice the estimates):

 Nested Loop  (cost=2437.52..10850.51 rows=4 width=495) (actual time=411.911..8619.311 rows=362 loops=1)
   Buffers: shared hit=252537 read=29104
   ->  Nested Loop  (cost=2437.09..10578.84 rows=569 width=499) (actual time=372.265..8446.324 rows=36314 loops=1)
         Buffers: shared hit=143252 read=29085
         ->  Bitmap Heap Scan on core_releasegroup  (cost=2436.66..4636.70 rows=567 width=495) (actual time=372.241..7707.466 rows=32679 loops=1)
               Recheck Cond: ((release_date >= '2020-08-20'::date) AND (release_date <= '2020-10-20'::date) AND ((type)::text = ANY ('{Album,Single,EP,Live,Compilation,Remix,Other}'::text[])))
               Heap Blocks: exact=29127
               Buffers: shared hit=10222 read=27872
               ->  BitmapAnd  (cost=2436.66..2436.66 rows=567 width=0) (actual time=366.750..366.750 rows=0 loops=1)
                     Buffers: shared hit=15 read=8952
                     ->  Bitmap Index Scan on core_releasegroup_release_date_03a267f7  (cost=0.00..342.46 rows=16203 width=0) (actual time=8.834..8.834 rows=32679 loops=1)
                           Index Cond: ((release_date >= '2020-08-20'::date) AND (release_date <= '2020-10-20'::date))
                           Buffers: shared read=92
                     ->  Bitmap Index Scan on core_releasegroup_type_58b6243d_like  (cost=0.00..2093.67 rows=113420 width=0) (actual time=355.071..355.071 rows=3240568 loops=1)
                           Index Cond: ((type)::text = ANY ('{Album,Single,EP,Live,Compilation,Remix,Other}'::text[]))
                           Buffers: shared hit=15 read=8860
         ->  Index Scan using core_artist_release_groups_releasegroup_id_cea5da71 on core_artist_release_groups  (cost=0.43..10.46 rows=2 width=8) (actual time=0.018..0.020 rows=1 loops=32679)
               Index Cond: (releasegroup_id = core_releasegroup.id)
               Buffers: shared hit=133030 read=1213
   ->  Index Only Scan using core_subscription_profile_id_artist_id_a4d3d29b_uniq on core_subscription u0  (cost=0.43..0.48 rows=1 width=4) (actual time=0.004..0.004 rows=0 loops=36314)
         Index Cond: ((profile_id = 1) AND (artist_id = core_artist_release_groups.artist_id))
         Heap Fetches: 362
         Buffers: shared hit=109285 read=19
 Planning Time: 10.951 ms
 Execution Time: 8619.564 ms

Please note that the above is a stripped-down version of the actual query I need. Because of its unbearable slowness, I've reduced this query to a bare minimum and resorted to doing some filtering and ordering of the returned objects in Python (which I know is usually slower). As you can see, it's still very slow.

After a while, probably because the memory/cache are populated, this query becomes much faster:

 Nested Loop  (cost=2437.52..10850.51 rows=4 width=495) (actual time=306.337..612.232 rows=362 loops=1)
   Buffers: shared hit=241776 read=39865 written=4
   ->  Nested Loop  (cost=2437.09..10578.84 rows=569 width=499) (actual time=305.216..546.749 rows=36314 loops=1)
         Buffers: shared hit=132503 read=39834 written=4
         ->  Bitmap Heap Scan on core_releasegroup  (cost=2436.66..4636.70 rows=567 width=495) (actual time=305.195..437.375 rows=32679 loops=1)
               Recheck Cond: ((release_date >= '2020-08-20'::date) AND (release_date <= '2020-10-20'::date) AND ((type)::text = ANY ('{Album,Single,EP,Live,Compilation,Remix,Other}'::text[])))
               Heap Blocks: exact=29127
               Buffers: shared hit=16 read=38078 written=4
               ->  BitmapAnd  (cost=2436.66..2436.66 rows=567 width=0) (actual time=298.382..298.382 rows=0 loops=1)
                     Buffers: shared hit=16 read=8951
                     ->  Bitmap Index Scan on core_releasegroup_release_date_03a267f7  (cost=0.00..342.46 rows=16203 width=0) (actual time=5.619..5.619 rows=32679 loops=1)
                           Index Cond: ((release_date >= '2020-08-20'::date) AND (release_date <= '2020-10-20'::date))
                           Buffers: shared read=92
                     ->  Bitmap Index Scan on core_releasegroup_type_58b6243d_like  (cost=0.00..2093.67 rows=113420 width=0) (actual time=289.917..289.917 rows=3240568 loops=1)
                           Index Cond: ((type)::text = ANY ('{Album,Single,EP,Live,Compilation,Remix,Other}'::text[]))
                           Buffers: shared hit=16 read=8859
         ->  Index Scan using core_artist_release_groups_releasegroup_id_cea5da71 on core_artist_release_groups  (cost=0.43..10.46 rows=2 width=8) (actual time=0.003..0.003 rows=1 loops=32679)
               Index Cond: (releasegroup_id = core_releasegroup.id)
               Buffers: shared hit=132487 read=1756
   ->  Index Only Scan using core_subscription_profile_id_artist_id_a4d3d29b_uniq on core_subscription u0  (cost=0.43..0.48 rows=1 width=4) (actual time=0.002..0.002 rows=0 loops=36314)
         Index Cond: ((profile_id = 1) AND (artist_id = core_artist_release_groups.artist_id))
         Heap Fetches: 362
         Buffers: shared hit=109273 read=31
 Planning Time: 1.088 ms
 Execution Time: 612.360 ms

This is still slow in SQL terms (I guess?), but much more acceptable. The problem is that even though this is a very common view in my web app (an often-executed query), it is still not kept around in RAM/cache, so I see these huge response-time spikes too often.

I’ve tried every combination of constructing those queries. My last attempt is to see whether the planner estimations are to blame here, and if they’re fixable. If not, I’ll start considering de-normalization.

Or is there something else I’m missing?
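In case it is useful context for the estimates question: one thing I have been considering (purely a sketch; the object names below are made up, CREATE STATISTICS requires PostgreSQL 10 or later, and I have not confirmed it helps with range or IN conditions) is a combined index on the two filtered columns plus extended statistics so the planner can see that type and release_date are not independent:

-- Hypothetical combined index covering both filter columns (name is a placeholder)
CREATE INDEX core_releasegroup_release_date_type_idx
    ON core_releasegroup (release_date, type);

-- Extended statistics so the planner can account for correlation between type and release_date
CREATE STATISTICS core_releasegroup_type_release_date_stats (dependencies)
    ON type, release_date FROM core_releasegroup;

ANALYZE core_releasegroup;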