How can I get Google to index a URL with a canonical tag?

We have URL routes with the pattern /ads/:city. The objective is to get the URLs that use this route indexed by Google:

  • /ads/joinville
  • /ads/lages
  • /ads/florianopolis

Today, only /ads/lages is being indexed and treated as the canonical, even though the pages carry a rel=canonical tag.

The canonical tag is: <link rel="canonical" href="https://example.com/ads/florianopolis" />

How can we get Google to index all of these URLs?

Massive slowdown after doing an ALTER to change index from int to bigint, with Postgres

I have a table like this:

create table trades (
    instrument varchar(20)      not null,
    ts         timestamp        not null,
    price      double precision not null,
    quantity   double precision not null,
    direction  integer          not null,
    id         serial
        constraint trades_pkey
            primary key
);

I wanted to move the id to bigint, so I did:

ALTER TABLE trades ALTER id TYPE BIGSERIAL;

and then I ran:

ALTER SEQUENCE trades_id_seq AS BIGINT;

and now pretty much any large query that uses id in the WHERE clause is so slow that it times out.
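
To double-check that the type change actually took effect on both the column and the sequence, something like this can be used to verify it (a sketch; I am not claiming this is the cause):

-- sketch: verify the column type and the sequence's type after the ALTERs
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'trades' AND column_name = 'id';

SELECT seqtypid::regtype AS sequence_type
FROM pg_sequence
WHERE seqrelid = 'trades_id_seq'::regclass;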

The database is AWS RDS Postgres.

Could it be a problem with the index itself?
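
In case it is relevant, a quick way to see which indexes the planner has available on the table is something like this (a sketch, using the trades table from the definition above):

-- sketch: list the indexes defined on the table
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'trades';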


Here is the query:

EXPLAIN (ANALYZE, BUFFERS)
SELECT id, instrument, ts, price, quantity, direction
FROM binance_trades
WHERE id >= 119655532
ORDER BY ts
LIMIT 50;

and output:

50 rows retrieved starting from 1 in 1 m 4 s 605 ms (execution: 1 m 4 s 353 ms, fetching: 252 ms)

INSERT INTO "MY_TABLE"("QUERY PLAN") VALUES ('Limit  (cost=0.57..9.86 rows=50 width=44) (actual time=86743.860..86743.878 rows=50 loops=1)'); INSERT INTO "MY_TABLE"("QUERY PLAN") VALUES ('  Buffers: shared hit=20199328 read=1312119 dirtied=111632 written=109974'); INSERT INTO "MY_TABLE"("QUERY PLAN") VALUES ('  I/O Timings: read=40693.524 write=335.051'); INSERT INTO "MY_TABLE"("QUERY PLAN") VALUES ('  ->  Index Scan using idx_extrades_ts on binance_trades  (cost=0.57..8015921.79 rows=43144801 width=44) (actual time=86743.858..86743.871 rows=50 loops=1)'); INSERT INTO "MY_TABLE"("QUERY PLAN") VALUES ('        Filter: (id >= 119655532)'); INSERT INTO "MY_TABLE"("QUERY PLAN") VALUES ('        Rows Removed by Filter: 119654350'); INSERT INTO "MY_TABLE"("QUERY PLAN") VALUES ('        Buffers: shared hit=20199328 read=1312119 dirtied=111632 written=109974'); INSERT INTO "MY_TABLE"("QUERY PLAN") VALUES ('        I/O Timings: read=40693.524 write=335.051'); INSERT INTO "MY_TABLE"("QUERY PLAN") VALUES ('Planning Time: 0.088 ms'); INSERT INTO "MY_TABLE"("QUERY PLAN") VALUES ('Execution Time: 86743.902 ms'); 

The activity on AWS:

[screenshot: AWS RDS activity monitoring graph]

It's a 2-core, 8 GB ARM server. Before I did the ALTER, the same request took less than a second; now small requests are slow and long ones time out.
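
One thing I have not done since the ALTER is re-gather planner statistics; if the type change could have left the statistics for id stale or missing, this is what I would try first (a sketch, not something I have verified fixes it):

-- sketch: refresh planner statistics for the rewritten table
ANALYZE trades;
-- or, more targeted:
ANALYZE trades (id);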

MySQL 8.0 Full Index Scan in RAM sluggish

I have a MySQL 8.0 table on Amazon RDS. This table contains ~35 columns, most of which are sizable JSON blobs. The primary key is an unsigned INT32. There are 8M+ rows in this table, and it is 50 GB+ in size.

I ran a simple COUNT(*) on this table with no WHERE clause and it took over 20 minutes. I read online that the PRIMARY key BTREE includes at least 20 bytes for each JSON/TEXT/BLOB column and to try creating a separate index on the primary key. I tried this and it slightly improved performance to 10 minutes for a COUNT(*). EXPLAINing the query shows correctly that a Full Index Scan was used on this new index.
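
For reference, the extra index was created along these lines (a sketch; my_table and id stand in for the real table and primary-key column names):

-- sketch: secondary index on just the primary-key column
ALTER TABLE my_table ADD INDEX idx_id_only (id);

-- the count that now does a full index scan over idx_id_only
SELECT COUNT(*) FROM my_table;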

I measured the size of this second index to be ~87MB (using the query in this SO answer https://stackoverflow.com/a/36573801). This makes sense to me as 8M rows * 4 bytes = 31MB and the other 56MB is likely overhead. This entire index would easily fit in RAM, and I expect that even a Full Index Scan would complete fairly quickly (in seconds, not 10 minutes).

My AWS console shows that Read IOPS spikes to the maximum when running the COUNT(*). So MySQL is reading from disk for some reason, despite the second index easily fitting in RAM. Running the same COUNT(*) query again right after the first did not change the time taken at all, so it seems unlikely that the first run was spending its time loading the index into RAM (and even if it was, the disk is an SSD, so 87 MB would load quickly).

What’s causing my query to read from disk? How can I improve performance?
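
In case it helps narrow things down, these are the kinds of checks that should show whether the buffer pool is large enough and whether reads are actually going to disk (a sketch; the variable and status names are the standard InnoDB ones, nothing specific to my setup):

-- sketch: buffer pool size vs. the ~87 MB index
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- physical reads (Innodb_buffer_pool_reads) vs. logical read requests
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';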


How do you 'do something' to every element in a list except one index? (C# beginner-level question)

I created a list of 40 buttons; each of these buttons has an int counter that increments up to 5 each time it is pressed.

If I hit button 1, the other buttons' counters reset to 0, but the button I hit can keep increasing to 2, 3, 4, 5.

How would you loop over the list in a way that doesn't also reset the button being pressed?

Button itself is a class, and I have a ButtonManager that contains a List<Button> Buttons.

Query taking extremely long to execute first time (Possibly due to Index Caching?)

I have a rather large table, ~3.5 billion rows and growing. For each row I have a specific ID which I wish to retrieve faster than I currently can. The current run-time is 5 minutes the first time I execute the query, but subsequent executions are nearly instant. The table data is approximately 603,628 MB and the index space takes up 406,398 MB.

A sample row is:

documentID            pages  sort_id  word_bbox         page_bbox
asfdfdddee23333rtfds  1      1        2030 12 2123 55   0 0 2479 3508
aavfcbu4lobfhlyguicl  1      2        2144 12 2157 45   0 0 2479 3508

The query I wish to execute is:

SELECT p.documentID
     , p.pages
     , convert(integer, REPLACE(p.word_id, 'word_1_', '')) as sort_id
     , p.word
     , p.word_bbox
     , p.page_bbox
FROM [MY].[DB].[DOCUMENTS] p with (NOLOCK)
where p.documentID = 'asfdfdddee23333rtfds'

I suspect that the rather long execution time (~5 minutes) is caused by the read operations on the server the first time around?

I have created a clustered index on documentID, and also a non-clustered index on documentID (just as a test; I had no particular reason to expect it to help).
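
For clarity, the two indexes were created roughly like this (a sketch; the actual index names are different):

-- sketch: clustered index on documentID
CREATE CLUSTERED INDEX CIX_documents_documentID
    ON [MY].[DB].[DOCUMENTS] (documentID);

-- sketch: additional non-clustered index on the same column (added as a test)
CREATE NONCLUSTERED INDEX IX_documents_documentID
    ON [MY].[DB].[DOCUMENTS] (documentID);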

The execution plan is shown here:

[screenshot: execution plan]

Execution plan XML: https://www.brentozar.com/pastetheplan/?id=SJezWACCu
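
To confirm whether the slow first execution really is dominated by physical reads (and the fast repeats by cache hits), something like the following could be measured; this is only a sketch with a shortened column list:

-- sketch: compare logical vs. physical reads between the first and second run
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT p.documentID, p.pages, p.word_bbox, p.page_bbox
FROM [MY].[DB].[DOCUMENTS] p
WHERE p.documentID = 'asfdfdddee23333rtfds';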

Index Drop for Partial Overlap?

I'm working with a 50-million-row table that has the following indexes:

  • Clustered Index keyed on [Col1]
  • Nonclustered Index keyed on [Col1], [Col2] (no includes)

It seems like the right call here would be to drop #2 and rebuild #1 keyed on [Col1], [Col2]. Sound right? Would the tuning logic be the same if the NCI still had [Col1] as its first key and then 7 other keys after that (rebuild the CX with 8 keys)?
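
To make it concrete, the change I have in mind would look roughly like this (a sketch; the object names are placeholders, and CIX_MyTable is assumed to be the name of the existing clustered index):

-- sketch: drop the redundant nonclustered index ...
DROP INDEX IX_MyTable_Col1_Col2 ON dbo.MyTable;

-- ... and recreate the clustered index keyed on both columns
CREATE CLUSTERED INDEX CIX_MyTable
    ON dbo.MyTable (Col1, Col2)
    WITH (DROP_EXISTING = ON);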

Ola Hallengren SQL Server Index and Statistics Maintenance solution

We have been using the Ola Hallengren SQL Server Index and Statistics Maintenance solution in our production system for the past 6 months. The script is only used for updating statistics, not index maintenance. The job used to take about 90-120 minutes to complete, which was completely normal considering the database size (1.8 TB). For the past couple of weeks, the job has suddenly been taking about 5-6 hours to complete. We haven't made any changes to the system. Each statistic used to take less than 5 seconds to update; now each takes about 60-250 seconds to complete. All this happened within a couple of days, not gradually. We are using SQL Server Enterprise edition.

Has anyone experienced this kind of issue before? Any suggestions are greatly appreciated.

Below are the parameters used in the SQL job.

EXECUTE dbo.IndexOptimize
    --@Databases = 'ALL_DATABASES',
    @Databases = 'User DB',
    @FragmentationLow = NULL,
    @FragmentationMedium = NULL,
    @FragmentationHigh = NULL,
    @UpdateStatistics = 'ALL',
    @OnlyModifiedStatistics = 'Y',
    @MAXDOP = 2,
    @Indexes = 'ALL_INDEXES',
    @LogToTable = 'Y'
    --,@Indexes = 'ALL_INDEXES,-%.dbo.eventhistory,-%.dbo.eventhistoryrgc'
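
Since @LogToTable = 'Y' is set, the per-statistics durations can be pulled from the logging table; a sketch of that check, assuming the default dbo.CommandLog table that ships with the solution:

-- sketch: recent UPDATE STATISTICS commands and their durations from the log table
SELECT TOP (100)
       DatabaseName,
       ObjectName,
       StatisticsName,
       StartTime,
       EndTime,
       DATEDIFF(SECOND, StartTime, EndTime) AS DurationSeconds
FROM dbo.CommandLog
WHERE CommandType = 'UPDATE_STATISTICS'
ORDER BY StartTime DESC;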

Best Regards, Arun

Dockerized Postgres – No space left on device – creating INDEX while PROCESSING TOC

How do I solve a disk-full error in a Docker volume used by a PostgreSQL database?

I'm not sure if the issue is with Docker, Docker volumes, or the Postgres configuration.

I have 900 GB on my hard drive, but that is not how much space is available inside the Docker image.

I'm not sure if I'm supposed to change the Postgres configuration or if it is a volume size issue.

These are the different errors that I have gotten.

pg_restore: from TOC entry 5728; 1259 21482844 INDEX index_activities_new mydb
could not execute query: ERROR:  could not write to file "base/pgsql_tmp/pgsql_tmp173.0.sharedfileset/0.0": No space left on device
creating INDEX "public.index_activities_new"

pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 5729; 1259 10502259 INDEX index_activities_on_activity_for mydb
pg_restore: error: could not execute query: ERROR:  could not extend file "base/2765687/3622617": No space left on device
HINT:  Check free disk space.

I have been gradually working through different ways of using pg_restore:

# many indexes in the database fail to create
pg_restore -h 127.0.0.1 -U postgres -d "mydb" "mydump.dump"

# when running data only, the restore works
pg_restore --section=pre-data --section=data -h 127.0.0.1 -U postgres -d "mydb" "mydump.dump"

# when running for specific TOCs, I sometimes have success and sometimes have out of disk space errors
pg_restore -v -L x.txt -h 127.0.0.1 -U postgres -d "printspeak_development" "salescrm.dump"

# contents of x.txt
;5971; 1259 31041251 INDEX public index_account_history_data_tenant_source_account_platform_id salescrm
;6138; 1259 2574937 INDEX public index_action_logs_on_action salescrm

Container shell disk size

docker exec -it pg13 bash

root@51fe1c96f701:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay          59G   56G   26M 100% /
tmpfs            64M     0   64M   0% /dev
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
shm              64M  1.3M   63M   2% /dev/shm
/dev/vda1        59G   56G   26M 100% /etc/hosts
tmpfs           2.0G     0  2.0G   0% /proc/acpi
tmpfs           2.0G     0  2.0G   0% /sys/firmware

Database that was restored

I created the database and restored a dump from production.

The .dump file was 8 GB in size, and the imported database is 29 GB in size.

SELECT pg_size_pretty( pg_database_size('mydb_development') );

 pg_size_pretty
----------------
 29 GB
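
To see where the 29 GB inside the restored database actually goes (tables vs. indexes), a breakdown like this can be run; it is only a sketch:

-- sketch: the largest relations (tables and indexes) in the restored database
SELECT relname,
       relkind,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind IN ('r', 'i')
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 20;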

Docker Compose

version: '3'

volumes:
  pgdata:

services:
  pg13:
    container_name: "pg13"
    image: "postgres:latest"
    environment:
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "password"
    command: "postgres -c max_wal_size=2GB -c log_temp_files=10240 -c work_mem=1GB -c maintenance_work_mem=1GB"
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

From inside the Docker Image