Massive slowdown after an ALTER changing a primary-key column from int to bigint, with Postgres

I have a table like this:

create table trades (
    instrument varchar(20)      not null,
    ts         timestamp        not null,
    price      double precision not null,
    quantity   double precision not null,
    direction  integer          not null,
    id         serial
        constraint trades_pkey
            primary key
);

I wanted to move the id to bigint, so I did:

ALTER TABLE trades ALTER COLUMN id TYPE BIGINT;

then, after, I did:

ALTER SEQUENCE trades_id_seq AS BIGINT;
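
For reference, a quick way to confirm that both the column and its backing sequence ended up as bigint (a sketch using the standard catalogs; names match the DDL above):

-- Check the column's type after the rewrite
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'trades' AND column_name = 'id';

-- Check the sequence's type (pg_sequences exists in Postgres 10+)
SELECT sequencename, data_type
FROM pg_sequences
WHERE sequencename = 'trades_id_seq';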

and now pretty much any large query that uses id in the WHERE clause is so slow it times out.

The database is AWS RDS Postgres.

Could it be a problem with the index itself?


Here is the query:

EXPLAIN (ANALYZE, BUFFERS)
SELECT id, instrument, ts, price, quantity, direction
FROM binance_trades
WHERE id >= 119655532
ORDER BY ts
LIMIT 50;

and output:

50 rows retrieved starting from 1 in 1 m 4 s 605 ms (execution: 1 m 4 s 353 ms, fetching: 252 ms)

Limit  (cost=0.57..9.86 rows=50 width=44) (actual time=86743.860..86743.878 rows=50 loops=1)
  Buffers: shared hit=20199328 read=1312119 dirtied=111632 written=109974
  I/O Timings: read=40693.524 write=335.051
  ->  Index Scan using idx_extrades_ts on binance_trades  (cost=0.57..8015921.79 rows=43144801 width=44) (actual time=86743.858..86743.871 rows=50 loops=1)
        Filter: (id >= 119655532)
        Rows Removed by Filter: 119654350
        Buffers: shared hit=20199328 read=1312119 dirtied=111632 written=109974
        I/O Timings: read=40693.524 write=335.051
Planning Time: 0.088 ms
Execution Time: 86743.902 ms
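
Reading the plan: instead of using the primary-key index on id, the planner walks idx_extrades_ts in ts order and discards 119,654,350 rows that fail the id filter. One thing worth checking first (a sketch; stale planner statistics after the type-change rewrite are an assumption here, not a confirmed diagnosis):

-- When was the table last analyzed? ALTER ... TYPE rewrites the whole table,
-- and the planner can misestimate until fresh statistics are gathered.
SELECT last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'binance_trades';

-- Refresh the statistics (and reclaim dead space left by the rewrite)
VACUUM ANALYZE binance_trades;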

The activity on AWS:

[RDS monitoring screenshot]

It's a 2-core, 8 GB ARM server. Before the ALTER, the same query ran in under a second; now small queries are slow and large ones time out.

Grouping/sorting performance: choosing between BIGINT and NVARCHAR

I want to store a hash-code for a variable-length text field (max 1000 chars) in a database table. The hash-code will be computed and assigned once on insert, and new rows will be inserted very often.

The hash-code will be used mainly for filtering (WHERE), grouping (GROUP BY), and sorting (ORDER BY) in a couple of queries. The table will hold a few million rows over time; around 30% of rows will share a hash-code with another row (identical text), while the rest will be unique.

I have the choice of making the hash-code's data type NVARCHAR (the SHA-1 of the text) or BIGINT (the bytes of the SHA-1 converted to an integer). I think BIGINT will be better in terms of storage space (fewer pages).

Generally speaking, which of these two data types will be better in terms of performance, considering the operations mentioned above?
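
For concreteness, here is a minimal sketch of the two candidate representations, assuming SQL Server (which the NVARCHAR type suggests); @text is a hypothetical variable standing in for the real column:

DECLARE @text NVARCHAR(1000) = N'some variable-length text';

-- Option 1: the full SHA-1 as a 40-character hex string (NVARCHAR)
DECLARE @hash_nvarchar NVARCHAR(40) =
    CONVERT(NVARCHAR(40), HASHBYTES('SHA1', @text), 2);  -- style 2 = hex without 0x

-- Option 2: the first 8 of the 20 SHA-1 bytes folded into a BIGINT
DECLARE @hash_bigint BIGINT =
    CONVERT(BIGINT, SUBSTRING(HASHBYTES('SHA1', @text), 1, 8));

SELECT @hash_nvarchar AS hash_nvarchar, @hash_bigint AS hash_bigint;

Note that the BIGINT variant keeps only 64 of SHA-1's 160 bits, which raises the collision probability slightly; for a few million rows, though, the chance of an accidental 64-bit collision is still negligible.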

Is there a way to store an arbitrarily big BigInt in a bit sequence, only later to convert it into a standard BigInt structure?

I am trying to imagine a way of encoding a BigInt into a bit stream, so that it is literally just a sequence of bits. Then, upon decoding this bit stream, you would rebuild the standard BigInt data structure (an array of small integers plus a sign). How could you encode the BigInt as a sequence of bits, and how would you decode it? I don't see how to properly perform the bitwise manipulations, or how to encode an arbitrary number in more than 32 or 64 bits. If a language is required, I would be doing this in JavaScript.

For instance, this takes an array of four bytes and packs it into a single 32-bit integer:

function arrayOfBytesTo32Int(map) {
  // Packs four bytes (big-endian) into one 32-bit integer.
  // Note: the result is a signed 32-bit value, so a first byte >= 0x80
  // produces a negative number.
  return map[0] << 24
    | map[1] << 16
    | map[2] << 8
    | map[3];
}

How would you do that same sort of thing for arbitrarily long bit sequences?
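
For arbitrarily long sequences, the same shift-and-or idea works if you use JavaScript's built-in BigInt, whose shifts are not limited to 32 bits. A minimal sketch (the function names are my own; the sign is left to a separate flag, as in the sign-and-magnitude structure described above):

function bytesToBigInt(bytes) {
  // Shift the accumulator left 8 bits and append each byte (big-endian).
  let n = 0n;
  for (const b of bytes) {
    n = (n << 8n) | BigInt(b);
  }
  return n;
}

function bigIntToBytes(n) {
  // Peel off the lowest 8 bits at a time, building the array back to front.
  const bytes = [];
  while (n > 0n) {
    bytes.unshift(Number(n & 0xffn));
    n >>= 8n;
  }
  return bytes.length ? bytes : [0];
}

// Round-trip example with the same four bytes the 32-bit version handles:
const n = bytesToBigInt([0x12, 0x34, 0x56, 0x78]);
console.log(n);                 // 305419896n (0x12345678)
console.log(bigIntToBytes(n));  // [18, 52, 86, 120]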