PostgreSQL: sort by value position in array column, then by secondary order

I’m not quite sure what the best way to phrase this is…

So in my DB there is a pillars text array column, which is essentially an enum where each provider has ranked the values by how much they matter to their business, from the value they most want to provide to their clients down to the least important.

I’m using PostGIS to query providers in a specific area, and I want to return providers ordered first by how highly they rank the pillar the client selected, then by distance to the client.

So if every provider’s pillars array contains the values ['a', 'b', 'c', 'd'] in some order, depending on what each provider selected, and the client selected pillar c,

the query would ideally return all providers that have pillar c at array index 0 first, ordered by distance to the client geopoint; then providers with pillar c at index 1, again ordered by distance; then index 2; then index 3.

I’m really only looking for the top 3 results in all cases, and providers with pillar c at index 1 would only be needed if there were fewer than 3 results at index 0.

Is this possible to pull off in a single query? Or should I just run it repeatedly with a WHERE clause on each index, checking the result count until I have 3 results?

The pillars column is indexed with a GIN index, by the way.
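For reference, the kind of single query I’m hoping is possible. This is just my sketch: the table/column names and coordinates are from my schema, and array_position is something I found in the docs, so it may not be the right approach at all:

```sql
-- Sketch: rank providers by where the selected pillar sits in their
-- pillars array, then by distance, keeping only the top 3.
SELECT p.*
FROM providers p
WHERE ST_DWithin(p.geom, ST_MakePoint(-73.99, 40.73)::geography, 5000)
  AND p.pillars @> ARRAY['c']               -- should be able to use the GIN index
ORDER BY array_position(p.pillars, 'c'),    -- 1-based index of pillar c
         p.geom <-> ST_MakePoint(-73.99, 40.73)::geography
LIMIT 3;
```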

How does PostgreSQL store timestamp internally?

In the case of MySQL:

TIMESTAMP values are stored as the number of seconds since the epoch ('1970-01-01 00:00:00' UTC)

In the case of PostgreSQL version 9.6 or earlier, the documentation says:

timestamp values are stored as seconds before or after midnight 2000-01-01

In the case of PostgreSQL version 10 or later, there is no such explanation in the documentation.

I have two questions about the internal logic of PostgreSQL.

  1. Does it still use the same storage format as version 9.6?
  2. Why "midnight 2000-01-01"? The Unix epoch starts at 1970-01-01 00:00:00 UTC, and the J2000 epoch starts at 12 noon (midday) on January 1, 2000.

It seems like only a few systems use 2000-01-01 00:00:00.

Because PostgreSQL provides functions to convert a Unix epoch into a timestamp (to_timestamp) and back (EXTRACT(EPOCH FROM ...)), using an internal epoch different from the Unix epoch would seem to require additional offset calculations.
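To make the question concrete, these are the conversions I mean:

```sql
-- Unix epoch seconds to timestamptz, and back again.
SELECT to_timestamp(0);
-- 1970-01-01 00:00:00+00

SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '2000-01-01 00:00:00 UTC');
-- 946684800
```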

PostgreSQL COPY TO doesn’t honor Linux group privileges

I have issues executing the COPY TO command, receiving the error message below:

ERROR: could not open file "/home/pgsql/TMP/out.txt" for writing: Permission denied
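The command itself is nothing special, roughly the following (the table name here is just a placeholder):

```sql
COPY some_table TO '/home/pgsql/TMP/out.txt';
```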

  • The Postgres engine is being run as the postgres user
  • The postgres user belongs to the app group
  • The TMP folder has the privileges below set:

drwxrws--- 2 pgsql app 23 Mar 28 19:47 TMP

I don’t understand why it doesn’t work…

I did a quick check: logging in as the postgres user and creating a file in that folder worked (the test file name is just an example):

sudo -i -u postgres
touch /home/pgsql/TMP/test.txt   # succeeds

If I change the privileges to 777, everything works as expected (I can run the COPY TO command with no issues). So it looks to me like Postgres somehow doesn’t honor Linux group rights.

Can someone guide me on how to resolve this? (777 is not an option for me.)

Error while running a query in PostgreSQL ported from SQL Server 2016

The query below runs fine in SQL Server 2016:

select ResellerId, vCompanyName, x.*
from wlt_tblReseller AS main
outer apply (
    select count(ipkReportTypeId) AS "count", vReportTypeName
    from wlt_tblReseller _all
    inner join wlt_tblClient clients on clients.ifkParentResellerId = _all.ResellerId
    inner join wlt_tblReport_CompanyMaster reporslogs on reporslogs.ifkCompanyId = clients.ClientId
    inner join wlt_tblReports_TypeMaster rpt_types on rpt_types.ipkReportTypeId = reporslogs.ifkReportTypeId
    where RootResellerId = main.ResellerId
      and rpt_types.bStatus = 1
      and bIsStatic = 1
      and vReportTypeName is not null
    group by ipkReportTypeId, vReportTypeName
) AS x
WHERE IsMiniReseller = 0
  and ResellerId <> 1
  and vReportTypeName is not null
order by vCompanyName desc

But when I take it to PostgreSQL and change outer apply to LEFT JOIN LATERAL, it does not run and produces the following error:

ERROR: syntax error at or near "WHERE"
LINE 9: ) AS x WHERE IsMiniReseller = 0 and ResellerId <> 1 and v…
SQL state: 42601
Character: 649
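For reference, the PostgreSQL version I ran looks roughly like this (subquery unchanged from the SQL Server version, abbreviated here):

```sql
SELECT ResellerId, vCompanyName, x.*
FROM wlt_tblReseller AS main
LEFT JOIN LATERAL (
    -- same correlated subquery as in the SQL Server version
    SELECT count(ipkReportTypeId) AS "count", vReportTypeName
    FROM ...
) AS x
WHERE IsMiniReseller = 0
  AND ResellerId <> 1
  AND vReportTypeName IS NOT NULL
ORDER BY vCompanyName DESC;
```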

What could I be missing? Any help will be much appreciated.

Regards, Chris.

Implicitly cast an ISO 8601 string to TIMESTAMPTZ (PostgreSQL) for Debezium

I am using a 3rd-party application (the Debezium connector). It has to write date-time strings in ISO 8601 format into a TIMESTAMPTZ column. Unfortunately this fails, because there is no implicit cast from varchar to timestamptz.

I did notice that the following works:

SELECT TIMESTAMPTZ('2021-01-05T05:17:46Z');
SELECT TIMESTAMPTZ('2021-01-05T05:17:46.123Z');

I tried the following:

  1. Create a function and a cast:

CREATE OR REPLACE FUNCTION varchar_to_timestamptz(val VARCHAR)
RETURNS timestamptz AS $$
    SELECT TIMESTAMPTZ(val) INTO tstz;
$$ LANGUAGE SQL;

CREATE CAST (varchar AS timestamptz) WITH FUNCTION varchar_to_timestamptz(varchar) AS IMPLICIT;

Unfortunately, it gives the following errors:

function timestamptz(character varying) does not exist

  2. I also tried the same as above but using plpgsql, and got the same error.

  3. I tried writing a manual parse, but had issues with the optional microseconds segment, which led me to the following:

CREATE OR REPLACE FUNCTION varchar_to_timestamptz(val varchar)
RETURNS timestamptz AS $$
    SELECT CASE
        WHEN $1 LIKE '%.%'
            THEN to_timestamp($1, 'YYYY-MM-DD"T"HH24:MI:SS.USZ')::timestamp without time zone at time zone 'Etc/UTC'
        ELSE to_timestamp($1, 'YYYY-MM-DD"T"HH24:MI:SSZ')::timestamp without time zone at time zone 'Etc/UTC'
    END
$$ LANGUAGE SQL;

This worked, but it didn’t feel correct.
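Incidentally, a plain cast also seems to accept both formats when I try it in psql, which makes me wonder whether the function body could simply be a cast (just my observation, not verified against Debezium):

```sql
-- Both strings parse with the built-in text-to-timestamptz cast:
SELECT '2021-01-05T05:17:46Z'::timestamptz;
SELECT '2021-01-05T05:17:46.123Z'::timestamptz;
```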

Is there a better way to approach this implicit cast?

PostgreSQL Function – Finding the difference between 2 times

I want to create a simple function in Postgres to find the difference between two TIME values – not TIMESTAMP. As shown below, it accepts 4 parameters: hour, minute, second, and expire (hour). In this example I have commented out seconds and am just working with minutes.

CREATE OR REPLACE FUNCTION time_diff(hr INT, min INT, sec INT, exp_hr INT)
RETURNS INT
LANGUAGE plpgsql AS $$
DECLARE
    cur_time      TIME;
    expire_time   TIME;
    diff_interval INTERVAL;
    diff INT = 0;
BEGIN
    cur_time    = CAST(CONCAT(hr, ':', min, ':', sec) AS TIME); -- cast hour, minutes and seconds to TIME
    expire_time = CAST(CONCAT(exp_hr, ':00:00') AS TIME);       -- cast expire hour to TIME

    -- MINUS operator for TIME returns interval 'HH:MI:SS'
    diff_interval = expire_time - cur_time;

    diff = DATE_PART('hour', diff_interval);
    diff = diff * 60 + DATE_PART('minute', diff_interval);
    --diff = diff * 60 + DATE_PART('second', diff_interval);

    RETURN diff;
END;
$$;

Example: 01:15:00 to 02:00:00 should give me 45 minutes, so I do the following and get the correct answer.

select * from time_diff(1, 15, 0, 2); 

However, if I do this: 23:15:00 to 01:00:00 – that should give me 105 minutes (60 + 45).

select * from time_diff(23, 15, 0, 1); 

But the result I get is -1335. I am trying to work out where I have gone wrong here.
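A quick check of the raw TIME subtraction shows where the -1335 comes from: the result goes negative instead of wrapping past midnight (this is just my diagnosis of the symptom, not a fix):

```sql
SELECT TIME '01:00:00' - TIME '23:15:00';
-- -22:15:00, i.e. (-22 * 60) + (-15) = -1335 minutes
```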

I am also invoking the DATE_PART function, which seems quite expensive in terms of CPU usage. Is there a better way of optimising this function? With the first example I get results in 0.007s on a 2018 i7 Mac mini. I do think this function is quick, but could it be better?


Does the optimized column order for a PostgreSQL table always have variable length types at the end?

There’s a popular and seemingly authoritative blog post called On Rocks and Sand about optimizing PostgreSQL tables for size by re-ordering their columns to eliminate internal padding. It explains how variable-length types incur some extra padding if they’re not at the end of the table:

This means we can chain variable length columns all day long without introducing padding except at the right boundary. Consequently, we can deduce that variable length columns introduce no bloat so long as they’re at the end of a column listing.

And at the end of the post, to summarize:

Sort the columns by their type length as defined in pg_type.

There’s a library called pg_column_byte_packer that integrates with Ruby’s ActiveRecord to automatically re-order columns to reduce padding. The README in that repo cites the above blog post, and in general the library does the same thing the post describes.

However, pg_column_byte_packer does not produce results consistent with the blog post it cites. The blog post relies on PostgreSQL’s internal pg_type.typlen, which always sorts variable-length columns to the end via their typlen of -1; pg_column_byte_packer instead gives them an alignment of 3.

pg_column_byte_packer has an explanatory comment:

    # These types generally have an alignment of 4 (as designated by pg_type
    # having a typalign value of 'i'), but they're special in that small values
    # have an optimized storage layout. Beyond the optimized storage layout, though,
    # these small values also are not required to respect the alignment the type
    # would otherwise have. Specifically, values with a size of at most 127 bytes
    # aren't aligned. That 127 byte cap, however, includes an overhead byte to store
    # the length, and so in reality the max is 126 bytes. Interestingly TOASTable
    # values are also treated that way, but we don't have a good way of knowing which
    # values those will be.
    #
    # See: `fill_val()` in src/backend/access/common/heaptuple.c (in the conditional
    # `else if (att->attlen == -1)` branch).
    #
    # When no limit modifier has been applied we don't have a good heuristic for
    # determining which columns are likely to be long or short, so we currently
    # just slot them all after the columns we believe will always be long.

The comment appears to be right that text columns have a pg_type.typalign of 4, but they also have a pg_type.typlen of -1, which the blog post argues gets the most optimal packing when placed at the end of the table.

So for a table with a four-byte-alignment column, a text column, and a two-byte-alignment column, pg_column_byte_packer will put the text column right between the other two. There’s even a unit test asserting that this always happens.

My question here is: which order of columns actually packs into minimal space?
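For reference, these are the catalog values the two approaches disagree over (assuming a stock pg_type catalog):

```sql
SELECT typname, typlen, typalign
FROM pg_type
WHERE typname IN ('int2', 'int4', 'text');
-- int2 |  2 | s
-- int4 |  4 | i
-- text | -1 | i
```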

Is MySQL more scalable than PostgreSQL due to the difference in how they handle connections?

I’m trying to decide whether MySQL or PostgreSQL would be more suitable for an application that will potentially be hit by thousands of simultaneous requests at a time.

During research, one fact that stands out is that PostgreSQL forks a new process for each connection, whereas MySQL creates a new thread to handle each connection.

  • Does this mean that MySQL is more efficient than PostgreSQL at handling many concurrent connections?

  • How much of an impact does this difference have on how well both systems scale? Is it something that I should worry about to begin with?

How can I decode JSON and save it to a PostgreSQL database with PHP?

I want to do web scraping with PHP. There is JSON data at the URL; I want to pull this data and save it to the PostgreSQL database. This is the code:

<?php
$ch  = curl_init();
$url = "";

curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$resp = curl_exec($ch);

if ($e = curl_error($ch)) {
    echo $e;
} else {
    $decoded = json_decode($resp, true);
    print_r($decoded);
}

// your database connection here
$host = "localhost";
$user = "postgres";
$password = "****";
$dbname = "sok";

// Create connection
try {
    $this->linkid = @pg_connect("host=$this->host port=5432 dbname=$this->dbname user=$this->user password=$this->password");
    if (!$this->linkid)
        throw new Exception("Could not connect to PostgreSQL server.");
} catch (Exception $e) {
    die($e->getMessage());
}

foreach ($array_data as $row) {
    $sql = "INSERT INTO il_adi (il) VALUES (decoded)";
    $conn->query($sql);
}
$conn->close();
?>

I can see the captured data in the array in the terminal. How can I save it to the database?
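From reading the pg_* docs, I think the insert loop should look something like the sketch below, but I’m not sure how to wire my $decoded array into it (the 'il' field name is my guess at the JSON key; untested):

```php
<?php
// Sketch: insert each decoded row with a parameterized query.
$conn = pg_connect("host=localhost port=5432 dbname=sok user=postgres password=****");
if (!$conn) {
    die("Could not connect to PostgreSQL server.");
}

foreach ($decoded as $row) {
    // $row['il'] is a guess at the field name inside the JSON
    pg_query_params($conn, 'INSERT INTO il_adi (il) VALUES ($1)', [$row['il']]);
}
pg_close($conn);
```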

How to list all grants per user/role on PostgreSQL

I’ve run these statements in the Postgres CLI (I’m using PostgreSQL v13.1):

CREATE ROLE blog_user;
GRANT blog_user TO current_user;

And I created a function:

CREATE FUNCTION SIGNUP(username TEXT, email TEXT, password TEXT) RETURNS jwt_token AS $$
DECLARE
  token_information jwt_token;
BEGIN
....
END;
$$ LANGUAGE PLPGSQL VOLATILE SECURITY DEFINER;

Finally I granted a permission:

GRANT EXECUTE ON FUNCTION SIGNUP(username TEXT, email TEXT, password TEXT) TO anonymous; 

I wish to list all grants per user/role in my schema/database. \du and \du+ show only basic information, which does not include the grant (EXECUTE on the function) I made above.
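For what it’s worth, the kind of output I’m after is what I’d expect from querying the information_schema, though I’m not sure this view is complete (treat this as a guess):

```sql
-- Function-level grants visible to the current user:
SELECT grantee, routine_name, privilege_type
FROM information_schema.role_routine_grants
WHERE routine_name = 'signup';
```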