PostgreSQL query on jsonb column is not using indexes

I have the following table and indexes, where all of the indexes were created in attempts to solve my issue:

create table persistent_events
(
    "notificationId" text  not null
        constraint persistent_events_pkey
            primary key,
    payload          bytea not null,
    notification     jsonb
);

create index metadatadescriptionidx
    on persistent_events (((notification -> 'metadata'::text) ->> 'description'::text));

create index metadataidx
    on persistent_events ((notification ->> 'metadata'::text));

create index metadataidxgin
    on persistent_events using gin ((notification -> 'metadata'::text));

create index metadatadescriptionidxgin
    on persistent_events (((notification -> 'metadata'::text) -> 'description'::text));

create index metadataidx2
    on persistent_events ((notification -> 'metadata'::text));

create index metadatadescriptionidx2
    on persistent_events (((notification -> 'metadata'::text) -> 'description'::text));

create index metadataidx3
    on persistent_events (jsonb_extract_path(notification, VARIADIC ARRAY ['metadata'::text]));

create index metadatadescriptionidx4
    on persistent_events ((jsonb_extract_path(notification, VARIADIC ARRAY ['metadata'::text]) -> 'description'::text));

create index metadatadescriptionidx3
    on persistent_events ((jsonb_extract_path(notification, VARIADIC ARRAY ['metadata'::text]) ->> 'description'::text));

The data stored in the notification column is like the following, but the content of notificationData varies a lot.

{
    "metadata":
    {
        "description": "Test event",
        "notificationId": "5eaf73ac-c0b1-4e39-86cc-d5cf9f5f33190e"
    },
    "notificationData":
    {
        "attributesChangeInfo":
        [
            {
                "newValue": "host",
                "oldValue": "localhost",
                "attributeName": "something"
            }
        ]
    }
}

If I query with the following statement everything works fine:

SELECT notification, payload
FROM persistent_events
WHERE ((notification->'metadata'->>'description' = 'Test event'));

The execution plan is the following and it is using the indexes as expected:

Bitmap Heap Scan on persistent_events  (cost=93.12..15578.79 rows=4735 width=549) (actual time=2.076..2.078 rows=2 loops=1)
  Recheck Cond: (((notification -> 'metadata'::text) ->> 'description'::text) = 'Test event'::text)
  Heap Blocks: exact=1
  ->  Bitmap Index Scan on metadatadescriptionidx  (cost=0.00..91.94 rows=4735 width=0) (actual time=0.845..0.846 rows=2 loops=1)
        Index Cond: (((notification -> 'metadata'::text) ->> 'description'::text) = 'Test event'::text)
Planning Time: 16.939 ms
Execution Time: 2.177 ms

If I instead write it with jsonb_extract_path, as in the following statement, it does not use the indexes:

SELECT notification, payload
FROM persistent_events,
     jsonb_extract_path(notification, 'metadata') metadata0
WHERE ((metadata0->>'description' = 'Test event'));

Plan:

Nested Loop  (cost=0.00..127054.49 rows=947014 width=549) (actual time=74.733..2566.834 rows=2 loops=1)
  ->  Seq Scan on persistent_events  (cost=0.00..103379.14 rows=947014 width=549) (actual time=0.019..1457.983 rows=947014 loops=1)
  ->  Function Scan on jsonb_extract_path metadata0  (cost=0.00..0.02 rows=1 width=0) (actual time=0.001..0.001 rows=0 loops=947014)
        Filter: ((metadata0 ->> 'description'::text) = 'Test event'::text)
        Rows Removed by Filter: 1
Planning Time: 0.849 ms
JIT:
  Functions: 6
  Options: Inlining false, Optimization false, Expressions true, Deforming true
  Timing: Generation 1.069 ms, Inlining 0.000 ms, Optimization 8.907 ms, Emission 63.872 ms, Total 73.849 ms
Execution Time: 2772.180 ms

The problem is that I need to write most queries using jsonb_extract_path and jsonb_array_elements, because the JSON contains different arrays that I need to filter on. Is there any way to get PostgreSQL to use the indexes even when I use those two functions?
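To illustrate the kind of array-filtering query I mean, here is a hypothetical example against the sample document above (the field names are taken from the notificationData payload shown earlier; the exact filter is just an illustration):

```sql
-- Filter on elements of the attributesChangeInfo array:
SELECT notification, payload
FROM persistent_events,
     jsonb_array_elements(
         jsonb_extract_path(notification, 'notificationData', 'attributesChangeInfo')
     ) AS change
WHERE change->>'attributeName' = 'something';
```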

Why does switching two column values work by simply reassigning the values in T-SQL?

For example, the following query works just as intended in Microsoft SQL Server (T-SQL).

UPDATE Customer SET ContactName = Customer.City, City = Customer.ContactName; 
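For concreteness, with a hypothetical row the statement swaps the two values rather than copying one over the other:

```sql
-- Hypothetical row before the update:
--   ContactName = 'Maria Anders', City = 'Berlin'
UPDATE Customer SET ContactName = Customer.City, City = Customer.ContactName;
-- The same row afterwards:
--   ContactName = 'Berlin', City = 'Maria Anders'
```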

I would like to know why the above works. I was expecting Customer.City to remain the same. Why does this happen?

I would appreciate it if you could provide some sources so that I can read more on this topic.

Thank you.

Updating column value across all SQL server versions

I have a simple table with 3 columns:

  • Installed by – some login name

  • Installed date – the date when a bunch of scripts were run as part of a release package

  • Version – the version number of the release

I am thinking of something like the following:

UPDATE Tablename SET Version = '7.8.1' ; 

In case the above is not the correct way: how should I update the Version column when a release is done? For example, when some scripts are installed today, the version number needs to be updated to, say, 7.8.1. This also needs to work across all versions of SQL Server from 2012 to 2019.
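To make the intent concrete, this is the kind of statement I have in mind for recording a release (the bracketed column names reflect the embedded spaces; SUSER_SNAME() and GETDATE() are only assumptions about how the other two columns might be filled):

```sql
UPDATE Tablename
SET [Version]        = '7.8.1',
    [Installed by]   = SUSER_SNAME(),  -- current login name (assumption)
    [Installed date] = GETDATE();      -- date the scripts were run (assumption)
```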

Creating a new Sortable Column in WordPress Admin

I have a press release category on my WordPress site, and there are too many posts being submitted; they are taking up space in the All Posts and Published views of my WordPress admin. I'm wondering if there is a way to create a new sortable column where I can hide the press release category from the All Posts and Published views and make it show only in the new sortable column.

Is this possible?

How to remove comment count column in Posts inside the admin dashboard?

I know how to remove from pages:

function remove_pages_count_columns($defaults) {
    unset($defaults['comments']);
    return $defaults;
}
add_filter('manage_pages_columns', 'remove_pages_count_columns');

But I can’t find the answer for how to remove it for posts.

I have tried:

add_filter('manage_posts_columns', 'remove_posts_count_columns'); 

and

add_filter('manage_post_columns', 'remove_post_count_columns'); 

But none of the above worked.

Any help would be appreciated.
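For reference, the full form of the posts attempt (assuming a callback analogous to the pages one above) was:

```php
function remove_posts_count_columns($defaults) {
    unset($defaults['comments']);
    return $defaults;
}
add_filter('manage_posts_columns', 'remove_posts_count_columns');
```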

Altering mysql database column causes error in another column

First of all, sorry everybody if this question sounds too basic.

I have a WordPress database and a table wp_comments.

In this table, I have two DATETIME fields, whose default value is 0000-00-00 00:00:00.

I need to change this value to CURRENT_TIMESTAMP, so for example I run the following query:

ALTER TABLE `wp_comments` MODIFY `comment_date` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP; 

So, comment_date (NOT comment_date_gmt) is the column to modify.

Now, when I run this query I get the following error:

Error 1067: Invalid default value for comment_date_gmt 

Why is this happening? What could I do?

[UPDATE]

This is the table:

CREATE TABLE `wp_comments` (
  `comment_ID` bigint(20) UNSIGNED NOT NULL,
  `comment_post_ID` bigint(20) UNSIGNED NOT NULL DEFAULT '0',
  `comment_author` tinytext NOT NULL,
  `comment_author_email` varchar(100) NOT NULL DEFAULT '',
  `comment_author_url` varchar(200) NOT NULL DEFAULT '',
  `comment_author_IP` varchar(100) NOT NULL DEFAULT '',
  `comment_date` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `comment_date_gmt` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `comment_content` text NOT NULL,
  `comment_karma` int(11) NOT NULL DEFAULT '0',
  `comment_approved` varchar(20) NOT NULL DEFAULT '1',
  `comment_agent` varchar(255) NOT NULL DEFAULT '',
  `comment_type` varchar(20) NOT NULL DEFAULT 'comment',
  `comment_parent` bigint(20) UNSIGNED NOT NULL DEFAULT '0',
  `user_id` bigint(20) UNSIGNED NOT NULL DEFAULT '0'
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

ALTER TABLE `wp_comments`
  ADD PRIMARY KEY (`comment_ID`),
  ADD KEY `comment_post_ID` (`comment_post_ID`),
  ADD KEY `comment_approved_date_gmt` (`comment_approved`,`comment_date_gmt`),
  ADD KEY `comment_date_gmt` (`comment_date_gmt`),
  ADD KEY `comment_parent` (`comment_parent`),
  ADD KEY `comment_author_email` (`comment_author_email`(10));

MySQL version is 5.7.33

[UPDATE]

I don’t know WHY but I know HOW I solved this issue:

ALTER TABLE `wp_comments`
  MODIFY `comment_date` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
  MODIFY `comment_date_gmt` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP;

This way, by modifying both fields together, I get no errors.

It works, but I’m still quite curious, so if you have any idea to share… 🙂

Get datetime based on change in another column using UDF and computed column in TSQL

Given: a Microsoft SQL Server database table Log with multiple columns, including these important ones: id (primary key), code (an integer that can take multiple values representing status changes), and lastupdated (a datetime field)…

What I need: I need to add a computed column ActiveDate which stores the exact first time the code changed to 10 (i.e. an active status). As the status keeps changing in the future, this column must keep the value from the exact time the row went active (thus preserving the activation datetime persistently). This timestamp value should initially be NULL.

My approach: I want the ActiveDate field to automatically store the datetime at which the status code becomes 10, but when the status changes again, I want it to remain the same. Since I can’t reference a computed column from another computed column, I created a user-defined function to fetch the current value of ActiveDate and use that whenever the status code is not 10.

Limitations:

  • I can’t make modifications to the Db or to columns (other than the new columns I can add).
  • This T-SQL script must be idempotent such that it can be run multiple times at anytime in the production pipeline without losing or damaging data.

Here is what I tried.

IF NOT EXISTS (SELECT 1 FROM sys.columns WHERE Name = N'ActiveDate' AND OBJECT_ID = OBJECT_ID(N'[dbo].[Log]'))
    /* First, create a dummy ActiveDate column since the user-defined function below needs it */
    ALTER TABLE [dbo].[Log] ADD ActiveDate DATETIME NULL

IF OBJECT_ID('UDF_GetActiveDate', 'FN') IS NOT NULL
    DROP FUNCTION UDF_GetActiveDate
GO

/* Function to grab the datetime when status goes active, otherwise leave it unchanged */
CREATE FUNCTION UDF_GetActiveDate(@ID INT, @code INT) RETURNS DATETIME WITH SCHEMABINDING AS
BEGIN
    DECLARE @statusDate DATETIME
    SELECT @statusDate = CASE
        WHEN (@code = 10) THEN [lastupdated]
        ELSE (SELECT [ActiveDate] FROM [dbo].[Log] WHERE id = @ID)
    END
    FROM [dbo].[Log] WHERE id = @ID
    RETURN @statusDate
END
GO

/* Rename the dummy ActiveDate column so that we can be allowed to create the computed one */
EXEC sp_rename '[dbo].[Log].ActiveDate', 'ActiveDateTemp', 'COLUMN';

/* Computed column for ActiveDate */
ALTER TABLE [dbo].[Log] ADD ActiveDate AS (
    [dbo].UDF_GetActiveDate([id],[code])
) PERSISTED NOT NULL

/* Delete the dummy ActiveDate column */
ALTER TABLE [dbo].[Log] DROP COLUMN ActiveDateTemp;

PRINT ('Successfully added ActiveDate column to Log table')
GO

What I get: The following errors

  • [dbo].[Log].ActiveDate cannot be renamed because the object participates in enforced dependencies.
  • Column names in each table must be unique. Column name ‘ActiveDate’ in table ‘dbo.Log’ is specified more than once.

Is my approach wrong? Or is there a better way to achieve the same result? Please help.

MySQL Alter Table Drop Column INPLACE NoLock – Until which point the column being dropped is accessible?

I am trying to run a MySQL ALTER TABLE command with the INPLACE algorithm to drop a few columns from a very large 90 GB table. While the ALTER is running I am able to run SELECT statements on the same table, which confirms that the table is not locked.

MySQL version 5.7 with InnoDB.
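For reference, this is a sketch of the kind of statement being run (table and column names are hypothetical):

```sql
ALTER TABLE big_table
    DROP COLUMN col_a,
    DROP COLUMN col_b,
    ALGORITHM=INPLACE,  -- rebuild in place rather than copying the table
    LOCK=NONE;          -- allow concurrent reads and writes
```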

Questions:

  1. While the ALTER command is running with the in-place algorithm and no lock, up to what point can the data in the columns being dropped still be accessed? For example, right up to the moment the columns are finally dropped? I need to make this change in production, so I need to be sure about this.

  2. Can the application still update the table while the ALTER that drops the columns is running? Currently the columns are stored columns; after dropping them we will be recreating them as virtual columns.

  3. Will there be any downtime at all? I read somewhere that the table will be locked briefly at the end; correct me if I am wrong.

Thanks!