My Custom Post Type AJAX Query is Returning no posts – why?

When I run this query with 'post_type' => 'post' it works just fine. But when I use my custom post type, 'post_type' => 'case-studies', I get no results.

The custom post type has a custom taxonomy, 'case_study_categories', set up. Can anyone spot what is wrong? I've been looking at this for 3 hours and just don't understand why it isn't working.


<?php
function ajax_filter_posts_scripts() {
  // Enqueue script
  wp_enqueue_script('afp_script', get_template_directory_uri() . '/js/ajax/filter-posts.js', array('jquery'), null, false);

  wp_localize_script( 'afp_script', 'afp_vars', array(
      'afp_nonce' => wp_create_nonce( 'afp_nonce' ), // Create nonce which we later will use to verify AJAX request
      'afp_ajax_url' => admin_url( 'admin-ajax.php' ),
    )
  );
}
add_action('wp_enqueue_scripts', 'ajax_filter_posts_scripts', 100);

// Script for getting posts
function ajax_filter_get_posts() {

  // Verify nonce
  if( !isset( $_POST['afp_nonce'] ) || !wp_verify_nonce( $_POST['afp_nonce'], 'afp_nonce' ) )
    die('Permission denied');

  $data = $_POST['data'];
  $taxonomy = $data['taxonomy'];
  $posts_per_page = $data['posts_per_page'];

  $args = array(
    'post_type' => 'case-studies',
    'posts_per_page' => $posts_per_page,
  );

  // If taxonomy is not set, remove key from array and get all posts
  if( $taxonomy ) {
    $args['category_name'] = $taxonomy;
  }

  $query = new WP_Query( $args );
  $max = $query->max_num_pages; ?>

  <?php if ( $query->have_posts() ) :

    // Used to count the posts and compare to max to hide and show load more button
    $index = 1;

    while ( $query->have_posts() ) : $query->the_post();
      $index++;
      $featured_img_url_medium = get_the_post_thumbnail_url(get_the_ID(), 'medium_large');
      $id = get_the_ID();
      $category = get_the_category();
      $category_name = $category[0]->cat_name; ?>
      <a href="<?= get_permalink(); ?>" title="Read - <?php the_title(); ?>" class="dynamic-blogs__card card">
        <div class="news-image-container">
          <div class="hover-read-more">
            <div class="text">
              <i class="fal fa-chevron-circle-right"></i>
              <p>Read Now</p>
            </div>
          </div>
          <img class="card-image lazy" src="<?= $featured_img_url_medium; ?>" alt="<?php the_title(); ?>" loading="lazy">
        </div>

        <div class="card-text-container">
          <?php if( $category_name ) {
            echo '<p class="card-category">' . $category_name . '</p>';
          } ?>
          <?= the_title('<h3 class="">', '</h3>'); ?>
        </div>
      </a><!-- Card END -->

    <?php endwhile; ?>
    <?php if( $index <= $max ): ?>
      <div class="dynamic-blogs__load-more py">
        <div class="btn -ghost js-tax-filter" number="12">Load More</div>
      </div>
    <?php endif; ?>
  <?php else: ?>
    <h2>No posts found</h2>
  <?php endif;

  die();
}

add_action('wp_ajax_filter_posts', 'ajax_filter_get_posts');
add_action('wp_ajax_nopriv_filter_posts', 'ajax_filter_get_posts');
?>
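A likely culprit (an assumption, since the question doesn't show how the terms are assigned): 'category_name' filters on the built-in 'category' taxonomy, which 'case-studies' posts are not assigned to, so the query matches nothing. For a custom taxonomy, a tax_query is needed instead. A sketch:

```php
// Sketch: filter by the custom taxonomy instead of 'category_name'.
// Assumes $taxonomy holds a term slug of 'case_study_categories'.
if ( $taxonomy ) {
  $args['tax_query'] = array(
    array(
      'taxonomy' => 'case_study_categories',
      'field'    => 'slug',
      'terms'    => $taxonomy,
    ),
  );
}
```

Note also that get_the_category() inside the loop reads the built-in 'category' taxonomy; get_the_terms( $id, 'case_study_categories' ) would be its counterpart for the custom taxonomy.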

How to optimise this IN query?

Let's say I have this schema: Resource(resource_mapping_id: uuid, resource_id: uuid, node_id: varchar, date: date, available: boolean, resource_start: varchar, resource_end: varchar)

The composite key is formed on (resource_mapping_id, resource_id, node_id, date, resource_start, resource_end).

Note: the node_id is also a UUID, stored as text. Now I have these 2 queries:

update resource set available = :value where resource_id=:somevalue and date=:somedate and resource_start=:sometime and resource_end=:sometime


select * from resource where resource_id In (:resourceidlist) and date in (:dates) and node_id in (:nodeIds)

This table contains a huge number of records, around 500 million or so.

Whenever I run these queries from my Java application through JPA, they spike the CPU utilisation of the database up to 100%.

After some analysis, I created an index:

Index(resource_id, node_id, date)

That fixed the issue with the update query: even when it runs in parallel threads, the CPU no longer spikes at all.

But with the select statement I still have issues when the number of parameters grows. I batched the parameters, so each batch processes x node ids, resource ids and dates; yet even with 100 of each (all three lists always have the same size, so 300 parameters in total), the CPU spikes and the other threads go into a waiting state!

How do I resolve this? Should I change the query, create a further index tailored to this select, or make some other change? Please help me.

I am using PostgreSQL v13.
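One direction worth testing (a sketch, assuming the three parameter lists are positionally matched triples, one resource_id/node_id/date per row, rather than three independent sets; adjust if they really combine freely): join against the unnested arrays so the planner sees one row per triple instead of a three-way IN cross-filter, which also lines up column-for-column with the existing (resource_id, node_id, date) index.

```sql
-- Sketch: p yields one row per (resource_id, node_id, date) triple,
-- so each probe can use the (resource_id, node_id, date) index directly.
-- Parameter names are taken from the question.
SELECT r.*
FROM unnest(:resourceidlist::uuid[],
            :nodeIds::text[],
            :dates::date[]) AS p(resource_id, node_id, date)
JOIN resource r
  ON  r.resource_id = p.resource_id
  AND r.node_id     = p.node_id
  AND r.date        = p.date;
```

If the lists really are independent sets, comparing EXPLAIN (ANALYZE, BUFFERS) output for different batch sizes would show whether the planner flips from index scans to a bitmap or sequential scan as the IN lists grow.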

PostgreSQL query on jsonb column is not using indexes

I have the following table and indexes where all those indexes were created to try and find a solution to my issue:

create table persistent_events (
    "notificationId" text not null
        constraint persistent_events_pkey primary key,
    payload          bytea not null,
    notification     jsonb
);

create index metadatadescriptionidx
    on persistent_events (((notification -> 'metadata'::text) ->> 'description'::text));

create index metadataidx
    on persistent_events ((notification ->> 'metadata'::text));

create index metadataidxgin
    on persistent_events using gin ((notification -> 'metadata'::text));

create index metadatadescriptionidxgin
    on persistent_events (((notification -> 'metadata'::text) -> 'description'::text));

create index metadataidx2
    on persistent_events ((notification -> 'metadata'::text));

create index metadatadescriptionidx2
    on persistent_events (((notification -> 'metadata'::text) -> 'description'::text));

create index metadataidx3
    on persistent_events (jsonb_extract_path(notification, VARIADIC ARRAY['metadata'::text]));

create index metadatadescriptionidx4
    on persistent_events ((jsonb_extract_path(notification, VARIADIC ARRAY['metadata'::text]) -> 'description'::text));

create index metadatadescriptionidx3
    on persistent_events ((jsonb_extract_path(notification, VARIADIC ARRAY['metadata'::text]) ->> 'description'::text));

The data stored in the notification column is like the following, but the content of notificationData varies a lot.

{
    "metadata":
    {
        "description": "Test event",
        "notificationId": "5eaf73ac-c0b1-4e39-86cc-d5cf9f5f33190e"
    },
    "notificationData":
    {
        "attributesChangeInfo":
        [
            {
                "newValue": "host",
                "oldValue": "localhost",
                "attributeName": "something"
            }
        ]
    }
}

If I query with the following statement everything works fine:

SELECT notification, payload FROM persistent_events WHERE ((notification->'metadata'->>'description' = 'Test event')); 

The execution plan is the following and it is using the indexes as expected:

Bitmap Heap Scan on persistent_events  (cost=93.12..15578.79 rows=4735 width=549) (actual time=2.076..2.078 rows=2 loops=1)
  Recheck Cond: (((notification -> 'metadata'::text) ->> 'description'::text) = 'Test event'::text)
  Heap Blocks: exact=1
  ->  Bitmap Index Scan on metadatadescriptionidx  (cost=0.00..91.94 rows=4735 width=0) (actual time=0.845..0.846 rows=2 loops=1)
        Index Cond: (((notification -> 'metadata'::text) ->> 'description'::text) = 'Test event'::text)
Planning Time: 16.939 ms
Execution Time: 2.177 ms

If I write it with the following statement using jsonb_extract_path it is not using the indexes:

SELECT notification, payload
FROM persistent_events,
     jsonb_extract_path(notification, 'metadata') metadata0
WHERE ((metadata0->>'description' = 'Test event'));


Nested Loop  (cost=0.00..127054.49 rows=947014 width=549) (actual time=74.733..2566.834 rows=2 loops=1)
  ->  Seq Scan on persistent_events  (cost=0.00..103379.14 rows=947014 width=549) (actual time=0.019..1457.983 rows=947014 loops=1)
  ->  Function Scan on jsonb_extract_path metadata0  (cost=0.00..0.02 rows=1 width=0) (actual time=0.001..0.001 rows=0 loops=947014)
        Filter: ((metadata0 ->> 'description'::text) = 'Test event'::text)
        Rows Removed by Filter: 1
Planning Time: 0.849 ms
JIT:
  Functions: 6
  Options: Inlining false, Optimization false, Expressions true, Deforming true
  Timing: Generation 1.069 ms, Inlining 0.000 ms, Optimization 8.907 ms, Emission 63.872 ms, Total 73.849 ms
Execution Time: 2772.180 ms

The problem is that I need to write most queries using jsonb_extract_path and jsonb_array_elements as the json contains different arrays that I need to filter on. Is there any way to have PostgreSQL use the indexes even if I use those two functions?
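For context on why the second plan goes sequential: an expression index is only considered when the indexed expression appears literally in the query's predicate. Once jsonb_extract_path is moved into the FROM list as a function scan, the planner evaluates it per row and never matches it against the index definition. A sketch of the same filter written so it can line up with the metadatadescriptionidx3 definition:

```sql
-- Sketch: keep the function call inside the WHERE clause so the planner
-- can match it against the expression index.
SELECT notification, payload
FROM persistent_events
WHERE jsonb_extract_path(notification, 'metadata') ->> 'description' = 'Test event';
```

For the array cases, one alternative worth testing is a jsonb containment predicate (@>) backed by a GIN index on the relevant path, instead of expanding rows with jsonb_array_elements: containment can use a GIN index, while a function scan in the FROM list cannot.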

PLPGSQL store columns of a 2D query output into array variables

In some step of a plpgsql function I need to store a 2D query result into array variables.

The following code does the job for scalars (but fails with arrays):

SELECT col_a, col_b FROM my_table WHERE col_c = condition INTO var_a, var_b; 

The following does the job for ONE column and ONE array variable but not more than that:

SELECT ARRAY(
    SELECT col_a
    FROM my_table
    WHERE col_c > condition
) INTO arr_a;

How could I store multiple rows from col_a, b, c, d… into their respective array variables without having to do a separate query for each column? Like in the first code example, but for multiple rows and arrays.
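A sketch of one way to do this in a single query, using the table and variable names from the question: aggregate each column with array_agg(), with a shared ORDER BY so the arrays stay aligned row-by-row.

```sql
-- Sketch: one pass over my_table fills several array variables at once.
-- The shared ORDER BY keeps the arrays aligned, so arr_a[i] and arr_b[i]
-- come from the same row.
SELECT array_agg(col_a ORDER BY col_c),
       array_agg(col_b ORDER BY col_c)
  INTO arr_a, arr_b
  FROM my_table
 WHERE col_c > condition;
```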

Thanks for your time.

How come the MIN function in this query statement prints names rather than integers?

This is the question:

Pivot the Occupation column in OCCUPATIONS so that each Name is sorted alphabetically and displayed underneath its corresponding Occupation. The output column headers should be Doctor, Professor, Singer, and Actor, respectively. Note: Print NULL when there are no more names corresponding to an occupation.

Input Format

The OCCUPATIONS table is described as follows:

Column       Type
Name         String
Occupation   String

Occupation will only contain one of the following values: Doctor, Professor, Singer or Actor.

Sample Input

Name        Occupation
Samantha    Doctor
Julia       Actor
Maria       Actor
Meera       Singer
Ashely      Professor
Ketty       Professor
Christeen   Professor
Jane        Actor
Jenny       Doctor
Priya       Singer


Sample Output

Jenny     Ashley     Meera   Jane
Samantha  Christeen  Priya   Julia
NULL      Ketty      NULL    Maria

Explanation

The first column is an alphabetically ordered list of Doctor names. The second column is an alphabetically ordered list of Professor names. The third column is an alphabetically ordered list of Singer names. The fourth column is an alphabetically ordered list of Actor names. The empty cells for columns with less than the maximum number of names per occupation (in this case, the Doctor and Singer columns) are filled with NULL values.

SET @r1=0, @r2=0, @r3=0, @r4=0;
SELECT MIN(Doctor), MIN(Professor), MIN(Singer), MIN(Actor)
FROM (SELECT CASE Occupation WHEN 'Doctor' THEN @r1:=@r1+1
                             WHEN 'Professor' THEN @r2:=@r2+1
                             WHEN 'Singer' THEN @r3:=@r3+1
                             WHEN 'Actor' THEN @r4:=@r4+1 END AS RowLine,
             CASE WHEN Occupation = 'Doctor' THEN Name END AS Doctor,
             CASE WHEN Occupation = 'Professor' THEN Name END AS Professor,
             CASE WHEN Occupation = 'Singer' THEN Name END AS Singer,
             CASE WHEN Occupation = 'Actor' THEN Name END AS Actor
      FROM OCCUPATIONS
      ORDER BY Name) AS t
GROUP BY RowLine;

My doubt is: how come MIN(Doctor) prints names? In the tutorials I have seen, MIN returns a number.

Please give me a solid answer; I am in the process of learning MySQL.
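To address the doubt directly: MIN() is not numeric-only. On a string column it returns the alphabetically smallest value, and it ignores NULLs. In each RowLine group of the subquery, every pivot column (Doctor, Professor, Singer, Actor) contains at most one non-NULL name, so MIN() simply picks that name. A minimal illustration:

```sql
-- One group, one non-NULL name per column: MIN() skips the NULL rows
-- and returns the lone string value, 'Samantha'.
SELECT MIN(name)
FROM (SELECT 'Samantha' AS name
      UNION ALL
      SELECT NULL) AS t;
```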

MariaDB: No database selected error on some query

I have PHP code that runs a query against a MariaDB table (MariaDB 10.5.11 on Debian 11); I use mysqli prepared statements for this task, as shown in the code below:

if($this->dbcon->begin_transaction() === false) {
    $this->errNum = -1;
    $this->errText = "Unable to start transaction: " . $this->dbcon->errno . " - " . $this->dbcon->error;
    return false;
}

try {
    $query = file_get_contents("recursivelyRemoveShares.sql");  // (1) If replaced with a SELECT works fine!

    if($query === false) {
        $this->errNum = -1;
        $this->errText = "Unable to read query (0)";
        return false;
    }

    $stmt = $this->dbcon->prepare($query);     // Err 1046: No database selected
    if($stmt === false) {
        $this->errNum = -1;
        $this->errText = "Unable to prepare statement: " . $this->dbcon->errno . " - " . $this->dbcon->error;
        return false;
    }

    $stmt->bind_param("s", $uuid);
    $stmt->execute();

    // Commit transaction
    $this->dbcon->commit();
} catch (Exception $ex) {
    // Rollback transaction if something goes wrong
    $this->dbcon->rollback();

    $this->errNum = $this->dbcon->errno;
    $this->errText = $this->dbcon->error;
    return false;
}

When running $stmt = $this->dbcon->prepare($query); the database raises Err 1046: No database selected; however, I performed other operations beforehand that executed successfully on the same DB connection.

This is the query I read with file_get_contents:

DELETE FROM `shares` WHERE `itemuuid` IN (
  WITH RECURSIVE files_paths (id, parent) AS
  (
    SELECT uuid, parentuuid
      FROM core_data
      WHERE uuid = ?
    UNION ALL
    SELECT e.uuid, e.parentuuid
      FROM files_paths AS ep JOIN core_data AS e
        ON ep.id = e.parentuuid
  )
  SELECT id FROM files_paths
)

Note that it is a recursive CTE query.

If I replace the $query with a SELECT query, all the code runs correctly (no error 1046 raised).

Any help or idea is appreciated.
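The root cause isn't clear from the code alone, but one hedged workaround sketch is to explicitly re-select the default database on the same connection right before preparing the statement (the database name below is a placeholder):

```php
// Workaround sketch (cause unconfirmed): re-select the default database
// on this connection before preparing. 'my_database' is a placeholder.
if ( ! $this->dbcon->select_db('my_database') ) {
    // handle the error
}
$stmt = $this->dbcon->prepare($query);
```

Alternatively, schema-qualifying the table names inside the SQL file (e.g. `my_database`.`shares`, `my_database`.`core_data`) sidesteps the default-database lookup entirely.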

How to build a SQL query where certain fields can be null or have value?

I need to do a SQL query to a custom WordPress database table where certain columns are nullable. I wrote this static method:

public static function does_name_exist( $name ) {

    global $wpdb;

    $table = $wpdb->prefix . self::DB_TABLE;

    if ( is_string( $name ) ) {
        $query = $wpdb->prepare( "SELECT `id` FROM `$table` WHERE `given_name` = %s", $name );
    } elseif ( is_array( $name ) ) {
        $query = $wpdb->prepare(
            "SELECT `id` FROM `$table` WHERE `name_prefix` = %s AND `given_name` = %s AND `additional_name` = %s AND `family_name` = %s AND `name_suffix` = %s",
            empty( $name['name_prefix'] ) ? null : $name['name_prefix'],
            empty( $name['given_name'] ) ? null : $name['given_name'],
            empty( $name['additional_name'] ) ? null : $name['additional_name'],
            empty( $name['family_name'] ) ? null : $name['family_name'],
            empty( $name['name_suffix'] ) ? null : $name['name_suffix']
        );
    } else {
        return false;
    }

    $id = $wpdb->get_var( $query );

    if ( null !== $id ) {
        return (int) $id;
    } else {
        return false;
    }
}

The problem is that when a value in the $name array is empty, the query compares against NULL with the = operator instead of the IS operator.

Example of wrong SQL query:

SELECT * FROM `wp_recipients` WHERE `name_prefix` = NULL AND `given_name` = 'John' AND `additional_name` = NULL AND `family_name` = 'Smith' AND `name_suffix` = NULL  

Example of correct SQL query:

SELECT * FROM `wp_recipients` WHERE `name_prefix` IS NULL AND `given_name` = 'John' AND `additional_name` IS NULL AND `family_name` = 'Smith' AND `name_suffix` IS NULL 

How can I solve the problem?
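One way (a sketch based on the question's column names, not a drop-in replacement): build each condition dynamically, emitting a literal `col` IS NULL clause for empty values and a prepared `col` = %s placeholder otherwise, then pass only the non-empty values to $wpdb->prepare().

```php
// Sketch: empty values become IS NULL checks; only real values are
// passed through $wpdb->prepare() as placeholders.
$columns = array( 'name_prefix', 'given_name', 'additional_name', 'family_name', 'name_suffix' );
$clauses = array();
$values  = array();

foreach ( $columns as $col ) {
    if ( empty( $name[ $col ] ) ) {
        $clauses[] = "`$col` IS NULL";
    } else {
        $clauses[] = "`$col` = %s";
        $values[]  = $name[ $col ];
    }
}

$sql   = "SELECT `id` FROM `$table` WHERE " . implode( ' AND ', $clauses );
$query = $values ? $wpdb->prepare( $sql, $values ) : $sql;
```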

Query posts(CPT, pages , hierarchical) by Ancestor ID

So I have a custom post type with a hierarchical structure. It looks like this:

top level 1
   - child
      -- grand child
      -- grand child
   - child
      -- grand child
top level 2
   - child
      -- grand child
      -- grand child
   - child
      -- grand child
top level 3

and so on.

I need a query that takes a top-level ID as its base and returns all child and grandchild pages of that top-level page.

I know there is the post_parent arg, but it won't query grandchildren… I know I can run a second query on the children to get the grandchildren… but I don't want that. I need one query with all the other parameters, and with a proper pagination count. Any way to achieve that?
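WP_Query has no built-in "all descendants" argument. A common workaround (a sketch; 'my_cpt' and $top_level_id are placeholders) is to collect the descendant IDs first with get_pages(), whose 'child_of' walks the whole subtree for hierarchical post types, then run the real query with post__in so all the other parameters and the pagination count still work:

```php
// Sketch: gather every descendant of the top-level page, then let the
// main WP_Query handle filtering and pagination over those IDs.
$descendants = get_pages( array(
    'child_of'  => $top_level_id,
    'post_type' => 'my_cpt',
) );
$ids = wp_list_pluck( $descendants, 'ID' );

$query = new WP_Query( array(
    'post_type'      => 'my_cpt',
    'post__in'       => $ids,
    'posts_per_page' => 12,
    'paged'          => get_query_var( 'paged' ) ?: 1,
) );
```

It is technically a preliminary lookup plus the main query, but the pagination is computed by the single WP_Query, which is usually what matters.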