Why is OR statement slower than UNION

Database version: PostgreSQL 12.6

I have a table with 600000 records.

The table has columns:

  • name (varchar)
  • location_type (int) enum values: (1,2,3)
  • ancestry (varchar)

Indexes:

  • ancestry (btree)

The ancestry column is a way to build a tree: every row's ancestry contains the ids of all of its ancestors, separated by '/'.

Consider the following example:

id  name  ancestry
1   root  null
5   node  '1'
12  node  '1/5'
22  leaf  '1/5/12'
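To make the ancestry scheme concrete, here is a minimal sketch of the pattern, using Python with an in-memory SQLite database rather than PostgreSQL, and the four example rows above. A row has children exactly when some other row's ancestry equals this row's ancestry plus '/' plus this row's id, which is what the EXISTS subquery below tests:

```python
import sqlite3

# In-memory toy version of the geolocations table (sample data from the example).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE geolocations (id INTEGER, name TEXT, ancestry TEXT)")
conn.executemany(
    "INSERT INTO geolocations VALUES (?, ?, ?)",
    [(1, "root", None), (5, "node", "1"), (12, "node", "1/5"), (22, "leaf", "1/5/12")],
)

# Rows that have at least one child: a child's ancestry is the parent's
# ancestry || '/' || parent's id. (Root's NULL ancestry is an edge case:
# NULL concatenation yields NULL in SQLite, so root is not matched here.)
rows = conn.execute(
    """
    SELECT g.id, g.name FROM geolocations g
    WHERE EXISTS (
        SELECT 1 FROM geolocations g2
        WHERE g2.ancestry = g.ancestry || '/' || g.id
    )
    """
).fetchall()
print(rows)
```

Note this uses SQLite's `||` operator; the question's query uses PostgreSQL's CONCAT, which additionally treats NULL as an empty string.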

The following query takes 686 ms to execute:

SELECT * FROM geolocations
WHERE EXISTS (
    SELECT 1 FROM geolocations g2
    WHERE g2.ancestry = CONCAT(geolocations.ancestry, '/', geolocations.id)
)

This query runs in 808 ms:

SELECT * FROM geolocations WHERE location_type = 2 

When combining both queries with an OR, it takes around 4 s 475 ms to finish, if it finishes at all:

SELECT * FROM geolocations
WHERE EXISTS (
    SELECT 1 FROM geolocations g2
    WHERE g2.ancestry = CONCAT(geolocations.ancestry, '/', geolocations.id)
)
OR location_type = 2

Explain:

[
  {
    "Plan": {
      "Node Type": "Seq Scan",
      "Parallel Aware": false,
      "Relation Name": "geolocations",
      "Alias": "geolocations",
      "Startup Cost": 0,
      "Total Cost": 2760473.54,
      "Plan Rows": 582910,
      "Plan Width": 68,
      "Filter": "((SubPlan 1) OR (location_type = 2))",
      "Plans": [
        {
          "Node Type": "Index Only Scan",
          "Parent Relationship": "SubPlan",
          "Subplan Name": "SubPlan 1",
          "Parallel Aware": false,
          "Scan Direction": "Forward",
          "Index Name": "index_geolocations_on_ancestry",
          "Relation Name": "geolocations",
          "Alias": "g2",
          "Startup Cost": 0.43,
          "Total Cost": 124.91,
          "Plan Rows": 30,
          "Plan Width": 0,
          "Index Cond": "(ancestry = concat(geolocations.ancestry, '/', geolocations.id))"
        }
      ]
    },
    "JIT": {
      "Worker Number": -1,
      "Functions": 8,
      "Options": {
        "Inlining": true,
        "Optimization": true,
        "Expressions": true,
        "Deforming": true
      }
    }
  }
]

Combining them with a UNION instead takes 1 s 916 ms:

SELECT * FROM geolocations
WHERE EXISTS (
    SELECT 1 FROM geolocations g2
    WHERE g2.ancestry = CONCAT(geolocations.ancestry, '/', geolocations.id)
)
UNION
SELECT * FROM geolocations
WHERE location_type = 2

Explain

[
  {
    "Plan": {
      "Node Type": "Unique",
      "Parallel Aware": false,
      "Startup Cost": 308693.44,
      "Total Cost": 332506.74,
      "Plan Rows": 865938,
      "Plan Width": 188,
      "Plans": [
        {
          "Node Type": "Sort",
          "Parent Relationship": "Outer",
          "Parallel Aware": false,
          "Startup Cost": 308693.44,
          "Total Cost": 310858.29,
          "Plan Rows": 865938,
          "Plan Width": 188,
          "Sort Key": ["geolocations.id", "geolocations.name", "geolocations.location_type", "geolocations.pricing", "geolocations.ancestry", "geolocations.geolocationable_id", "geolocations.geolocationable_type", "geolocations.created_at", "geolocations.updated_at", "geolocations.info"],
          "Plans": [
            {
              "Node Type": "Append",
              "Parent Relationship": "Outer",
              "Parallel Aware": false,
              "Startup Cost": 15851.41,
              "Total Cost": 63464.05,
              "Plan Rows": 865938,
              "Plan Width": 188,
              "Subplans Removed": 0,
              "Plans": [
                {
                  "Node Type": "Hash Join",
                  "Parent Relationship": "Member",
                  "Parallel Aware": false,
                  "Join Type": "Inner",
                  "Startup Cost": 15851.41,
                  "Total Cost": 35074.94,
                  "Plan Rows": 299882,
                  "Plan Width": 68,
                  "Inner Unique": true,
                  "Hash Cond": "(concat(geolocations.ancestry, '/', geolocations.id) = (g2.ancestry)::text)",
                  "Plans": [
                    {
                      "Node Type": "Seq Scan",
                      "Parent Relationship": "Outer",
                      "Parallel Aware": false,
                      "Relation Name": "geolocations",
                      "Alias": "geolocations",
                      "Startup Cost": 0,
                      "Total Cost": 13900.63,
                      "Plan Rows": 599763,
                      "Plan Width": 68
                    },
                    {
                      "Node Type": "Hash",
                      "Parent Relationship": "Inner",
                      "Parallel Aware": false,
                      "Startup Cost": 15600.65,
                      "Total Cost": 15600.65,
                      "Plan Rows": 20061,
                      "Plan Width": 12,
                      "Plans": [
                        {
                          "Node Type": "Aggregate",
                          "Strategy": "Hashed",
                          "Partial Mode": "Simple",
                          "Parent Relationship": "Outer",
                          "Parallel Aware": false,
                          "Startup Cost": 15400.04,
                          "Total Cost": 15600.65,
                          "Plan Rows": 20061,
                          "Plan Width": 12,
                          "Group Key": ["(g2.ancestry)::text"],
                          "Plans": [
                            {
                              "Node Type": "Seq Scan",
                              "Parent Relationship": "Outer",
                              "Parallel Aware": false,
                              "Relation Name": "geolocations",
                              "Alias": "g2",
                              "Startup Cost": 0,
                              "Total Cost": 13900.63,
                              "Plan Rows": 599763,
                              "Plan Width": 12
                            }
                          ]
                        }
                      ]
                    }
                  ]
                },
                {
                  "Node Type": "Seq Scan",
                  "Parent Relationship": "Member",
                  "Parallel Aware": false,
                  "Relation Name": "geolocations",
                  "Alias": "geolocations_1",
                  "Startup Cost": 0,
                  "Total Cost": 15400.04,
                  "Plan Rows": 566056,
                  "Plan Width": 68,
                  "Filter": "(location_type = 2)"
                }
              ]
            }
          ]
        }
      ]
    },
    "JIT": {
      "Worker Number": -1,
      "Functions": 15,
      "Options": {
        "Inlining": false,
        "Optimization": false,
        "Expressions": true,
        "Deforming": true
      }
    }
  }
]

My question is: why does PostgreSQL execute the OR query so much more slowly?

Why do two queries run faster than combined subquery?

I’m running postgres 11 on Azure.

If I run this query:

select min(pricedate) + interval '2 days' from pjm.rtprices 

It takes 0.153 sec and has the following explain:

    Result  (cost=2.19..2.20 rows=1 width=8)
      InitPlan 1 (returns $0)
        ->  Limit  (cost=0.56..2.19 rows=1 width=4)
              ->  Index Only Scan using rtprices_pkey on rtprices  (cost=0.56..103248504.36 rows=63502562 width=4)
                    Index Cond: (pricedate IS NOT NULL)

If I run this query:

    select pricedate, hour, last_updated, count(1) as N
    from pjm.rtprices
    where pricedate <= '2020-11-06 00:00:00'
    group by pricedate, hour, last_updated
    order by pricedate desc, hour

it takes 5 sec with the following explain:

    GroupAggregate  (cost=738576.82..747292.52 rows=374643 width=24)
      Group Key: pricedate, hour, last_updated
      ->  Sort  (cost=738576.82..739570.68 rows=397541 width=16)
            Sort Key: pricedate DESC, hour, last_updated
            ->  Index Scan using rtprices_pkey on rtprices  (cost=0.56..694807.03 rows=397541 width=16)
                  Index Cond: (pricedate <= '2020-11-06'::date)

However when I run

    select pricedate, hour, last_updated, count(1) as N
    from pjm.rtprices
    where pricedate <= (select min(pricedate) + interval '2 days' from pjm.rtprices)
    group by pricedate, hour, last_updated
    order by pricedate desc, hour

I get impatient after 2 minutes and cancel it.

The explain on the long running query is:

    Finalize GroupAggregate  (cost=3791457.04..4757475.33 rows=3158115 width=24)
      Group Key: rtprices.pricedate, rtprices.hour, rtprices.last_updated
      InitPlan 2 (returns $1)
        ->  Result  (cost=2.19..2.20 rows=1 width=8)
              InitPlan 1 (returns $0)
                ->  Limit  (cost=0.56..2.19 rows=1 width=4)
                      ->  Index Only Scan using rtprices_pkey on rtprices rtprices_1  (cost=0.56..103683459.22 rows=63730959 width=4)
                            Index Cond: (pricedate IS NOT NULL)
      ->  Gather Merge  (cost=3791454.84..4662729.67 rows=6316230 width=24)
            Workers Planned: 2
            Params Evaluated: $1
            ->  Partial GroupAggregate  (cost=3790454.81..3932679.99 rows=3158115 width=24)
                  Group Key: rtprices.pricedate, rtprices.hour, rtprices.last_updated
                  ->  Sort  (cost=3790454.81..3812583.62 rows=8851522 width=16)
                        Sort Key: rtprices.pricedate DESC, rtprices.hour, rtprices.last_updated
                        ->  Parallel Seq Scan on rtprices  (cost=0.00..2466553.08 rows=8851522 width=16)
                              Filter: (pricedate <= $1)

Clearly, the last query is doing a very expensive Gather Merge, so how can I avoid that?
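One common workaround (my own assumption, not something stated in the question) is to evaluate the scalar subquery first and feed its result back into the main query as a plain parameter, so the planner sees a constant bound instead of an InitPlan with generic row estimates. A minimal sketch of that two-step pattern, using Python with SQLite standing in for PostgreSQL and invented sample data for a simplified `rtprices` table:

```python
import sqlite3

# Toy stand-in for pjm.rtprices (sample rows are invented for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rtprices (pricedate TEXT, hour INTEGER, last_updated TEXT)")
conn.executemany(
    "INSERT INTO rtprices VALUES (?, ?, ?)",
    [
        ("2020-11-01", 1, "2020-11-01"),
        ("2020-11-02", 1, "2020-11-02"),
        ("2020-11-03", 1, "2020-11-03"),
        ("2020-11-10", 1, "2020-11-10"),
    ],
)

# Step 1: run the scalar subquery on its own (SQLite's date() replaces
# Postgres's "min(pricedate) + interval '2 days'").
(cutoff,) = conn.execute(
    "SELECT date(min(pricedate), '+2 days') FROM rtprices"
).fetchone()

# Step 2: pass the result back as a bound parameter, so the main query
# filters on a known constant rather than an uncorrelated subplan.
rows = conn.execute(
    """
    SELECT pricedate, hour, count(*) AS n
    FROM rtprices
    WHERE pricedate <= ?
    GROUP BY pricedate, hour
    ORDER BY pricedate DESC, hour
    """,
    (cutoff,),
).fetchall()
print(rows)
```

In PostgreSQL the same split can be done from application code or in a PL/pgSQL block; the point is only that the filter value becomes a literal before the big query is planned.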

I did a different approach here:

    with lastday as (
        select distinct pricedate from pjm.rtprices order by pricedate limit 3
    )
    select rtprices.pricedate, hour, last_updated - interval '4 hours' as last_updated, count(1) as N
    from pjm.rtprices
    right join lastday on rtprices.pricedate = lastday.pricedate
    where rtprices.pricedate <= lastday.pricedate
    group by rtprices.pricedate, hour, last_updated
    order by rtprices.pricedate desc, hour

which took just 2 sec with the following explain:

    GroupAggregate  (cost=2277449.55..2285769.50 rows=332798 width=32)
      Group Key: rtprices.pricedate, rtprices.hour, rtprices.last_updated
      CTE lastday
        ->  Limit  (cost=0.56..1629038.11 rows=3 width=4)
              ->  Result  (cost=0.56..105887441.26 rows=195 width=4)
                    ->  Unique  (cost=0.56..105887441.26 rows=195 width=4)
                          ->  Index Only Scan using rtprices_pkey on rtprices rtprices_1  (cost=0.56..105725202.47 rows=64895517 width=4)
      ->  Sort  (cost=648411.43..649243.43 rows=332798 width=16)
            Sort Key: rtprices.pricedate DESC, rtprices.hour, rtprices.last_updated
            ->  Nested Loop  (cost=0.56..612199.22 rows=332798 width=16)
                  ->  CTE Scan on lastday  (cost=0.00..0.06 rows=3 width=4)
                  ->  Index Scan using rtprices_pkey on rtprices  (cost=0.56..202957.06 rows=110933 width=16)
                        Index Cond: ((pricedate <= lastday.pricedate) AND (pricedate = lastday.pricedate))

This last one is all well and good, but if my subquery weren't amenable to this hack, is there a better way to get the subquery version to perform like the one-query-at-a-time approach?

FunctionCompile returns Part::partd: Part specification 151345[[1]] is longer than depth of object

Today I installed Mathematica 12.3 on my computer (Windows), then ran this FunctionCompile code:

cf = FunctionCompile[Function[Typed[arg, "MachineInteger"], arg + 1]] 

but it goes wrong and returns this message:

Part::partd: Part specification 151345[[1]] is longer than depth of object 

I tested on another computer, where FunctionCompile works fine, so what's happening here?


Pinch Gesture for Function Other than Zooming- Unity

I’m working on a game in which I want to pinch outwards on a character, which causes the character to split into 2 smaller copies of itself.

The method I’m thinking about using is to have the "pinch out" gesture destroy the game object and simultaneously create two instances of the smaller game object, and have them follow the fingers that pinch out. The action would also be reversible with the "pinch in" function.

My idea would be to do a raycast to detect the two-finger touch on the object (would I need a collider for that?), then use the beginning and ending touch points to determine if it is pinching out or in.
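Independent of the Unity plumbing, the pinch in/out decision from the beginning and ending touch points reduces to comparing the distance between the two fingers at the start and at the end of the gesture. A language-agnostic sketch of just that classification (Python here; the threshold value is an arbitrary placeholder, and in Unity the positions would come from something like `Input.GetTouch`):

```python
import math

def distance(p, q):
    """Euclidean distance between two 2-D touch positions."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def classify_pinch(start_a, start_b, end_a, end_b, threshold=10.0):
    """Return 'out' (fingers moved apart -> split the character),
    'in' (fingers moved together -> merge), or None when the change
    in finger separation is below the noise threshold (in pixels)."""
    delta = distance(end_a, end_b) - distance(start_a, start_b)
    if delta > threshold:
        return "out"
    if delta < -threshold:
        return "in"
    return None

# Fingers starting 10 px apart and ending 50 px apart is a pinch out.
print(classify_pinch((0, 0), (10, 0), (-20, 0), (30, 0)))
```

The rest (raycasting against the character's collider to confirm the touches landed on it, destroying and instantiating the smaller copies) is Unity-specific glue around this check.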

The problem is I am brand new to Unity and C# and have no idea how to write all of this. All of the tutorials for multi-touch gestures have to do with camera zoom, which is not what I am going for.

Can anyone tell me if I’m on the right track with my logic and provide some guidance on writing the code?

Need to make sidebar responsive only when screen width is greater than 1024px

I'm new to WordPress and I want to make my sidebar responsive when the screen width is greater than 1024px. Currently, the sidebar becomes responsive when the screen width is greater than 768px.

The website in concern is https://worklifeandmoney.com

I want to achieve this through additional CSS. I’m using the NewsUp theme. Any kind of help will be greatly appreciated. Thank you!

Update: I tried to make it work with flex but couldn’t succeed.

/* sidebar responsiveness */
@media (min-width: 1024px) {
    .col-md-3 {
        -ms-flex: 0 0 25%;
        flex: 0 0 25%;
        max-width: 25%;
    }
}

ColorFunction in ListContourPlot3D raising error when using more than one argument

The documentation of ColorFunction states that in ListContourPlot3D your chosen function gets the arguments x, y, z and f. However, if you actually use any argument other than the first, an error is raised in Mathematica 12.2, yet strangely enough the plot is (most of the time) shown as expected.

Consider for example

list1 = Table[{x, y, z, x^2 + y^2 - z^2},
    {x, -1, 1, .05}, {y, -1, 1, .05}, {z, -1, 1, .05}] ~Flatten~ 2;

ListContourPlot3D[list1,
  Contours -> {0.3},
  PlotRange -> {{-1, 1}, {-1, 1}, {-1, 1}},
  ColorFunction -> Function[{x, y, z, f}, Hue[z]],
  ColorFunctionScaling -> False]

This produces the error

Function::fpct: Too many parameters in {x,y,z,f} to be filled from Function[{x,y,z,f},Hue[z]][0.3]. 

But the generated graphics seem OK.

Is this a bug, or am I doing something wrong here?


Do long-lived races reach social maturity later than short-lived races? [closed]

In most long-lived player character races’ descriptions I see comments about how fast they physically mature and when they’re considered adults. What I am wondering is how fast they socially mature. This isn’t explicitly mentioned.

I ask because to me it seems obvious that most races have probably finished maturing socially by their late twenties. Sure, their personality will still change, but that's different from maturity.

I have two thoughts about how quickly races would mature.

  1. In some sense your "social maturity" is an accumulation of all your life experiences. In this way all races would mature at the same speed. (As in, maybe they have more or less experiences but there is nothing special about their race that affects it.)
  2. In another way your "social maturity" has to do with how well developed your brain is. Teenagers and people in their early twenties still do not have fully developed brains, so they have not yet reached social maturity. (It may make sense to call this "mental maturity", but I've never heard the phrase and it seems needlessly specific.) This would imply that races that are super intelligent might mature faster and ones that aren't might mature slower. (A problem with this view is that it tries to impose a standard meaning on what maturity means.)

My initial guess is that all races mature socially at the same speed but I’m curious if there are answers in the lore.

Can a high level warlock with Book of Ancient Secrets learn ritual spells higher than 5th level?

Let’s say I’m an 11th level Pact of the Tome Warlock with Book of Ancient Secrets. I’m able to learn spells to cast as rituals from any source, I can transcribe spells up to half my level rounded up, and have just gotten my 6th level Mystic Arcanum.

I do not actually have 6th level slots, but ritual casting doesn’t use spell slots anyway, and I am capable of casting 6th level spells (or at least one). Am I able to transcribe a 6th level ritual spell, or am I capped at learning 5th level rituals?