Does filling up the plan cache cause a decrease in the space allocated for the data cache?

SQL Server uses its allocated server memory for different kinds of purposes. Two of them are the plan cache and the data cache, which are used to store execution plans and actual data, respectively.

My question: do these two caches have separately allocated sections in the buffer pool, or, on the contrary, is there just one section of the buffer pool that they share?

In other words, if the plan cache is filling up, is the space for the data cache shrinking as well?
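
For context, here is a rough way to see how much memory each of these caches is currently using on SQL Server 2012 or later (an illustrative sketch only; the clerk names are the standard ones):

-- Approximate sizes of the plan cache stores vs. the buffer pool (data cache).
SELECT [type], SUM(pages_kb) / 1024 AS size_mb
FROM sys.dm_os_memory_clerks
WHERE [type] IN (N'CACHESTORE_SQLCP',          -- ad hoc and prepared plans
                 N'CACHESTORE_OBJCP',          -- object plans (procedures, triggers)
                 N'MEMORYCLERK_SQLBUFFERPOOL') -- buffer pool pages (data cache)
GROUP BY [type]
ORDER BY size_mb DESC;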

How do you plan Savage Worlds combats based on PC ability?

Last night I ran my first Savage Worlds combat, to try to get a feel for the system. The combatants were two sword-wielding fighting men against three orcs (extras) and their chieftain (a wild card). I enjoyed a lot of the combat, but when all the orcs had been dropped, all progress seemed to stop. We had endless whiff and ping on both sides. I have already read about reducing whiff and ping in SW, and that advice is good for combats that are underway. My question is about planning combats ahead of time.

Savage Worlds doesn’t have a challenge rating or threat level system, which makes sense in many respects, because of the general unpredictability of encounters. Unfortunately, it means that without having experience with the system, it’s hard to guess which threats will not be boring. In the case of the orc chieftain, for example, his high Parry and Toughness made it impossible for the PCs to damage him without acing out the wazoo. This wasn’t a question of a sure-thing TPK from a massive threat: the PCs, too, had high Parry, which made it almost as difficult for them to be hurt. Even when we slowed down the combat to really look at bonuses everyone could work for, we kept whiffing and pinging.

In retrospect, I probably would have done better to replace the chieftain with a plain old orc who was a wild card and had a higher Strength and Fighting. He could’ve hit the PCs more easily and been hit more easily himself.

So, my question is: how could I have known that from the start? Is there a simple often-right-enough formula for comparing PC traits to threat traits? Is there other common wisdom on doing this?

Is it safe to share your password security plan with others?

The recent epidemic situation has given me enough time to reconsider my password security seriously. I have devised a detailed plan for how to use elements such as a password manager, 2FA, U2F keys, etc. in conjunction with each other to create the optimal security architecture for my personal use (according to my rather limited knowledge of information security).

Now, the plan grew to such an extent that I decided to write it down as a document, so that I remember how certain parts of it work, why they are designed in a particular way, the weak points and so on. Is it safe to show this plan to, e.g. a friend who is also interested in strengthening their security? What about a hypothetical, extreme version – to share it online?

According to Kerckhoffs’s principle, the security of a system should not depend on its secrecy. That’s what I had in mind when designing my plan. I believe that anyone competent enough to try to break my system would also not be obstructed by a lack of knowledge of the design. Its strength relies on secret keys (and some informed use of MFA), also in agreement with the principle. However, I have seen on this site that users are sometimes scolded if they reveal a lot about how they organise their security in a question.

We can easily find out how AES or public-key cryptography works in a few moments. That doesn’t prevent them from being widely used and considered safe. Would the same reasoning apply to my personal scheme?

Cheap Windows Reseller Hosting Start Plan $12/Month: Hostbazzar.com

Hostbazzar offers cheap Windows reseller hosting starting from $12/month. So start your own hosting business at an affordable price!

Cheap Windows reseller hosting, shared hosting, VPS hosting and dedicated hosting are the main services offered by Hostbazzar. We offer the best inexpensive Windows reseller hosting, satisfying customers with fast hosting and 24-hour support via Plesk. Hostbazzar has been providing well-managed shared hosting, reseller hosting and VPS hosting services for years. Our goal is to provide the best service at the most affordable price, and to achieve this we are determined to deliver the best and most satisfying hosting service to our customers.

Hostbazzar 100% ensures your website is faster, safer and always up to date. Our top priority is to provide affordable and reliable services to the general public. Hostbazzar’s cheap Windows reseller hosting offers unlimited bandwidth, space and a free migration facility with no hidden charges. So don’t waste time: visit hostbazzar.com as soon as possible, take advantage of our services and give us the opportunity to serve you.

* Cheap Windows Reseller Hosting Start Plan - only $12/month.

>WR-HB1 :$12/mo

-Unlimited Websites
-Disk Space : 10 GB
-Monthly Bandwidth : 200 GB
-24X7 Live Chat Support

>WR-HB2 :$22/mo

-Unlimited Websites
-Disk Space : 25 GB
-Monthly Bandwidth : 500 GB
-24X7 Live Chat Support

>WR-HB3 :$39/mo

-Unlimited Websites
-Disk Space : 50 GB
-Monthly Bandwidth : 1000 GB
-24X7 Live Chat Support

*Features :-

-Unlimited Sub Domains
-Plesk Control Panel
-Unlimited Mail Boxes
-SSL Support
-E-Commerce Support
-Dreamweaver Compatible
-Unlimited Parked Domains
-Unlimited Auto Responders
-Unlimited MySQL Databases
-Unlimited Forwarders
-Unlimited Mailing Lists
-Zend Optimizer
-DDoS Attack Response
-Subdomain Stats
-Firewall
-Flash Support
-Curl, DomXML, Mod_rewrite

We are sure that our plans are better than the others and full of resources. For more information: https://hostbazzar.com/windows_reseller_hosting.php

Thank you.

DELETE a single row from a table with CASCADE DELETE picks a slow plan… but not always

Schema Summary

A dozen tables are related by foreign key to a central table (call it TableA). All tables have an INT IDENTITY primary key, which is clustered. All the tables’ FKs are indexed.

This looks like a star configuration. TableA has fairly static personal info such as name and DOB. The surrounding tables each have lists of items about the person in TableA that change or grow over time: for example, a person might have several emails, several addresses, several phone numbers, etc…

In the unusual event that I want to delete from TableA (test data that gets inserted during performance checks, for example), the FKs all have CASCADE DELETE to handle removing all subordinate data lists if they exist in any of the surrounding tables. I have three environments to play with: DEV, QA, and UAT (well, four if you count PROD, but “play” is not something I would want to do to PROD). DEV has about 27 million people in TableA with various counts upward of 30M in the surrounding tables. QA and UAT are only a few hundred thousand rows.
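
To make the shape concrete, here is a minimal sketch of the pattern (table and column names are invented for illustration, not the real schema):

CREATE TABLE dbo.TableA
(
    Id   INT IDENTITY PRIMARY KEY CLUSTERED,
    Name NVARCHAR(100) NOT NULL,
    DOB  DATE NULL
);

-- One of the dozen satellite tables, with an indexed cascading FK back to TableA.
CREATE TABLE dbo.EmailAddresses
(
    Id       INT IDENTITY PRIMARY KEY CLUSTERED,
    TableAId INT NOT NULL
        REFERENCES dbo.TableA (Id) ON DELETE CASCADE,
    Email    NVARCHAR(256) NOT NULL
);

CREATE INDEX IX_EmailAddresses_TableAId ON dbo.EmailAddresses (TableAId);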

The Problem

The simple “delete from TableA where Id = @Id” takes < 1ms on DEV (the big one) and the execution plan looks fine, lots of tiny thin lines and all index seeks… but here’s the rub: infrequently on DEV, and ALWAYS on QA and UAT, the simple delete takes about 1 second and the plan shows almost all the indexes are being scanned, with big fat lines showing the entire row counts.

Observations

The delete statement is issued by Entity Framework Core running inside an API, so I have limited ability to mess with it (by making it into a stored procedure, index hinting, using a different predicate, or other such ideas…)

Despite all three environments being identical (same script created all three environments), nothing I have done so far has improved QA and UAT, but DEV is usually fast.

When DEV becomes slow, it remains slow until “something” happens. I haven’t figured out what the “something” is, but when it occurs, the performance reverts to fast again and remains that way for days.

If I catch DEV at a slow time, and use SSMS to manually run a delete statement, the plan is fast (<1ms); but the deletes coming from the API use a slow plan (1s). Entity Framework is (as best I can tell) using sp_executesql to run a parameterized “delete from tableA where Id = @Id”. The manual query is “DELETE FROM TableA WHERE Id = 123456789”
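
To spell that out, the two forms look roughly like this (a sketch; the parameterized text is my best guess at what EF Core sends):

-- Parameterized call from the API: one cached plan, reused for every @Id.
EXEC sys.sp_executesql
    N'DELETE FROM TableA WHERE Id = @Id',
    N'@Id int',
    @Id = 123456789;

-- Ad hoc statement run manually in SSMS, compiled for this specific literal.
DELETE FROM TableA WHERE Id = 123456789;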

The row being deleted is always a recently-added row, meaning that the Id is right at the “top” and probably not within the range of the index statistics (although I speak from a position of profound ignorance on that topic and probably have my wires crossed…)
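
For reference, one way to check whether the newest Ids fall above the histogram’s highest step would be something like this (the statistics name here is a guess):

-- Compare the histogram's upper bound against the newest Id values.
DBCC SHOW_STATISTICS ('dbo.TableA', 'PK_TableA') WITH STAT_HEADER, HISTOGRAM;

SELECT MAX(Id) AS current_max_id FROM dbo.TableA;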

What I have tried so far

Reading up on FK cascade delete issues, it seems all the FKs need to be indexed, so I did that.

Rebuilt (not just reorganized) every index.

Selectively deleted the bad plans from the plan cache using DBCC FREEPROCCACHE (plan_handle).
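
(The handle can be located with something along these lines; the LIKE pattern is only an approximation.)

-- Find the cached plan for the parameterized delete, then evict it by plan_handle.
SELECT cp.plan_handle, cp.usecounts, st.[text]
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.[text] LIKE N'%DELETE FROM TableA WHERE Id = @Id%';

-- DBCC FREEPROCCACHE (<plan_handle from the query above>);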

Ran the excellent tools from Brent Ozar, which got me checking that the FKs all have is_not_trusted = 0.

Looked at these (and other) previous Stack Exchange questions: 1, 2, 3, 4

Of those, I suspect that the last one, with a description of how the cardinality estimator gets confused, might be pointing to the source of the problem, but I need help figuring out what to do next…

The plan screenshot below (from SSMS) shows the slow plan: some of the FK indexes are being scanned (but not all) and there is an excessive memory grant. The fast plan would show all index seeks. The whole plan is at ShowMyPlan Link

I hope someone can point out what I have missed, or what I can investigate further.

Thanks!

[Screenshot from SSMS: the slow execution plan ("Bad Plan")]

Aurora PostgreSQL database using a slower query plan than a normal PostgreSQL instance for an identical query?

Following the migration of an application and its database from a classical PostgreSQL database to an Amazon Aurora RDS PostgreSQL database (both on version 9.6), we have found that a specific query is running much slower — around 10 times slower — on Aurora than on PostgreSQL.

Both databases have the same configuration, be it for the hardware or the pg_conf.

The query itself is fairly simple. It is generated by our backend, which is written in Java and uses jOOQ for writing the queries:

with "all_acp_ids"("acp_id") as (
    select acp_id from temp_table_de3398bacb6c4e8ca8b37be227eac089
)
select distinct "public"."f1_folio_milestones"."acp_id",
    coalesce("public"."sa_milestone_overrides"."team",
             "public"."f1_folio_milestones"."team_responsible")
from "public"."f1_folio_milestones"
left outer join "public"."sa_milestone_overrides" on (
    "public"."f1_folio_milestones"."milestone" = "public"."sa_milestone_overrides"."milestone"
    and "public"."f1_folio_milestones"."view" = "public"."sa_milestone_overrides"."view"
    and "public"."f1_folio_milestones"."acp_id" = "public"."sa_milestone_overrides"."acp_id"
)
where "public"."f1_folio_milestones"."acp_id" in (
    select "all_acp_ids"."acp_id" from "all_acp_ids"
)

Here temp_table_de3398bacb6c4e8ca8b37be227eac089 is a single-column table, while f1_folio_milestones (17 million rows) and sa_milestone_overrides (around 1 million rows) are similarly designed tables with indexes on all the columns used for the LEFT OUTER JOIN.

When we run it on the normal PostgreSQL database, it generates the following query plan:

Unique  (cost=4802622.20..4868822.51 rows=8826708 width=43) (actual time=483.928..483.930 rows=1 loops=1)
  CTE all_acp_ids
    ->  Seq Scan on temp_table_de3398bacb6c4e8ca8b37be227eac089  (cost=0.00..23.60 rows=1360 width=32) (actual time=0.004..0.005 rows=1 loops=1)
  ->  Sort  (cost=4802598.60..4824665.37 rows=8826708 width=43) (actual time=483.927..483.927 rows=4 loops=1)
        Sort Key: f1_folio_milestones.acp_id, (COALESCE(sa_milestone_overrides.team, f1_folio_milestones.team_responsible))
        Sort Method: quicksort  Memory: 25kB
        ->  Hash Left Join  (cost=46051.06..3590338.34 rows=8826708 width=43) (actual time=483.905..483.917 rows=4 loops=1)
              Hash Cond: ((f1_folio_milestones.milestone = sa_milestone_overrides.milestone) AND (f1_folio_milestones.view = (sa_milestone_overrides.view)::text) AND (f1_folio_milestones.acp_id = (sa_milestone_overrides.acp_id)::text))
              ->  Nested Loop  (cost=31.16..2572.60 rows=8826708 width=37) (actual time=0.029..0.038 rows=4 loops=1)
                    ->  HashAggregate  (cost=30.60..32.60 rows=200 width=32) (actual time=0.009..0.010 rows=1 loops=1)
                          Group Key: all_acp_ids.acp_id
                          ->  CTE Scan on all_acp_ids  (cost=0.00..27.20 rows=1360 width=32) (actual time=0.006..0.007 rows=1 loops=1)
                    ->  Index Scan using f1_folio_milestones_acp_id_idx on f1_folio_milestones  (cost=0.56..12.65 rows=5 width=37) (actual time=0.018..0.025 rows=4 loops=1)
                          Index Cond: (acp_id = all_acp_ids.acp_id)
              ->  Hash  (cost=28726.78..28726.78 rows=988178 width=34) (actual time=480.423..480.423 rows=987355 loops=1)
                    Buckets: 1048576  Batches: 1  Memory Usage: 72580kB
                    ->  Seq Scan on sa_milestone_overrides  (cost=0.00..28726.78 rows=988178 width=34) (actual time=0.004..189.641 rows=987355 loops=1)
Planning time: 3.561 ms
Execution time: 489.223 ms

And it goes pretty smoothly as one can see — less than a second for the query. But on the Aurora instance, this happens:

Unique  (cost=2632927.29..2699194.83 rows=8835672 width=43) (actual time=4577.348..4577.350 rows=1 loops=1)
  CTE all_acp_ids
    ->  Seq Scan on temp_table_de3398bacb6c4e8ca8b37be227eac089  (cost=0.00..23.60 rows=1360 width=32) (actual time=0.001..0.001 rows=1 loops=1)
  ->  Sort  (cost=2632903.69..2654992.87 rows=8835672 width=43) (actual time=4577.348..4577.348 rows=4 loops=1)
        Sort Key: f1_folio_milestones.acp_id, (COALESCE(sa_milestone_overrides.team, f1_folio_milestones.team_responsible))
        Sort Method: quicksort  Memory: 25kB
        ->  Merge Left Join  (cost=1321097.58..1419347.08 rows=8835672 width=43) (actual time=4488.369..4577.330 rows=4 loops=1)
              Merge Cond: ((f1_folio_milestones.view = (sa_milestone_overrides.view)::text) AND (f1_folio_milestones.milestone = sa_milestone_overrides.milestone) AND (f1_folio_milestones.acp_id = (sa_milestone_overrides.acp_id)::text))
              ->  Sort  (cost=1194151.06..1216240.24 rows=8835672 width=37) (actual time=0.039..0.040 rows=4 loops=1)
                    Sort Key: f1_folio_milestones.view, f1_folio_milestones.milestone, f1_folio_milestones.acp_id
                    Sort Method: quicksort  Memory: 25kB
                    ->  Nested Loop  (cost=31.16..2166.95 rows=8835672 width=37) (actual time=0.022..0.028 rows=4 loops=1)
                          ->  HashAggregate  (cost=30.60..32.60 rows=200 width=32) (actual time=0.006..0.006 rows=1 loops=1)
                                Group Key: all_acp_ids.acp_id
                                ->  CTE Scan on all_acp_ids  (cost=0.00..27.20 rows=1360 width=32) (actual time=0.003..0.004 rows=1 loops=1)
                          ->  Index Scan using f1_folio_milestones_acp_id_idx on f1_folio_milestones  (cost=0.56..10.63 rows=4 width=37) (actual time=0.011..0.015 rows=4 loops=1)
                                Index Cond: (acp_id = all_acp_ids.acp_id)
              ->  Sort  (cost=126946.52..129413.75 rows=986892 width=34) (actual time=4462.727..4526.822 rows=448136 loops=1)
                    Sort Key: sa_milestone_overrides.view, sa_milestone_overrides.milestone, sa_milestone_overrides.acp_id
                    Sort Method: quicksort  Memory: 106092kB
                    ->  Seq Scan on sa_milestone_overrides  (cost=0.00..28688.92 rows=986892 width=34) (actual time=0.003..164.348 rows=986867 loops=1)
Planning time: 1.394 ms
Execution time: 4583.295 ms

It does indeed have a lower total cost, but it takes almost 10 times as long as before!

Disabling merge joins makes Aurora revert to a hash join, which gives the expected execution time — but permanently disabling it is not an option. Curiously though, disabling nested loops gives an even better result while still using a merge join…
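
For clarity, “disabling” here means the usual planner switches, toggled only at the session level while testing; roughly:

-- Session-level experiment only: steer the planner away from the slow choice.
SET enable_mergejoin = off;  -- Aurora falls back to the hash join
-- or
SET enable_nestloop = off;   -- still a merge join, but far faster in this case

-- after testing, restore the defaults
RESET enable_mergejoin;
RESET enable_nestloop;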

Unique  (cost=3610230.74..3676431.05 rows=8826708 width=43) (actual time=2.465..2.466 rows=1 loops=1)
  CTE all_acp_ids
    ->  Seq Scan on temp_table_de3398bacb6c4e8ca8b37be227eac089  (cost=0.00..23.60 rows=1360 width=32) (actual time=0.004..0.004 rows=1 loops=1)
  ->  Sort  (cost=3610207.14..3632273.91 rows=8826708 width=43) (actual time=2.464..2.464 rows=4 loops=1)
        Sort Key: f1_folio_milestones.acp_id, (COALESCE(sa_milestone_overrides.team, f1_folio_milestones.team_responsible))
        Sort Method: quicksort  Memory: 25kB
        ->  Merge Left Join  (cost=59.48..2397946.87 rows=8826708 width=43) (actual time=2.450..2.455 rows=4 loops=1)
              Merge Cond: (f1_folio_milestones.acp_id = (sa_milestone_overrides.acp_id)::text)
              Join Filter: ((f1_folio_milestones.milestone = sa_milestone_overrides.milestone) AND (f1_folio_milestones.view = (sa_milestone_overrides.view)::text))
              ->  Merge Join  (cost=40.81..2267461.88 rows=8826708 width=37) (actual time=2.312..2.317 rows=4 loops=1)
                    Merge Cond: (f1_folio_milestones.acp_id = all_acp_ids.acp_id)
                    ->  Index Scan using f1_folio_milestones_acp_id_idx on f1_folio_milestones  (cost=0.56..2223273.29 rows=17653416 width=37) (actual time=0.020..2.020 rows=1952 loops=1)
                    ->  Sort  (cost=40.24..40.74 rows=200 width=32) (actual time=0.011..0.012 rows=1 loops=1)
                          Sort Key: all_acp_ids.acp_id
                          Sort Method: quicksort  Memory: 25kB
                          ->  HashAggregate  (cost=30.60..32.60 rows=200 width=32) (actual time=0.008..0.008 rows=1 loops=1)
                                Group Key: all_acp_ids.acp_id
                                ->  CTE Scan on all_acp_ids  (cost=0.00..27.20 rows=1360 width=32) (actual time=0.005..0.005 rows=1 loops=1)
              ->  Materialize  (cost=0.42..62167.38 rows=987968 width=34) (actual time=0.021..0.101 rows=199 loops=1)
                    ->  Index Scan using sa_milestone_overrides_acp_id_index on sa_milestone_overrides  (cost=0.42..59697.46 rows=987968 width=34) (actual time=0.019..0.078 rows=199 loops=1)
Planning time: 5.500 ms
Execution time: 2.516 ms

We have asked the AWS support team and they are still looking at the issue, but we are wondering what could cause it. What could explain such a difference in behaviour?

While looking at some of the documentation for the database, I read that Aurora favors cost over time — and hence it uses the query plan that has the lowest cost.

But as we can see, it’s far from being optimal given its response time… Is there a threshold or a setting that could make the database use a more expensive — but faster — query plan?

*WordPress Unlimited Hosting plan @ only $6/yr – FREE SSL – SINGLE CLICK INSTALLER!!

Hurry up!
Hostpoco.com provides the best quality WordPress hosting service to our clients at the most affordable price. We offer special hosting plans starting from half a dollar per month, along with a single-click script installer where you can install WordPress in a single click, take backups, and handle upgrades from there. All WordPress/application hosting plans come with no limits on resources and are the perfect choice for high-traffic blogs or sites… So let’s drive on!

WordPress Hosting Features:
~RAID 10 HDD Storage
~Cheap Shared Hosting
~Unlimited Web Space
~Unlimited Bandwidth
~SINGLE CLICK INSTALLER
~OWN EMAIL ADDRESS
~FREE AUTO SSL
~Unlimited MYSQL DATABASES
~FREE phpMyAdmin
~FREE AwStats
~FREE Virus Scanner
~DDOS Protection
~99.99% uptime
~Softaculous Supported

The fact that WordPress is 100% free of cost is very beneficial, and hence WordPress has become one of the most popular website-building platforms. Almost 25% of all websites online run on this platform. For such a useful CMS we are offering the WordPress hosting plans below:

*WP Startup plan starts from @$0.5 /Monthly:
-Single Domain Hosting
-5 Email Accounts
-2 Parked Domains
-0 Addon Domains
-2 MySQL Databases
-5 Sub Domains
-Tier 1 Technical Support

*WP Pro plan starts from @$1 /Monthly:
-Double Domain Hosting
-Unlimited Email Accounts
-Unlimited Parked Domains
-1 Addon Domain
-10 MySQL Databases
-Unlimited Sub Domains
-Tier 3 Technical Support

*WP Premium plan starts from @$3 /Monthly:
-Free Domain
-15 Domain Hosting
-Unlimited Email Accounts
-Unlimited Parked Domains
-14 Addon Domains
-Unlimited MySQL Databases
-Unlimited Sub Domains
-Tier 4 Technical Support

*WP Elite plan starts from @$5 /Monthly:
-Free Domain
-Unlimited Domain Hosting
-Unlimited Email Accounts
-Unlimited Parked Domains
-Unlimited Addon Domains
-Unlimited MySQL Databases
-Unlimited Sub Domains
-Tier 4 Technical Support

For more detailed information about us and the types of services we provide, please visit: https://hostpoco.com/half-dollar-wordpress-hosting.php

Thank You.

Can’t help the engine choose the correct execution plan

The setup is too complex to share the original code (a lot of routines, a lot of tables), so I will try to summarize.

Environment:

  • SQL Server 2016
  • Standard Edition

Objects:

  • wide table with the following columns:

    ID BIGINT PK IDENTITY
    Filter01
    Filter02
    Filter03
    .. and many columns
  • a stored procedure returning the visible IDs from the given table depending on the filter parameters

  • the table has the following indexes:

    PK on ID
    NCI on Filter01 INCLUDE(Filter02, Filter03)
    NCI on Filter02 INCLUDE(Filter01, Filter03)

Basically, in the routine I am creating three temporary tables, each holding the current filtering values, and then joining them to the main table. In some cases, Filter02 values are not specified (so the join to that table is skipped); the other tables are always joined. So, I have something like this:

SELECT *
FROM maintable
INNER JOIN #Filter01Values -- always exists
INNER JOIN #Filter02Values -- sometimes skipped
INNER JOIN #Filter03Values -- always exists

As for how the IDs are distributed: in 99% of the cases it is best to filter by the Filter02 values, and I guess that, because of this, the engine is using the NCI on Filter02 INCLUDE(Filter01, Filter03) index.

The issue is that in the remaining 1% the query performs badly:

[Screenshot: the slow execution plan produced when the stored procedure runs]

In green is the Filter02 values table, and you can see that filtering on it does not reduce the rows read at all. Then, when the filtering by Filter01 is done (in red), about 100 rows are returned.

So, this is happening only when the stored procedure is executed. If I execute its code directly with these parameters, I get a nice execution plan:

[Screenshot: the good execution plan when the code is run directly]

In that case, the engine filters by Filter01 first and by Filter02 third.

I am building and executing a dynamic T-SQL statement and I add OPTION (RECOMPILE) at the end, but it does not change anything. If I add WITH RECOMPILE at the stored procedure level, everything is fine.

Note that the values in the temporary tables used for filtering are not populated inside the dynamic T-SQL statement. The tables are defined and populated first, and only then is the statement built.
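
To make the pattern explicit, the flow is roughly like this (a simplified sketch; real column names, data types and the full filter logic differ):

-- Temp tables are created and populated first...
CREATE TABLE #Filter01Values (Value INT NOT NULL);
CREATE TABLE #Filter03Values (Value INT NOT NULL);
-- ... INSERTs happen here ...

-- ...and only then is the dynamic statement built and executed.
DECLARE @sql NVARCHAR(MAX) = N'
SELECT m.ID
FROM dbo.maintable AS m
INNER JOIN #Filter01Values AS f1 ON f1.Value = m.Filter01
INNER JOIN #Filter03Values AS f3 ON f3.Value = m.Filter03  -- the #Filter02Values join is added only when values are supplied
OPTION (RECOMPILE);';

EXEC sys.sp_executesql @sql;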

So, my questions are:

  • Is the engine building a new plan for my dynamic statement, given that I have OPTION (RECOMPILE)? If yes, why is it a bad one?
  • Is the engine using the values already populated in my Filter02 temporary table to build the initial plan? Maybe that’s why it is choosing the wrong plan.
  • Using RECOMPILE at the procedure level feels like a heavy-handed, lazy fix. Do you have any ideas how I can assist the engine further and avoid this option, for example with new indexes (I have tried a lot)?