Same query has different execution plans in Prod vs Test

I have a query that runs in Prod every 30 minutes. Until yesterday it ran in seconds; suddenly it’s taking 7 minutes.

I copied the table to Test, created the indexes, and gathered statistics. There it runs in seconds.

In Prod, even after rebuilding the indexes and updating the statistics on the table with a full scan, it’s still not performing any better. The actual execution plan in Prod looks very different from the one in Test and shows actual reads of about 400 million rows on one part of the query (there are only 1.5 million rows in the table).

I ran it in Test with no indexes (~5 seconds) and then with indexes, and every run was sub-second.

In Prod, I’ve dropped the primary key / clustered index, updated the statistics (UPDATE STATISTICS interface.statsload), and rebuilt it, and it still takes 8-10 minutes to run.
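For clarity, the maintenance I ran was roughly along these lines (a sketch; the exact statements may have differed slightly):

    -- Roughly the maintenance run against the Prod table (names as above)
    ALTER INDEX ALL ON interface.statsload REBUILD;
    UPDATE STATISTICS interface.statsload WITH FULLSCAN;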

I also tried dropping the PK and running it again. The actual plan shows a very thick pipe in one step, with about 9 million actual rows on a full scan. When I do a SELECT COUNT(*) on the table, it shows only 1.5M rows. Why is that? I’m sure that’s feeding into it somehow.
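In case it’s useful, I can run and post the output of something like the following to compare the catalog’s row count with what the statistics hold (a sketch; it assumes sys.dm_db_stats_properties is available on this version):

    -- Row count as the catalog sees it (heap or clustered index)
    SELECT SUM(p.rows) AS table_rows
    FROM sys.partitions AS p
    WHERE p.object_id = OBJECT_ID(N'interface.statsload')
      AND p.index_id IN (0, 1);

    -- Age, size and sampling of every statistics object on the table
    SELECT s.name, sp.last_updated, sp.rows, sp.rows_sampled, sp.modification_counter
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE s.object_id = OBJECT_ID(N'interface.statsload');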

I’m baffled. Any pointers on where I could start looking for a root cause? I can post more info (plans, table columns, indexes) if needed.

Why does adding an index increase the execution time in SQLite?

I’ll just show you an example. Here candidates is a table of 1000000 candidates from 1000 teams and their individual scores. We want a list of all teams and whether the total score of all candidates within each team is within the top 50. (Yeah this is similar to the example from another question, which I encourage you to look at, but I assure you that it is not a duplicate)

Note that all CREATE TABLE results AS ... statements are identical, and the only difference is the presence of indices. These tables are created (and dropped) to suppress the query results so that they won’t make a lot of noise in the output.

------------
-- Set up --
------------

.open delete-me.db    -- A persistent database file is required

.print ''
.print '[Set up]'

DROP TABLE IF EXISTS candidates;

CREATE TABLE candidates AS
WITH RECURSIVE candidates(team, score) AS (
    SELECT ABS(RANDOM()) % 1000, 1
    UNION
    SELECT ABS(RANDOM()) % 1000, score + 1
    FROM candidates
    LIMIT 1000000
)
SELECT team, score FROM candidates;


-------------------
-- Without Index --
-------------------

.print ''
.print '[Without Index]'

DROP TABLE IF EXISTS results;

ANALYZE;

.timer ON
.eqp   ON
CREATE TABLE results AS
WITH top_teams_verbose(top_team, total_score) AS (
    SELECT team, SUM(score)
    FROM candidates
    GROUP BY team
    ORDER BY 2 DESC
    LIMIT 50
), top_teams AS (
    SELECT top_team
    FROM top_teams_verbose
)
SELECT team, SUM(team IN top_teams)
FROM candidates
GROUP BY team;
.eqp   OFF
.timer OFF


------------------------------
-- With Single-column Index --
------------------------------

.print ''
.print '[With Single-column Index]'

CREATE INDEX candidates_idx_1 ON candidates(team);

DROP TABLE IF EXISTS results;

ANALYZE;

.timer ON
.eqp   ON
CREATE TABLE results AS
WITH top_teams_verbose(top_team, total_score) AS (
    SELECT team, SUM(score)
    FROM candidates
    GROUP BY team
    ORDER BY 2 DESC
    LIMIT 50
), top_teams AS (
    SELECT top_team
    FROM top_teams_verbose
)
SELECT team, SUM(team IN top_teams)
FROM candidates
GROUP BY team;
.eqp   OFF
.timer OFF


-----------------------------
-- With Multi-column Index --
-----------------------------

.print ''
.print '[With Multi-column Index]'

CREATE INDEX candidates_idx_2 ON candidates(team, score);

DROP TABLE IF EXISTS results;

ANALYZE;

.timer ON
.eqp   ON
CREATE TABLE results AS
WITH top_teams_verbose(top_team, total_score) AS (
    SELECT team, SUM(score)
    FROM candidates
    GROUP BY team
    ORDER BY 2 DESC
    LIMIT 50
), top_teams AS (
    SELECT top_team
    FROM top_teams_verbose
)
SELECT team, SUM(team IN top_teams)
FROM candidates
GROUP BY team;
.eqp   OFF
.timer OFF

Here is the output:

[Set up]

[Without Index]
QUERY PLAN
|--SCAN TABLE candidates
|--USE TEMP B-TREE FOR GROUP BY
`--LIST SUBQUERY 3
   |--CO-ROUTINE 1
   |  |--SCAN TABLE candidates
   |  |--USE TEMP B-TREE FOR GROUP BY
   |  `--USE TEMP B-TREE FOR ORDER BY
   `--SCAN SUBQUERY 1
Run Time: real 0.958 user 0.923953 sys 0.030911

[With Single-column Index]
QUERY PLAN
|--SCAN TABLE candidates USING COVERING INDEX candidates_idx_1
`--LIST SUBQUERY 3
   |--CO-ROUTINE 1
   |  |--SCAN TABLE candidates USING INDEX candidates_idx_1
   |  `--USE TEMP B-TREE FOR ORDER BY
   `--SCAN SUBQUERY 1
Run Time: real 2.487 user 1.108399 sys 1.375656

[With Multi-column Index]
QUERY PLAN
|--SCAN TABLE candidates USING COVERING INDEX candidates_idx_1
`--LIST SUBQUERY 3
   |--CO-ROUTINE 1
   |  |--SCAN TABLE candidates USING COVERING INDEX candidates_idx_2
   |  `--USE TEMP B-TREE FOR ORDER BY
   `--SCAN SUBQUERY 1
Run Time: real 0.270 user 0.248629 sys 0.014341

While the covering index candidates_idx_2 does help, it seems that the single-column index candidates_idx_1 makes the query significantly slower, even after I ran ANALYZE;. It’s only about 2.5x slower in this case, but I think the factor can be made larger by fine-tuning the number of candidates and teams.

Why is that?
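If it helps, I can also repeat the multi-column run with the single-column index removed, so that only the covering index is present – roughly this extra step before the third test (hypothetical, not part of the script above):

    -- Drop the single-column index so the third run can only use candidates_idx_2
    DROP INDEX candidates_idx_1;
    ANALYZE;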

Is this a remote code execution vulnerability?

I am planning to evaluate and install a piece of publicly available software.

https://github.com/opensemanticsearch/open-semantic-search

While reviewing the issues on GitHub, I found an open issue that indicates a possible remote code execution in Solr, with screenshots.

https://github.com/opensemanticsearch/open-semantic-search/issues/285

I don’t know much about security vulnerabilities, and I’m hoping this is the correct forum to ask the experts. Do you think this is a security vulnerability, and should one avoid using the software until it is fixed?

Manipulating automatic execution of JavaScript, since Burp can only see HTTP requests/responses

There is a piece of JavaScript executing in my browser that generates the session token. (This was a design requirement for the dev team: the session token is generated on the client side – don’t ask me why, lol.)

I want to be able to modify the JavaScript variables during execution (just as if I were using the debugger in NetBeans, for instance).

I thought I’d use Burp Suite, but it only catches requests (not the building of a request by the JS).

What can I do to achieve that?

Also, I thought I’d use the browser debugger, but strangely none of the loaded JS files seems to be generating the session token. One of them just does it somehow, and I only see the token later in the Burp interceptor.

Any help here?

What Trusted Execution Environment (TEE) solutions exist for mobile devices?

A trusted execution environment (TEE) provides a way to deploy tamper-proof programs on a device. The most prominent example of a TEE seems to be Intel SGX for PCs.

What I wonder is whether an equivalent solution exists for mobile devices. For example, I want to deploy an arbitrary application on a smartphone that even a malicious OS can’t tamper with. Is there such a solution at the moment?

What are the differences between symbolic execution and SAT solvers?

My understanding is that symbolic execution only deals with specific paths and bad patterns, while SAT solvers, or satisfiability modulo theories (SMT) solvers in general, provide a much more robust analysis of the program.

Could someone validate the statement above and (briefly) explain the differences between these two formal verification methodologies?

Trusted Execution Environment (TEE) internal API vs. external (client) API

I am studying Trusted Execution Environments (TEEs) on Android mobile phones. From my reading, I found there are two APIs for a TEE (the isolated OS):

  • Internal API: a programming and services API for Trusted Applications (TAs) in the TEE; it cannot be called by any application running in the rich OS (Android’s normal OS). For example, the internal API provides cryptographic services.

  • External API or client API: called by applications running in the rich OS in order to access TA and TEE services.

Assume I want to apply TEE in this way:

  • I have an app running in the rich OS
  • I want to securely store some cryptographic keys for my app
  • Hence, the keys are stored in the TEE
  • The app in the rich OS retrieves the keys from the TEE when it needs them, and deletes them from rich-OS memory after use

Please help explain the following:

  • How should the internal and external APIs work in the above situation?
  • Besides the app in the rich OS, do I also need a TA running in the TEE to store and provide the keys?

Changing execution order of Animator so it can blend with Physics

I’m trying to make an effect where a ragdolled character controlled by an Animator is also affected by physics collisions, similar to this game: Crazy Shopping.

The problem is that the Animator controller overrides every change that happens in FixedUpdate or the internal physics update (even when the Animator’s update mode is set to Animate Physics).

I think this could be achieved by somehow changing the execution order of the Animator so that it runs before the physics update; that way physics could affect the animated object. There are solutions like using a second object that contains the ragdoll and, in LateUpdate, setting the positions and rotations of the animated object. That works OK, but it’s not quite what I have in mind.

ActiveRagdoll by MetalCore999 is also really great work; I would love to learn how it works behind the scenes.

How can I achieve this? I don’t even know if my solution would work properly.

Do you have any suggestions or a different way of thinking about it? I would really appreciate a roadmap on this.

Can’t help the engine choose the correct execution plan

The setup is too complex to share the original code (a lot of routines, a lot of tables), so I will try to summarize.

Environment:

  • SQL Server 2016
  • Standard Edition

Objects:

  • wide table with the following columns:

    ID BIGINT PK IDENTITY
    Filter01
    Filter02
    Filter03
    .. and many columns
  • a stored procedure returning the visible IDs from the given table, depending on the filter parameters

  • the table has the following indexes:

    PK on ID
    NCI on Filter01 INCLUDE (Filter02, Filter03)
    NCI on Filter02 INCLUDE (Filter01, Filter03)
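Spelled out as DDL, the nonclustered indexes look roughly like this (the index names here are placeholders):

    CREATE NONCLUSTERED INDEX IX_maintable_Filter01   -- placeholder name
        ON maintable (Filter01)
        INCLUDE (Filter02, Filter03);

    CREATE NONCLUSTERED INDEX IX_maintable_Filter02   -- placeholder name
        ON maintable (Filter02)
        INCLUDE (Filter01, Filter03);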

Basically, in the routine I create three temporary tables – each holding the current filtering values – and then join them with the main table. In some cases, Filter02 values are not specified (so the join with that table is skipped); the other tables are always joined. So I have something like this:

SELECT *
FROM maintable
INNER JOIN #Filter01Values -- always exists
INNER JOIN #Filter02Values -- sometimes skipped
INNER JOIN #Filter03Values -- always exists

As for how the IDs are distributed – in 99% of the cases it is best to filter by the Filter02 values, and I guess that is why the engine is using the NCI on Filter02 INCLUDE (Filter01, Filter03) index.

The issue is that in the remaining 1% the query performs badly:

[screenshot: actual execution plan of the bad case]

In green is the Filter02 values table, and you can see that filtering on it does not reduce the rows read at all. Then, when the filtering by Filter01 is done (in red), about 100 rows are returned.

This happens only when the stored procedure is executed. If I execute its code directly with the same parameters, I get a nice execution plan:

[screenshot: execution plan when the code is run directly]

In that case, the engine filters by Filter01 first and by Filter02 third.

I am building and executing a dynamic T-SQL statement and I add OPTION (RECOMPILE) at the end, but it does not change anything. If I add WITH RECOMPILE at the stored procedure level, everything is fine.

Note that the temporary tables used for filtering are not populated inside the dynamic T-SQL statement. The tables are defined and populated first, and then the statement is built and executed.
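To make the shape of this concrete, the procedure does roughly the following (a simplified sketch; column types and the example values are placeholders):

    -- Temp tables are created and populated from the procedure parameters first
    CREATE TABLE #Filter01Values (Filter01 INT PRIMARY KEY);
    CREATE TABLE #Filter03Values (Filter03 INT PRIMARY KEY);

    INSERT INTO #Filter01Values (Filter01) VALUES (1), (2);   -- example values
    INSERT INTO #Filter03Values (Filter03) VALUES (10);       -- example values

    -- Then the statement is built; the #Filter02Values join is added only
    -- when Filter02 values were supplied
    DECLARE @sql NVARCHAR(MAX) = N'
    SELECT m.ID
    FROM maintable AS m
    INNER JOIN #Filter01Values AS f1 ON f1.Filter01 = m.Filter01
    INNER JOIN #Filter03Values AS f3 ON f3.Filter03 = m.Filter03
    OPTION (RECOMPILE);';

    EXEC sys.sp_executesql @sql;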

So, my questions are:

  • Is the engine building a new plan for my dynamic statement, given that I have OPTION (RECOMPILE)? If yes, why is it wrong?
  • Is the engine using the values populated in my Filter02 temporary table to build the initial plan? Maybe it is, and that is why it is choosing the wrong plan.
  • Using WITH RECOMPILE at the procedure level feels like a heavy-handed/lazy fix – do you have any ideas how I can assist the engine further and avoid this option – new indexes, for example (I have tried a lot)?

Is there a vulnerability other than XSS which can result in client-side script execution?

If the intention of an attacker is to execute an arbitrary client-side script in the context of a web application, is XSS the only possible attack, other than compromising the server with an RCE or a sub-resource supply-chain attack? I am looking for attacks that can be mitigated by the application owner, rather than attacks the application cannot control.

  • XSS is Cross-Site Scripting – be it reflected, persistent, or DOM-based.
  • A sub-resource supply-chain attack is one where you compromise a sub-resource such as CSS, JavaScript, Flash objects, etc. by compromising the supply chain, i.e. compromising the CDNs, S3 buckets, etc., or by MITMing a sub-resource loaded over a non-HTTPS channel.