Disclaimer: I’m not a DBA. I’ve picked up a few things from this board in the past that I’m building on.
I have a table of Google Analytics session start times, with an index on each column. I want to filter for all sessions that were started between two dates. The screenshot below shows the query and the index.
The query runs quickly, but I don’t believe it’s using the index, based on the execution plan: it both says that there’s a missing index and shows a table scan rather than an index scan:
Is it something about the way I’m searching through the datetime? If, instead of looking between two dates, I set it equal to a single date, the execution plan shows it using the index:
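In case the screenshots don’t come through, the setup is roughly this shape (table, column, and index names here are placeholders, not the real ones):

```sql
-- Sketch with placeholder names, not the exact DDL from the screenshots.
CREATE TABLE dbo.GASessions
(
    SessionId    INT IDENTITY PRIMARY KEY,
    SessionStart DATETIME NOT NULL
    -- ... other columns, each with its own single-column index
);

CREATE INDEX IX_GASessions_SessionStart
    ON dbo.GASessions (SessionStart);

-- The range form that shows a table scan and a missing-index suggestion:
SELECT *
FROM dbo.GASessions
WHERE SessionStart BETWEEN '2021-01-01' AND '2021-01-31';

-- The equality form that does use the index:
SELECT *
FROM dbo.GASessions
WHERE SessionStart = '2021-01-15';
```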
But it’s not just this table or datetime. Here’s a different table with an index on a varchar column:
And a simple query on this one also tells me I’m missing the index:
I have created a DML trigger (AFTER INSERT, UPDATE, DELETE) on a table.
The trigger’s logic takes about 30 seconds to execute, so changing even one row takes about 30 seconds because of the trigger.
A developer asked me, "Is there a chance that the trigger could be a fire & forget action?"
I said no, but is that really the case?
Can a trigger be executed in "asynchronous" mode? That is, the application updates a couple of rows in a few milliseconds, considers the transaction complete, and the trigger is then silently executed under the hood?
I understand that this doesn’t look good from a consistency point of view, but still, is it possible?
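To make the setup concrete, the trigger looks roughly like this (placeholder names; the real logic is more involved):

```sql
-- Sketch with placeholder names; the real logic takes ~30 seconds.
CREATE TRIGGER trg_MyTable_Recalc
ON dbo.MyTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- The expensive work happens here, inside the same transaction as
    -- the INSERT/UPDATE/DELETE that fired the trigger, so the caller's
    -- statement does not return until this completes.
    EXEC dbo.DoExpensiveRecalculation;
END;
```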
There are two processes (p1, p2) that may run simultaneously. p2 is a scheduled execution of a stored procedure, while p1 consists of a group of stored procedures triggered on request. Executing p1 while p2 is in progress can create issues; only one should run at a time. There are three ways to solve this problem:
- When p1 starts, check whether p2 is in progress and wait until it completes. p2 can run for more than a day, so this is not a preferable solution.
- Kill p2, complete p1, and then restart p2. Killing and restarting p2 isn’t safe due to the nature of the sproc.
- Pause p2 and resume it when p1 completes.
1. How can I find and stop a stored procedure that is being executed?
2. Is there any way to pause and resume a stored procedure that is being executed?
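For what it’s worth, the only variant of the first option I can sketch myself is serializing the two with an application lock, something like this (the lock name is my own placeholder):

```sql
-- Sketch of the check-and-wait option using an application lock.
-- Both p1 and p2 would take the same lock at the top of their work,
-- so whichever starts second waits for the first to finish.
BEGIN TRAN;

EXEC sp_getapplock
     @Resource    = 'p1_p2_mutex',   -- placeholder lock name
     @LockMode    = 'Exclusive',
     @LockOwner   = 'Transaction',
     @LockTimeout = -1;              -- wait indefinitely

-- ... body of p1 (or p2) ...

COMMIT;  -- committing the transaction releases the lock
```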
I am overriding another plugin’s custom registration form with my own custom template (adding additional fields).
On submission, the original plugin triggers:
wc_add_notice( __( 'Your registration has been submitted successfully.', 'woocommerce-wholesale-pro' ) );
This notice renders on my own custom form submission; however, I would like to:
- Redirect the user back to the home page
- and then trigger the wc_add_notice() hook so that the success notice is displayed
I have completed the redirect by adding the following:
However, now the original plugin’s wc_add_notice() hook is no longer triggered/rendered on submission.
I am unable to grasp WordPress’s flow to achieve this task.
Is this possible? Any help/guidance would be greatly appreciated!
Please let me know if I can clarify anything from my end.
A user just complained that he was denied execution of a procedure. I went to check and verified he had the privileges to execute it. I didn’t change anything (and right now I’m the only one with admin privileges to do so, if needed), and after two unsuccessful attempts he ran the SP a third time and it worked.
I have XE configured to catch error messages, and it captured error code 229 twice:
The EXECUTE permission was denied on the object ‘storedProcedureName’, database ‘databaseName’, schema ‘schemaName’.
Is there any situation where this behavior is expected?
Microsoft SQL Server 2014 (SP3-CU-GDR) (KB4535288) – 12.0.6372.1 (X64)
I have a query that runs in Prod every 30 minutes. Up until yesterday it ran in seconds; suddenly it’s taking 7 minutes.
I copied the table to Test, created the indexes & gathered statistics. It runs in seconds.
In Prod, even after rebuilding indexes and updating statistics on the table with FULLSCAN, it’s still not performing any better. The actual execution plan in Prod looks very different from Test and shows actual reads on one part of the query of 400 million rows (there are only 1.5 million rows in the table).
I ran it in Test with no indexes (~5 seconds) and then with indexes, where it runs sub-second.
In Prod, I’ve dropped the primary key / clustered index, updated the statistics (UPDATE STATISTICS interface.statsload), and rebuilt it, and it still takes 8-10 minutes to run.
I also tried dropping the PK and running it again. The actual plan shows a very thick pipe in one step, with about 9 million actual rows on a full scan. When I do a SELECT COUNT(*) on the table, it only shows 1.5M rows. Why is that? I’m sure it’s feeding into this somehow.
I’m baffled. Any pointers on where to start looking for a root cause? I can post more info (plans, table columns, indexes) if needed.
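For reference, the maintenance steps I ran in Prod were along these lines (a reconstruction, not the exact script):

```sql
-- Rebuild all indexes on the table and refresh stats with a full scan:
ALTER INDEX ALL ON interface.statsload REBUILD;
UPDATE STATISTICS interface.statsload WITH FULLSCAN;

-- Sanity-check the stored row count against COUNT(*):
SELECT SUM(p.row_count) AS row_count
FROM sys.dm_db_partition_stats AS p
WHERE p.object_id = OBJECT_ID('interface.statsload')
  AND p.index_id IN (0, 1);   -- heap or clustered index
```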
I’ll just show you an example. Here, `candidates` is a table of 1000000 candidates from 1000 teams and their individual scores. We want a list of all teams and whether the total score of all candidates within each team is within the top 50. (Yeah, this is similar to the example from another question, which I encourage you to look at, but I assure you that it is not a duplicate.)
Note that all `CREATE TABLE results AS ...` statements are identical, and the only difference is the presence of indices. These tables are created (and dropped) to suppress the query results so that they won’t make a lot of noise in the output.
```sql
------------
-- Set up --
------------
.open delete-me.db -- A persistent database file is required
.print ''
.print '[Set up]'

DROP TABLE IF EXISTS candidates;
CREATE TABLE candidates AS
  WITH RECURSIVE candidates(team, score) AS (
    SELECT ABS(RANDOM()) % 1000, 1
    UNION
    SELECT ABS(RANDOM()) % 1000, score + 1 FROM candidates
    LIMIT 1000000
  )
  SELECT team, score FROM candidates;

-------------------
-- Without Index --
-------------------
.print ''
.print '[Without Index]'

DROP TABLE IF EXISTS results;
ANALYZE;
.timer ON
.eqp ON
CREATE TABLE results AS
  WITH top_teams_verbose(top_team, total_score) AS (
    SELECT team, SUM(score) FROM candidates
    GROUP BY team ORDER BY 2 DESC LIMIT 50
  ),
  top_teams AS (
    SELECT top_team FROM top_teams_verbose
  )
  SELECT team, SUM(team IN top_teams) FROM candidates GROUP BY team;
.eqp OFF
.timer OFF

------------------------------
-- With Single-column Index --
------------------------------
.print ''
.print '[With Single-column Index]'

CREATE INDEX candidates_idx_1 ON candidates(team);
DROP TABLE IF EXISTS results;
ANALYZE;
.timer ON
.eqp ON
CREATE TABLE results AS
  WITH top_teams_verbose(top_team, total_score) AS (
    SELECT team, SUM(score) FROM candidates
    GROUP BY team ORDER BY 2 DESC LIMIT 50
  ),
  top_teams AS (
    SELECT top_team FROM top_teams_verbose
  )
  SELECT team, SUM(team IN top_teams) FROM candidates GROUP BY team;
.eqp OFF
.timer OFF

-----------------------------
-- With Multi-column Index --
-----------------------------
.print ''
.print '[With Multi-column Index]'

CREATE INDEX candidates_idx_2 ON candidates(team, score);
DROP TABLE IF EXISTS results;
ANALYZE;
.timer ON
.eqp ON
CREATE TABLE results AS
  WITH top_teams_verbose(top_team, total_score) AS (
    SELECT team, SUM(score) FROM candidates
    GROUP BY team ORDER BY 2 DESC LIMIT 50
  ),
  top_teams AS (
    SELECT top_team FROM top_teams_verbose
  )
  SELECT team, SUM(team IN top_teams) FROM candidates GROUP BY team;
.eqp OFF
.timer OFF
```
Here is the output:

```
[Set up]

[Without Index]
QUERY PLAN
|--SCAN TABLE candidates
|--USE TEMP B-TREE FOR GROUP BY
`--LIST SUBQUERY 3
   |--CO-ROUTINE 1
   |  |--SCAN TABLE candidates
   |  |--USE TEMP B-TREE FOR GROUP BY
   |  `--USE TEMP B-TREE FOR ORDER BY
   `--SCAN SUBQUERY 1
Run Time: real 0.958 user 0.923953 sys 0.030911

[With Single-column Index]
QUERY PLAN
|--SCAN TABLE candidates USING COVERING INDEX candidates_idx_1
`--LIST SUBQUERY 3
   |--CO-ROUTINE 1
   |  |--SCAN TABLE candidates USING INDEX candidates_idx_1
   |  `--USE TEMP B-TREE FOR ORDER BY
   `--SCAN SUBQUERY 1
Run Time: real 2.487 user 1.108399 sys 1.375656

[With Multi-column Index]
QUERY PLAN
|--SCAN TABLE candidates USING COVERING INDEX candidates_idx_1
`--LIST SUBQUERY 3
   |--CO-ROUTINE 1
   |  |--SCAN TABLE candidates USING COVERING INDEX candidates_idx_2
   |  `--USE TEMP B-TREE FOR ORDER BY
   `--SCAN SUBQUERY 1
Run Time: real 0.270 user 0.248629 sys 0.014341
```
While the covering index `candidates_idx_2` does help, it seems that the single-column index `candidates_idx_1` makes the query significantly slower, even after running `ANALYZE;`. It’s only 2.5x slower in this case, but I think the factor can be made greater by fine-tuning the numbers of candidates and teams.
Why is that?
I am planning to evaluate and install a publicly available piece of software.
While reviewing its issues on GitHub, I found an open issue, with screenshots, indicating possible remote code execution for Solr.
I have no idea about security vulnerabilities and hope this is the correct forum to ask experts. Do you think this is a security vulnerability, and should one avoid using the software until it is fixed?
I thought I’d use Burp Suite, but it only catches the requests themselves (not the building of a request by JavaScript).
What can I do to observe that?
I also thought I’d use the browser debugger, but strangely, none of the loaded JS seems to be generating the session token. Yet one of the scripts must be doing it, because I see the token later in Burp’s interceptor.
Any help here?
A trusted execution environment (TEE) provides a way to deploy tamper-proof programs on a device. The most prominent example of a TEE seems to be Intel SGX for PCs.
What I wonder is whether there exists an equivalent solution for mobile devices. For example, I want to deploy an arbitrary application on a smartphone that even a malicious OS can’t tamper with. Is there such a solution at the moment?