I’m working on the problem: Count smaller elements on right side using Set in C++ STL
The solution is to insert each element into a set and then, to count the elements to the left of it in the set, call the distance function.
This is the algorithm:
1. Traverse the array from i = len-1 down to 0 and insert every element into a set.
2. Find the first element that is not smaller than A[i] using the lower_bound function.
3. Find the distance between the element found above and the beginning of the set using the distance function.
4. Store the distance in another array, let's say CountSmaller.
5. Print that array.
I’m having a hard time visualizing or understanding how the distance function can be used with a set-like structure, since internally the set data is stored as a self-balancing tree (a red-black tree). What is the concept of distance for a self-balancing tree, and how does calling distance() give us the count of smaller elements on the right side?
We just upgraded from SQL Server 2008 R2 to SQL Server 2019 (compatibility level 150).
We have two different stored procedures that started failing after the upgrade, with error messages like this:
Msg 8632, Level 17, State 2, Procedure BuildSelfSaleStats, Line 14 [Batch Start Line 4] Internal error: An expression services limit has been reached. Please look for potentially complex expressions in your query, and try to simplify them.
What's really strange is that this particular stored procedure doesn't take any arguments, and when we simply execute the body of the SQL code in SSMS, it works fine (!?).
What might cause SQL code that works fine when executed in SSMS to suddenly start failing when it's wrapped in a stored procedure?
It is common for mobile applications to let users bypass the authentication process by verifying a locally stored token (from a previous authentication) on the device.
This is to strike a balance between usability (avoiding authentication every time) and security.
- Are there any security holes in this process?
- What are measures to be taken to strengthen this method?
Here is a SQL script that generates a nonclustered index with an included column:
CREATE TABLE users (
    id INT,
    firstname VARCHAR(50),
    surname VARCHAR(50)
);

CREATE CLUSTERED INDEX ix_users_id ON users (id);

CREATE NONCLUSTERED INDEX ix_users_firstname
    ON users (firstname) INCLUDE (surname);

SELECT firstname, surname FROM users WHERE firstname = 'John';
If I understood correctly, most of the time the engine of my SQL Server 2019 will seek the nonclustered index for the above SELECT query, without touching the clustered index. Does that mean the value of the surname column is stored in the leaf nodes of the nonclustered index? That would also mean the value of surname is duplicated, because it is also stored in the clustered index.
Am I right?
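To see the covering behaviour in runnable form, here is a small sketch using SQLite as a stand-in (SQLite has no INCLUDE clause, so the included column is modeled as a trailing key column, which likewise places surname in the index leaf pages; this illustrates the concept, not SQL Server itself):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, firstname TEXT, surname TEXT)")
# Trailing "surname" column plays the role of SQL Server's INCLUDE (surname).
con.execute("CREATE INDEX ix_users_firstname ON users (firstname, surname)")

plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT firstname, surname FROM users WHERE firstname = 'John'"
).fetchall()

for row in plan:
    # The plan mentions a COVERING INDEX: the base table is never read,
    # because both selected columns live in the index leaf entries.
    print(row[3])
```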
My stored procedure's OUT parameter always returns a null value.
Here is sample table, trigger, and procedure code.
Values in a table:
id | status
 1 | null
create trigger BEFORE_UPDATE_TEST
before update on `test`
for each row
begin
    call Test_BEFORE_UPDATE_TEST(old.id, @updatedStatus); ## I always get @updatedStatus null/nil
    if (@updatedStatus is not null and @updatedStatus <> new.status) then
        set new.status = @updatedStatus;
    end if;
end;
create procedure Test_BEFORE_UPDATE_TEST (
    IN id int(5),
    OUT status enum('pass', 'fail')
)
begin
    @status = 'pass';
END;
What is wrong with this code? I get an unexpected null value in @updatedStatus, which should be 'pass'.
I looked through related Q&As on dba.stackexchange but couldn't find a solution.
I use MySQL Workbench on macOS Catalina, and the MySQL version is 8.0.19.
I am preparing for an exam, and this question was asked in last year's exam:
What is the maximum decimal integer that can be stored in the memory of a computer with an 8-bit word?
The answer given in the answer key is (b), and I have no idea how they arrived at that result.
According to my understanding, we have 8 bits, which gives 2^8 = 256 values, so 255 should be the maximum integer we can store.
I need to encrypt daily backups, then upload them to untrusted cloud storage (S3, Dropbox, etc.).
I received help on security.se and crypto.se to formulate this approach:
- tar and xz the backup file
- create random 32 byte (symmetric) “session” key (head -c 32 /dev/urandom)
- encrypt backups using session key
- encrypt session key using my “master” (asymmetric) keypair’s public key
- upload encrypted backup file and encrypted session key
- Every backup has unique symmetric session key
- Only my master keypair’s private key can decrypt session keys
- My private key is stored locally only
- Encryption process is completely automated; no passphrases required
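For the record, gpg performs this hybrid scheme internally, but the listed steps can also be carried out by hand with openssl, which may make the moving parts clearer. A sketch (filenames like master.pem are made up; openssl rand -hex 32 stands in for head -c 32 /dev/urandom because a hex key is newline-safe when used as an openssl passphrase file):

```shell
# One-time: generate the "master" keypair (in reality, keep master.pem offline).
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out master.pem
openssl pkey -in master.pem -pubout -out master.pub

# Stand-in for the real tar+xz backup archive.
echo "pretend this is the backup archive" > backup.tar.xz

# Per-backup: random 32-byte session key (hex-encoded).
openssl rand -hex 32 > session.key

# Encrypt the backup symmetrically with the session key.
openssl enc -aes-256-cbc -pbkdf2 -in backup.tar.xz \
    -out backup.tar.xz.enc -pass file:session.key

# Encrypt the session key with the master public key.
openssl pkeyutl -encrypt -pubin -inkey master.pub \
    -in session.key -out session.key.enc

# Upload backup.tar.xz.enc and session.key.enc; delete the plaintext key.
rm session.key
```

To restore, the master private key decrypts session.key.enc, and the recovered session key decrypts the backup.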
However, when I tried to implement this with gpg, I stumbled over some items.
Once I generate a session key, how do I use it? I thought it was supposed to be the passphrase in gpg --symmetric --passphrase $SESSION_KEY ..., but apparently that's not how it's done.
I did more digging and discovered that gpg does almost everything symmetrically, and that a session key is already generated and included in each encrypted file automatically (in the header). So most of the above is done automatically for me.
So, how do I use the session key (if at all)? I understand the theory, but not how to implement it with gpg.
I ripped a few videos from YouTube (using y2mate) about a week ago of guitar lessons from a player named John Redbourne, in case they disappear. I saved them on my local hard drive in a folder called “Redbourne Guitar”, and the files are named after the songs, like “Salisbury.mp4”, etc.
Anyway, I just watched one of the videos off my hard drive, and lo and behold, when I logged into YouTube, my recommended feed was full of John Redbourne videos. I haven’t searched or done anything online related to John Redbourne since I downloaded the videos. How did YouTube know I watched it?
Using Windows 10, Firefox, and played video with default “Movies and TV” app that comes with Win 10.
I have a native Windows application programmed in .NET Core that needs to call a Web API. I intend to have the user enter credentials periodically, receiving a refresh token from my auth server for convenience. Encrypting the refresh token and saving it to my application’s database is looking like the strongest candidate for storing the refresh token securely, and allowing the finest user control (users can potentially access the application from different machines, making the Windows Vault seem less useful).
Are there established “best practices” when storing refresh tokens, or more directly, are there any contraindications raised by my use case? If so, are there other ways besides saving a cookie, which would couple the application not only to individual logins and machines, but also to browser-based infrastructure?