Should a refresh token be linked to a single access token, and what is the ideal refresh flow?

I’ve been reading about access tokens and refresh tokens, and am implementing them on my own site. Right now, based on an example codebase on GitHub, a refresh token of random characters is created and stored in the database with some details such as the user id and expiry time, and is returned alongside the JWT access token. The refresh route accepts both the old access token and the refresh token, as well as some other request information (client id and IP). As long as the refresh token exists in the database and is not expired, it is assumed to be valid, and the user is granted a new access token (generated from the payload of the old token) before the refresh token itself is reissued.

I then read this article, which notes, among other things, that the refresh route should not require the old access token in the request payload (i.e. the refresh token is not tied to a single access token).

Based on this, I have a few questions:

  1. Shouldn’t the refresh token be linked to a specific access token? If my access token has a jti, should I not store this in the database with the refresh token, so that a single refresh token can only be used for a single access token? If a refresh token is stolen, and it is not linked to an access token, this token can be used to generate a new access token regardless of what the old access token looks like. Sure, if it’s being rotated and an expired refresh token is used (i.e. real user attempts to refresh their expired access token), I can detect that the refresh token was breached and invalidate all of the user’s refresh tokens, but until then the attacker will be able to continue requesting new access tokens whenever they expire and have access to the user’s account.
  2. If an access token should not be sent to the refresh route (invalidating question 1), how is the payload for the new access token sourced? Is this retrieved fresh from the database? Should this happen anyway so that any changes made to the database are at most accessTokenTtl stale?
  3. What about the other information stored alongside the refresh token? In the example GitHub repo, the client id and user ip address are stored with the refresh token but not used for anything. Should a refresh token only be valid if the same ip address and client id are provided when a refresh is attempted? What if a user is on their mobile phone, for example, where their ip address can change quite frequently? This defeats the purpose of a long-lasting (weeks/months, not minutes or hours) refresh token.
  4. Is a refresh token of random characters sufficient? Can it be a JWT itself? I wanted to not store the refresh token in plain-text in the database, but in order to find the right token it needs to be stored alongside a unique identifier which is also a part of the payload. I.e. I could make the refresh token a JWT with a jti that is the id of the corresponding database row, and a random payload. This is sent to the user as a normal JWT, but it is hashed using bcrypt before being stored in the database. Then, when the refresh token is being used in the refresh route, I can validate the token provided by the user, grab the jti, find the hashed token in the database, and then compare them using bcrypt as you would with a user’s password.
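For what it’s worth, the scheme in question 4 can be sketched roughly like this. This is only a minimal illustration of the lookup-by-id-then-compare-hash idea: it uses SHA-256 from Python’s standard library as a stand-in for bcrypt, and an in-memory dict as a stand-in for the database table.

```python
import hashlib
import secrets
import uuid

# Stand-in for the database table: token_id -> hashed secret (+ metadata).
token_store = {}

def issue_refresh_token(user_id):
    """Create a refresh token whose id is the database key (the 'jti' idea)."""
    token_id = str(uuid.uuid4())
    secret = secrets.token_urlsafe(32)   # the random part only the client sees
    # Only a hash of the secret is stored; a leaked database row is not a usable token.
    token_store[token_id] = {
        "user_id": user_id,
        "hash": hashlib.sha256(secret.encode()).hexdigest(),
    }
    # The client receives id + secret together (in a real JWT the id would be the jti).
    return f"{token_id}.{secret}"

def redeem_refresh_token(token):
    """Look up the row by id, then compare hashes; rotate (delete) on success."""
    token_id, _, secret = token.partition(".")
    row = token_store.get(token_id)
    if row is None:
        return None
    if hashlib.sha256(secret.encode()).hexdigest() != row["hash"]:
        return None
    del token_store[token_id]            # rotation: a refresh token is single-use
    return row["user_id"]
```

With bcrypt you would use `bcrypt.hashpw`/`bcrypt.checkpw` instead of SHA-256; the lookup-by-id step is needed precisely because bcrypt hashes are salted and cannot be searched for directly.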

Are the elements of a hash table’s backing array linked lists from the start when using separate chaining?

As usual, I did quite a bit of research in different books and academic articles, but I can’t really get a clear picture.

For collision resolution in hash tables, one very popular strategy is separate chaining.

I’m aware that in the separate chaining strategy, elements that collide because they hash to the same index are stored (or end up being stored) in linked lists.

One instructor even said:

Elements of the backing array in separate chaining, are linked lists.

My question is the following: is each element of the backing array a linked list from the moment the hash table is created (when implementing the separate chaining strategy), or does it only become one after the first collision at that index? Having a linked list at every element of the backing array means each of those lists holds entries/buckets of key-value pairs, which I reckon would consume a lot of memory and resources.
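A minimal sketch of how this usually works in practice (my own illustration, not taken from any particular textbook): the backing array is just an array of references initialized to None, and a list node is only allocated when a key is actually inserted at that index.

```python
class Node:
    """One entry in a bucket's chain: a key-value pair plus a next pointer."""
    def __init__(self, key, value, nxt=None):
        self.key, self.value, self.next = key, value, nxt

class ChainedHashTable:
    def __init__(self, capacity=8):
        # The backing array holds no linked lists yet -- every slot is None.
        self.buckets = [None] * capacity

    def put(self, key, value):
        i = hash(key) % len(self.buckets)
        node = self.buckets[i]
        while node is not None:          # update in place if the key exists
            if node.key == key:
                node.value = value
                return
            node = node.next
        # The first insertion at this index creates the chain's head node;
        # later collisions just prepend another node.
        self.buckets[i] = Node(key, value, self.buckets[i])

    def get(self, key):
        node = self.buckets[hash(key) % len(self.buckets)]
        while node is not None:
            if node.key == key:
                return node.value
            node = node.next
        return None
```

So nothing gets “converted” after the first collision: a slot simply goes from None, to a one-node list, to a longer list, and each empty slot costs only a single reference. (Java’s HashMap works the same way, allocating bucket nodes lazily.)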

Thank you.

Feeling guilt-tripped about linked lists

Okay, so I have been doing competitive programming for the past few days in C++. The day before yesterday, I stumbled upon the concept of a linked list. What I do while solving most linked list problems is store all the values of the linked list nodes in a vector, perform the operations the question asks for on the vector, and then refill the nodes with the newly updated values. Most questions are really easy for me this way, as I am pretty comfortable with vectors now. I just wanted to ask this community whether this is a bad approach to using linked lists, and whether I should stop using it altogether. As an example, the following function sorts a linked list in O(N log N) time:

ListNode* Solution::sortList(ListNode* A) {
    vector<int> vect;
    ListNode* temp = A;
    while (A != NULL) {
        vect.push_back(A->val);
        A = A->next;
    }
    A = temp;
    sort(vect.begin(), vect.end());
    for (int i = 0; i < (int)vect.size(); i++) {
        A->val = vect[i];
        A = A->next;
    }
    return temp;
}
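For comparison, the idiomatic alternative is to relink the nodes themselves rather than copy the values out. A rough Python sketch of the standard O(N log N) merge sort on a singly linked list (assuming a simple node class of my own, not the original C++ ListNode):

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def sort_list(head):
    """Merge sort on a singly linked list: split at the middle, sort halves, merge."""
    if head is None or head.next is None:
        return head
    # Find the middle with slow/fast pointers and cut the list in two.
    slow, fast = head, head.next
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
    mid, slow.next = slow.next, None
    left, right = sort_list(head), sort_list(mid)
    # Merge the two sorted halves by relinking nodes (no value copies).
    dummy = tail = Node(0)
    while left and right:
        if left.val <= right.val:
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left or right
    return dummy.next
```

Copying into a vector also sorts in O(N log N) and is fine when only the values matter; it falls apart once nodes carry identity (pointers held elsewhere), or when the exercise is specifically about pointer manipulation.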

How can we correlate linked server queries running to their source instance?

We have multiple versions of SQL Server from 2008R2 (I know) to 2016. There are many linked server queries. I’m trying to build a tool that will tell me the queries executing on a target server that come from linked server queries executing on another source server.


Can I rely on sys.dm_exec_sessions.program_name = ‘Microsoft SQL Server’?

I’ve hesitated to go that route so far and have been relying on looking at the sql text:

dest.text LIKE '%"Tbl[0-9][0-9][0-9][0-9]"%' 


Once I have found the linked server queries running on my target instance, I want to traverse back to the original calling instance. Is there an easy way to do this?

Right now, I’m heading down the path of looking for the target instance in the sql text of queries running on the source host(s):

AND dest.text LIKE '%$  LinkedServerName%' 

(Note the variable expansion – this is a PowerShell snippet, not straight T-SQL.)

Still, I may need some interesting logic to truly say, “this query is what is causing the query to execute on my linked server” – not to mention the performance hit of the leading wildcard search into my SQL text.


Thank you,


Linked Server from On-Premise to Azure SQL Database

I am using: SSMS 18.4, SQL Server 2019 CU3, Windows 10.

I was able to create a linked server successfully from on-premise SQL 2017 to the Azure SQL database without exposing my password.

--Read the password from text file
DECLARE @password VARCHAR(MAX)
SELECT @password = BulkColumn
FROM OPENROWSET(BULK 'C:\Azure SQL Database - Where is my SQL Agent\password.txt', SINGLE_BLOB) AS x

--Drop and create linked server
IF EXISTS (SELECT * FROM sys.servers WHERE name = N'AzureDB_adventureworks')
    EXEC master.dbo.sp_dropserver @server = N'AzureDB_adventureworks', @droplogins = 'droplogins';

EXEC master.dbo.sp_addlinkedserver
    @server = N'AzureDB_adventureworks',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'',
    @catalog = N'adventureworks';

EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname = N'AzureDB_adventureworks',
    @useself = N'False',
    @rmtuser = N'taiob',
    @rmtpassword = @password;
GO

But the password is not getting the correct value. I am getting a login failure.

Some of the error message:

Login failed for user 'taiob'. (.Net SqlClient Data Provider)
Server Name: .\SQL2019
Error Number: 18456
Severity: 14
State: 1
Line Number: 1

If I hardcode the password, it works fine. If I print the variable, I can see the value is correct. It is not a firewall issue, as I can connect directly from the same SSMS that I am running the code from.

Time complexity of insertion in linked list

Apologies if this question feels like a solution verification, but this question was asked in my graduate admission test and there’s a lot riding on this:

What is the worst case time complexity of inserting $n$ elements into an empty linked list, if the linked list needs to be maintained in sorted order?

In my opinion, the answer should be $O(n^2)$ because for every insertion we have to find the right place for the element, and it is possible that every element has to be inserted at the last position, giving a time complexity of $1 + 2 + \dots + (n-1) + n = O(n^2)$.

However, the solution that I have says that we can first sort the elements in $O(n \log n)$ and then insert them one by one in $O(n)$, giving us an overall complexity of $O(n \log n)$.

From the given wording of the question, which solution is more apt? In my opinion, since the question mentions “linked list needs to be maintained in sorted order”, I am inclined to say that we cannot sort the elements beforehand and then insert them in the sorted order.
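The $O(n^2)$ reading can be made concrete with a small counting sketch (my own illustration): when insertions must keep the list sorted as they happen, feeding in ascending values forces each insertion to walk the entire list built so far.

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def sorted_insert(head, val):
    """Insert val into an already-sorted list; return (new head, comparisons made)."""
    steps = 0
    if head is None or val <= head.val:
        return Node(val, head), steps
    cur = head
    while cur.next is not None and cur.next.val < val:
        cur = cur.next
        steps += 1
    cur.next = Node(val, cur.next)
    return head, steps + 1

def build_sorted(values):
    """Insert values one by one, keeping the list sorted; count total comparisons."""
    head, total = None, 0
    for v in values:
        head, steps = sorted_insert(head, v)
        total += steps
    return head, total
```

Inserting $0, 1, \dots, n-1$ in that order costs $0 + 1 + \dots + (n-1) = n(n-1)/2$ comparisons, which is the quadratic worst case; the $O(n \log n)$ answer only works if you are allowed to sort all the elements before touching the list.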

Data structure: linked list

Suppose that, as a computer programmer, you have been assigned a task to develop a program to store data sorted in ascending order. Initially you used a linked list data structure to store the data, but the search operation was time-consuming, so you decided to use a BST (Binary Search Tree); however, retrieval efficiency did not improve. In such a situation, how can you improve the efficiency of the search operation for the BST? Justify your answer with a solid reason.
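The usual answer is that a plain BST degrades into a linked list when already-sorted data is inserted in order, so search stays O(n); a self-balancing BST (AVL or red-black) keeps the height at O(log n). A small sketch illustrating the height difference (my own illustration; it builds a balanced tree by recursive splitting rather than implementing full AVL rotations):

```python
class TreeNode:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def bst_insert(root, key):
    """Plain (unbalanced) BST insertion."""
    if root is None:
        return TreeNode(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    else:
        root.right = bst_insert(root.right, key)
    return root

def balanced_from_sorted(keys):
    """Build a height-balanced BST from sorted keys (what AVL maintains dynamically)."""
    if not keys:
        return None
    mid = len(keys) // 2
    node = TreeNode(keys[mid])
    node.left = balanced_from_sorted(keys[:mid])
    node.right = balanced_from_sorted(keys[mid + 1:])
    return node

def height(root):
    return 0 if root is None else 1 + max(height(root.left), height(root.right))
```

With 100 sorted keys, plain insertion produces a tree of height 100 (every node has only a right child), while the balanced tree has height 7, which is roughly log2(100); search cost is proportional to the height.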

SQL Server Linked Server Best Practices – Step by Step

Hello and thanks for stopping by. I’m an accidental DBA looking for some guidance on creating Linked Servers the correct way. MS provides complete descriptions of the data types and of all the various parameters for sp_addlinkedserver, sp_addlinkedsrvlogin and sp_serveroption, but no guidance on HOW to combine the various options as Best Practices for a given situation.

I have examples from other DBAs who simply used the ‘sa’ password, but my research indicates I should be using bespoke logins tailored to their Linked Server use. The problem is that I’m so far unable to find the right combination and sequence (order of operations) to correctly create all of the parts and pieces, resulting in a Linked Server that allows limited communication between two servers.

Goal: Create a Linked Server between a Source and a Destination server that will allow a job step from the Source server to check certain conditions and, if TRUE, invoke sp_start_job on the Destination server. …and nothing more.

On advice, I’ve created two SQL Auth Logins of the same name/pw on both Source and Destination, both with limited ‘public’ permissions.

I’ve created Linked Servers attempting to map the local login to the remote login (thinking if I got that far, I’d carefully tinker with the permissions of the Destination login to find the permission to allow it to exec sp_start_job).

But so far, my only reward has been a series of failure notices of various types.

There are a TON of online documents explaining what each various proc/param does but I’m having a difficult time finding some sort of over-view explaining how combinations of procs/params lead to different desired outcomes.

I’m hoping for some useful advice, reference to some ‘yet to be discovered’ tutorial or maybe even a Step by Step instruction on how to achieve my goal and develop a little self respect. (so far, this task has done nothing but bruise my ego!)

Thank you for your time.

Fingerprint mismatch only for 32-bit DLL linked statically to FIPS Capable OpenSSL

Appreciate any help on the following.

1) Built the OpenSSL FIPS module and then ‘static binaries’ of FIPS-capable OpenSSL which ‘statically link to the Windows run-time’. Thus, my application binary (FipsApp.exe) does not depend on the OpenSSL DLLs.

2) Consumed these static binaries namely (libeaycompat32.lib, libeayfips32.lib and ssleay32.lib) into myapp.dll using

3) FipsApp.exe calls function foo() inside myapp.dll which executes FIPS_mode_set() which returns (100:error:2D06B06F:lib(45):func(107): reason (111):/FIPS/FIPS.c:232)


1) On executing the 64-bit FipsApp.exe, FIPS mode gets set and works with the 64-bit myapp.dll.

2) But on executing the 32-bit FipsApp.exe, which uses the 32-bit myapp.dll with the same configuration, FIPS_mode_set() fails with reason 111 (fingerprint mismatch).


Since above 32-bit myapp.dll did not work, some additional configuration changes were made.

1) ReBuilt 32-bit myapp.dll with above LFLAGS “/DynamicBase:No /Fixed”. Here default base address gets used for myapp.dll

2) ReBuilt 32-bit myapp.dll with base address of 0xFB00000. (OSSL does same thing for FIPS dlls)

3) Checking out following

But the 32-bit myapp.dll always fails with a fingerprint mismatch.


How do I get 32-bit myapp.dll working in FIPS mode? FIPS_mode_set() returns (100:error:2D06B06F:lib(45):func(107): reason (111):/FIPS/FIPS.c:232)


Delete node from linked list (language w/ GC) – should the deleted item’s ‘next’ be set to null?

Regarding the algorithm for deleting a node from a linked list, in procedural languages with garbage collection: should there be a step in the algorithm that sets the removed node’s next pointer to null?

(Several high-school CS teachers in my area teach YES, while others teach NO.)

What is the right approach?
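Both answers can be defended, which is probably why teachers disagree. With a tracing GC, clearing the pointer is not required for the removed node itself to be collected; but if something still holds a reference to the removed node, a non-null next pointer keeps the entire tail of the list reachable and lets stale code wander back into it. A minimal sketch of the distinction (my own illustration):

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def remove_after(node, clear_next=True):
    """Unlink node.next from the list and return the removed node."""
    removed = node.next
    if removed is None:
        return None
    node.next = removed.next      # the list no longer references 'removed'
    if clear_next:
        removed.next = None       # 'removed' no longer references the list
    return removed
```

If nothing retains the returned node, the GC frees it either way. If the caller keeps it (say, to reuse the node), leaving its next pointer set would pin the rest of the list in memory and allow accidental traversal back into the live list, which is why many teach YES: it costs one assignment and removes both risks.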

/thanks ran