Best practice for handling an external JWT on a server

I’m currently building a mobile app using the Spotify Web API.

The thing is, I need the mobile app to only obtain the authorization code and then send it to the server, since the server will call the Spotify Web API when needed.

So the server is responsible for getting the access token and refresh token, and for refreshing them as needed.

Here’s a diagram of what I’m thinking:

[diagram of the proposed flow: the mobile app obtains the authorization code and sends it to the server, which exchanges it for tokens]

Reading the Spotify documentation, I’ve seen that they also recommend using PKCE when building mobile apps, but I’m not sure how this would work in my case: I would need to generate the code_verifier in the mobile app but also send it to my server.

I guess my question is: does my flow make sense? Is it OK to store all of my users’ access_tokens and refresh_tokens in my DB and use them as needed?
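For what it’s worth, the PKCE half is mechanical: the app generates a random code_verifier, derives the S256 code_challenge for the authorize request, and in the flow above would send the verifier to the server (over TLS) together with the authorization code so the server can perform the token exchange. A minimal sketch (the helper name is mine):

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate an RFC 7636 code_verifier and its S256 code_challenge."""
    # 32 random bytes -> 43 URL-safe chars, within RFC 7636's 43-128 char limit
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    challenge = base64.urlsafe_b64encode(
        hashlib.sha256(verifier.encode()).digest()
    ).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The app puts `challenge` in the /authorize request; whichever party performs
# the token exchange (your server, in this flow) must present `verifier`.
```

PKCE binds the token exchange to whoever holds the verifier, so forwarding it to your server simply moves that role to the server; the scheme still protects against a stolen authorization code.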

Using the hash of the user’s password for encrypting/decrypting an E2E encryption/decryption key: is this good practice?

I am developing a zero-knowledge app, meaning the data is encrypted in the client before it’s transmitted (over SSL) and decrypted after the data is received. If the database is ever compromised, without the user’s decryption keys the attacker knows nothing.

Of course, when the app is hosted on a web server, an attacker could still inject malicious scripts, but that’s always a risk. The idea is that the user data is encrypted by default. As long as no malicious code was added to the client code, the server should not be able to obtain the user data.

The title summarises how I intended to do this, but actually it’s a bit more convoluted:

  • On account registration, a secure random string is generated as (AES) encryption key (could also be private/public key generation here I guess). Let’s call this key K1.
    • All data will be encrypted/decrypted (e.g. using AES) with this key.
  • The plain text password is hashed to create another key. Let’s call this K2 = hash(plain password) (for example using SHA256)
    • K2 is used to encrypt K1 for secure storage of the key in the remote database in the user profile.
    • If the user changes their password, all that needs to be done is re-encrypting K1 with the new K2 = hash(new password), so not all the data has to be decrypted and re-encrypted.
    • K2 is stored in localStorage as long as the user is authenticated: this is used to decrypt K1 at bootstrap.
  • K2 is hashed again to generate the password that is sent to the API: P = hash(K2) (also using SHA256 for example)
    • This is to prevent that the decryption key K2 (and therefore, K1) can be deduced from the password that the API/database receives.
  • In the API, the password P that is received is hashed again before it is compared/stored in the database (this time with a stronger function such as bcrypt).

My question is: does this mechanism make sense or are there any gaping security holes that I missed?

The only downsides that I see are inherent to zero-knowledge, E2E encrypted apps:

  • Forgotten password = all data is lost (cannot be decrypted). This is why the user is recommended to write down the encryption key K1 after creating the account: then the data can always be recovered.
  • Searching, indexing, manipulating, analysing the data is limited because everything has to be done client-side.

What is the best practice for resetting multi-factor when a user requests to recover their password?

I am in the process of developing a web application which requires MFA on every login. (On the first login, before you can do anything, you are forced to set up MFA.) Due to monetary restrictions and development time constraints, the MFA chosen is a simple TOTP solution, but in the future I may include other providers such as Authy.

In the process of developing the password recovery flow, I figured that if someone forgot their password, they most likely lost access to their MFA as well. In your experience, is this assumption correct? What is the “best practice” here? Do I reset the MFA along with the password on password recovery, or do I force the user to authenticate through another method in order to have their MFA reset?
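As an aside on the “simple TOTP solution”: RFC 6238 TOTP really is just a few lines on top of an HMAC, so the implementation cost stays low whatever reset policy you pick. A minimal sketch (the function name is mine; SHA-1 and a 30-second step are the RFC defaults), checked against the RFC’s own test vector:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = struct.pack(">Q", unix_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret, time 59, 8 digits -> "94287082"
print(totp(b"12345678901234567890", 59, digits=8))
```

The policy question stands regardless; this only shows that the verification side is not the hard part.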

Practice question for Security+ that I think is wrong: integrity vs. availability

So there is the following question on a practice test:

Which service are you addressing by installing a RAID array and load balancer?

A. Confidentiality
B. Availability
C. Accountability
D. Integrity

The correct answer according to practice test is “Integrity”.

Can someone explain this, and why the answer would not be Availability? I don’t understand how Availability wouldn’t be the correct choice.

Best practice for modeling data that is both general (default) and entity-specific

I have tried searching for good guidance on this already, but without much luck. Still, apologies in advance if this is duplicated elsewhere.

The Problem

In a nutshell, we have external contractors that work on cases for our clients. We already have tables with contractor and client information in our SQL Server database. Going forward we’d like to store billing info in there too. Billing rates can differ for each client and contractor, but usually each client has a general “default” pay rate that applies to most contractors.

Option A

The initial proposal was to create a new table with the following basic design:

clientContractorPay

  • clientID – foreign key to client table
  • contractorID – foreign key to contractor table
  • basePay – pay rate for this client-contractor combination
  • ... – several more (10+ and likely to grow) columns with supplemental pay rate details
  • A unique index to help optimize lookup and also prevent multiple rows for a given client-contractor combination.

Contractor-specific pay rates would naturally be linked to the relevant contractor (and client). General (default) pay for a client would be stored in a row where contractorID is NULL. This is to avoid having to duplicate the same default pay for all contractors that don’t have specific exceptions.
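To make the NULL-as-default lookup concrete, here is a sketch using SQLite in Python (real code would target SQL Server; the helper name and sample values are mine). It prefers the contractor-specific row and falls back to the client-wide default row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE clientContractorPay (
        clientID     INTEGER NOT NULL,
        contractorID INTEGER,            -- NULL marks the client-wide default row
        basePay      REAL NOT NULL,
        UNIQUE (clientID, contractorID)
    )""")
conn.executemany(
    "INSERT INTO clientContractorPay VALUES (?, ?, ?)",
    [(1, None, 50.0),    # default rate for client 1
     (1, 7, 65.0)])      # exception for contractor 7

def pay_rate(client_id: int, contractor_id: int):
    """Contractor-specific rate if present, otherwise the client default."""
    row = conn.execute("""
        SELECT basePay FROM clientContractorPay
        WHERE clientID = ?
          AND (contractorID = ? OR contractorID IS NULL)
        ORDER BY contractorID IS NULL    -- specific row (0) sorts before default (1)
        LIMIT 1""", (client_id, contractor_id)).fetchone()
    return row[0] if row else None
```

One engine-specific wrinkle worth checking: SQL Server treats NULLs as equal in a unique index, which conveniently enforces at most one default row per client, while SQLite allows multiple NULLs under a UNIQUE constraint.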

Option B

However, one of our senior devs has strong reservations about Option A. Their main argument is that using NULL in the contractorID column to mean “this is the default pay rate row” is unintuitive and/or confusing. In other words, it’s bad to assign meaning to NULL values.

Their counter proposal was to duplicate these new pay rate columns in the client table. The data stored there would indicate the default pay for each client, while contractor-specific exceptions would still live in the new table above.

What To Do?

It seems clear both proposals would work just fine, but I have my own reservations about the second. Mainly, it seems wrong to store the same type of data (client-contractor pay rate details) in multiple places, not to mention the more complex logic needed to read and write it. I also don’t like duplicating these new columns in both tables, since any future pay rate column would then have to be added to both.

However, I can see my colleague’s point about potentially misusing NULL in this case. At the very least, it’s not immediately obvious that rows with a NULL contractorID contain default pay rates.

It’s been far too long since my database programming courses, so I’m not sure what the current best practice for this type of entity relationship is. I’m open to whatever is best long term, and would appreciate any expert guidance, especially links to additional resources.

Thank you in advance!

Estimating P in Amdahl’s Law theoretically and in practice

In parallel computing, Amdahl’s law is mainly used to predict the theoretical maximum speedup of a program when using multiple processors. If we denote the speedup by S, then Amdahl’s law is given by the formula:

S = 1 / ((1 - P) + P/N)

where P is the proportion of a system or program that can be made parallel, and 1 - P is the proportion that remains serial. My question is: how can we compute or estimate P for a given program?

More specifically, my question has two parts:

  • How can we compute P theoretically?
  • How can we compute P in practice?

I know my question may be basic, but I am learning.
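Theoretically, you can profile the single-processor run and take P = (time spent in parallelizable regions) / (total time). In practice, the usual trick is to run on N processors, measure the actual speedup S, and solve Amdahl’s law for P (this is closely related to the Karp–Flatt serial-fraction metric). A sketch, with function names of my own choosing:

```python
def amdahl_speedup(P: float, N: int) -> float:
    """Theoretical speedup S = 1 / ((1 - P) + P/N) for parallel fraction P."""
    return 1.0 / ((1.0 - P) + P / N)

def estimate_P(S: float, N: int) -> float:
    """Invert Amdahl's law: given a measured speedup S on N processors, solve for P."""
    return (1.0 / S - 1.0) / (1.0 / N - 1.0)

# E.g. a program that runs about 3.08x faster on 4 cores is ~90% parallelizable
measured = amdahl_speedup(0.9, 4)   # pretend this 3.077x came from a benchmark
print(round(estimate_P(measured, 4), 3))
```

Note that the measured S folds in real-world overheads (communication, load imbalance), so the estimated P is an effective value rather than a property of the algorithm alone.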

Ref: https://www.techopedia.com/definition/17035/amdahls-law

Why is using prepared statements in PHP considered best practice?

Let me first start by stating that I am by no means a web developer, so please do point out if I’m going wrong somewhere in my story.

I think most people agree that using prepared statements effectively stops injections if executed properly. That said, in order to write prepared statements in PHP, you need to establish a connection to the database in your PHP file. In other words, if the web server is ever compromised, the account used to connect to the database is compromised as well, since its credentials are sitting right there in a PHP file, allowing the attacker to dump your database. If I were designing the application, I would separate the website from the logic via an API server or something similar, to make sure the database account isn’t compromised too.
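The injection-stopping half of the premise is easy to demonstrate. The sketch below uses Python’s sqlite3 rather than PHP/PDO, but the mechanism is the same: a parameterized (prepared) statement binds the input as data, while string concatenation lets the same input rewrite the query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "' OR '1'='1"

# Prepared/parameterized: the placeholder binds the input as a plain value
safe = conn.execute("SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()

# String concatenation: the input becomes part of the SQL and matches every row
unsafe = conn.execute("SELECT name FROM users WHERE name = '%s'" % malicious).fetchall()
```

Here `safe` is empty (no user is literally named `' OR '1'='1`) while `unsafe` returns every row. Note this defends against injection only; the credentials-on-the-web-server concern in the question is a separate threat, addressed by least-privilege database accounts and tiering rather than by prepared statements.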

Why does nobody point out what in my eyes looks like an obvious security flaw in PHP? Or is the chance of this being exploited so small that people aren’t even considering that it might happen?

What is the best practice for changing a secret password with PBKDF2?

I’ve read recommendations about the secret key (or password, as RFC 8018 calls it); one of them is to change the password from time to time.

I would like to know: is there a best practice for this password change?

I found this reference in the RFC with the following information:

changing a password changes the key that protects the associated DPK(s). Therefore, whenever a password is changed, any DPK that is protected by the retiring password shall be recovered (e.g., decrypted) using the MK or the derived keying material that is associated with the retiring password, and then re-protected (e.g., encrypted) using the appropriate MK or the derived keying material that is associated with a new password.

I understood from this text that I have to re-protect the keys. But when, and by whom, does this process have to be done? Could “re-protect” be a batch process, or is there a better option?

In short: how do I carry out this process without reinventing the wheel?
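A sketch of the re-protection step the quoted passage describes, using PBKDF2-HMAC-SHA256 for the MK and an XOR wrap standing in for real authenticated encryption of the DPK (the iteration count and function names are illustrative):

```python
import hashlib
import os

# Illustrative count; pick per current guidance for PBKDF2-HMAC-SHA256
ITERATIONS = 100_000

def derive_mk(password: str, salt: bytes) -> bytes:
    """Master key derived from the password (PBKDF2, per RFC 8018)."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)

def xor_wrap(key: bytes, mk: bytes) -> bytes:
    """Stand-in for authenticated encryption of the DPK; XOR is its own inverse."""
    return bytes(a ^ b for a, b in zip(key, mk))

def change_password(old_pw: str, new_pw: str, old_salt: bytes, wrapped_dpk: bytes):
    """Recover the DPK under the retiring password's MK, re-protect it under the
    new password's MK. The DPK itself never changes, so the bulk data it
    protects does not need to be re-encrypted."""
    dpk = xor_wrap(wrapped_dpk, derive_mk(old_pw, old_salt))
    new_salt = os.urandom(16)
    return new_salt, xor_wrap(dpk, derive_mk(new_pw, new_salt))

# Demo: protect a fresh DPK, then rotate the password
dpk = os.urandom(32)
old_salt = os.urandom(16)
wrapped = xor_wrap(dpk, derive_mk("old password", old_salt))
new_salt, new_wrapped = change_password("old password", "new password", old_salt, wrapped)
```

As for when and by whom: because recovering the DPK requires the retiring password, the re-protection naturally happens online, at the moment the user supplies both the old and new passwords. A batch process only makes sense when the wrapping key is one the server itself holds, not one derived from the user’s password.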

Thank you guys.

Which is better practice: digital password manager or a physical book of passwords?

These days, it’s not uncommon to have dozens and dozens of passwords for various sites and services. If you’re using different passwords for each service it can be basically impossible to hold all the passwords in your memory.

Some people keep a book with their passwords written down, and are occasionally mocked for doing so because if the book is lost or stolen, so are their accounts.

Others keep a digital password manager. These passwords aren’t hashed: a user can log in and see all their saved passwords.

The best solution (assuming we’re only using passwords) is to memorize a unique, strong password for each service, but that is implausible for the typical person given the volume of passwords they need.

Which of the following is the current best practice for a high number of passwords? Consider that the user needs to access these accounts from potentially several computers.

  1. Use a handful of passwords that you can remember and have several services share a password. (For example, three unique passwords over twelve services)

  2. Use a digital storage mechanism for storing passwords that can be accessed on logging in

  3. Keep a physical book of written passwords. Obviously it can’t be stolen digitally, but can be physically stolen or misplaced, and recovering from a lost book is very hard.

I’m assuming the person in question is keeping everyday information important to them (email, bank account password, so on) but not necessarily being specifically targeted by someone with resources. Any of these practices will probably fail against someone staging a coordinated attack.