Assuming the encryption algorithm in use was designed to support compression without any information leakage, would there be any reason not to use some custom compression algorithm to add obscurity on top of the security?
Instead of a compression algorithm, what if it were just a simple custom algorithm that mixed the bits or bytes of the input? Would that impact the security at all?
(This is assuming that the implementation of said algorithm is secure against side-channel attacks.)
Why is it a bad idea to encrypt a salted password hash with RSA (or another public-key algorithm) before storing it?
I have read that instead of using a pepper, it is better to encrypt salted password hashes before storing them in the database. This seems especially relevant in Java, where the available libraries support salted hashing but not peppering, and I am not going to implement my own crypto in any way. I have a few questions about this:
- Is it true? Will it add security if the DB server is on another physical machine and the encryption keys are stored on the app server’s filesystem?
- If so, is it OK to use RSA to encrypt the hash?
- To check a password in this scheme, is it better to read the encrypted hash from the DB, decrypt it, and compare it to the salted hash of the password the user entered, or to encrypt the user’s salted hash and compare it with the encrypted value in the database? In the latter case, the encrypted hash is never decrypted, so is it effectively the same as applying another hash?
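The decrypt-then-compare flow described above can be sketched as follows (Python; this assumes the pyca/cryptography package, PBKDF2 stands in for the salted hash, and the key pair is generated inline for illustration only):

```python
# Sketch of "encrypt the salted hash before storing it".
# Assumptions: pyca/cryptography is available; PBKDF2 stands in for
# the salted password hash; in practice the private key would live
# on the app server, not be generated inline.
import hashlib, os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def salted_hash(password: bytes, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

# Store: hash the password with its salt, then encrypt the hash.
salt = os.urandom(16)
stored = priv.public_key().encrypt(salted_hash(b"hunter2", salt), oaep)

# Verify: decrypt the stored value and compare hashes.
def verify(attempt: bytes) -> bool:
    return priv.decrypt(stored, oaep) == salted_hash(attempt, salt)

print(verify(b"hunter2"))   # matching password
print(verify(b"wrong"))     # non-matching password
```

Note that the encrypt-then-compare variant would not work with a padding scheme like OAEP, because OAEP is randomized: encrypting the same hash twice yields different ciphertexts.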
I find certain APIs (that provide sensitive information) using algorithms like X25519 on top of the already-encrypted SSL/TLS connection. This compels us to use libraries like Tink (when we have to use them in Android apps) that provide such algorithms, even though the classes they provide for these algorithms are explicitly marked as not for production use.
Is there any reason this extra layer could have use cases when transmitting sensitive information?
I’m thinking of implementing text encryption and decryption in my program using AES, with the key shared via multiparty computation (the point being that the key should never be provided in full). However, all the public libraries I have found only offer this assuming that one party holds the whole key and another the plaintext. Has anyone dealt with a similar problem?
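For contrast, here is a toy n-of-n XOR split of a 256-bit key (stdlib only, all names illustrative). It shows why the libraries mentioned are not enough: the shares must still be recombined in one place before use, which is exactly what real MPC is meant to avoid:

```python
# Toy n-of-n XOR secret sharing of a 256-bit key (stdlib only).
# No single share reveals anything about the key, but decryption
# still requires recombining the shares in one place -- the exact
# limitation the question is trying to get around.
import secrets

def split_key(key: bytes, n: int) -> list[bytes]:
    # n-1 random shares; the last share is key XOR all of them.
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

key = secrets.token_bytes(32)
shares = split_key(key, 3)
assert combine(shares) == key
```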
I am developing a system for storing medical records. A person can upload images or files. Since these are medical records, they need to be stored in encrypted form. I also want the files or images to be viewable only by authorised users. Can you please suggest which standards I should use? (The API is in Node.js, using S3 for storage.)
I need to distribute a shared 256-bit key to hundreds or thousands of nodes (I have the public key of each node).
There’s no networking involved; this will all be done by loading a single file onto each node. That file is generated by a “master”.
Some nodes use 2048-bit RSA keys; others use a P-521 elliptic-curve key.
The idea is to create a line in the distributed file for each node, encrypting the shared key for that node.
If the node uses EC, the node’s public EC key and the master’s private EC key are used to generate a symmetric key, which is used to encrypt the shared key. The encrypted shared key and a signature generated by the master are stored on a line in the file. The node then loops through each line in the file, uses the master’s public EC key and its own private EC key to generate the same symmetric key, decrypts the data, and checks the signature. If the signature is correct, that is the shared key.
If the node uses RSA, the shared key is encrypted with the node’s public RSA key, and that ciphertext plus a signature generated by the master are stored on a line in the file. The node then loops through each line in the file, uses its private RSA key to decrypt the data, and checks the signature. If the signature is correct, that is the shared key.
My worry is this: does knowing that a single piece of data has been encrypted with 1000 different keys give an attacker a significant advantage in deriving a private key?
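The RSA branch of the scheme above can be sketched like this (Python, assuming the pyca/cryptography package; the nodes’ key pairs are generated inline as stand-ins, and the master’s signature step is omitted for brevity):

```python
# Sketch of the RSA branch: the master wraps the same 256-bit shared
# key once per node with that node's public key.
# Assumptions: pyca/cryptography is available; the signature on each
# line is omitted here to keep the sketch short.
import os, base64
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

shared_key = os.urandom(32)
nodes = [rsa.generate_private_key(public_exponent=65537, key_size=2048)
         for _ in range(3)]  # stand-ins for the nodes' key pairs

# One line per node in the distributed file.
lines = [base64.b64encode(n.public_key().encrypt(shared_key, oaep)).decode()
         for n in nodes]

# A node tries each line with its own private key until one decrypts.
def recover(node_priv) -> bytes:
    for line in lines:
        try:
            return node_priv.decrypt(base64.b64decode(line), oaep)
        except ValueError:
            continue
    raise KeyError("no line for this node")

assert recover(nodes[1]) == shared_key
```

One relevant detail: OAEP padding is randomized, so the many ciphertexts of the same 256-bit key do not share exploitable structure the way textbook (unpadded) RSA ciphertexts would.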
If you first encrypt a password using a secure key and then hash the result, with both algorithms being fast, say sha_256(salt + aes_256(password, secure_key)), would that make the hash expensive to brute-force without making it expensive to generate?
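A stdlib-only sketch of this keyed-then-hashed construction (HMAC is swapped in for AES as the keyed step, since the stdlib has no AES; the structure of a fast keyed function followed by a fast hash is the same):

```python
# Stdlib-only sketch of the keyed-then-hashed construction.
# Assumption: HMAC replaces AES as the keyed step, because Python's
# stdlib has no AES; the shape matches the question's
# sha_256(salt + aes_256(password, secure_key)).
import hashlib, hmac, os

secure_key = os.urandom(32)
salt = os.urandom(16)

def keyed_hash(password: bytes) -> bytes:
    keyed = hmac.new(secure_key, password, hashlib.sha256).digest()
    return hashlib.sha256(salt + keyed).digest()

stored = keyed_hash(b"correct horse")

# Without secure_key, an attacker cannot even begin a dictionary
# attack, since candidate passwords cannot be run through the
# keyed step -- but computing one hash stays cheap.
assert keyed_hash(b"correct horse") == stored
assert keyed_hash(b"wrong guess") != stored
```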
Kind of new to this area, so correct me if I am wrong.
Based on my reading, a self-encrypting drive will encrypt and decrypt all data on your disk, and this process is totally transparent to the user.
To make use of this feature, the user needs to set an ATA (HDD) password on the drive; otherwise the self-encrypting feature is 100% useless. If a malicious user takes your hard drive and plugs it into another machine, the drive will still be more than happy to decrypt the data for them. The only way to stop this is to set an ATA password and lock the drive down so that it will not respond to any command, including read/write, until it is unlocked with the password.
However, to leverage this security feature, the BIOS must support ATA passwords. A software-based solution won’t work, since you can’t even boot the system until the drive is unlocked. But the sad truth is that most motherboards don’t support ATA passwords, which renders this feature completely useless.
TL;DR: Is there a way to make use of this feature without BIOS ATA support? (My gaming motherboard did not come with a TPM header either.)
I’m working on a project to encrypt many files with a single password.
The steps I will employ to encrypt the files are:
- user will execute a command similar to
tool --encrypt --recurse directories/to/recurse and-other-files.txt
- the user will be prompted for a password
- two 64-byte cryptographically random salts and a 16-byte cryptographically random IV will be generated
- no two files will ever use the same salts or IV
- each salt will be combined with the password to derive two separate argon2id keys
- one key will be 32 bytes long and is used for the AES-256 cipher block
- the other will be 64 bytes long and will be used as the key for a SHA-512 HMAC
- the resulting encrypted file will be written as
I believe this would result in a reasonably secure set of encrypted files. My main concern, though, is that because of the way users will use this tool, there is a good chance they will accidentally encrypt small, easily guessed files.
And since CTR mode doesn’t require padding, anyone with access to an encrypted file will know the length of the plaintext file. CTR mode is considered secure for files, provided the IV is unique for each encryption run and the file is authenticated.
Is there a chance that the cipher key, HMAC key, or password could be derived through a known plaintext attack from enough small guessable files? Are there any other glaring flaws in my methodology that could leak data?
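The per-file derivation and encrypt-then-MAC steps described above can be sketched as follows (Python, assuming the pyca/cryptography package for AES; scrypt stands in for argon2id, which would need a third-party package):

```python
# Sketch of the per-file derivation and encrypt-then-MAC steps.
# Assumptions: pyca/cryptography provides AES-CTR; hashlib.scrypt
# stands in for argon2id; all values are generated inline for
# illustration.
import hashlib, hmac, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

password = b"correct horse battery staple"
enc_salt, mac_salt, iv = os.urandom(64), os.urandom(64), os.urandom(16)

def kdf(salt: bytes, length: int) -> bytes:
    # One key per salt, both derived from the same password.
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1,
                          dklen=length)

enc_key = kdf(enc_salt, 32)   # AES-256 key
mac_key = kdf(mac_salt, 64)   # SHA-512 HMAC key

# Encrypt with AES-256-CTR, then MAC the IV plus ciphertext.
plaintext = b"a small, easily guessed file"
enc = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).encryptor()
ciphertext = enc.update(plaintext) + enc.finalize()
tag = hmac.new(mac_key, iv + ciphertext, hashlib.sha512).digest()

# Decrypt: verify the MAC first, then decrypt.
assert hmac.compare_digest(
    tag, hmac.new(mac_key, iv + ciphertext, hashlib.sha512).digest())
dec = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).decryptor()
assert dec.update(ciphertext) + dec.finalize() == plaintext
```

Note how the ciphertext is exactly as long as the plaintext, which is the length leak the question describes.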