Are hardware security keys (e.g. ones supporting FIDO2) “able to protect authentication” even on compromised devices?

Correct me if I am wrong, please.

I understand that 2FA (MFA) increases account security in case an attacker obtains a password, which can happen in various ways, e.g. phishing, a database breach, brute force, etc.

However, if the 2FA device is compromised (full system control), and that can be the very same device being logged in from, then 2FA is broken. It is less likely than with a password alone, but conceptually this is true.

Do hardware security keys protect against compromised devices? I have read that the private key cannot be extracted from those devices. I am thinking about protecting my SSH logins with a FIDO2 key. Taking SSH as an example, I would imagine that on a compromised device the SSH handshake and key exchange can be intercepted and the FIDO2 key can be used for malicious purposes.

Additionally: FIDO2 protects against phishing by storing the website it is set up to authenticate with. Do FIDO2 and OpenSSH also implement host key verification, or does it not matter because FIDO2 with OpenSSH already uses asymmetric cryptography and is therefore not vulnerable to MitM attacks?

TLS key exchange for session keys. Why?

I have a question about the key exchange algorithm used in the TLS handshake. I have read that the key exchange algorithm is used by the client and server to exchange session keys. Do the client and server exchange session keys at the end of the handshake? If they arrive mathematically at the same results for the session keys at the end of the process, why would they exchange them?
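For intuition, here is a toy Diffie-Hellman example in TypeScript. The numbers are deliberately tiny and insecure, and real TLS key exchanges (e.g. ECDHE) use much larger parameters plus further key derivation, but it illustrates why only public values travel over the wire while both sides still end up with the same secret:

// Public parameters, known to everyone.
const p = 23n; // prime modulus
const g = 5n;  // generator

// Modular exponentiation: base^exp mod m.
const modPow = (base: bigint, exp: bigint, m: bigint): bigint => {
  let result = 1n;
  base %= m;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % m;
    base = (base * base) % m;
    exp >>= 1n;
  }
  return result;
};

const clientPrivate = 6n;                          // never leaves the client
const serverPrivate = 15n;                         // never leaves the server
const clientPublic = modPow(g, clientPrivate, p);  // 8, sent over the wire
const serverPublic = modPow(g, serverPrivate, p);  // 19, sent over the wire

// Each side combines its own private value with the other's public value.
const clientSecret = modPow(serverPublic, clientPrivate, p);
const serverSecret = modPow(clientPublic, serverPrivate, p);
console.log(clientSecret === serverSecret); // true: both derive 2n without ever sending it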

How to express a type that represents an associative array whose keys determine the type of the value?

I’m fairly new to type systems and theory, so I would appreciate some guidance in a problem that sparked my interest.

I would like to understand what type-system features are required so that a compiler can enforce that a given key returns a value of the same type as the one the key was associated with in the first place.

A practical version of my problem is to declare a Map in TypeScript that allows a developer experience like in the pseudocode below:

const cache = new Map<K, V>()
cache.set('Foo', Error('R'))
cache.set('Bar', 1)
cache.get('Foo')  // Return value typed as Error.
cache.get('Bar')  // Return value typed as number.
cache.get('Qux')  // Compilation error.

What would the type of K and V be?
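For context, here is a minimal sketch of one way TypeScript can express something close to this today, using an interface to fix the key-to-type mapping up front and generic accessors indexed by the key. The names (CacheShape, TypedMap) are illustrative, not a standard API:

// The shape ties each key to its value type.
interface CacheShape {
  Foo: Error;
  Bar: number;
}

// A thin wrapper over Map whose get/set signatures are indexed by the key type.
class TypedMap<S extends Record<string, unknown>> {
  private store = new Map<keyof S, S[keyof S]>();

  set<K extends keyof S>(key: K, value: S[K]): void {
    this.store.set(key, value);
  }

  get<K extends keyof S>(key: K): S[K] | undefined {
    return this.store.get(key) as S[K] | undefined;
  }
}

const cache = new TypedMap<CacheShape>();
cache.set('Foo', Error('R'));
cache.set('Bar', 1);
const e = cache.get('Foo'); // typed as Error | undefined
const n = cache.get('Bar'); // typed as number | undefined
// cache.get('Qux');        // compile-time error: 'Qux' is not a key of CacheShape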

Handling API keys for client-side app with cloud key vault

I would like to hear about the security implications of my desktop app’s current API usage workflow:

  1. Client-side WPF desktop app connects to Azure Key Vault, a cloud vault, by authenticating via a self-signed certificate packaged and distributed with the app’s installer.
  2. Client app retrieves the API key and the key is assigned to a declared runtime object.
  3. Client app uses the key value to make the required GET requests.
  4. Client app closes with Application.Current.Shutdown().

Not well-versed in security myself, but I wondered:

  • Is distributing self-signed certs a risky practice? I.e., others may create a clone app with it.
  • Can others potentially hack into the client during runtime and access the key?
  • Is there potential for man-in-the-middle attacks to intercept keys when they are retrieved from the vault?

Keen to hear expert thoughts about the above and other ideas. I can’t think of another way to make the GET requests directly from the client side.

How to protect certificates and keys in a peer-to-peer application

I’m making a peer-to-peer cross-platform application (in Java & Kotlin), and I want to encrypt conversations between tens of users, concurrently.

However, due to my lack of knowledge of good security practices and protocols, and because this is my first time actually getting into information security, I’m somewhat confused.

My goal is that every peer connection has a unique session that shares as little as possible with the other peer connections, in order to minimize the risk if one connection proves to be vulnerable. The connections will be implemented using TLSv1.3 or TLSv1.2 (I do not intend to support lower protocol versions due to the risks involved in using them).

However, I’ve done some rudimentary research on my own, and I cannot wrap my head around this question: is having a keystore and a truststore in the (classloader) directory of my application a security vulnerability? Could it ever be one?

I am aware that the keystore stores the server’s private and public keys, and the truststore stores the server’s public key, which the client verifies and uses when contacting the server. How can I protect my keystore’s and truststore’s passwords when the stores must be in the application’s directory? Do they even need to be protected?

What encryption algorithm should my keystore use? I’m aiming for really strong encryption, future-proofing as much as possible while keeping as much backwards compatibility as I can without reducing the application’s security.

Is there an issue with the certificates being self-signed considering I’m solely using them between peers of the same application?

Since I’m using Java, do SSLSockets/SSLServerSockets create a "brand new session" for every new connection? That is, do they reuse the private or public key? Are private keys generated when performing a handshake with a client?

Thank you in advance for taking the time; privacy is a really big focus of the application itself.

Finding candidate keys – what are the steps?

I have the following functional dependencies that I figured out:

DM -> RA
RDT -> AM
AD -> RM

Using a piece of software to calculate the candidate keys, I got this:

{R, D, T}
{A, D, T}
{M, D, T}

But I don’t know HOW I should do this manually, without using the software, to figure this out. What are the steps to solving this? First I thought I should compute something like this to figure out the candidate keys:

DM+ = DMRA
RDT+ = RDTAM
AD+ = ADRM

But from what I understand, only RDT+ yields all the attributes required for it to be a candidate key? I am so confused by this. How should I reason to pick the candidate keys out of these functional dependencies?
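For reference, here is a minimal sketch of the attribute-closure computation being used above, written in TypeScript; the FDs are the ones from the question, and the helper names are illustrative. It repeatedly adds the right-hand side of any FD whose left-hand side is already contained in the closure, until nothing changes:

type FD = { lhs: string[]; rhs: string[] };

const fds: FD[] = [
  { lhs: ['D', 'M'], rhs: ['R', 'A'] },
  { lhs: ['R', 'D', 'T'], rhs: ['A', 'M'] },
  { lhs: ['A', 'D'], rhs: ['R', 'M'] },
];

// Compute X+ : the set of attributes determined by attrs under deps.
function closure(attrs: string[], deps: FD[]): Set<string> {
  const result = new Set(attrs);
  let changed = true;
  while (changed) {
    changed = false;
    for (const { lhs, rhs } of deps) {
      if (lhs.every(a => result.has(a)) && rhs.some(a => !result.has(a))) {
        rhs.forEach(a => result.add(a));
        changed = true;
      }
    }
  }
  return result;
}

console.log([...closure(['R', 'D', 'T'], fds)]); // R, D, T, A, M -> covers all attributes
console.log([...closure(['A', 'D', 'T'], fds)]); // A, D, T, R, M -> covers all attributes
console.log([...closure(['D', 'M'], fds)]);      // D, M, R, A    -> missing T, so not a key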

Examples of SSH key exchange algorithms requiring encryption-capable host keys

In the SSH spec, Section 7.1, key exchange algorithms are distinguished based on whether they require an "encryption-capable" or a "signature-capable" host key algorithm.

If I understood their details correctly, the well-known DH-based key exchange algorithms such as curve25519-sha256, diffie-hellman-group14-sha256 and ecdh-sha2-nistp256 all require a signature-capable host key algorithm. What are examples of SSH key exchange algorithms that instead require an encryption-capable host key algorithm?

What usually happens to the symmetric (session) key after decrypting an email? Can the key be recovered after changing private keys?

I’ve been preparing for a CISSP exam and was reading about applied cryptography in regard to email.

It’s my understanding that the popular schemes (PGP, S/MIME) use a combination of asymmetric and symmetric cryptography. If I’m reading things correctly, in S/MIME the message is encrypted using a sender-generated symmetric key. In turn, the symmetric key is encrypted using the receiver’s public key.

[Figure: Encrypted Email]
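To make that flow concrete, here is a minimal sketch of such a hybrid scheme using Node’s crypto module in TypeScript. The key sizes, variable names and message are illustrative, and real S/MIME wraps all of this in CMS structures and certificates rather than raw calls like these:

import {
  randomBytes, createCipheriv, createDecipheriv,
  publicEncrypt, privateDecrypt, generateKeyPairSync,
} from 'crypto';

// Recipient's key pair (in S/MIME this would come from their certificate).
const { publicKey, privateKey } = generateKeyPairSync('rsa', { modulusLength: 2048 });

// Sender: encrypt the message with a fresh symmetric session key, then wrap that key.
const sessionKey = randomBytes(32);
const iv = randomBytes(12);
const cipher = createCipheriv('aes-256-gcm', sessionKey, iv);
const ciphertext = Buffer.concat([cipher.update('Hello', 'utf8'), cipher.final()]);
const tag = cipher.getAuthTag();
const wrappedKey = publicEncrypt(publicKey, sessionKey); // only this travels encrypted with RSA

// Receiver: unwrap the session key with the private key, then decrypt the message.
const recoveredKey = privateDecrypt(privateKey, wrappedKey);
const decipher = createDecipheriv('aes-256-gcm', recoveredKey, iv);
decipher.setAuthTag(tag);
const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
console.log(plaintext); // "Hello"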

If the receiver changed their private key, they would no longer be able to decrypt the message. However, I was wondering whether it is possible to recover the symmetric key from when the email was previously opened?

My guess would be that the email client does not intentionally store the key since that would present a security risk. Just wanted to see if that actually occurs or if there’s something I’m missing.