Does Two Factor Authentication (2FA) prevent Phishing and/or Man-in-the-Middle (MITM) attacks?

While 2FA is clearly an improvement over a single factor, is there anything that prevents an adversary from presenting a convincing sign-in page which captures both factors?

I realise that technically a MITM attack is different from a phishing attack, though at a high level they're very similar: the user enters their credentials into an attacker-controlled page, and the attacker then relays those credentials to the real page.

Are hardware security keys (e.g. ones supporting FIDO2) "able to protect authentication" even in the case of compromised devices?

Correct me if I am wrong, please.

I understand that 2FA (MFA) increases account security in case an attacker obtains a password, which might happen in various ways, e.g. phishing, a database breach, or brute force.

However, if the 2FA device is compromised (full system control), and this can be the very same device the user signs in from, then 2FA is broken. That is less likely than with a password alone, but conceptually it is true.

Do hardware security keys protect against compromised devices? I read that the private key cannot be extracted from those devices. I am thinking about protecting my SSH logins with a FIDO2 key. Taking SSH as an example, I would imagine that on a compromised device the SSH handshake and key exchange could be intercepted and the FIDO2 key could be used for malicious things.

Additionally: FIDO2 protects against phishing by storing the website (origin) it is set up to authenticate with. Do FIDO2 and OpenSSH also implement host key verification, or does it not matter because FIDO2 with OpenSSH already uses asymmetric cryptography and is thus not vulnerable to MITM attacks?
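For reference, enrolling a FIDO2 token for SSH looks like the following (OpenSSH 8.2+). This is a transcript rather than a runnable script, because each command requires the physical key to be plugged in and touched; file names and the remote host are illustrative.

```shell
# Generate an SSH key backed by the FIDO2 token; the key pair is created
# on (or bound to) the hardware, and a touch is required at the prompt.
ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk

# The generated "private key" file is only a handle to the credential on
# the token; every signature still requires the token to be present and
# touched, so host malware cannot sign without user presence.
ssh-copy-id -i ~/.ssh/id_ed25519_sk.pub user@server
ssh -i ~/.ssh/id_ed25519_sk user@server
```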

Authentication in Next.js application (SSR SPA with long sessions)

We’re currently developing a Next.js application (server side rendering) and are looking for secure ways to keep the users logged in for longer periods of time.

AFAIK this can be done using either silent authentication or refresh tokens. General note: when a user is not yet logged in, we can redirect them to a login page. Once they enter their credentials, we use the Authorisation Code Grant (to my knowledge PKCE is not needed in this case, as these steps all happen server side), which redirects back and responds with an authorisation code. We can then exchange this authorisation code for an access token (and refresh token) using a client secret (all server side).

Refresh Tokens

Since no client-side storage (local storage, cookies, etc.) is safe against XSS attacks for storing any kind of token (especially refresh tokens), we are wondering whether it is generally safe to store a refresh token (and access token) in an HTTP-only cookie, considering that…

  • … the token values are encrypted, e.g. AES, with a secret that is not exposed to the client side.
  • … the refresh tokens are rotating, so when you retrieve a new access token with your refresh token, you also receive a new refresh token. The old refresh token is invalidated and if used again, all refresh tokens are invalidated.
  • … the refresh token automatically expires after a couple of days, e.g. 7 days.
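As a concrete illustration of the first bullet, here is a minimal sketch of encrypting a token with AES-256-GCM before placing it in an HTTP-only cookie. It assumes Node's built-in crypto; `COOKIE_SECRET`, the cookie name, and the cookie attributes are illustrative, not prescribed.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Hypothetical 32-byte server-side secret; in practice load it from an
// environment variable, never generate it per process and never expose it.
const COOKIE_SECRET = randomBytes(32);

function encryptToken(token: string): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", COOKIE_SECRET, iv);
  const ciphertext = Buffer.concat([cipher.update(token, "utf8"), cipher.final()]);
  // Store IV and auth tag alongside the ciphertext so it can be decrypted later.
  return [iv, cipher.getAuthTag(), ciphertext]
    .map((b) => b.toString("base64url"))
    .join(".");
}

function decryptToken(value: string): string {
  const [iv, tag, ciphertext] = value.split(".").map((p) => Buffer.from(p, "base64url"));
  const decipher = createDecipheriv("aes-256-gcm", COOKIE_SECRET, iv);
  decipher.setAuthTag(tag); // GCM also authenticates: tampered cookies fail to decrypt
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}

// Illustrative attributes for a 7-day refresh-token cookie:
function refreshCookie(token: string): string {
  return `refresh_token=${encryptToken(token)}; HttpOnly; Secure; SameSite=Lax; Path=/; Max-Age=${7 * 24 * 3600}`;
}
```

Note that the GCM auth tag means the server also detects any client-side tampering with the cookie value, not just reading.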

Silent Authentication

A possible alternative could be silent authentication via an auth request on the server side (prompt=none). The auth session for the silent authentication would also be stored in an HTTP-only cookie.

In both scenarios, it's probably necessary to make sure that the client never sees any of these tokens. (We could potentially use silent authentication on the client side via an iframe, since the domain is the same, just different subdomains; but the client would then receive new access tokens that have to be stored in memory, which is a potential XSS vulnerability.)

Since it's a server-side rendered SPA, the client side still needs to be able to get new data from the API server using the access token. For this, we were thinking of using Next.js API routes as a proxy: if the client wants new data, it sends an AJAX request to the respective Next.js API route. The controller for this route can read and decrypt the HTTP-only cookie and can therefore forward the request to the API server with a valid access token in the HTTP header. Just before the short-lived access token expires, the controller would first send a request to the auth server to retrieve a new access (and refresh) token and then continue sending the request with the new access token to the API server.
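The expiry handling inside such a proxy route can be sketched as pure logic. The names (`needsRefresh`, the 30-second skew) are illustrative; the real route would combine this with cookie decryption and the token-refresh call.

```typescript
// Session data the proxy route would recover from the HTTP-only cookie.
interface Session {
  accessToken: string;
  expiresAt: number; // epoch milliseconds
}

// Refresh slightly *before* expiry so the forwarded request never races
// the token deadline; 30 s is an arbitrary illustrative skew.
function needsRefresh(session: Session, now: number, skewMs = 30_000): boolean {
  return session.expiresAt - now <= skewMs;
}

// Headers the proxy attaches when forwarding to the API server.
function upstreamHeaders(session: Session): Record<string, string> {
  return { Authorization: `Bearer ${session.accessToken}` };
}
```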

While this sounds good and feasible in theory, we are wondering about the following points:

  1. Is it generally safe to save a (rotating) refresh token and access token in an HTTP-only cookie? Does the cookie value need to be encrypted, or is that unnecessary? Does a rotating refresh token offer any additional security in this case?
  2. Is the "Next.js API route as a proxy" method a secure way to make sure that the client side can get new data from the API server? If, for example, otherdomain.com tried to send a request to the ("unprotected") Next.js API route, it would not receive any data, since it is a different domain and the HTTP-only cookies are therefore not accessible; is that correct? Is CSRF possible for these Next.js API routes?
  3. Is it safe if the HTTP-only cookie for the refresh token is shared across all subdomains rather than tied to one specific subdomain (application)? This would allow us to access the cookie from, e.g., the actual website or other subdomains.
  4. Is the refresh-token approach better/safer than the silent-authentication approach?

Follow-up question: Can the refresh-token approach also be used to authenticate users in a browser extension? So:

  1. The user logs in (Authorisation Code Grant with PKCE): the login prompt/page is shown in a popup (or new tab), and the authorisation code is communicated back through postMessage.
  2. The background script receives the authorisation code and exchanges it for an access token and rotating refresh token (which is probably necessary in this flow?) using the code and a code verifier. These tokens can then be saved in Chrome storage. We could potentially also encrypt the tokens, but I'm not sure that offers any additional protection, considering that the background script is not the same as a server.
  3. If the Chrome extension wants to receive data from the API server, it sends a message to the background script, which then sends the API request using the tokens saved in Chrome storage.
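Step 2's code exchange relies on PKCE. Generating the `code_verifier`/`code_challenge` pair per RFC 7636 (S256 method) is a few lines with Node's crypto; the same logic applies with the Web Crypto API inside an extension.

```typescript
import { createHash, randomBytes } from "node:crypto";

// RFC 7636 PKCE: the client generates a random code_verifier, sends its
// SHA-256 challenge with the authorisation request, and presents the raw
// verifier when exchanging the authorisation code for tokens.
function makeVerifier(): string {
  // 32 random bytes -> 43-character URL-safe string (within RFC limits of 43–128 chars)
  return randomBytes(32).toString("base64url");
}

function challengeFor(verifier: string): string {
  // S256 method: base64url(SHA-256(verifier)), no padding
  return createHash("sha256").update(verifier).digest("base64url");
}
```

Because the verifier never appears in the redirect, an attacker who intercepts the authorisation code still cannot redeem it.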

B2B authentication best practices

I'm in the process of developing a B2B (business-to-business) application. I've implemented JWT auth, and it is working as expected. Right now the authentication functions as if it were a B2C (business-to-consumer) app.

I’m trying to determine the best practices for B2B authentication.

Is having one authentication account bad practice in a B2B app? For example, every employee at Company A would use the same set of login credentials.
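One common alternative to shared credentials is per-employee identities that share a tenant claim, so access can be revoked per person while authorisation still keys off the company. A minimal hand-rolled HS256 JWT sketch follows; the claim names (`sub`, `org`) and secret are illustrative, and a real application would use a vetted JWT library rather than this.

```typescript
import { createHmac } from "node:crypto";

const b64url = (s: string) => Buffer.from(s).toString("base64url");

function signJwt(payload: object, secret: string): string {
  const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const body = b64url(JSON.stringify(payload));
  const sig = createHmac("sha256", secret).update(`${header}.${body}`).digest("base64url");
  return `${header}.${body}.${sig}`;
}

function verifyJwt(token: string, secret: string): Record<string, unknown> | null {
  const [header, body, sig] = token.split(".");
  const expected = createHmac("sha256", secret).update(`${header}.${body}`).digest("base64url");
  if (sig !== expected) return null; // (use a constant-time compare in production)
  return JSON.parse(Buffer.from(body, "base64url").toString("utf8"));
}

// One token per employee, all sharing the tenant claim:
const token = signJwt({ sub: "alice@company-a.example", org: "company-a" }, "demo-secret");
```

With this shape, Alice leaving Company A means disabling one `sub`, not rotating a password shared by every employee.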

Replacing the hashing function with asymmetric cryptography for password authentication

I would like to know if the following ideas are feasible:

A hash function is a one-way function.

Generating a public key from a private key is irreversible (asymmetric cryptography).

User password entry -> SHA (optionally salting before hashing) -> hash value (used as ECC private key) -> generate public key from private key -> save public key (discard private key)

Password authentication:

User password entry -> SHA (optionally salting before hashing) -> hash value (used as ECC private key) -> generate public key from private key -> compare the public key with the saved one.
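This pipeline can be sketched in a few lines. Two deliberate substitutions are made here: scrypt (memory-hard, which directly helps against the rainbow-table/brute-force concern) in place of plain SHA, and Ed25519 in place of a generic ECC curve, because any 32-byte value is a valid Ed25519 seed, so no reduction modulo the group order is needed.

```typescript
import { createPrivateKey, createPublicKey, scryptSync } from "node:crypto";

// PKCS#8 DER header for an Ed25519 private key; appending the 32-byte seed
// yields a complete key document that node:crypto can import.
const PKCS8_ED25519_PREFIX = Buffer.from("302e020100300506032b657004220420", "hex");

function publicKeyFromPassword(password: string, salt: string): string {
  // password -> 32-byte "private key" via a memory-hard KDF
  const seed = scryptSync(password, salt, 32);
  const der = Buffer.concat([PKCS8_ED25519_PREFIX, seed]);
  const priv = createPrivateKey({ key: der, format: "der", type: "pkcs8" });
  // Derive and keep only the public key; the private half is recomputed
  // from the password at each login and never stored.
  return createPublicKey(priv).export({ format: "pem", type: "spki" }).toString();
}
```

Authentication is then: re-derive the public key from the entered password and compare it with the stored one (or better, have the client sign a server challenge with the derived private key).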

Based on that:

a. The user or others can encrypt selected information (using the public key) that only the user can decrypt.

b. The system administrator can generate a public/private key pair; then both the user and the administrator can encrypt/decrypt selected information (using a Diffie–Hellman key exchange).

I am aware that brute force (exhaustive search) can crack any password given enough time; that should be a separate topic.

I am trying to prevent user-information leaks and rainbow-table attacks even if the system is hacked.

I have searched and read the following information:

https://crypto.stackexchange.com/questions/9813/generate-elliptic-curve-private-key-from-user-passphrase

Handling user login using asymmetric cryptography

Asymmetric Cryptography as Hashing Function

OpenVPN authentication error

I use Synology's MR2200AC as my home router and Synology's DS918+ as my NAS, hosting some virtual machines. I'm trying to connect to the virtual machines from my laptop via an OpenVPN server provided by the VPN Plus Server app on the MR2200AC.

However, when I try to make an OpenVPN connection to the OpenVPN server, it usually results in an authentication error. The connection does succeed once in a while, so the username and password are correct. The error occurs whether the laptop is inside or outside my home LAN.

The current environment is as follows.

The laptop is outside my home:
Laptop–Smartphone(tethering)–Internet–MR2200AC–virtual machines(on Synology DS918+)

The laptop is inside my home:
Laptop–MR2200AC–virtual machines(on Synology DS918+)

Laptop: macOS 10.14.6, using OpenVPN Connect v3.2.1 (https://openvpn.net/download-open-vpn/)
Smartphone: iOS 13.3
MR2200AC: SRM 1.2.4-8081 (internet connection is IPoE (MAP-E))
DS918+: DSM 6.2.3-25426
Virtual machines: Ubuntu Server 20.04 on the DS918+'s Virtual Machine Manager app

The OpenVPN connection between the OpenVPN server and the virtual machines is not a problem: the virtual machines always pass authentication and keep their OpenVPN connections to the server.

I can make a VPN connection to the MR2200AC from outside my home if I use the WebVPN function of the VPN Plus Server app (not an OpenVPN connection). I have therefore tried exporting the configuration file from the OpenVPN tab of the VPN Plus Server app while the laptop was outside my home and using that file.
I have also tried changing the protocol from UDP to TCP, and launching the OpenVPN app on the laptop with root privileges.

But those only work once in a while, not always.

I thought the IPoE (MAP-E) internet connection mentioned above might be causing the problem, but the DNS configuration of the MR2200AC works correctly.

I can't figure out what is wrong.

I'd like to build a reliable VPN connection between the laptop and the virtual machines, so that, for example, I can access a MySQL server on a virtual machine whether the laptop is inside or outside my home LAN. For that use case, the WebVPN mentioned above is useless.

Please help me.

Here is an example log.

7/31/2020, 1:04:33 PM OpenVPN core 3.git::3e56f9a6 mac x86_64 64-bit built on Jul 3 2020 15:36:10
7/31/2020, 1:04:33 PM Frame=512/2048/512 mssfix-ctrl=1250
7/31/2020, 1:04:33 PM UNUSED OPTIONS 1 [tls-client] 3 [pull] 5 [script-security] [2]
7/31/2020, 1:04:33 PM EVENT: RESOLVE
7/31/2020, 1:04:33 PM Contacting ************* via TCPv4
7/31/2020, 1:04:33 PM EVENT: WAIT
7/31/2020, 1:04:33 PM UnixCommandAgent: transmitting bypass route to /var/run/agent_ovpnconnect.sock { "host" : "**********", "ipv6" : false, "pid" : 35641 }
7/31/2020, 1:04:33 PM Connecting to [***************]:**** (***********) via TCPv4
7/31/2020, 1:04:33 PM EVENT: CONNECTING
7/31/2020, 1:04:33 PM Tunnel Options:V4,dev-type tun,link-mtu 1603,tun-mtu 1500,proto TCPv4_CLIENT,keydir 1,cipher AES-256-CBC,auth SHA512,keysize 256,tls-auth,key-method 2,tls-client
7/31/2020, 1:04:33 PM Creds: Username/Password
7/31/2020, 1:04:33 PM Peer Info: IV_VER=3.git::3e56f9a6 IV_PLAT=mac IV_NCP=2 IV_TCPNL=1 IV_PROTO=2 IV_GUI_VER=OCmacOS_3.2.1-1484 IV_SSO=openers
7/31/2020, 1:04:34 PM VERIFY OK: depth=2, /O=Digital Signature Trust Co./CN=DST Root CA X3
7/31/2020, 1:04:34 PM VERIFY OK: depth=1, /C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
7/31/2020, 1:04:34 PM VERIFY OK: depth=0, /CN=**************
7/31/2020, 1:04:40 PM SSL Handshake: CN=*****************, TLSv1.2, cipher TLSv1.2 DHE-RSA-AES256-GCM-SHA384, 2048 bit RSA
7/31/2020, 1:04:40 PM Session is ACTIVE
7/31/2020, 1:04:40 PM EVENT: GET_CONFIG
7/31/2020, 1:04:40 PM Sending PUSH_REQUEST to server...
7/31/2020, 1:04:40 PM AUTH_FAILED
7/31/2020, 1:04:40 PM EVENT: AUTH_FAILED
7/31/2020, 1:04:40 PM EVENT: DISCONNECTED
7/31/2020, 1:04:44 PM Raw stats on disconnect: BYTES_IN : 4993 BYTES_OUT : 2163 PACKETS_IN : 10 PACKETS_OUT : 10 AUTH_FAILED : 1
7/31/2020, 1:04:44 PM Performance stats on disconnect: CPU usage (microseconds): 9352624 Network bytes per CPU second: 765 Tunnel bytes per CPU second: 0

Multi-user/multi-service authentication + HSM as a key-signing/encrypting key?

I'm looking to implement a multi-user authentication environment for a small (11) but growing team, covering a sizeable number (currently 500+) of managed devices/services (routers, firewalls, Linux cloud instances, on-prem physical servers, etc.). I'm struggling to understand where/how to originate the root(s) of trust for a lot of unique key material that achieves compartmentalisation, particularly as the number of services/devices/users grows, and how to tie it back to a control system for revoking/validating those keys.

This is mostly about infrastructure, so SSH, VPN tunnels, etc., rather than web apps with built-in authentication via single sign-on/AD integration. That said, I'm interested in how a solution might also provide authorisation for HTTPS web-interface sign-in (obviously subject to what the specific app/service/site offers as its own authentication integration: SAML/TACACS+/RADIUS/LDAP/AD, etc.). Perhaps that can be done by:

  • tying a key to a user in LDAP/AD/RADIUS/TACACS+?
  • tying a certificate to a user and presenting a signed key and certificate?

I'm leaning towards an on-premise, centralised system utilising a bastion host (or something to that effect). If, however, there are good suggestions for a distributed and largely decentralised 'local to the user' solution, I'm all ears. We do need a way to securely maintain access control: even if a key is known to a host, we need the ability to invalidate it as a login credential (ideally in real time). So, either reclaim users' keys or have a method of rendering any keys they hold invalid as login credentials.

There are two problems here as I see it:

  1. Secure generation and use of keys when there are lots of them.
  2. Externally validated and centrally administered access control, based on those user keys, to control valid logins over time.

Key security:

It would be nice for each user to have their own key (so a key is tied to an identity), and for that user to use a unique key for each service on each system, such that in some sort of compromise, only one such service/system is compromised… hopefully. This obviously starts to require a lot of keys, and some sort of key agent for the user to help manage it all.

A nice answer would be an HSM per user that can support an arsenal of keys, tied to an agent that automatically selects the right one. But low-cost USB HSMs (YubiKey/Nitrokey, etc.) seem to have a very limited number of slots/keys they can store. Is it valid to try to expand this so the HSM can authenticate (somewhat indirectly) more services by storing keys externally, but making them usable only via the HSM?

I.e., the HSM holds a master key: it generates and exports encrypted keys, which are stored by a software agent and passed back to the HSM for decryption when needed for login?
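The wrap/unwrap idea can be sketched in software, with AES-256-GCM standing in for the HSM's wrap/unwrap operation; with a real token the master key would be non-exportable, and only the wrapped blobs would live on disk with the agent.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Stand-in for the key held inside the HSM; on real hardware this
// never leaves the device.
const masterKey = randomBytes(32);

// "Wrap": encrypt a per-service key under the master key. The agent
// stores only the returned blob (IV + auth tag + ciphertext).
function wrapKey(serviceKey: Buffer): Buffer {
  const iv = randomBytes(12);
  const c = createCipheriv("aes-256-gcm", masterKey, iv);
  const ct = Buffer.concat([c.update(serviceKey), c.final()]);
  return Buffer.concat([iv, c.getAuthTag(), ct]);
}

// "Unwrap": the agent hands the blob back at login time and receives
// the usable per-service key.
function unwrapKey(blob: Buffer): Buffer {
  const d = createDecipheriv("aes-256-gcm", masterKey, blob.subarray(0, 12));
  d.setAuthTag(blob.subarray(12, 28));
  return Buffer.concat([d.update(blob.subarray(28)), d.final()]);
}
```

The compartmentalisation property follows: stealing the agent's store yields only wrapped blobs, useless without the hardware.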

Similarly, if using a central server as a bastion host that users log in to, it would still need to hold a heap of keys, any reason that this approach would be unwise there?

Key administration – validation/revocation:

I suspect this is going to involve PKI and some sort of online CRL…

Assuming there is a good solution to generating and storing lots of unique keys, what is the best way to provide a separately managed validation server for those keys?

The granularity would only need to be basic authentication, i.e. can this user log in to this server/service: yes or no.

This seems like, in order to scale, it would require a root of trust for a PKI and certificates to be associated with users, either as a ‘signed key’ or just a separate traditional certificate that must also be presented and validated.

In my mind, it would be something where the user authenticates once a day (LDAP or similar), and the server validates their cert/key for, say, 8 or 12 hours (but an admin can revoke that validation at any time, nullifying login attempts from that point on). When a login request hits a managed device, it would query said server via a secure connection to check authorisation for the cert/key and allow/deny login accordingly.
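For the SSH portion, OpenSSH's built-in certificate support maps closely onto this: a user CA signs short-lived certificates after the daily authentication, sshd trusts the CA via TrustedUserCAKeys, and revocation is a KRL distributed via RevokedKeys. A sketch with illustrative names ("alice", the 8-hour window):

```shell
# Work in a scratch directory.
dir=$(mktemp -d)
cd "$dir"

# One-time: the central server's CA key (the root of trust).
ssh-keygen -q -t ed25519 -f ca_key -N "" -C "user-ca"

# Per user: an ordinary key pair (this half could live on a hardware token).
ssh-keygen -q -t ed25519 -f alice_key -N "" -C "alice"

# After the user authenticates (e.g. against LDAP), sign their public key
# with an 8-hour validity window; no per-host key distribution needed.
ssh-keygen -q -s ca_key -I alice -n alice -V +8h alice_key.pub

# Inspect the issued certificate (principals, validity, serial):
ssh-keygen -L -f alice_key-cert.pub
```

Because the certificate expires on its own, "revocation" for most cases is simply declining to re-issue the next morning; the KRL covers the emergency case within the validity window.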

I know commercial solutions exist for certain environments (e.g. AD, proprietary firewall managers), but nothing fairly simple and 'cross-environment' for, say, OpenVPN/SSH/WireGuard authentication. LDAP or RADIUS seems like the best bet, but I'm not sure how to tie that into SSH with a permissions cross-check, even if the key is authorised on the host.