Microservice security: how should authorization be performed, and do the services also need individual auth checks?

I have the following architecture for accessing a REST service that requires authentication:


  • OIDC token flow managed at the client
  • Access token verified at the server by the auth service (proxied by the API gateway) and exchanged for a JWT that contains authorisation information about the user
  • The resource is accessed

In the current model, every request needs to verify the access token (which is normal), but it also needs to retrieve the authorization information on every request, which doesn’t feel right to me.
The JWT used in this model is only for internal use within the server cluster, as there is really no need to send it back to the client. Also, generating a JWT on every request doesn’t feel quite right.
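For concreteness, here is a minimal sketch of what the gateway/auth-service exchange described above could look like, assuming an HMAC-signed internal token and the jsonwebtoken npm package (both are illustrative assumptions, not part of the original setup):

    import jwt from "jsonwebtoken";

    // Shared only inside the cluster; the token never goes back to the client.
    const INTERNAL_SIGNING_KEY = process.env.INTERNAL_JWT_KEY ?? "dev-only-secret";

    // Authorization data the auth service looked up for the verified user.
    interface InternalClaims {
      sub: string;
      roles: string[];
    }

    // Gateway side: called after the OIDC access token has been verified and the
    // user's authorization information has been fetched from the auth service.
    function mintInternalToken(claims: InternalClaims): string {
      return jwt.sign(claims, INTERNAL_SIGNING_KEY, {
        algorithm: "HS256",
        expiresIn: "5m", // short-lived, so downstream services only verify, never re-fetch
      });
    }

    // Downstream microservice side: verifying the signature is a local operation,
    // so no extra round trip to the auth service is needed per request.
    function readInternalClaims(token: string): InternalClaims {
      return jwt.verify(token, INTERNAL_SIGNING_KEY) as InternalClaims;
    }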

Storing the JWT in a server-side store (cache / database) doesn’t feel right with this model either, because it makes the system stateful again (with multiple API gateways you would again need sticky sessions, synchronisation, etc.), so it doesn’t offer a solution.

One possible solution would be to check authorization not upfront along with the authentication (i.e. verification) step, but only depending on the requested route / action. I don’t particularly like this, as it requires back-and-forth messaging whenever a protected resource is accessed. It doesn’t smell like clean architecture.
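For comparison, a rough sketch of that deferred, per-route check (Express-style middleware; the header name and the fetchPermissions helper are hypothetical):

    import type { NextFunction, Request, Response } from "express";

    // Hypothetical helper that asks the auth service for the user's permissions.
    declare function fetchPermissions(userId: string): Promise<string[]>;

    // Authentication happened at the gateway; the authorization lookup is deferred
    // until a protected route is actually hit.
    function requirePermission(permission: string) {
      return async (req: Request, res: Response, next: NextFunction) => {
        const userId = req.header("x-user-id"); // assumed to be set by the gateway
        if (!userId) return res.status(401).end();

        // This is the extra back-and-forth on every protected request
        // that the question is uncomfortable with.
        const granted = await fetchPermissions(userId);
        if (!granted.includes(permission)) return res.status(403).end();
        next();
      };
    }

    // Usage: app.get("/invoices", requirePermission("invoices:read"), handler);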

What is the advised way to go about this?
Relatedly, I wonder if it is enough to perform authentication only in the API gateway. These microservices work independently, and I feel a bit uncomfortable with the API gateway granting all access while the underlying services stay ‘dumb’. Is this a misplaced sense of paranoia?

Are the trade-offs of putting an auth token in an HTTP-only cookie for an SPA worth it?

I’ve been building a web app (Rails API + React SPA) for learning / fun and have been researching authentication. The most commonly recommended approach I have read for authenticating SPAs is to put the auth token (such as a JWT) in a secure, HTTP-only cookie to protect against XSS. This seems to have a couple of consequences:

  • We now have to handle CSRF since we are using cookie authentication (see the sketch after this list)
  • Since it’s an SPA we can’t protect against CSRF until the user is logged in, which means we are vulnerable to a login CSRF attack (https://stackoverflow.com/questions/6412813/do-login-forms-need-tokens-against-csrf-attacks)
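A minimal sketch of the CSRF handling mentioned in the first point (a double-submit cookie check; the Express-style code and the cookie/header names are illustrative assumptions, not something the Rails API actually contains):

    import { randomBytes } from "crypto";
    import type { NextFunction, Request, Response } from "express";

    // Issued next to the HTTP-only auth cookie; this one is readable by the SPA
    // so it can be echoed back in a request header (double-submit pattern).
    function issueCsrfCookie(res: Response): void {
      res.cookie("csrf-token", randomBytes(32).toString("hex"), {
        httpOnly: false,
        sameSite: "strict",
        secure: true,
      });
    }

    // Every state-changing request must echo the cookie value in a header;
    // req.cookies assumes the cookie-parser middleware is installed.
    function requireCsrfToken(req: Request, res: Response, next: NextFunction) {
      const fromCookie = req.cookies?.["csrf-token"];
      const fromHeader = req.header("x-csrf-token");
      if (!fromCookie || fromCookie !== fromHeader) {
        return res.status(403).json({ error: "CSRF token mismatch" });
      }
      next();
    }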

But what is the real downside to just storing the auth token in browser storage (e.g. session storage)? That XSS becomes slightly more convenient for the attacker? Even with an HTTP-only cookie, the attacker can still use the auth token by making requests directly from the site: if there’s an XSS vulnerability, they don’t need to be able to read the token in order to use it.
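To make that last point concrete: an injected script never needs to read the cookie, because the browser attaches it automatically (the endpoint below is hypothetical):

    // What an XSS payload could run on the victim's page. The HTTP-only cookie is
    // invisible to this script, but the browser still sends it with the request.
    await fetch("/api/account/change-email", {
      method: "POST",
      credentials: "include", // the auth cookie rides along automatically
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ email: "attacker@example.com" }),
    });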

It seems that the popular recommendation makes things more complicated (having to protect against CSRF) just to make things a little more difficult for the attacker in the XSS case. Given the number of resources making these recommendations, I feel like I am missing something and would appreciate any feedback or clarifications!

Here are a couple of sources I’ve been reading that are quite adamant against browser storage for auth tokens:

  • https://cheatsheetseries.owasp.org/cheatsheets/HTML5_Security_Cheat_Sheet.html
  • https://jwt.io/introduction/
  • https://auth0.com/docs/security/store-tokens

Hardening ASP.NET against session fixation: Should I change the session ID despite the additional Auth cookie?


Situation

I am the developer responsible for an ASP.NET application that uses the “Membership” (username and password) authentication scheme. I was presented with the following report from a WebInspect scan:

WebInspect has found a session fixation vulnerability on the site. Session fixation allows an attacker to impersonate a user by abusing an authenticated session ID (SID).

Reproduction

I tried to reproduce the typical attack, using the guide on OWASP:

  1. I retrieve the login page. When inspecting the cookies with Google Chrome’s Developer Tools (F12), I get:

    • ASP.NET_SessionId w4bce3a0e5j4fmxj3b0lqkw2
  2. After authentication on the login page, I get an additional

    • .ASPXAUTH F0B9C00FC624E3F2C0CD2EC9E5EF7D10D91A6D62A26BAEE67A38D0608198750A2428E1F5D7278DCE6312C32EE2788D6C79E8112EA35B2397DB84FBB2BE1DBDA815A304B12505D2B786B00038B1EB0BE854DBDAF13072AFEDB3A21E36A7BCD7CD0032A0BCE8E90ECEAFA5FF487D6D2E2C

    • while the session cookie stays the same (as preconditioned for a session fixation attack)

  3. Attack: However, if I steal or make up and fix only the ASP.NET_SessionId and inject it into another browser, the request is not authenticated. It is authenticated only after also stealing the .ASPXAUTH cookie, which is only available AFTER login.

Conclusion

I come to the following conclusion:

While the typical precondition for a session fixation attack is met (a non-changing session ID), the attack will fail because of the missing, required additional .ASPXAUTH cookie, which is provided only AFTER successful authentication.
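For reference, the change the scanner is asking for is simply to issue a fresh session ID at the moment of login; a minimal sketch of what that looks like in an Express/express-session app (an illustration only, not ASP.NET):

    import "express-session"; // augments Request with .session
    import type { Request, Response } from "express";

    // After the credentials have been verified: throw away the pre-login session ID
    // so a fixated ID becomes worthless, then record the login in the new session.
    function onLoginSuccess(req: Request, res: Response, userId: string): void {
      req.session.regenerate((err) => {
        if (err) return void res.status(500).end();
        (req.session as any).userId = userId; // session typing omitted for brevity
        res.redirect("/");
      });
    }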

Question

So, should I really change the session cookie after login? Will this only satisfy the WebInspect scan, or is there real value here?

Note: I am very likely having the exact scenario as in Session Fixation: A token and an id, but I am not asking whether it is vulnerable, but what I should do with regards to the report.

SSO: How is the Auth Code flow with PKCE secure?

I ran into this documentation while studying some SSO concepts:

This grant adds the concept of a code_verifier to the Authorization Code Grant. When the Client asks for an Authorization Code it generates a code_verifier and its transformed value called code_challenge. The code_challenge and a code_challenge_method are sent along with the request. When the Client wants to exchange the Authorization Code for an Access Token, it also sends along the code_verifier. The Authorization Server transforms this and if it matches the originally sent code_challenge, it returns an Access Token.
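A minimal sketch of that transformation with the S256 challenge method, using Node’s crypto module (the stack is an assumption; the names mirror the quoted docs):

    import { createHash, randomBytes } from "crypto";

    // Client side: the secret verifier stays on the device; only its SHA-256
    // digest (the code_challenge) travels with the authorization request.
    const codeVerifier = randomBytes(32).toString("base64url");
    const codeChallenge = createHash("sha256").update(codeVerifier).digest("base64url");

    // Authorization server side, at the token exchange: recompute the challenge
    // from the presented code_verifier and compare it to the one stored earlier.
    function challengeMatches(sentVerifier: string, storedChallenge: string): boolean {
      const recomputed = createHash("sha256").update(sentVerifier).digest("base64url");
      return recomputed === storedChallenge;
    }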

(The docs also include a flow diagram of these steps.)

The part that I don’t quite get is how PKCE is secure, since an interceptor would still be able to intercept the code_verifier just as easily as they could intercept the auth code. I believe I’m missing a point, but I’m not able to figure out what it is.

GMail Hack with 2-Factor Auth enabled

I have my business email on GMail. I use 2-factor authentication for access to said business email. I access it from 2 computers and 1 Android mobile device. I do not use Outlook or any other email client; I access it solely through the web browser. I run Webroot AV on both computers and have run MalwareBytes, Hitman Pro and Sophos Virus Removal tool, with 0 hits on all.

Yesterday, spoofed emails from my business email account, originating from all over the world, were sent out to my customers with an attached, password-protected file that was a virus. In itself this is not unusual; however, each of the emails was an actual reply to a valid email I had received previously. I immediately looked at my Google account settings and verified 2-factor auth, and I looked at the devices that were using my email and could verify each one. I could find no proof that anyone other than myself had gained access to my email.

Does anyone have any suggestions on where I should look for this breach? I am at a loss and dreading a second round of emails going out.

What is the difference between using an auth header and the request body to send credentials?

I have an API that uses JWT token-based authentication. In order to get a short-lived token, the client first calls a /Token endpoint, passing username and password in the body of the request. As I understand it, this is a standard way to implement JWT.

I am trying to understand the difference between the way this initial /Token endpoint is called and calling an API that uses Basic authentication. With Basic authentication, the username and password would be passed in the Authorization header and then validated on the server side. This seems to be the same basic idea as what we are already doing, except that instead of passing the username and password in the Authorization header, they are passed in the request body.
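Concretely, the two request shapes being compared look roughly like this (the URL and field names are placeholders):

    // Variant 1: credentials in the request body (the current /Token endpoint).
    await fetch("https://api.example.com/Token", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ username: "alice", password: "s3cret" }),
    });

    // Variant 2: the same credentials as an HTTP Basic Authorization header.
    const basic = Buffer.from("alice:s3cret").toString("base64");
    await fetch("https://api.example.com/Token", {
      method: "POST",
      headers: { Authorization: `Basic ${basic}` },
    });

    // In both variants the credentials travel once per token request and are
    // protected in transit only by TLS; they just sit in different parts of
    // the HTTP message.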

Is there a security difference between these two methods of passing the username and password to an API? Would it make sense to implement Basic authentication on our /Token endpoint instead? (Currently there is no Authorization header, but the server will return 401 if the username and/or password are missing or incorrect.)

If it matters, the API is written in .NET Core. I’m aware that there are differences in how the server code is written (data in the Authorization header can be automatically validated using the [Authorize] attribute on the methods), but I am wondering about security differences in the actual data transfer.

SharePoint ADFS claims-based auth trying to authenticate against the wrong server

Lab Environment: Two stand-alone SharePoint 2016 VM servers. One was initially set up for testing and proof of concept (Server A). Once the concept was proven viable, I was tasked with creating a second server (Server B), a duplicate of the first. I restored the image of the first server to the second server and began to make the necessary changes (different domain). Everything is up and running with the exception of ADFS claims-based authentication. The second server (Server B) displays the choice of Windows login or ADFS login, but when you select ADFS login, it takes me to the original server’s (Server A) login screen and tries to authenticate there. I have looked everywhere and cannot seem to find where I need to make the required changes on Server B so that it authenticates locally instead of against Server A.

Any thoughts?

What can someone do with a google auth token?

I just received an email from “Have I Been Pwned” to say my credentials have been made public via a breach. The quoted text is:

“The exposed data included names, email addresses, genders, spoken language and either a bcrypt password hash or Google auth token.”

I would like to know what someone could do with a bcrypt password hash and / or a Google auth token.
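For the bcrypt half of the question, the main thing someone can do with a leaked hash is guess at it offline; each guess costs a full bcrypt computation, which is the point of the algorithm. A minimal sketch using the bcryptjs package (an illustrative choice):

    import bcrypt from "bcryptjs";

    // The work factor is embedded in the leaked hash itself, so every candidate
    // password must be run through the deliberately slow bcrypt function.
    function findPassword(leakedHash: string, guesses: string[]): string | undefined {
      return guesses.find((guess) => bcrypt.compareSync(guess, leakedHash));
    }

    // A weak or reused password falls quickly to a wordlist; a strong one
    // makes this loop impractically expensive.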

I vaguely recognise the app provider involved and don’t currently have that app installed. I have changed my Google password since I last used the app and have MFA active.

How does MariaDB’s ed25519 auth scheme work?

Newer versions of MariaDB (a MySQL database server fork) have a new password-based auth scheme called “ed25519”. The docs are very sparse about how it works and what it does.

https://mariadb.com/kb/en/library/authentication-plugin-ed25519/

  • What is the value stored in the database, and how is it generated from the password?
  • What is the value sent by the client to the server on login, and how is it generated from the password?
  • Is the scheme secure to use without TLS?
  • How resistant is it against password dumps?
  • What is the correct full name of this auth scheme? Is it used by anything else besides MariaDB? Are there other implementations?
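For context on the primitive itself (not MariaDB’s specific protocol, which the linked docs leave unspecified), plain Ed25519 is a signature scheme; a minimal sketch with Node’s crypto:

    import { generateKeyPairSync, sign, verify } from "crypto";

    // Ed25519 in general: whoever holds the private key can produce signatures
    // that anyone holding only the public key can check.
    const { publicKey, privateKey } = generateKeyPairSync("ed25519");

    const message = Buffer.from("example message");
    const signature = sign(null, message, privateKey);          // prover side
    console.log(verify(null, message, publicKey, signature));   // verifier side -> true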

google auth with lightdm

I am trying to configure Google Authenticator on a Xubuntu host via lightdm. I only want to use the OTP code to log in (i.e. the user does not supply a password, only their Google Authenticator code).

I amended /etc/pam.d/lightdm-greeter, adding a line to use the Google Authenticator module. Initially I omitted ‘use_first_pass’ and later added it. After each edit I rebooted the machine to ensure the PAM stack was configured correctly.

    #%PAM-1.0
    auth    required        pam_permit.so
    auth    optional        pam_gnome_keyring.so
    auth    optional        pam_kwallet.so
    auth    optional        pam_kwallet5.so
    auth    sufficient      pam_google_authenticator.so use_first_pass
    @include common-account
    session [success=ok ignore=ignore module_unknown=ignore default=bad] pam_selinux.so close
    session required        pam_limits.so
    @include common-session
    session [success=ok ignore=ignore module_unknown=ignore default=bad] pam_selinux.so open
    session optional        pam_gnome_keyring.so auto_start
    session optional        pam_kwallet.so auto_start
    session optional        pam_kwallet5.so auto_start
    session required        pam_env.so readenv=1
    session required        pam_env.so readenv=1 user_readenv=1 envfile=/etc/default/locale

(I also populated the keys for a couple of accounts)

But in both cases, when I type the Google Authenticator code into the password prompt of the greeter, it simply reports a bad password (since the Google module is only “sufficient”, I can still log in using a password).

The Google Authenticator PAM module is not reporting anything in syslog or auth.log, either at startup or during authentication.

How do I get Google Authenticator to accept a code presented as a password?