I implemented passwordless authentication with good UX in mind. But I am not a security expert, so I am asking for your advice.
This is the authentication flow:
- User types in their email address
- The client sends the email address to the API
- The API creates the user if it does not exist
- The API generates a short-lived JWT with a UUID and stores the user ID and session ID as claims
- The token ID and session ID are saved to the database with a confirmed flag
- The API sends this token to the email address
- The user clicks the link on any device of their choice
- If the token is valid and the claims match the data in the database, the confirmed flag is set to true and a last_login field is set to the token’s iat (not really sure if I need that ^^)
- Meanwhile, the client where the user logged in polls for confirmation and updates the session if the login was confirmed
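The flow above can be sketched roughly like this. It is a minimal in-memory sketch, not the actual implementation: an HMAC-signed blob stands in for the JWT, an in-memory dict stands in for the database, and names like `request_login` and `poll_confirmation` are invented for illustration.

```python
import base64, hashlib, hmac, json, time, uuid

SECRET = b"server-side-secret"   # assumption: a real deployment uses a managed signing key
SESSIONS = {}                    # stands in for the DB: session_id -> row with confirmed flag

def _sign(claims):
    """Stand-in for the JWT: base64 body plus an HMAC-SHA256 signature."""
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def _verify(token):
    body, sig = token.rsplit(".", 1)
    if not hmac.compare_digest(sig, hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()):
        return None
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():   # short-lived: reject expired tokens
        return None
    return claims

def request_login(email):
    """Steps 1-6: create the user/session, mint a short-lived token, 'email' it."""
    session_id = str(uuid.uuid4())
    SESSIONS[session_id] = {"user": email, "confirmed": False}
    now = time.time()
    return _sign({"sub": email, "sid": session_id, "iat": now, "exp": now + 600})

def confirm_login(token):
    """Step 7: the emailed link lands here; claims must match the DB row."""
    claims = _verify(token)
    if claims and SESSIONS.get(claims["sid"], {}).get("user") == claims["sub"]:
        SESSIONS[claims["sid"]]["confirmed"] = True
        SESSIONS[claims["sid"]]["last_login"] = claims["iat"]
        return True
    return False

def poll_confirmation(session_id):
    """Step 8: the original client polls until the login is confirmed."""
    return SESSIONS.get(session_id, {}).get("confirmed", False)
```

A run of `request_login(...)` followed by `confirm_login(token)` flips the confirmed flag that the polling client is waiting on; a tampered or expired token leaves it untouched.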
I’ve read articles suggesting that passwords will eventually go the way of the dinosaur only to be replaced by biometrics, PINs, and other methods of authentication. This piece claims that Microsoft, Google, and Apple are decreasing password dependency because passwords are expensive (to change) and present a high security risk. On the other hand, Dr. Mike Pound at Computerphile claims that we will always need passwords (I think this is the correct video).
But as this wonderful Security StackExchange thread notes, biometrics are not perfect. Granted, the criticisms are roughly six years old, but they still stand. Moreover (perhaps I have a fundamental misunderstanding of how biometric data is stored), what if this information is breached? Changing a password may be tedious and expensive, but at least it can be changed. I’m uncertain how biometric authentication addresses this problem, as I cannot change my face, iris, or fingerprints, or whether it needs to address this problem at all.
Are those who argue that we can eliminate passwords prematurely popping champagne bottles or are their projections correct?
Usual password authentication systems do not store passwords directly on the server, but only hashes of those passwords. Why do fingerprint authentication systems not offer this possibility?
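For reference, the password side of the comparison looks like this: the server keeps only a salt and a slow hash, never the password itself (a sketch using Python's stdlib `hashlib.scrypt`; the parameters are illustrative):

```python
import hashlib, hmac, os

def hash_password(password: str) -> bytes:
    """Store salt + slow hash; the password itself is never kept."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt + digest

def check_password(password: str, stored: bytes) -> bool:
    """Re-derive the hash from the candidate password and compare in constant time."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

Note that this comparison relies on the input being bit-for-bit identical on every login, whereas two scans of the same fingerprint never are, which is presumably part of the answer to my question.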
I am trying to design an authentication system based on NFC Ultralight C cards. The key concept in my system is that the card identifies a specific person, but this concept can easily be broken if the card can be cloned.
I want to prevent my cards from being cloned, but everything I found when searching for Ultralight C authentication covers write protection (authentication of the message), which is important but not the only thing I care about.
Is it possible to prevent NFC Ultralight C cloning?
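For context, the property I am after is usually achieved with a challenge-response: the card proves it holds a secret key without ever emitting it, so dumping the card's readable memory is not enough to clone it. A rough sketch of that shape (HMAC-SHA256 stands in for the card's keyed cipher operation, which Python's stdlib does not provide; the class and key names are invented):

```python
import hashlib, hmac, os

CARD_KEY = b"secret-burned-into-the-card"   # written once, read-protected on the card

class Card:
    """A genuine card: holds the key, answers challenges, never reveals the key."""
    def __init__(self, key: bytes):
        self._key = key

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

class ClonedCard:
    """A clone built from the card's readable memory only: it lacks the key."""
    def respond(self, challenge: bytes) -> bytes:
        return b"\x00" * 32   # cannot compute the keyed response

def reader_authenticates(card) -> bool:
    challenge = os.urandom(16)   # fresh nonce per attempt defeats replayed responses
    expected = hmac.new(CARD_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(card.respond(challenge), expected)
```

The genuine card passes because it can compute the keyed response to a fresh challenge; the clone fails because the key was never readable in the first place.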
I would like to build a token-based system where the MAP (mesh access point) generates a token for a verified client, and that token can then be used for seamless handoff when the client travels from one MAP to another. I am not sure where to start. Can you point me in the right direction? Thank you!
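One possible shape for this, sketched under the assumption that all MAPs share a provisioned key: the first MAP issues a signed, expiring token after full authentication, and any other MAP can verify it locally during handoff without contacting the issuer. The function names and the MAC-address claim are illustrative, not a standard.

```python
import base64, hashlib, hmac, json, time

MESH_KEY = b"key-shared-by-all-MAPs"   # assumption: provisioned to every MAP in the mesh

def issue_token(client_mac: str, ttl: int = 3600) -> str:
    """Called by the first MAP after the client passes full authentication."""
    payload = json.dumps({"mac": client_mac, "exp": time.time() + ttl}).encode()
    sig = hmac.new(MESH_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token: str, client_mac: str) -> bool:
    """Called by the next MAP during handoff; no round trip to the first MAP."""
    body, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(body)
    if not hmac.compare_digest(sig, hmac.new(MESH_KEY, payload, hashlib.sha256).hexdigest()):
        return False
    claims = json.loads(payload)
    return claims["mac"] == client_mac and claims["exp"] > time.time()
```

The design trade-off is that verification is offline and fast (good for seamless handoff), but revoking a token before it expires requires extra machinery.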
Apple claims in this year’s WWDC that Face ID and Touch ID count for both Possession and Inherence identity factors, because they are using Biometrics (Inherence) to access the secure element on your phone (Possession) to retrieve a unique key. See here: https://developer.apple.com/videos/play/wwdc2020/10670/
I think both claims are a stretch. For Inherence, yes, you have proved to iOS that the person who set up Face ID is using the phone again, and has therefore been given access to the secure key. So iOS can claim Inherence. But your app has no proof that the human possessing the phone is actually your user. Hence my app considers mobile local authentication merely a convenient Knowledge factor: a shortcut for your username and password that resolves common credential problems like human forgetfulness.
As for Possession, again, I think the claim is a stretch unless, before writing the unique key to the phone’s secure element, you somehow prove that the possessor of the phone is your actual intended user. I suppose if you enable Face ID login immediately after account creation you can have this proof: the brand-new user gets to declare this is their phone, just as they get to choose their username and password. But on any login beyond the first, you would have to acquire proof of Possession using an existing factor before you could grant a new Possession factor. Otherwise a fraudster who steals credentials can claim their phone is a Possession factor by enabling Face ID; a situation made extra problematic by Apple’s claim that Face ID also counts as Inherence!
Am I wrong in this assessment? Which of Knowledge, Possession, and Inherence should an app developer grant mobile local biometric authentication?
I understand the logical steps of asymmetric key cryptography as it relates to TLS. However, I’ve started using Git and, in a bid to avoid having to use a password for authentication, I’ve set up SSH keys for passwordless authentication. I understand that these SSH keys are complements of each other, but I do not understand how the actual authentication is taking place. I’ve copied the public key to Git and stored the private key locally. As such, I am able to do what I set out to do (passwordless authentication), but I do not know the underlying steps as to why the authentication is successful. I’ve tried searching the web, but every answer I’ve found thus far has been too high level, in that it did not specify the steps. For example, were I looking for the TLS steps, I’d expect something along the lines of:
- Check the cert of the HTTPS page (server)
- Grab the public key and encrypt a secret with it
- Securely send the secret to the server, which should be the only entity with the corresponding private key to decrypt it
- Server and client now switch over to encrypted communications using the, now, shared secret
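In the same style as the TLS outline, the SSH publickey steps are roughly: the client offers a public key; the server checks it against its authorized keys; the client signs data bound to this session with its private key; the server verifies that signature using only the stored public key, proving the client holds the private key without it ever leaving the client. A runnable sketch of those steps, with a Lamport one-time signature (a real, if impractical, hash-based asymmetric scheme) standing in for RSA/Ed25519 so the example needs only the stdlib:

```python
import hashlib, os

def keygen():
    """Lamport one-time keys: 256 random pairs (private), their hashes (public)."""
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(256)]
    pk = [[hashlib.sha256(s).digest() for s in pair] for pair in sk]
    return sk, pk

def _bits(message: bytes):
    digest = hashlib.sha256(message).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, message: bytes):
    """Reveal one preimage per message bit; only the private-key holder can."""
    return [sk[i][b] for i, b in enumerate(_bits(message))]

def verify(pk, message: bytes, signature) -> bool:
    """Check each revealed preimage against the public key; no secret needed."""
    return all(hashlib.sha256(sig).digest() == pk[i][b]
               for i, (sig, b) in enumerate(zip(signature, _bits(message))))

# The SSH publickey flow, with the scheme above standing in for RSA/Ed25519:
sk, pk = keygen()                  # like ~/.ssh/id_*: private key stays local,
SERVER_AUTHORIZED_KEYS = [pk]      # public key copied to the server beforehand

session_id = os.urandom(32)        # in real SSH, derived from the key exchange
signature = sign(sk, session_id)   # client signs session-bound data

assert pk in SERVER_AUTHORIZED_KEYS        # server: is this key authorized?
assert verify(pk, session_id, signature)   # server: signature valid => login OK
```

Because the signed data includes a session identifier, a captured signature cannot be replayed against a different session; and since verification uses only the public key, the server never learns anything that would let it impersonate the client.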
It is a common approach in mobile applications to allow users to bypass the authentication process by verifying a locally stored token (from a previous authentication) on the device.
This is to strike a balance between usability (avoiding authentication every time) and security.
- Are there any security holes in this process?
- What are measures to be taken to strengthen this method?
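To make the question concrete, here is a minimal sketch of the server side of such a scheme, with three of the usual hardening measures folded in: the token is random (not derived from credentials), only a hash of it is stored server-side, and there is an expiry plus a revocation path. The names are invented for illustration.

```python
import hashlib, os, time

TOKEN_STORE = {}   # server side: sha256(token) -> {"user": ..., "exp": ...}

def issue_remember_token(user: str, ttl: int = 30 * 86400) -> str:
    """Mint a random token; the device keeps it (ideally in a secure keystore)."""
    token = os.urandom(32).hex()
    key = hashlib.sha256(token.encode()).hexdigest()
    TOKEN_STORE[key] = {"user": user, "exp": time.time() + ttl}   # store only a hash
    return token

def check_remember_token(token: str):
    """Bypass login if the presented token hashes to a live server-side entry."""
    key = hashlib.sha256(token.encode()).hexdigest()
    entry = TOKEN_STORE.get(key)
    if entry and entry["exp"] > time.time():
        return entry["user"]
    return None

def revoke_all(user: str):
    """Server-side kill switch, e.g. after a phone is reported stolen."""
    for k in [k for k, v in TOKEN_STORE.items() if v["user"] == user]:
        del TOKEN_STORE[k]
```

Storing only the hash means a leaked server database does not yield usable tokens, and the revocation path addresses the main usability-for-security trade: the device no longer needs the user present, so the server must retain a way to cut it off.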
This is an application for user authentication on a PC. A key pair is generated on an Android device. The private key is stored in the Android Keystore, and the public key is sent to the PC server. The client generates a token, calculates a hash of it, signs it with the private key, and sends it to the server. The server verifies the signature and checks the hashes. The public key needs to be protected from spoofing. A QR code could also be used: for example, generate the key pair on the PC and transfer the private key by scanning a QR code, leaving the public key on the PC.
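One common way to protect the enrolled public key from spoofing is to compare a short fingerprint of it out of band: both devices display the fingerprint (or it rides along inside the QR code) and the user checks that they match. A sketch; the 10-byte truncation and base32 display format are assumptions, not a standard:

```python
import base64, hashlib

def key_fingerprint(public_key_bytes: bytes) -> str:
    """Short, human-comparable digest of a public key."""
    digest = hashlib.sha256(public_key_bytes).digest()
    return base64.b32encode(digest[:10]).decode()

# The Android device shows the fingerprint of the key it sent; the PC shows
# the fingerprint of the key it received. A key substituted in transit by an
# attacker produces a different fingerprint, so the user can spot the swap.
```

This only helps if the user actually compares the two strings, which is why keeping the fingerprint short enough to eyeball matters.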
I have recently started using Cloudflare’s firewall in front of a web application. This app has a limited user base of selected applicants and they must log in to view anything. There is no public registration form and nothing within the portal can be accessed without an account.
Since moving the DNS to Cloudflare I can see we are receiving numerous daily HEAD requests to paths that are only accessible within the portal.
These requests come from one of two groups of IP addresses from the United States (we are not a US-based company; our own hosting is based in AWS Ireland region and we’re pretty sure at least 99% of our users have never been US-based):
Java User Agents
- The user agent is Java/1.8.0_171 or some other minor update version.
- The ASN is listed as Digital Ocean.
- The IP addresses all seem to have had similar behaviour reported previously, almost all against WordPress sites. Note that we’re not using WordPress here.
Empty User Agent
- No user agent string.
- The ASN is listed as Amazon Web Services.
- The IP addresses have very little reported activity and do not seem at all connected to the Java requests.
- The resources being requested are dynamic URLs containing what are essentially order numbers. We generate new orders every day, and they are visible to everyone using the portal.
- I was unable to find any of the URLs indexed by Google. They don’t seem to be publicly available anywhere. There is only one publicly accessible page of the site, which is indexed.
- We have potentially identified one user who seems to have viewed all the pages that are showing up in the firewall logs (we know this because he shows up in our custom analytics for the web app itself). We have a working relationship with our users and we’re almost certain he’s not based in the US.
I am aware that a HEAD request in itself is nothing malicious and that browsers sometimes make HEAD requests. Does the Java user agent, or lack of a user agent in some cases, make this activity suspicious? I already block empty user agents and Java user agents through the firewall, although I think Cloudflare by default blocks Java as part of its browser integrity checks.
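For reference, the blocking described above can be written as a single Cloudflare firewall rule expression (field names per Cloudflare's rules language; the exact rule is mine, adjust as needed):

```
(http.user_agent eq "") or (http.user_agent contains "Java/")
```

Attached to a Block action, this covers both the empty-user-agent and Java-user-agent groups in one rule.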
Is there any reason why these might be legitimate requests that I shouldn’t block? The fact it’s a HEAD request from a Java user agent suggests no, right?
One idea we had is that one of the users is sharing links to these internal URLs via some outside channel, to outsource work or something. Is it possible some kind of scraper or something has picked up these links and is spamming them now? As I say, I was unable to find them publicly indexed.
Is it possible the user we think is connected has some sort of malware on their machine which is picking up their browser activity and then making those requests?
Could the user have some sort of software that is completely innocent which would make Java-based HEAD requests like this, based on their web browsing activity?
Any advice as to how I should continue this investigation? Or other thoughts about what these requests are?