Preventing automated attacks on Tokens without relying on Firewall or Network Infrastructure

Our concern is more with application-side prevention of automated attacks. Although the firewall does its part to help prevent this, our development team's security practices mandate a second level of protection. Solutions such as MFA and CAPTCHA address a different issue: they reduce the chances that an attacker can bypass authentication by guessing credentials. What we want here is basically to detect an automated attack and stop it (or, realistically, delay it).

The attack the penetration tester performed was this:

http://ourapplication.com/passwordreset&token=AAAAAAbbbbCCCCDDDD####3333KkOoBvVNNJIKGDDVL

This is a link sent to email addresses for password resets. The tester attempted automated enumeration of the token to guess a valid one. Although they did not succeed, they still filed this as a vulnerability, since our application failed to detect the automated attack and was not able to block the requests. So we are now at a dead end looking for solutions.

Some solutions we have come up with:

  1. IP address blocking – this seems problematic: since requests pass through a number of servers and components (firewall –> web server –> app server, etc.), it would be extremely difficult to get the requester's source IP address. Attackers could also be hiding behind proxies.

This would be doable if the enumeration targeted something like username and password: we could come up with logic that detects enumeration of usernames with the same password and starts blocking subsequent requests that use the same password (a rough sketch of that idea follows below). In this case, though, the only input is a token.
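For illustration, a minimal sketch of that detection logic (all names are hypothetical; in production the counter would have to live in shared storage such as Redis rather than in process memory):

    import time
    from collections import defaultdict

    WINDOW_SECONDS = 60
    MAX_DISTINCT_USERNAMES = 20  # tunable threshold

    # password fingerprint -> recent failed attempts as (timestamp, username)
    _failed_attempts = defaultdict(list)

    def looks_like_password_spray(username, password_fingerprint):
        """Return True when one password is being tried against many usernames."""
        now = time.time()
        attempts = _failed_attempts[password_fingerprint]
        attempts.append((now, username))
        # Keep only attempts inside the sliding window.
        attempts[:] = [(t, u) for t, u in attempts if now - t < WINDOW_SECONDS]
        return len({u for _, u in attempts}) > MAX_DISTINCT_USERNAMES

For the reset token there is no second dimension to pivot on, which is exactly our problem; the closest analogue we can think of is throttling on the global rate of invalid tokens per time window.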

We are running out of ideas for solving this issue. Can anyone help us with this?

How is vulnerability assessment different for applications and infrastructure?

I am working for a company where vulnerability assessments for infrastructure and applications are done by different vendors. Sometimes I get confused about whether an assessment should happen on the infrastructure side or the application side.

E.g. the xyz application is hosted on Windows 10.

Should I consider this a vulnerability assessment on the infrastructure side or the application side?

Insecurity through all the security (questions on infrastructure setup)

I've got an issue with a proposed infrastructure setup, so I come to you for some enlightenment.

The flow is: client –> firewall –> WAF –> API gateway –> applications.

Clients will communicate with us using mTLS. The firewall can't terminate mTLS, so it looks like only the WAF will perform this. We could do an SSL intercept on the firewall, but then we'd basically end up decrypting and re-encrypting the traffic twice, once at the firewall and again at the WAF.
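For clarity, by "terminate mTLS" I mean being the hop that holds the server key and verifies the client certificate, roughly like this (Python sketch; the file names are placeholders):

    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.crt", "server.key")  # our server identity
    ctx.verify_mode = ssl.CERT_REQUIRED              # require a client cert: the "m" in mTLS
    ctx.load_verify_locations("client-ca.crt")       # CA that issued the client certs

Whichever hop holds that context sees the plaintext, so intercepting on the firewall as well would give us two such termination points back to back.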

Is there a more intelligent setup?

Is this a valid equation for determining certificate lifetimes in a PKI infrastructure?

I have an intuitive sense of how certificate lifetimes should work in a PKI infrastructure, but I don’t consider myself an expert in this field so I would like someone to validate or critique my assumptions:

The “leaves” on a PKI hierarchy are the certificates issued by a CA. The maximum lifetime of one such certificate is equivalent to:

    renewal interval          + renewal period            = certificate lifetime
    (renew yearly, i.e. 1 yr) + (1-month renewal period)  = 13-month lifetime

The lifetime of an intermediate/issuing CA's cert follows the same pattern, plus the maximum lifetime of a cert it can issue:

    renewal interval + renewal period + child lifetime = certificate lifetime
    2 years          + 1 month        + 13 months      = 3-year, 2-month lifetime

The last step “recurses” up the PKI hierarchy through any more intermediate CA tiers until you get to the root cert.

This means, necessarily, a CA’s cert must always have a lifetime longer than the certs it issues.
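As a sanity check, the recursion can be written out directly (a small Python sketch; the root's 5-year renewal interval is just an assumed example):

    # All values in months, matching the examples above.
    def max_lifetime(renewal_interval, renewal_period, child_lifetime=0):
        return renewal_interval + renewal_period + child_lifetime

    leaf = max_lifetime(12, 1)                 # 12 + 1      = 13 months
    issuing_ca = max_lifetime(24, 1, leaf)     # 24 + 1 + 13 = 38 months (3 yr, 2 mo)
    root_ca = max_lifetime(60, 1, issuing_ca)  # assumes a 5-year root renewal interval
    print(leaf, issuing_ca, root_ca)           # 13 38 99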


Context: Apple's announcement of 13-month maximum cert lifetimes starting September 1st, 2020 must therefore apply only to leaf certs, and not to certs issued to intermediate or root CAs.

Who implements Cryptoki in a PKCS#11 infrastructure – software or token/devices?

I am new to PKCS#11 and am going through the docs, and I am a little confused about where Cryptoki fits in. After reading the initial part of the OASIS spec, I came to the conclusion that the implementation of the interface is left to the application rather than the token (smart card, USB token, or HSM). If that's the case, how do you deal with vendor-specific devices?
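My current understanding is that the application only consumes the Cryptoki API and loads whatever library the vendor ships, e.g. (a sketch using the python-pkcs11 package; the SoftHSM path, token label, and PIN are placeholders):

    import pkcs11

    # The token vendor (SoftHSM here, as a software stand-in) ships the Cryptoki
    # library; the application just loads it and talks to the standard interface.
    lib = pkcs11.lib("/usr/lib/softhsm/libsofthsm2.so")
    token = lib.get_token(token_label="my-token")

    with token.open(user_pin="1234") as session:
        key = session.generate_key(pkcs11.KeyType.AES, 256)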

Thanks

Is it a security issue that the underlying infrastructure (e.g. a Kubernetes cluster) can easily be revealed?

I have recently found out that a very common Kubernetes setup, for some cases of access over TLS, returns an invalid certificate named Kubernetes Ingress Controller Fake Certificate, making it obvious to anyone that Kubernetes is in use.

So the question is not really about Kubernetes itself, but about whether disclosing such information about the underlying infrastructure is considered undesirable, or whether it simply does not matter.

P.S. The Kubernetes details in more depth:

A default installation of the nginx ingress controller provides a "fallback" (called a default backend) that responds to anything it does not recognize. That sounds good, but the catch is that it has no TLS configured, yet it still answers on port 443 and returns an (obviously) invalid certificate named Kubernetes Ingress Controller Fake Certificate.
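To see it for yourself, you can pull the certificate and read its subject (a Python sketch using the cryptography package; the host is a placeholder):

    import socket
    import ssl
    from cryptography import x509

    host = "example.com"  # placeholder for the cluster's address
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # inspect the cert without trusting it

    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="no-such-host.invalid") as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    print(cert.subject.rfc4514_string())
    # On a default ingress-nginx install the subject CN reads:
    # Kubernetes Ingress Controller Fake Certificate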

Is it possible to do the 'Oracle Cloud Infrastructure Certifications' without following 'SQL Fundamentals | 1Z0-06'?

I recently switched my career from SE to Oracle DBA. The company I'm working for is currently in the process of switching to a cloud system. As a beginner, is it possible to complete the 'Oracle Cloud Infrastructure Certifications' without 'SQL Fundamentals | 1Z0-06'? Note that I have good self-studied architectural knowledge of Oracle DB and solid experience with SQL and PL/SQL from my previous SE career.

How does a spammer typically set up SMTP infrastructure?

I am a bit confused when it comes to spammers sending spam from botnets. I know that protection mechanisms like SPF and DKIM exist to validate mail through IP whitelisting and cryptographic signing. But how would a spammer send a huge amount of email if he were spoofing a domain that has no SPF or DKIM records? Even with many bots, he would have to use a third-party provider like Gmail or Yahoo, because they have FQDNs. And an attacker would not use Gmail or anything similar, since it would easily be detected and would probably not allow host spoofing.
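For context on the SPF part: a receiving server looks up the envelope-from domain's TXT record and checks the connecting IP against it, roughly like this (a sketch using the dnspython package; the domain is a placeholder):

    import dns.resolver

    def spf_record(domain):
        """Return the domain's published SPF policy, if any."""
        for rr in dns.resolver.resolve(domain, "TXT"):
            txt = b"".join(rr.strings).decode()
            if txt.startswith("v=spf1"):
                return txt
        return None

    print(spf_record("example.com"))
    # A policy like "v=spf1 ip4:192.0.2.0/24 -all" means only those addresses
    # may send for the domain, so a random bot's IP fails the check.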

So, is an FQDN needed to deliver spam emails, or do botnets set up their own local SMTP server on each bot and send from there? Won't this traffic be blocked somewhere? It is just not clear to me how a spammer typically sets up the SMTP infrastructure. How are these spam floods possible?

Does Application Development (App Dev) Drive Infrastructure, or Does Infrastructure Drive Application Development?

I am both a front-end (Angular) and backend (Spring Boot + OpenShift) developer. Over the past decade, I have seen how both application development and infrastructure have evolved.

There has been a symbiotic relationship between App Dev and Infrastructure. My question, then, is which had a bigger impact on which? Which drives which?

Do App Dev frameworks drive infrastructure change, or does infrastructure enable these App Dev frameworks?

It seems that both were working in parallel with little convergence, as the two groups of people came from different planets at first, until they converged recently (DevOps).

DDD and infrastructure microservices – how should the interface be designed?

We've extracted our email sending into an EmailService – a microservice that provides resiliency and abstracts the email logic into an infrastructure service. The question is how the interface to the EmailService should be defined with respect to the information it requires about the [User] domain.

Approach 1:
EmailService exposes an interface that takes all the fields of the [User] domain that it requires.

Approach 2:
The EmailService interface takes only the UserID. The EmailService then queries the UserService using this ID to fetch the fields it requires.
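To make the comparison concrete, here is a rough sketch of both interfaces (Python; all names are hypothetical):

    from dataclasses import dataclass

    # Approach 1: the caller supplies every [User] field the EmailService needs.
    @dataclass
    class SendEmailRequest:
        email_address: str
        display_name: str
        locale: str

    def send_email_v1(request):
        # Explicit contract: everything required arrives in the request itself.
        print(f"sending to {request.email_address} ({request.display_name}, {request.locale})")

    # Approach 2: the caller supplies only the UserID.
    def fetch_user(user_id):
        # Stand-in for a call to the UserService; the implicit contract lives here.
        return SendEmailRequest("alice@example.com", "Alice", "en-GB")

    def send_email_v2(user_id):
        request = fetch_user(user_id)  # [User] fields fetched as late as possible
        send_email_v1(request)

    send_email_v1(SendEmailRequest("alice@example.com", "Alice", "en-GB"))
    send_email_v2("user-42")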

There are some obvious pros/cons with each approach.
Approach 1 requires the calling service to know everything about a User that the EmailService requires, even if it's not part of the caller's domain representation of a User. On the other hand, the contract between the services is explicit and clear.

Approach 2 ensures that the [User] fields are fetched as late as possible, minimising consistency problems. However, it creates an implicit contract with the UserService (with its own problems).

I’ve been doing a lot of research here and on SO, but I haven’t been able to find anything around this specific interface design problem. Keeping DDD principles in mind, which approach is correct?