I did a vulnerability scan on some of our company workstations. These are workstations used by employees (dev, HR, accounting, etc.) to do their jobs. One of the common results I found is "SSL/TLS Certificate Signed Using Weak Hashing Algorithm". Based on the vulnerability description, "An attacker can exploit this to generate another certificate with the same digital signature, allowing an attacker to masquerade as the affected service," I’m thinking this is more of a server-side issue.
My question is, what could be the impact of this in an ordinary workstation?
What can an attacker/pentester do to the workstation with this vulnerability?
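For context, the check the scanner performs can be reproduced by hand. The sketch below generates a throwaway self-signed certificate purely as a local stand-in and shows where the signature hash appears; a value of sha1WithRSAEncryption or md5WithRSAEncryption in that field is what the scanner flags. The hostname and port are placeholders.

```shell
# Create a throwaway cert just to have something to inspect locally.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/t.key -out /tmp/t.crt -subj "/CN=workstation.local" 2>/dev/null

# The field the scanner is checking:
openssl x509 -in /tmp/t.crt -noout -text | grep 'Signature Algorithm'

# Against a live service on a workstation (e.g. RDP on 3389), the
# equivalent check would be:
#   openssl s_client -connect <host>:3389 </dev/null 2>/dev/null \
#     | openssl x509 -noout -text | grep 'Signature Algorithm'
```

On workstations the flagged certificate usually belongs to a listening service such as RDP or remote management, which is why the finding still appears on non-server machines.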
I’m working with a client that, in order to use their OAuth 2.0 web API, requires me to provide them with a JWK that contains an embedded X.509 certificate. Then, when I’m requesting information from the API, they say I need to pass a "signed (with private keys) JWT Bearer token" on each request.
I’ve never worked with JWKs before, so I was looking over the official JWK documentation, but it’s very dense and doesn’t really explain how they are used in real-life applications.
I found this site / command-line tool that can generate JWKs in different formats, and it generates the JWK with a self-signed X.509 certificate. I’m wondering: in this case, is it okay to use a self-signed cert to talk to this API? I understand that with web browsers you absolutely need a cert from a trusted CA, because the client and web server are essentially strangers, but this cert isn’t being used publicly for a website; it’s only used between my application and this OAuth API, and both parties already trust each other.
So really my question is: would generating a JWK with a self-signed X.509 certificate be sufficient, and then using the certificate’s private key to sign JWT Bearer tokens when actually using the API?
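To make the two halves of the question concrete, here is a sketch, under the assumption that the API accepts RS256: generate a self-signed cert, then sign a JWT with its private key using only openssl. The claim values ("my-app", the exp timestamp) and filenames are placeholders, not anything the API mandates.

```shell
# 1. Self-signed cert + private key (the key pair the JWK would embed).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout jwt.key -out jwt.crt -subj "/CN=my-app" 2>/dev/null

# 2. Sign a JWT with the cert's private key (RS256 = RSA + SHA-256).
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }
header=$(printf '{"alg":"RS256","typ":"JWT"}' | b64url)
payload=$(printf '{"iss":"my-app","exp":1999999999}' | b64url)
sig=$(printf '%s.%s' "$header" "$payload" \
      | openssl dgst -sha256 -sign jwt.key -binary | b64url)
jwt="$header.$payload.$sig"
echo "$jwt"
```

The point of the self-signed cert here is only to carry the public key; the API operator verifies the JWT against the key material you registered with them, not against a CA chain.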
I am new to GPG, so I may be going about this all wrong. I have some files I would like to encrypt symmetrically using AES256, but I also want to sign them with my GPG key. I need to automate this process, however, so I’m using --batch and passing the symmetric key after --passphrase. Since it needs to be automated, I’ve disabled passphrase caching by following the steps here. However, this means my script has no way to pass in the passphrase for my GPG private key. My script will be piping the files to gpg, so passing the private key’s passphrase via stdin is also not an option.
If there is no reasonable way to pass both the AES password and the private key passphrase, I might consider doing this in two steps: one round of gpg to encrypt symmetrically, then a second round to sign. That seems excessive, though, considering gpg can clearly do this in one step if the private key passphrase is entered interactively.
For reference, I’m using gpg2 exclusively and don’t care about backwards compatibility with gpg 1.x.
Here is the command I’m currently using for encryption. It encrypts and signs as expected, but I can only supply the private key’s passphrase interactively in the text-based dialog "window":
gpg2 --batch --passphrase <my-long-symmetric-password> --sign --symmetric --cipher-algo AES256 plaintext.txt
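For reference, the two-pass fallback mentioned above can be scripted end to end with --pinentry-mode loopback, which lets --passphrase supply the key passphrase non-interactively in gpg 2.1+. Everything below (throwaway key, passphrases, filenames) is placeholder material for illustration, not the real setup.

```shell
# Isolated keyring so this sketch doesn't touch the real one.
export GNUPGHOME=$(mktemp -d)
GPG=$(command -v gpg2 || command -v gpg)   # binary name varies by distro

# Throwaway signing key with a known passphrase.
$GPG --batch --pinentry-mode loopback --passphrase keypass \
     --quick-gen-key "test@example.com" default default never

echo "hello" > plaintext.txt

# Pass 1: sign, supplying the private key's passphrase via loopback.
$GPG --batch --pinentry-mode loopback --passphrase keypass \
     --local-user test@example.com --sign --output signed.gpg plaintext.txt

# Pass 2: symmetrically encrypt the signed output with the AES password.
$GPG --batch --pinentry-mode loopback --passphrase my-long-symmetric-password \
     --symmetric --cipher-algo AES256 --output final.gpg signed.gpg
```

The caveat with the single-pass command is that --passphrase is one flag, so it cannot carry two different secrets; the two-pass shape sidesteps that at the cost of an extra invocation.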
My scenario is a closed system utilising a database-driven website. It’s not supposed to be open to the public, and the URL is hidden.
Currently I use Comodo certs for SSL, but since it’s a closed system I’m wondering whether it makes sense to use a self-signed one instead. Is there any danger in this? I control all the end users’ computers, so I could easily install the cert in their browsers.
This is very different from the question marked as a dupe; it’s not a LAN environment.
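A minimal sketch of the self-signed alternative being weighed here (the hostname is a placeholder): the resulting .crt is what would be pushed into each managed browser's trust store.

```shell
# Self-signed cert with a SAN (browsers ignore the CN alone these days).
openssl req -x509 -newkey rsa:2048 -nodes -days 825 \
  -keyout site.key -out site.crt \
  -subj "/CN=intranet.example" \
  -addext "subjectAltName=DNS:intranet.example" 2>/dev/null

# Confirm subject and SAN.
openssl x509 -in site.crt -noout -text | grep -A1 'Subject Alternative Name'
```

The -addext flag requires OpenSSL 1.1.1 or newer; on older builds the SAN has to go through a config file instead.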
Having successfully integrated my old Web Forms app with an ADFS server, I got to thinking about how the process works as a whole. The old app passes the user to the remote ADFS, and eventually the user arrives back at our server with a signed-in identity of firstname.lastname@example.org, but I’m not entirely clear on whether I’m supposed to trust that it’s right, or whether I’m supposed to try to ensure it’s right.
Suppose a rogue actor at somedomain.com replaces the sign-on at the remote end, or manipulates it in some way, such that my local server ends up being told that email@example.com signed in (when it was actually tom.hacker@somedomain.com), or worse, that firstname.lastname@example.org signed in. What do we do in such situations?
Is this handled already by the auth process, such that we can be sure there are local rules enforcing that the federated server may only return users with some characteristic, such as "must really be a user of somedomain.com, for which you know this identity server is responsible"?
When we hand off authentication to a third party and get back "user X auth’d successfully", do we need to be wary about whether it’s truly user X, and whether the server confirming the identity truly has authority to do so for the given user?
At the moment I’m thinking I should implement my own local check that the announced user matches a pattern, to ensure the federated server isn’t used to break into other domains’ accounts, and also implement 2FA as an extra check that the announced user truly is that person.
I just gathered all the drivers in my system32/drivers folder and checked their certificates (my Windows is up to date, and it’s Windows 10 x64).
I found that many of them have expired certificates, and some are not even signed! (pictures included)
So my questions are:
Is this normal? If not, what should I do, and why have the certificates been allowed to expire?
How are these drivers able to load when their certificate is missing or expired? My system is Windows 10 x64 with Secure Boot enabled; I thought only signed drivers with valid certificates could be loaded.
What is the role of these countersignatures, put simply? I tried reading MSDN and other websites but couldn’t understand why they are needed.
Here are some examples:
WindowsTrustedRTProxy.sys (countersignature is also expired) :
winusb.sys (no certificate) :
I have spent the last few days setting up a FreeRADIUS server with EAP-TLS as the only authentication method. I used this old tutorial for setting up my own CA and generating the certificates, adjusting the older parameters to match the current ones.
So far I have managed to authenticate my iPhone 6 running iOS 11.1.2 as a test device. For that I have:
- Installed the root CA’s certificate (the CA I created) on my iPhone
- Installed a test identity profile on my iPhone, with the name "Test" and a test passphrase, which I converted to a .p12 file
Now when I connect to the network with the freeradius server running in debug mode, I can select EAP-TLS as the auth type and tell it to use the identity certificate. It then prompts me to trust the server’s certificate and I get a successful connection.
I have 2 questions:
- Why do I need to trust the server’s certificate if I have the root CA’s certificate installed? As far as I understand, authentication works as follows:
The server and client each send their respective certificate, which the other party authenticates against the root CA’s certificate. After both checks complete, there is an optional challenge for the client to complete? (I’m not sure about this), and the client is authenticated.
The server doesn’t need to be told to explicitly trust the client certificate, but the client needs to explicitly trust the server’s, even though both are issued and signed by the same root CA and both parties have the certificate needed to verify it.
AFAIK, the whole point of certificate-based authentication is to prevent the MITM attacks that other methods are vulnerable to. If the user initially connects to a spoofed access point and accepts its certificate, the device will refuse the correct RADIUS server and leak the client certificate to the wrong server. This would be avoided if the client could verify the server certificate on its own, without user intervention.
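The mutual-trust expectation described above can be demonstrated with a toy CA: server and client certificates signed by the same root, so either side can verify the other with only the root installed. All names and filenames here are placeholders, not the tutorial's actual output.

```shell
# Throwaway root CA.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out ca.crt -subj "/CN=Test Root CA" 2>/dev/null

# Issue a server cert and a client cert from the same root.
for name in server client; do
  openssl req -newkey rsa:2048 -nodes \
    -keyout $name.key -out $name.csr -subj "/CN=$name" 2>/dev/null
  openssl x509 -req -in $name.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 1 -out $name.crt 2>/dev/null
done

# Either certificate verifies against ca.crt alone.
openssl verify -CAfile ca.crt server.crt client.crt
```

In other words, cryptographically the client already has everything it needs to validate the server; the extra "trust this certificate?" prompt on the phone is a platform policy decision, not a gap in the PKI.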
- There is a username option when selecting the network on the iPhone, which does get matched against a backend SQL database by the FreeRADIUS server; however, regardless of whether that username exists, the server accepts the authentication. This page notes that the username is used in inner and outer authentication, but to me that doesn’t seem to make sense, as there is no inner and outer identity in EAP-TLS. I assume there is a way to tell the RADIUS server to only accept requests that match a username in the database, but if it is not configured that way by default, what is the point? Doesn’t the certificate already uniquely identify the device/user, and what is the point of the username field if anything can be entered?
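For what it’s worth, a hypothetical unlang sketch of the kind of policy being asked about might look like the following in the site’s authorize section (module names and placement are assumptions about a stock FreeRADIUS 3.x config, not tested against one):

```
authorize {
    # ... existing modules ...
    sql
    if (notfound) {
        # outer identity not present in the SQL users table
        reject
    }
    eap
}
```

This rejects requests whose outer username the sql module cannot find, before EAP processing continues.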
I would appreciate an explanation of these concepts; I’m relatively new to certificate-based authentication and RADIUS in general, so I’m still learning the basics.
The goal of this endeavor is to deploy the server in an eduroam-like environment where users can generate certificates for their devices on some website, download the two needed certificates, and get access without having to manually trust anything further.
I should also note that I have complete access to and control over the server and my CA, so I can modify anything as needed; no quirky workarounds required.
There is an existing client (SshClient) configured and running that uses Apache MINA to SSH to one of our internal jump boxes. It currently uses PEM-based authentication. Due to compliance requirements, we have to switch to internally signed certificates (internally we are using HashiCorp Vault as a CA). I’m unable to find any documentation on how to use signed certificates for SSH in Apache MINA. Is it not supported? Will I perhaps have to use another Java SSH library?
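To clarify what the compliance requirement produces, here is an OpenSSH-style signed certificate in miniature, with ssh-keygen standing in for the Vault SSH CA (key names, the certificate ID, and the principal are all placeholders). The resulting user_key-cert.pub is the artifact the SSH client would present alongside the private key.

```shell
# CA key (Vault's SSH secrets engine plays this role in production).
ssh-keygen -q -t ed25519 -N '' -f ca_key -C "internal-ca"

# User key to be certified.
ssh-keygen -q -t ed25519 -N '' -f user_key -C "deploy"

# Sign the user's public key: -I is the cert ID, -n the allowed
# principal, -V the validity window.
ssh-keygen -q -s ca_key -I deploy-cert -n deploy -V +1h user_key.pub

# Inspect the resulting certificate.
ssh-keygen -L -f user_key-cert.pub
```

Whatever Java library is used ultimately has to send this certificate blob as the public-key identity during authentication, which is the capability in question for MINA.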
Access tokens that are passed to the public in an OAuth flow clearly need to be signed using an asymmetric algorithm (e.g. RSA) so that they cannot be altered by the client to gain access to new scopes, etc.
ID tokens, on the other hand, are not used to access any resources on the server, so if the client alters an ID token, it will not gain access to any extra resources. Does that mean it would be okay to use a symmetric (HMAC) signature with a secret shared between the server and a specific client application (like the OAuth client_secret for a given OAuth client)?
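The symmetric trade-off at the heart of the question can be shown in a couple of lines: with HS256, the same shared secret both mints and verifies a signature, so any holder of the client_secret can forge tokens that verify. The secret and signing input below are placeholders.

```shell
secret="client-secret-placeholder"
body="header.payload"   # stand-in for a JWT's signing input

# "Minting" and "verifying" are the same HMAC operation with the same key.
sig_mint=$(printf '%s' "$body" | openssl dgst -sha256 -hmac "$secret")
sig_check=$(printf '%s' "$body" | openssl dgst -sha256 -hmac "$secret")
test "$sig_mint" = "$sig_check" && echo "holder of the secret can re-sign"
```

So the question reduces to whether it is acceptable for the client application itself to be able to mint ID tokens that pass verification.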
It seems that when SQL TDE is implemented, certificates are used to protect the keys that encrypt the data. What would be the benefits of using a CA-signed certificate in this scenario over a self-signed certificate in this case?