In multiple instances lately, I have received the plaintext password I entered (not one generated for me) emailed back to me after signing up. The sites in question have been legitimate small businesses, so I suspect it is a default setting. Is this something I, as a user, should be worried about? In other words, are they not only storing my password in plaintext but also sharing it with my mail provider? Here is a link to one example screenshot, too large to fit in the post.
The named hardware dongles (or at least several models of them) allow me to store PGP secret keys.
Suppose I am using such a secret key to sign data (it doesn’t matter what). As I understand it, the operation happens on the hardware itself and the PGP secret key never leaves the device.
Now suppose I am signing several GiB of data. Does all of that data get squeezed through the hardware, making the dongle a bottleneck, or is the signature effectively computed over a hash of the data, with the hash computed on my host machine?
- When signing large amounts of data, will that data pass through the hardware dongle in some way, or will its hash be computed on the host so that the signature merely attests to the validity of the hash?
- Does the involvement of `gpg-agent` change anything? I.e. suppose I am signing content on `host1`, which has the hardware dongle with the PGP secret key plugged in.
- Suppose I am encrypting data to some public key and subsequently signing it. Does this change anything or create a bottleneck?
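For intuition on the first question: OpenPGP signatures are computed over a message digest, which the host can compute incrementally, so only a fixed-size digest would ever need to reach the signing hardware. A sketch in Python of that division of labour (`token_sign` is a hypothetical stand-in for the on-device private-key operation, not a real API):

```python
import hashlib

def host_side_digest(chunks):
    """Stream arbitrarily large data through a host-side hash."""
    h = hashlib.sha256()
    for chunk in chunks:          # e.g. 1 MiB reads from a multi-GiB file
        h.update(chunk)
    return h.digest()             # fixed 32 bytes, regardless of input size

def token_sign(digest):
    """Hypothetical stand-in for the dongle's private-key operation.
    Only the 32-byte digest would cross the USB link, not the gigabytes."""
    assert len(digest) == 32
    return b"<signature over %s>" % digest.hex().encode()

data = (b"x" * 1024 for _ in range(1000))   # simulate ~1 MiB of streamed data
sig = token_sign(host_side_digest(data))
print(len(hashlib.sha256().digest()))       # 32 -- all the token needs to see
```

Whether gpg actually does it this way for a given card model is exactly what the question asks; the sketch only shows why the bulk data need not be a bottleneck in principle.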
I am working on an application where the general approach is to sign most response payloads with an HMAC, but there are a couple of endpoints that return a `text/event-stream` response. What is the common approach in such cases?
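One pattern (an assumption on my part, not necessarily *the* common approach) is to MAC each event individually and ship the tag inside the event, since an open-ended stream has no single final payload to sign. A minimal sketch with Python's stdlib `hmac`; the key and field names are placeholders:

```python
import hmac, hashlib, json

SECRET = b"demo-shared-secret"   # hypothetical key shared with the verifier

def mac_event(event_id: int, data: str) -> str:
    """Tag one SSE event; the id is bound in to resist replay/reordering."""
    msg = f"{event_id}:{data}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def format_sse(event_id: int, data: str) -> str:
    """Render a text/event-stream event carrying its own MAC."""
    payload = json.dumps({"data": data, "mac": mac_event(event_id, data)})
    return f"id: {event_id}\ndata: {payload}\n\n"

print(format_sse(1, "hello"))
```

The per-event id plus MAC gives the client something to verify incrementally, at the cost of one MAC computation per event.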
Does the OpenSSL `req` command have an OpenSSL configuration file option equivalent to the `new_certs_dir` option? I’d like to establish a default directory for all Certificate Signing Requests ("CSRs") that are created using the `req` command.
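For context, `new_certs_dir` lives under the `ca` machinery in `openssl.cnf`, not under `[ req ]`; as far as I know, `req` has no direct equivalent, and the CSR destination is normally given per-invocation with `-out`. A sketch of the standard layout for comparison:

```
# openssl.cnf (excerpt, sketch)
[ ca ]
default_ca      = CA_default

[ CA_default ]
new_certs_dir   = ./newcerts      # where `openssl ca` drops issued certs

[ req ]
# req options cover things like default_bits and distinguished_name,
# but (to my knowledge) not a default output directory for CSRs
default_bits    = 2048
```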
Here is the use case:
- Router vendor to support 3rd-party app hosting on their routers, with apps digitally signed by the router vendor only.
- Router to support skipping signature verification for app developers/vendors during the dev phase.
- Router to enforce signature verification in production mode.
- Router vendor to build this solution on a production router image, so they don’t need to provide a separate dev/test router image to app vendors during the dev phase.
What is a secure way to support this on embedded systems like routers without the router vendor having to create two separate sets of images for dev and production?
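One way to frame the requirement in code (a sketch under my own assumptions; an HMAC stands in for the vendor's real asymmetric signature scheme, and the dev-mode flag is assumed to be an explicitly authorized device state rather than a build-time difference):

```python
import hmac, hashlib

VENDOR_KEY = b"stand-in-for-vendor-signing-key"  # hypothetical

def signature_valid(app_bytes: bytes, sig: bytes) -> bool:
    """HMAC used purely as a stand-in for real RSA/ECDSA verification."""
    expected = hmac.new(VENDOR_KEY, app_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

def may_install(app_bytes, sig, dev_mode_unlocked: bool) -> bool:
    # Same image for dev and prod: the verifier is always compiled in;
    # only an explicitly authorized dev-mode state bypasses it.
    if dev_mode_unlocked:
        return True
    return sig is not None and signature_valid(app_bytes, sig)

app = b"example app image"
good = hmac.new(VENDOR_KEY, app, hashlib.sha256).digest()
print(may_install(app, good, dev_mode_unlocked=False))    # True
print(may_install(app, b"bogus", dev_mode_unlocked=False))  # False
print(may_install(app, None, dev_mode_unlocked=True))     # True
```

The hard security question the sketch leaves open is exactly how `dev_mode_unlocked` gets set and authenticated on the device, which is the crux of the original question.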
Some companies install corporate VPNs which also come with a root certificate installed on all employees’ machines. This allows encrypted traffic to be decrypted by interception technology at the VPN. Some companies even have to do this to meet certain auditing and compliance requirements.
Is it possible for a website to set up its certificate chain in such a way that, if the root certificate that signed it is replaced by the corporate VPN’s root certificate, the website either fails to load or the substitution is prevented entirely in the first place?
Or, once a root certificate is installed on a machine, is it impossible to prevent TLS interception by a MITM party?
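For context on what an application (as opposed to a browser loading an arbitrary site) can do: certificate pinning compares the presented certificate against a known fingerprint rather than walking the OS trust store. A minimal sketch of the comparison, with fabricated byte strings standing in for real DER-encoded certificates:

```python
import hashlib

# Hypothetical pin: SHA-256 fingerprint of the genuine server certificate
PINNED_SHA256 = hashlib.sha256(b"the real server certificate, DER-encoded").hexdigest()

def cert_matches_pin(der_bytes: bytes) -> bool:
    """Compare the presented certificate's fingerprint against the pin.
    An interception root can mint a chain the OS trusts, but it cannot
    produce a certificate whose fingerprint equals the pinned one."""
    return hashlib.sha256(der_bytes).hexdigest() == PINNED_SHA256

print(cert_matches_pin(b"the real server certificate, DER-encoded"))      # True
print(cert_matches_pin(b"certificate minted by the interception proxy"))  # False
```

Whether a plain website can impose this on visiting browsers is a different matter: HTTP Public Key Pinning existed for that purpose but has been removed from major browsers.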
Looking over the spec for JSON Web Signature / JWS, https://tools.ietf.org/html/rfc7515, I realise that it doesn’t seem to specify a method for signing HTTP headers.
Is there some way to use JWS to verify that HTTP headers were set by the same party that signed the JWS payload? (Or perhaps suggest an alternative mechanism.)
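One workaround (my own assumption, not something RFC 7515 prescribes) is to copy the headers you care about into the signed payload and have the verifier compare them against the actual HTTP headers. A minimal compact-JWS sketch with HS256 using only the stdlib:

```python
import base64, hashlib, hmac, json

KEY = b"shared-hmac-key"  # hypothetical shared secret

def b64url(b: bytes) -> bytes:
    return base64.urlsafe_b64encode(b).rstrip(b"=")

def jws_hs256(payload: dict) -> str:
    """Minimal compact JWS with alg HS256 (a sketch, not a full RFC 7515
    implementation -- no crit handling, no key ids)."""
    header = b64url(json.dumps({"alg": "HS256"}).encode())
    body = b64url(json.dumps(payload, sort_keys=True).encode())
    sig = b64url(hmac.new(KEY, header + b"." + body, hashlib.sha256).digest())
    return b".".join([header, body, sig]).decode()

# Fold the relevant HTTP headers into the signed payload itself
http_headers = {"content-type": "application/json", "x-request-id": "abc123"}
token = jws_hs256({"msg": "hello", "hdr": http_headers})
print(token.count("."))  # 2 -- the three dot-separated JWS segments
```

The verifier recomputes the MAC over the first two segments, then checks that the `hdr` claim matches the headers actually received.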
Cross-posting from SO to here for better reach: https://stackoverflow.com/questions/62137112/certificates-for-authentication-and-signing
I have a client-server scenario.
I have both a thick client and a thin client (browser) which communicate with my server.
My thick client uses an X.509 system certificate for the client-certificate authentication scenario when communicating with the server.
This certificate is also used to generate a signed URL (with an expiration time) for my thin client to communicate with my server, which serves integrity and authorization purposes. I also have a token-based approach for authentication in this case.
Now I want to completely move my authentication mechanism to an OAuth-based flow, using client credentials or the authorization-code grant.
I understand that authentication and authorization can be moved to OAuth-based communication. But how do I move my signing (digital-signature) use case from certificates to OAuth?
I don’t think there is any way other than a certificate-based PKI mechanism for digital signing. Can the private and public keys be distributed other than via certificates?
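For what it's worth, OAuth/OIDC ecosystems do routinely distribute public keys without X.509 certificates: verification keys are published as a JWK Set fetched from a well-known URL, with the key material expressed directly as JSON parameters. A sketch of the shape (all values below are fabricated placeholders, not a real key):

```python
# Hypothetical JWKS document, as served from e.g. /.well-known/jwks.json
jwks = {
    "keys": [
        {
            "kty": "RSA",
            "kid": "signing-key-1",
            "use": "sig",
            "alg": "RS256",
            "n": "<base64url-modulus-placeholder>",
            "e": "AQAB",
        }
    ]
}

def find_key(jwks: dict, kid: str):
    """Look up the verification key that a token's header points at via kid."""
    return next((k for k in jwks["keys"] if k["kid"] == kid), None)

print(find_key(jwks, "signing-key-1")["kty"])  # RSA
```

So key distribution without certificates is possible; what certificates add on top is a third-party attestation binding the key to an identity.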
Related to this (too broad) question: How to implement my PKI?
I have a self-signed CA. I would like to create a CA (`ca1`) with limited power derived from that first CA.
`ca1` should only be able to sign certificates for `*.foo.com` and for
From this question, I found out that the Name Constraints extension is probably what I want.
The key for `ca1` is already created. I already have an incomplete command for creating the request:
libressl req -new -sha512 -key ca1.foo.key.pem -out ca1.foo.csr.pem
What should I add to that line to limit `ca1`’s power to what I want?
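For reference, the usual way to attach extensions to a CSR is a config file passed with `-config` (a sketch, untested; note that whether the requested extensions actually survive into the issued certificate depends on the parent CA's `copy_extensions` setting, so the constraint is often set by the signer instead):

```
# ca1.cnf (sketch)
[ req ]
distinguished_name = req_dn
req_extensions     = v3_ca1

[ req_dn ]
# subject-name prompts/defaults go here

[ v3_ca1 ]
basicConstraints = critical, CA:true
# "DNS:foo.com" permits foo.com and all of its subdomains
nameConstraints  = critical, permitted;DNS:foo.com

# the command would then become:
# libressl req -new -sha512 -config ca1.cnf -key ca1.foo.key.pem -out ca1.foo.csr.pem
```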
It is my understanding that when a file is signed with gpg, the signature provides two guarantees: proof of ownership of the signing key and the integrity of the signed data.
Let’s say I have a doc.txt I want to sign, so I use:
gpg --output doc.sig --detach-sig doc.txt
But I see that many software distributions use a slightly different scheme to provide the same guarantees, with an extra step.
Instead of signing doc.txt directly with gpg, a checksum of doc.txt is created and then this checksum file is signed with gpg.
So why add this extra step with the checksum file?
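For concreteness, here is the checksum side of that scheme sketched in Python (artifact names and contents are made up; the gpg step is only indicated in a comment, since the signature is then made over the single checksum file):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical release artifacts
artifacts = {
    "installer.iso": b"iso contents...",
    "installer.iso.torrent": b"torrent contents...",
}

# Build a SHA256SUMS-style file covering every artifact
sums = "".join(f"{sha256_hex(d)}  {name}\n" for name, d in artifacts.items())
print(sums)

# The project then signs only this one small file, e.g.:
#   gpg --output SHA256SUMS.sig --detach-sig SHA256SUMS
# One signature covers all artifacts, and mirrors only need to carry the
# tiny SHA256SUMS file plus its signature alongside the large downloads.
```

One visible property of the scheme: a single signature vouches for arbitrarily many files, and users can check a download's integrity with common checksum tools even when they don't have gpg at hand.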