I’ve experimented a lot with GPG over the last couple of days, and one issue persists:
Someone signed my GPG key and published it to keyserver X. I can see on the web page that their signature is shown under my key, so that worked. But my local GPG doesn’t pick it up:
gpg --keyserver [X] --refresh-keys [myKey]
gpg: refreshing 1 key from [X]
gpg: key [myKey]: [...] not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
If I check my signatures, that new signature doesn’t appear.
gpg --list-sigs [myKey]
=> only outputs the signatures I already had before (either manually imported or signed by other keys that belong to me)
Also, when signing a key and performing a --send-keys, it throws no errors, but the key just never arrives on the servers. This only happens with some keyservers. I read that the relevant ports might be blocked by a firewall, but I didn’t find a concise answer on what to check and how to fix it.
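If a firewall is suspected, a first step is simply to check whether the keyserver ports are reachable at all (HKP traditionally uses TCP 11371, HKPS uses 443). A minimal probe, with a placeholder hostname you would swap for your keyserver:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage (hostname is a placeholder):
#   port_open("keyserver.example.org", 11371)  # classic HKP
#   port_open("keyserver.example.org", 443)    # HKPS
```

If 11371 is blocked but 443 is open, pointing gpg at an hkps:// keyserver URL is a common workaround.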
Thanks in advance!
PS: Sometimes the web interfaces of the keyservers I’m using just load forever, or are generally very slow or unreachable. Is this normal?
I am evaluating email providers for a HIPAA-compliant web application. I understand that if an email contains any form of PHI, it would violate the HIPAA rules, especially if the email is not encrypted.
What if the email being sent contains only a login link and nothing else? Would that still be a HIPAA violation? I am concerned because email addresses appear on the list of PHI identifiers. So would the recipient’s email address itself be considered PHI and violate HIPAA?
Banks offer to send alerts for each transaction happening on a customer’s account. My bank offers three options: SMS alerts, email alerts, and push alerts in its own app. Only SMS alerts are free of charge; the other options incur a very small fee each time a transaction happens.
Could scammers make much out of transaction alerts containing the transaction’s IBAN, the amount, and the sender and recipient?
Besides that, what is the wisest choice among the options? Email notifications could be intercepted if scammers gain control of my email account, and I might not receive emails while traveling in an area with low reception, whereas SMS still come through.
SMS might be problematic if my phone gets stolen along with the SIM card. Then there’s no way to be informed (but also no good way to authenticate myself to the bank to halt all transactions).
The bank app is not ideal because it relies on me having a smartphone and an internet connection. The bank states that email and SMS will be sent unencrypted (as usual); for the app option this isn’t made explicit, but it probably will be too.
Which of these options makes scam scenarios least likely?
My old Skype account, which I haven’t used in over 4 years, is sending my friends Baidu spam links. I tried logging in but couldn’t remember the password. I checked my old email on haveibeenpwned and found out it was in a 2017 data breach of the Yu-Gi-Oh Dueling Network. Is this Yu-Gi-Oh breach still available online? I can’t remember my old email password either. I don’t know if that’s how the hacker accessed my Skype, but I can’t think of another way. PS: that email was my main email from ages 12 to 19, so I’m worried about this leak. I really don’t know what to do… Thanks
I am by no means a security engineer, and I have barely started my journey as a web developer. I’m using a Python package known as Django for my backend and React.js for my frontend. Recently I incorporated django-channels, a package that gives me the ability to use WebSockets in my project. Since I have decoupled my frontend and backend, the basis of authentication I’m using is tokens (I will look into using JWT).
const path = wsStart + 'localhost:8000' + loc.pathname
document.cookie = 'authorization=' + token + ';'
this.socketRef = new WebSocket(path)
Doing this allows me to extract the token on my backend by utilizing a customized middleware.
import re

from channels.auth import AuthMiddlewareStack
from channels.db import database_sync_to_async
from django.contrib.auth.models import AnonymousUser
from django.db import close_old_connections
# assuming DRF's token model; adjust if your Token comes from elsewhere
from rest_framework.authtoken.models import Token


@database_sync_to_async
def get_user(token_key):
    try:
        return Token.objects.get(key=token_key).user
    except Token.DoesNotExist:
        return AnonymousUser()


class TokenAuthMiddleware:
    """
    Token authorization middleware for Django Channels 2
    see: https://channels.readthedocs.io/en/latest/topics/authentication.html#custom-authentication
    """

    def __init__(self, inner):
        self.inner = inner

    def __call__(self, scope):
        return TokenAuthMiddlewareInstance(scope, self)


class TokenAuthMiddlewareInstance:
    def __init__(self, scope, middleware):
        self.middleware = middleware
        self.scope = dict(scope)
        self.inner = self.middleware.inner

    async def __call__(self, receive, send):
        close_old_connections()
        headers = dict(self.scope["headers"])
        print(headers[b"cookie"])
        if b"authorization" in headers[b"cookie"]:
            print('still good here')
            cookies = headers[b"cookie"].decode()
            token_key = re.search("authorization=(.*)(; )?", cookies).group(1)
            if token_key:
                self.scope["user"] = await get_user(token_key)
        inner = self.inner(self.scope)
        return await inner(receive, send)


TokenAuthMiddlewareStack = lambda inner: TokenAuthMiddleware(AuthMiddlewareStack(inner))
However, this has raised some security red flags (or so I’m told).
Therefore I wish to extend these questions to the security veterans out there:
- Is this methodology of sending token authentication information via cookie headers safe?
- Is my implementation of this method safe?
- Is there a way to secure this even further?
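One concrete nit in the middleware above: the regex `authorization=(.*)(; )?` is greedy, so if any other cookies follow `authorization=` in the header they end up inside the captured token. A sketch of more robust parsing using only the standard library (the function name is mine, not part of Channels):

```python
from http.cookies import SimpleCookie

def token_from_cookie_header(raw_cookie: bytes):
    """Parse the auth token out of a raw Cookie header value.

    Unlike the greedy regex, this will not swallow cookies that
    happen to come after "authorization=" in the header.
    """
    jar = SimpleCookie()
    jar.load(raw_cookie.decode())
    morsel = jar.get("authorization")
    return morsel.value if morsel else None
```

This would replace the `re.search(...)` step inside `TokenAuthMiddlewareInstance.__call__`.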
In the last months I was very often in a position where I needed to send an email attachment with sensitive content to someone whom I didn’t know well personally (so I couldn’t talk them through setting up encryption), but about whom I knew that they had little IT background and barely knew how to operate a mail client. I’m not an expert myself, but I do know there is such a thing as PGP, and with some time and pain I can get it to work.
(Imagine the receiver to be a non-tech person from a big company who has little to no time to deal with encryption, and me being a non-IT engineer who is technically minded but does not have deeper IT/infosec knowledge and wants to protect his privacy as much as possible.)
Because it is not clear to me that the email I send will travel over TLS between servers (and it is also not clear to me why I should trust those intermediate servers), it seems a very bad idea to send a PDF with sensitive content as a standard mail attachment.
Out of desperation I have resorted to uploading the PDF to a file-sharing platform (which we shall assume to be trusted, so that my data is safe there). Then I send the download link to that file via (unencrypted) mail. The link has an expiration date and is password-protected, and I’m sending the password along with the link; this may seem stupid at first glance, but please read along.
In this way the receiver of the email can still easily access the file without further IT knowledge on his side, but my privacy is slightly enhanced: I know that if someone were after me and intercepting my mail, it would still be very easy for him to get his hands on my PDF, provided he is fast enough to download it before the link expires (which is usually a few days). But my threat model is not about protecting against that type of attack; it is about protecting myself against automatic data collection and hoarding (think, e.g., government authorities snooping on backbone cables).
I would assume that, since getting the PDF involves some human action, such as filling in a password, even if my data is collected it will take too long until a human looks at it, and by that time the link will have expired.
My question is:
Is this a good solution for the very moderate threat model described above? My file-sharing platform doesn’t use CAPTCHAs when one enters a password to download a file. I assume that if it did, I would be 100% secure against such automated data collection: even if the collection software also automatically extracted the password from the mail (which I doubt would happen, because if you hoard millions of mails that have passwords in them, running automated NLP over all of them to pick out the exact string that is the password would need an enormous amount of computational power, perhaps more than is available), it could not get past a CAPTCHA.
Do you know of any other way to securely send an email attachment (including any improvements to my solution above), so that the receiver can still download it with minimal IT knowledge and time investment?
(Note that there was another question here regarding sending of links in mails, but my use case is different and more specific.)
I recently started learning about pentesting, and I tried to see the request body of a website that I use frequently. This is what I saw:
(“senha” means “password” in Portuguese and “entrar” means “to enter”). My question is: is this a correct approach? I mean, can someone intercept this data and get my password? And how would that be possible?
I am building a website for use in the state of Ohio where users enter the last 4 digits of their SSN or their driver’s license number. This data is submitted to the web server, which generates a PDF with the information included on it. The PDF is then emailed to the user.
Are there security standards that govern how this type of sensitive data is handled, especially concerning email?
Also, are there potential legal issues or concerns in building an application like this?
I’m working on an API that I’d like to be accessible internally by other servers as well as by devices, both of which I consider confidential private clients. Devices count as private clients because the client_secret is stored in an encrypted area that prevents unauthorised readout and modification (even though nothing is ever bulletproof).
For auth, I’d like to use OAuth2 with the client_credentials grant, which seems to be a very good fit for these use cases. However, I’m wondering how flexible the standard is regarding sending the client_secret.
Basically the RFC doesn’t say much about sending your client id / client secret; it just offers an example here: https://tools.ietf.org/html/rfc6749#section-4.4.2, which simply uses the following header: Authorization: Basic base64(client_id:client_secret)
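For reference, the header from the RFC example can be built like this (the credentials are placeholders):

```python
import base64

def basic_auth_header(client_id: str, client_secret: str) -> str:
    # Authorization: Basic base64(client_id:client_secret),
    # as in the RFC 6749 client_credentials example.
    raw = f"{client_id}:{client_secret}".encode()
    return "Basic " + base64.b64encode(raw).decode()
```

Strictly speaking, RFC 6749 §2.3.1 says the client id and secret should each be form-urlencoded before being combined, a detail this sketch (and many real implementations) skips.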
In my opinion, it could be made slightly more secure by computing a hash:
- the client requests a random code from the server by sending its client_id
- the server replies with a random code (valid for about 10 minutes, just like an authorization code)
- the client computes hash = sha256(client_id, client_secret, code) and asks for a token
- the server computes the same hash, compares it with the client’s hash, and sends an access token if they match
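To make the discussion concrete, the steps above could be sketched like this. To be clear, this is the question’s own non-standard scheme, not OAuth2, and all names are illustrative; challenge expiry and storage are omitted:

```python
import hashlib
import hmac
import secrets

def make_challenge() -> str:
    # Server side: issue a short-lived random code (expiry handling omitted).
    return secrets.token_hex(16)

def client_proof(client_id: str, client_secret: str, code: str) -> str:
    # Client side: hash the credentials together with the challenge.
    msg = f"{client_id}:{client_secret}:{code}".encode()
    return hashlib.sha256(msg).hexdigest()

def server_verify(client_id: str, stored_secret: str, code: str, proof: str) -> bool:
    # Server side: recompute the hash and compare in constant time.
    expected = client_proof(client_id, stored_secret, code)
    return hmac.compare_digest(expected, proof)
```

Note that this is essentially a hand-rolled challenge/response. Within the OAuth2 ecosystem, the JWT client-assertion mechanism (RFC 7523) already standardizes the idea of proving possession of a credential without sending it in the request.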
It would add an extra layer of security in case HTTPS is somehow broken, or in case anyone is able to read the header.
However, it doesn’t seem very OAuth2-compliant, and I don’t really like re-inventing a standard. Another option would be to create my own extension grant; I’m just wondering if it’s really worth it, as no one seems to have done this.
Also, if I want to share my API with a 3rd-party app, I’m not sure it’s a good idea to force them into using something that isn’t really standard.