My name is Jose, and I work for an e-commerce payment gateway in Europe.
I have seen on this forum that many of you have had problems with your payment gateways.
I would like to open this thread so you can share your problems and I can help you solve them.
Thank you so much.
The company I work at uses Zscaler to restrict access to certain websites.
Earlier today, I tried to visit pastebin.com, but got the error message in the picture below:
When I tried to google why Pastebin is considered a high-risk service, I didn't find much, except for one blog post about certain hacker groups pasting sensitive data to the site.
This alone doesn't seem like a very strong reason to block the site, since there are plenty of other ways to make information public. What am I missing here?
Over the last few days I've received multiple password-recovery attempts for a WordPress user. The user did not initiate these attempts.
I'm blocking the IPs on the server, but I don't see what the attacker's goal is. I checked the emails the user receives, and they contain a valid password-reset link (so it's not a phishing attempt).
So I don't really understand what the attacker is trying to achieve with these password-recovery requests. Or are they just probing that page for vulnerabilities?
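If you want to automate those per-IP blocks, one common approach is a fail2ban jail watching the web server's access log. Everything below (the jail name, filter regex, log path, and thresholds) is an assumed sketch to adapt to your own setup and log format:

```ini
# /etc/fail2ban/filter.d/wordpress-lostpassword.conf (hypothetical filter)
[Definition]
failregex = ^<HOST> .* "(GET|POST) /wp-login\.php\?action=lostpassword

# /etc/fail2ban/jail.local (hypothetical jail) -- ban an IP after
# 5 password-reset requests within 10 minutes, for 24 hours
[wordpress-lostpassword]
enabled  = true
port     = http,https
filter   = wordpress-lostpassword
logpath  = /var/log/apache2/access.log
maxretry = 5
findtime = 600
bantime  = 86400
```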
I'm a listener of the podcast "Security Now", where Steve Gibson, a security expert, often claims that there is no reason to limit the number of characters a user can use in their password when creating an account on a website. I have never understood how it is even technically possible to allow an unlimited number of characters, or how doing so could avoid being exploited to create a sort of buffer overflow.
I found a related question here, but mine is slightly different. The author of the other question explicitly mentions that they understand why setting a maximum length of 100,000,000 characters would be a problem. I actually want to know why it would be a problem. Is it, as I just suggested, because of buffer overflows? But to be vulnerable to a buffer overflow, shouldn't there be a fixed boundary that can be exceeded in the first place, so that if you didn't limit the number of characters, the risk wouldn't even exist? And if the concern is starving a computer's RAM or other resources, could even a very large password really do that?
So, I guess it is possible not to limit the number of characters in a password: all you'd have to do is omit the maxlength attribute and skip server-side length validation. Would that be the secure way to do it? And if so, is there any danger in allowing passwords of unlimited length? On the other hand, NIST recommends that developers limit passwords to 256 characters. If they take the time to recommend a limit, does that mean there has to be one?
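One piece of the puzzle I can illustrate myself: if the server hashes the password (as it should), the stored value has a fixed size regardless of the password's length, so storage cannot overflow. The only cost that grows with length is the time and memory spent hashing, which is presumably why a generous upper bound is still applied. A minimal sketch:

```python
import hashlib

# A password of any length hashes to a fixed-size digest, so the
# stored credential never grows with the input. The remaining cost
# of a huge password is the CPU/RAM spent hashing it, which is the
# usual argument for capping length at some generous bound.
for n in (8, 64, 1_000_000):
    digest = hashlib.sha256(("x" * n).encode()).hexdigest()
    print(n, len(digest))  # the digest is always 64 hex characters
```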
I'm a fan of Visual Studio and recently installed VSC on my Linux Mint machine. However, I recently did a deep dive on Richard Stallman's site, reading about just how evil and depraved all the big tech companies are, including Microsoft (not that I didn't already know these companies are evil; I just hadn't realised the extent of their depravity).
Now I'm feeling uneasy about having VSC on my machine. I'm security- and privacy-conscious and often use the Tor Browser. I read recently that on Windows 10 machines, opening certain files through the Tor Browser reveals your real IP address to Microsoft. Given that, I'm wary of having Microsoft software installed on my machine.
What are the risks if I keep VSC installed on my Linux Mint 19 machine? How can I confirm that the program is "sandboxed"? I don't want it spying on me. What information is it known to transmit to Microsoft, and what other information could it be collecting without my knowledge?
Forgive me if this is a stupid question. I'm not sure how to conduct an independent investigation, which is why I'm asking the experts here. If no one knows a direct answer, I will still upvote pointers on how I could investigate it myself. (I haven't confirmed this, but knowing Microsoft, VSC is closed-source, which obviously makes the job much harder.)
Some features are not yet available on the web platform and thus require cooperation with a native application in order to provide them. One method for a web application and a native application to communicate with each other is a custom protocol handler.
For instance, the web application can call "mycustomproto://some/params", where "mycustomproto" must first be registered with the operating system as a valid URI protocol. On Windows, this is done in the registry. A few keys/subkeys/values must be added, but only one actually specifies the executable and its parameter(s).
Note that once the protocol handler is registered with the operating system, it can be launched by any website that knows of its existence, subjecting it to potential abuse.
Example Windows registry value for this purpose
All of the examples that I’ve found documenting this show the following:
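A typical registration along these lines (the protocol name and executable path here are placeholders):

```reg
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\mycustomproto]
@="URL:mycustomproto"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\mycustomproto\shell\open\command]
@="\"C:\\Program Files\\MyApp\\myapp.exe\" \"%1\""
```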
Assuming that the registered handler (e.g. "myapp.exe") has zero possible security flaws, is the above example registry value sufficient for ensuring that malicious websites are unable to piggyback additional commands and/or arguments?
- For the purpose of this question, please assume that the protocol handler (e.g. "myapp.exe") is incapable of exposing vulnerabilities of its own – it’s idle – it launches, does nothing, and quits. This question is specifically related to the browser and/or OS and the "execution" of this registry value.
- Can malicious actors somehow escape the "%1" double quotes and cause the browser and/or OS to run additional commands?
- Similarly, can malicious actors somehow send additional arguments to the protocol handler? Or does the "%1" ensure that the handler will only ever receive a single argument?
- If this registry value is insufficient to only ever call the protocol handler (and nothing more) with a single argument, is there a better way?
I'm wondering whether git commit metadata can shed light on potential risk signals or vulnerabilities.
Henry Hinnefeld has investigated this here, but his approach seems to detect vulnerabilities that have already been spotted by other developers.
Can anyone think of how metadata alone could detect vulnerabilities that have never been found before?
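To make the question concrete, here is the kind of thing I have in mind: deriving features from commit metadata alone, with no access to the diffs. The signal below (late-night commits as a proxy for rushed changes) is purely a hypothetical example, and the log format mimics `git log --pretty=%H|%an|%aI`:

```python
from datetime import datetime

# Hypothetical sample in the shape of `git log --pretty=%H|%an|%aI`
sample_log = """\
a1b2c3|alice|2021-03-01T02:47:00+00:00
d4e5f6|bob|2021-03-01T14:10:00+00:00
0aa1bb|alice|2021-03-02T03:05:00+00:00"""

def late_night_ratio(log: str) -> float:
    """Fraction of commits made between midnight and 5 a.m. --
    a weak, speculative proxy for fatigued or rushed changes."""
    hours = [datetime.fromisoformat(line.split("|")[2]).hour
             for line in log.splitlines()]
    return sum(1 for h in hours if h < 5) / len(hours)

print(late_night_ratio(sample_log))  # 2 of 3 commits -> 0.666...
```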
Should one perform a virus scan on a file (using ClamAV) before attempting to determine its MIME/content type (using Apache Tika), or does the order not matter?
An employer (someone's employer) issued an Android app and requires all employees to install it. During installation, the app requests access to all of the phone's resources, and it will not work if access is declined.
The official purpose of the app is to send internal requests about work-related matters. But who knows, maybe the employer has additional goals.
What are the risks for employees installing such an app on a personal phone? What might the employer see on an employee's phone? Could it see the employee's location? What files or personal data could it access?
What can an employee do to restrict the employer’s access?
To be clear, the question is not about using a separate phone for each app; it is about a third-party app on a personal phone.
One of the companies I worked for used client-side hashing to reduce the risk of passwords ending up in server logs. Is this a good reason to implement client-side hashing?
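To illustrate the scheme I mean (a sketch under my own assumptions, not their exact implementation): the client sends H(password) instead of the plaintext, so raw request logs never contain the password itself, and the server must still salt-and-hash the received value, since otherwise the pre-hash becomes a password-equivalent secret.

```python
import hashlib
import hmac
import os

def client_prehash(password: str) -> str:
    # Deterministic and unsalted: this only hides the plaintext
    # from logs; it does NOT replace server-side hashing.
    return hashlib.sha256(password.encode()).hexdigest()

def server_store(prehash: str, salt: bytes = None):
    # The server treats the pre-hash as "the password" and applies
    # a salted, slow hash before storing it.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", prehash.encode(), salt, 100_000)
    return salt, digest

def server_verify(prehash: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", prehash.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

Usage: on signup the server calls `server_store(client_prehash(pw))` and keeps `(salt, digest)`; on login it calls `server_verify` with the freshly submitted pre-hash.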