SPF record does not prevent sender spoofing

I am a bug hunter and still new to bug bounty programs. I’ve reached a topic that I can’t get past without understanding it first.

I used one of the popular online SPF record checkers, and the result of the test was that the domain already has an SPF record.


Yet I can still send an email that appears to come from exactly their domain!

So, does an SPF record really prevent email spoofing attacks? If it does, why can I still send email as their exact domain? And if it doesn’t, how can we actually prevent email spoofing attacks?

I may also be misunderstanding the difference between an SPF misconfiguration and a missing SPF record: do they mean the same thing? Which is the situation described above, a misconfiguration or a missing SPF record?
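For reference, an SPF record is a DNS TXT record whose final qualifier sets the policy; whether spoofed mail is actually rejected depends on that qualifier and on the receiving server enforcing it. A hypothetical example (domain and include host are made up):

```
example.com.  IN TXT  "v=spf1 include:spf.mailhost.example ~all"   ; soft fail: failing mail is typically accepted but flagged
example.com.  IN TXT  "v=spf1 include:spf.mailhost.example -all"   ; hard fail: receivers are asked to reject failing mail
```

Note also that SPF validates only the SMTP envelope sender (MAIL FROM), not the From: header users see, which is why DMARC is usually needed alongside it.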


Preventing HTTPS Replay Attacks

I’ve read here that HTTPS replay attacks aren’t possible via MITM, but I want to be sure that doesn’t mean HTTPS replay attacks aren’t possible at all. I want to know whether I have to implement my own obscure method to temporarily prevent the inevitable, or whether something like this already exists.

Suppose the attacker is the client. They have access to the client and are communicating with the server legitimately, analyzing the traffic. Therefore the attacker has access to the client’s private key (or at least the ability to replicate its generation). What’s stopping them from simply replaying the traffic through a fake client after sniffing the payload before it’s encrypted? That is, running it through the client to encrypt it and then sending it themselves.
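For what it’s worth, the application-level pattern usually suggested for this is making each request single-use with a nonce and a timestamp, independent of TLS. A minimal Python sketch (key handling and names are illustrative, not a hardened design):

```python
import hashlib
import hmac
import secrets
import time

SECRET = b"per-session-key"   # hypothetical shared key, e.g. derived at login
SEEN_NONCES = set()           # in production: a shared store with expiry
MAX_SKEW = 30                 # seconds a signed request stays valid

def sign_request(body: bytes) -> dict:
    """Client side: attach a fresh nonce and timestamp, then MAC everything."""
    nonce = secrets.token_hex(16)
    ts = str(int(time.time()))
    mac = hmac.new(SECRET, nonce.encode() + ts.encode() + body,
                   hashlib.sha256).hexdigest()
    return {"body": body, "nonce": nonce, "ts": ts, "mac": mac}

def accept_request(req: dict) -> bool:
    """Server side: reject stale timestamps, reused nonces, or bad MACs."""
    if abs(time.time() - int(req["ts"])) > MAX_SKEW:
        return False                      # too old: replay window closed
    if req["nonce"] in SEEN_NONCES:
        return False                      # nonce already seen: replay
    expected = hmac.new(SECRET, req["nonce"].encode() + req["ts"].encode()
                        + req["body"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, req["mac"]):
        return False                      # tampered payload or signature
    SEEN_NONCES.add(req["nonce"])         # burn the nonce
    return True
```

This stops replay of captured requests, though in my threat model (attacker owns the client) it can’t stop them generating fresh valid requests.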

My client relies on the system’s hardware information to validate one-user-per-subscription, and I want to know what all the weak points of this system are. Spoofing it seems really easy if the user generates it normally once and then spoofs it every time after.

Custom permalink structure preventing 404s

I am trying to find out what impact setting a custom structure prefix in permalinks has on 404s.

I have a custom structure of:


This works as expected in one direction: if I go to domain.com/news-opinion/non-existing-url, I get a 404 as expected.

However, if I use domain.com/non-existing-url, the user is redirected back to the homepage instead of getting a 404.

Am I missing something here that I should have accounted for?

This is a Bedrock / Composer based install, and this is the list of plugins in use, in case any of them are known to cause this issue:

"wpackagist-plugin/cache-enabler": "^1.3.4",
"wpackagist-plugin/classic-editor": "1.5",
"wpackagist-plugin/relevanssi": "4.2.0",
"wpackagist-plugin/safe-svg": "1.9.4",
"wpackagist-plugin/wp-mail-smtp": "^1.8",
"wpackagist-plugin/instant-images": "4.2.0",
"wpackagist-plugin/shortpixel-image-optimiser": "4.16.1",
"deliciousbrains-plugin/wp-migrate-db-pro": "^1.9",
"humanmade/s3-uploads": "^2.1",
"custom-repo/advanced-custom-fields-pro": "^5.7.0",
"custom-repo/gravityforms": "^2.4.0",
"wpackagist-plugin/duplicate-post": "3.2.4",
"wpackagist-plugin/filebird": "2.7.1",
"wpackagist-plugin/crop-thumbnails": "1.2.6",
"wpackagist-plugin/redirection": "4.7.1",
"wpackagist-plugin/advanced-cron-manager": "2.3.10",
"wpackagist-plugin/wp-seopress": "3.8.4",
"wpackagist-plugin/cookie-bar": "1.8.7",
"custom-repo/vcaching": "^1.8.0",
"wpackagist-plugin/wordpress-importer": "0.7",
"wpackagist-plugin/export-media-with-selected-content": "2.0",
"wpackagist-plugin/user-roles-and-capabilities": "^1.2.3",
"wpackagist-plugin/wp-all-export": "1.2.5"

If I can provide any more information that may be pertinent to this, please let me know.

Any help with this would be appreciated.

Is there anything preventing a familiar from Readying a Help action? [duplicate]

Suppose the following situation: I want a familiar to give me advantage on melee attacks via the help action. However, I do not want to be yet another adventurer with an owl familiar doing fly-by helps. Instead, I pick something smaller (say, a spider), and have it ride on me.

My idea is thus: the familiar, on its turn, readies an action to help me whenever I attack anybody. Seeing as I close to melee range to do that, the familiar is within 5 ft of both the creature it’s helping and the target, satisfying the range requirement. Is there a RAW reason this shouldn’t work?

Preventing automated attacks on Tokens without relying on Firewall or Network Infrastructure

Our concern is application-side prevention of automated attacks. Although the firewall does its part to help prevent this, our development team’s security practices mandate a second level of protection. Solutions such as MFA and CAPTCHA address a different issue: they reduce the chances an attacker has to bypass authentication and guess the credentials. What we want here is basically to detect an automated attack and stop it (or, realistically, delay it).

The attack the penetration tester did was this:


This is a link sent to email addresses for password reset. They tried automated enumeration of the token to guess a correct one. Although they were not successful in guessing a valid one, they still filed this as a vulnerability, since our application failed to catch the automated attack and was not able to block the requests. So we are now at a dead end finding solutions for this.

Some solutions we have come up with:

  1. IP address blocking – seems problematic: since requests go through a number of servers and components (firewall –> web server –> app server, etc.), it would be extremely difficult to get the requester’s source IP address. Attacks could also come from behind proxies.

This would be doable if the enumeration targeted something like usernames and passwords: we could come up with logic that detects enumeration of usernames with the same password and starts blocking subsequent requests using that password. In this case, the only input is a token.

We are running out of ideas for solving this issue. Can anyone help us with this?

Preventing Users from Using QR Code Password and Scanner for Authentication

A user was discovered using a QR code to log into a PC. Apparently, the password was put into a QR code generator and printed. The user:

  1. Provides their username
  2. Scans the QR code with a handheld scanner and is granted access

Our company uses handheld scanners for a variety of reasons, so it is not feasible to use endpoint-protection USB device control to block all scanners or specific brands of scanners. This user also uses handheld scanners for everyday work duties. We are curious whether there is a creative way to prevent this technically. We also plan to address this administratively through policy. One idea that was floated, if possible through GPO:

  1. Having a startup script to disable scanners
  2. A log off script to disable scanners
  3. A login script to re-enable the scanner

The handheld scanner apparently shows up as a generic HID keyboard in Device Manager. Does anyone know of a feasible way to block this, or perhaps an alternative solution to the problem (blocking the device at login)? Thank you!
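One heuristic sometimes used to tell a wedge scanner apart from human typing, since both present as HID keyboards, is inter-keystroke timing: a scanner injects a whole code in a near-instant burst. A sketch of the idea in Python (the 15 ms threshold is an illustrative assumption, not a tested value):

```python
def looks_like_scanner(key_times_ms: list[float],
                       max_mean_gap_ms: float = 15.0) -> bool:
    """Heuristic: a wedge scanner 'types' an entire code in one burst with
    near-zero gaps between characters; a human typist cannot."""
    if len(key_times_ms) < 2:
        return False  # not enough keystrokes to judge
    gaps = [b - a for a, b in zip(key_times_ms, key_times_ms[1:])]
    return sum(gaps) / len(gaps) < max_mean_gap_ms
```

A credential provider or login hook applying something like this could force manual re-entry when input arrives scanner-fast, without touching the scanners used for legitimate work.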

What are the balance ramifications of always preventing creatures from taking reactions before their first turn?

I’ve recently been toying with some rule modifications to 5e’s Initiative rules in an attempt to resolve the rule/reality inconsistency where a player “gets the jump” on a monster, but the monster is able to act first via a Reaction.

The rule change I’m considering is the following:

A creature cannot use its Reaction before the start of its first turn.

Note that this is not a modification to Surprise rules. This rule would apply any time that Initiative is rolled.

Clearly this will place greater emphasis on rolling well for Initiative. However, for the sake of completeness, I’d like to know whether there are any larger balance ramifications that might occur under this change.

For context: the following is the intent of the change:

  • Deliver on the theme of being the “first to draw” in combat; i.e., support the intuitive understanding that Initiative defines which creatures are quickest to act.
  • Deliver on my players’ desire to feel like they can surprise their opponents without needing to roll for stealth.

Preventing XSS by filtering data from the server to the client

Before you immediately comment “you can’t trust the client!”, please read the whole question.

I’ve been reading about how to prevent XSS attacks lately, and everything I’ve found says that the server should sanitize the data that will be put into the webpage. This would basically look like addToDatabase(filter(userResponse)). Then the client can safely display anything that it gets from the server.
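For concreteness, the kind of filter I mean is output-time HTML escaping; a minimal illustration in Python (addToDatabase/addHTML above are just my placeholder names, and this sketch only shows the escaping step):

```python
import html

def filter_untrusted(s: str) -> str:
    """Escape the characters HTML treats specially so stored user data
    renders as inert text instead of executing as markup."""
    return html.escape(s, quote=True)
```

The question is whether this call has to happen before storage on the server, or can happen on each client just before the data is inserted into the page.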

I was wondering if it would be safe to store the potentially unsafe data on the server and have the client filter it when received, like addHTML(filter(serverResponse)). This would stop the data from being executed client-side, so no XSS would take place. I understand that anyone could simply remove that filter, but all that would do is make them vulnerable themselves. Since other clients would filter anything sent to them, a malicious client could only disable its own filter and mess up itself. (I’m not talking about SQL injection prevention; that would obviously have to be server-side.)

To summarize: The server doesn’t sanitize, but the clients sanitize whatever they receive.

Would this be safe?

How does browsing from a virtual machine prevent fingerprinting or tracking?

Does it increase your internet security in terms of privacy/tracking/fingerprinting if you surf with your web browser inside a virtual machine environment (VirtualBox + VPN), instead of surfing from your normal Windows operating system?

Or does a virtual machine not help in fingerprinting cases? I just want to understand whether you can use a virtual machine as an additional privacy tool and, if so, which aspects it would have an impact on (IP address, virus infections, fingerprinting, etc.).