TCP reset attack / forged TCP reset prevention

How do you prevent someone from performing a TCP reset attack between a client and host without having access to the host?

I am trying to solve a CTF for fun and learning purposes. In one of the challenges I establish a connection with a server that starts sending me TCP packets, but I am interrupted by a third party that sends what appears to be a forged TCP reset. I receive a RST, ACK and the packets stop coming.

I have tried both DROPping and REJECTing the RST packet without success, using the following command:

iptables -A INPUT -p tcp --tcp-flags RST RST -j DROP 

Is there any way I could nullify the attacker's attempts to block communication between me and the host?
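For background on why the rule above is a blunt instrument: it drops every RST, including legitimate ones, and it cannot help if the attacker also resets the server's side of the connection. A receiver's kernel already constrains which RSTs it accepts by sequence number, and RFC 5961 tightens this to an exact match on RCV.NXT (otherwise replying with a "challenge ACK"). A minimal sketch of those two acceptance rules, with made-up illustrative sequence numbers and window size:

```python
# Sketch of RST acceptance rules. An off-path attacker must guess a
# sequence number the receiver will accept; an on-path attacker (as in
# many CTFs) can sniff the exact value, so neither rule stops them.

def classic_rst_check(seq: int, rcv_nxt: int, rcv_wnd: int) -> bool:
    """Pre-RFC-5961 rule: any in-window RST tears the connection down."""
    return rcv_nxt <= seq < rcv_nxt + rcv_wnd

def rfc5961_rst_check(seq: int, rcv_nxt: int, rcv_wnd: int) -> str:
    """RFC 5961 rule: only an exact RCV.NXT match resets immediately;
    other in-window RSTs get a challenge ACK; out-of-window RSTs drop."""
    if seq == rcv_nxt:
        return "reset"
    if rcv_nxt < seq < rcv_nxt + rcv_wnd:
        return "challenge-ack"
    return "drop"

# An attacker guessing somewhere inside a 64 KB window:
assert classic_rst_check(1_000_500, 1_000_000, 65535)            # connection dies
assert rfc5961_rst_check(1_000_500, 1_000_000, 65535) == "challenge-ack"
assert rfc5961_rst_check(1_000_000, 1_000_000, 65535) == "reset"  # exact guess
```

If the challenge attacker can observe your traffic, they know RCV.NXT exactly, so sequence-number validation alone will not save the connection; only end-to-end authentication of the segments (e.g. a TLS/SSH tunnel, or TCP MD5/AO between the endpoints) would.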

Should I present forged documents in a Penetration Test/Red team engagement?

A previous question of mine led to this discussion, which touched on the subject of document forgery.

I’ve seen many people (in videos) forge IDs and employee badges for such engagements, so that seems fine as a test. However, if asked (when caught) to present a more critical/serious document like a “Permission to Attack” slip, or asked by a police officer to present some ID, should we test them by first showing a forged “Permission to Attack” slip or ID, and only produce the real documents if the forgery is caught?

How to legally verify a digital signature was validly created and used, and the audit trail was not forged

I am still learning about how exactly a digital signature is created, but I’m wondering about the process of making sure a digital signature was actually created by the end user.

For example, say a customer of some platform is logged in to the platform app. They browse some of the new products and click on one. On that page is an agreement that must be digitally signed before they can use the product (let’s say no additional money is exchanged). It is then technically very simple to automatically click the “I agree” button for the user without their knowledge. You don’t even need the user to navigate the page: you could programmatically create a session on their behalf, or do things behind the scenes to have them sign documents without knowing. I don’t see how you can guarantee this didn’t happen for any document that gets legitimately signed, or how you can prove in court that “yes, this customer’s digital signature on this document is legitimate”.

I understand how the digital signature protects the document against tampering. But I don’t see how you can prove that the application didn’t automatically sign it on the user’s behalf without their consent. I started to try and resolve this by thinking “well wait, maybe they can create an audit trail, by recording the actions of the user on the web app, like you would with an analytics tool for customer data tracking”. But theoretically you could just have the application simulate some actions from within the browser while the user is actively logged in and using the app; the user wouldn’t necessarily see anything. So in the end there’s no way to tell whether the user actually performed each action or the application did it programmatically.
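To make that gap concrete: cryptographic verification only proves the signature was produced with a given private key, not who or what invoked the signing operation. A toy textbook-RSA sketch (deliberately tiny, insecure parameters, no padding; real systems use RSA-PSS or ECDSA) shows both what verification does and does not establish:

```python
# Toy textbook-RSA signature. The parameters are intentionally tiny and
# insecure -- this is an illustration of the concept, not usable crypto.

from hashlib import sha256

p, q = 61, 53          # toy primes
n = p * q              # 3233 (public modulus)
e = 17                 # public exponent
d = 2753               # private exponent (e*d ≡ 1 mod lcm(p-1, q-1))

def toy_sign(message: bytes) -> int:
    m = int.from_bytes(sha256(message).digest(), "big") % n
    return pow(m, d, n)   # ANY code holding d can produce this

def toy_verify(message: bytes, sig: int) -> bool:
    m = int.from_bytes(sha256(message).digest(), "big") % n
    return pow(sig, e, n) == m

doc = b"I agree to the terms"
sig = toy_sign(doc)
assert toy_verify(doc, sig)                # the signature verifies...
assert not toy_verify(doc, (sig + 1) % n)  # ...and a corrupted one fails,
# but nothing above proves a human clicked "I agree": an application with
# access to d (e.g. acting inside the user's session) signs identically.
```

The verification step binds the signature to the document and the key, which is exactly the tampering protection described above; the question of *consent* lives entirely outside the math.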

I’m wondering if someone could shed some light on the security here: how you would demonstrate legally, or verify in court, that it was in fact the customer who performed these specific actions and signed the document, not the computer. That is, how you would show beyond doubt that the computer did not programmatically perform the actions on the user’s behalf without their consent. I don’t see how this could be reliably shown.

At first thought, it seems you would need some verified, trusted, reliable third-party tool or API to do the analytics/auditing. They would build the browser JS library for tracking each event, and would at least store the audit trail in a way that is trusted and known not to be tampered with. Or maybe this is what blockchain solves (I’m not very familiar with blockchain, but I think it might address this in some way). But even then, as the platform app you could still invoke those method calls whenever you wanted, so the data could still be forged without the user’s consent. I don’t see how you can reliably create the data that tracks the user’s behavior and know that it is actually them doing it. The only way I can imagine this being verified is if you were trusted as an application: your code inspected, perhaps, and your app regularly audited by third parties. But even then, you could sneak something in on a one-off basis for an hour here or there. So, back and forth and back and forth, I don’t see how to make this process secure.
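The tamper-evident-storage idea hinted at here (and one of the things a blockchain generalizes) can be sketched as a hash chain, where each log entry commits to the previous one. Note the limitation this makes explicit: the chain detects *retroactive* edits, but a malicious application can still append forged entries in real time, so it proves integrity of the record, not the user's intent. All field names below are illustrative:

```python
# Minimal hash-chained audit trail: tamper-EVIDENT, not tamper-proof,
# and silent about whether a human actually generated the events.

import hashlib
import json

def append_entry(log: list, event: dict) -> None:
    """Append an event whose hash covers the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(event, sort_keys=True)
    h = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"event": event, "prev": prev, "hash": h})

def verify_chain(log: list) -> bool:
    """Recompute every hash; any retroactive edit breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if entry["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"user": "alice", "action": "viewed_agreement"})
append_entry(log, {"user": "alice", "action": "clicked_i_agree"})
assert verify_chain(log)

log[0]["event"]["action"] = "something_else"   # edit history after the fact
assert not verify_chain(log)                   # detected
# But "clicked_i_agree" could have been appended by the app with no human
# present, and the chain would still verify.
```

Anchoring the chain's head hash with an independent party (a timestamping authority, or a public ledger) prevents the log holder from silently rewriting history, which is the part third-party trust genuinely buys; the append-time forgery problem remains.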

Maybe requiring the user to upload their certificate at the time of signing would help: file selection requires manual intervention thanks to browser security features, so the browser couldn’t do that in the background.