Is there any way to mitigate crits?

When I ran a test round, the tank character of the group (8 soak!) was hit by a 4-dice pistol attack that managed to do 9 damage and, thanks to the rolls, scored 2 Triumphs (the NPC was attacking).

That resulted in a crit, which then took the PC out of the combat and the adventure (1 strain for every action taken… they rolled very high on the crit table). It made me wonder how “deadly” the system is at effectively taking characters out of combat.

Is there something I have overlooked in terms of how a PC can mitigate a critical hit?

(The combat was during a break-in scene, so they had no time for complete medical healing… at most first aid and nothing more.)

How to mitigate Mailsploit

If a mail sender encodes their sender address in the From: field at the SMTP level, the mail transfer agent only checks the sender part after the unencoded @. In this way, protection mechanisms like SPF/DKIM/DMARC are subverted.

Is there any mitigation against this?

    From: =?utf-8?b?${base64_encode('potus@whitehouse.gov')}?==?utf-8?Q?=00?==?utf-8?b?${base64_encode('(potus@whitehouse.gov)')}?=@mailsploit.com
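
For illustration, here is a minimal sketch of how a receiving service might flag From: headers that abuse RFC 2047 encoded-words this way. It uses Python's standard library; the heuristics (NUL bytes or an extra @ inside an encoded-word) are my own assumption of a reasonable check, not a complete Mailsploit filter:

    import base64
    from email.header import decode_header

    def from_is_suspicious(raw_from):
        """Heuristic: flag From: headers whose RFC 2047 encoded-words
        decode to text containing NUL bytes or an extra '@'."""
        for payload, charset in decode_header(raw_from):
            if charset is None:
                continue  # plain part: the '@domain' that SPF/DKIM/DMARC see
            text = payload.decode(charset, errors="replace")
            if "\x00" in text or "@" in text:
                return True
        return False

    # The PoC header above decodes to 'potus@whitehouse.gov' plus a NUL byte:
    b64 = base64.b64encode(b"potus@whitehouse.gov").decode()
    print(from_is_suspicious(f"=?utf-8?b?{b64}?==?utf-8?Q?=00?=@mailsploit.com"))  # True

A real mitigation would live in the MTA or mail client, but the idea is the same: decode the header the way a client would before trusting it.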

Mitigate token impersonation

Is there any mitigation to prevent a local administrator from impersonating other logged-on user accounts by duplicating their security tokens?

Scenario: AdminA is working on ServerA. AdminB grabs SYSTEM rights and impersonates AdminA on ServerA, so AdminB can list network shares which are only accessible to AdminA through normal means.
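
For reference, the attack in this scenario is just a short sequence of documented Windows API calls. A minimal sketch assuming the pywin32 bindings (it only works with the rights the question describes, i.e. SYSTEM/SeDebugPrivilege):

    import win32api, win32con, win32security

    def impersonate_owner_of(pid):
        # Open a process running in the target user's session (e.g. AdminA's).
        process = win32api.OpenProcess(
            win32con.PROCESS_QUERY_INFORMATION, False, pid)
        # Extract its primary access token.
        token = win32security.OpenProcessToken(process, win32con.TOKEN_DUPLICATE)
        # Duplicate it as an impersonation token...
        dup = win32security.DuplicateTokenEx(
            token,
            win32security.SecurityImpersonation,
            win32con.TOKEN_ALL_ACCESS,
            win32security.TokenImpersonation,
            None)
        # ...and attach it to the current thread: from here on, network
        # shares are accessed as the impersonated user (until RevertToSelf).
        win32security.ImpersonateLoggedOnUser(dup)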

How can this type of attack be prevented?

How can a demon mitigate the danger of Compromise when changing to demon form?

On p.43 of the Demon: The Descent manual we find:

Some demons feel much more comfortable in their demonic form, to the point that they try to arrange their lives so that they can spend some time that way every day. These demons sometimes band together to create safe spaces where they can assume their demonic forms and “let their hair down” without worrying about curious humans or the God-Machine’s agents.

Given that a full transformation results in a Compromise roll at -3 dice, how can a demon change fully every day without Compromise? Are there ways that PCs can mitigate this roll?
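
For a sense of how much the penalty matters: assuming standard Chronicles of Darkness dice (a pool of d10s, each die succeeding on 8-10, i.e. 3 in 10 per die), a quick sketch of how -3 dice changes the odds of getting at least one success (the 7-die pool is just an example):

    # P(at least one success) on an n-die pool; the 10-again rule only
    # adds extra successes, so it doesn't change this probability.
    def p_any_success(n):
        return 1 - 0.7 ** n

    for pool in (7, 4):  # an example pool, before and after the -3
        print(pool, round(p_any_success(pool), 3))
    # 7 dice: 0.918, 4 dice: 0.76 -- the chance of failing outright
    # roughly triples (8.2% -> 24%).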

How does LUKS mitigate brute-force attacks even if the salt parameter is known?

From “What users should know about Full Disk Encryption based on LUKS”:

In Linux world, LUKS implementations are based on cryptsetup and dm-crypt. In order to mitigate the problem of brute force attacks based on weak user passwords, LUKS combined the ideas of salt and key derivation function (i.e., PBKDF2). Because salt parameter is known and user password may be guessed, we focus on iteration counts and their ability to slow down a brute force attack as much as possible.

I can’t understand how the usage of a salt (when it is known, as in this case) can mitigate brute-force attacks. What am I missing?
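
The two mechanisms do different jobs: the iteration count is what makes each individual guess slow, while the salt (public or not) stops the attacker from doing that slow work once and reusing it, via precomputed password-to-key tables, against every LUKS volume at the same time. A minimal sketch with Python's standard library (the parameters are illustrative, not cryptsetup's actual defaults):

    import hashlib, os, time

    password = b"correct horse"   # the secret being guessed
    salt = os.urandom(32)         # public, but unique per volume
    iterations = 1_000_000        # tuned so one guess costs real time

    t0 = time.perf_counter()
    key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    print(f"one guess took {time.perf_counter() - t0:.2f}s")
    # Knowing the salt doesn't let you precompute anything useful: it
    # differs on every volume, so each guess must pay the full iteration
    # cost against that specific volume.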

How to mitigate backend stress generated by malicious traffic

I want to reduce, or mitigate the effects of, malicious layer 7 traffic (targeted attacks, generic evil automated crawling) which reaches my backend, making it very slow or even unavailable. This concerns load-based attacks as described in the excellent answer at https://serverfault.com/a/531942/1816.

Assume that:

  1. I use a not very fast backend/CMS (e.g. ~1500 ms TTFB for every dynamically generated page). Optimizing this is not possible, or simply very expensive in terms of effort.
  2. I’ve fully scaled up, i.e. I’m on the fastest hardware possible.
  3. I cannot scale out, i.e. the CMS does not support master-to-master replication, so it’s only served by a single node.
  4. I use a CDN in front of the backend, powerful enough to handle any traffic, which caches responses for a long time (e.g. 10 days). Cached responses (hits) are fast and do not touch my backend. Misses will obviously reach my backend.
  5. The IP of my backend is unknown to attackers/bots.
  6. Some use cases, e.g. POST requests or logged-in users (a small fraction of total site usage), are set to bypass the CDN’s cache, so they always end up hitting the backend.
  7. Changing anything on the URL in a way that makes it new/unique to the CDN (e.g. the addition of &_foo=1247895239) will always end up hitting the backend (see the cache-key sketch after this list).
  8. An attacker who has studied the system first will very easily find very slow use cases (e.g. paginated pages down to the 10,000th result) which they’ll be able to abuse together with the random parameters of #7 to bring the backend to its knees.
  9. I cannot predict all known and valid URLs and legit parameters of my backend at a given time in order to somehow whitelist requests or sanitize URLs on the CDN and reduce unnecessary requests reaching the backend. E.g. /search?q=whatever and /search?foo=bar&q=whatever will 100% produce the same result, because foo=bar is not something my backend uses, but I cannot sanitize that at the CDN level.
  10. Some attacks come from a single IP, others from many IPs (e.g. 2,000 or more) which cannot be guessed or easily filtered out via IP ranges.
  11. The CDN provider and the backend host both offer some sort of DDoS protection, but the attacks which can bring my backend down are very small (e.g. only 10 requests per second) and are never considered DDoS attacks by these providers.
  12. I do have monitoring in place and instantly get notified when the backend is stressed, but I don’t want to be manually banning IPs, because this is not viable (I may be sleeping, working on something else, or on vacation, or the attack may come from many different IPs).
  13. I am hesitant to introduce a per-IP limit of connections per second on the backend, since at some point I will end up denying access to legit users. E.g. imagine a presentation/workshop about my service taking place in a university or large company, where tens or hundreds of browsers will almost simultaneously use the service from a single IP address. If these users are logged in, they’ll always reach my backend and not be served by the CDN. Another case is public-sector users all accessing the service from a very limited number of IP addresses (provided by the government).
  14. I do not want to permanently blacklist large IP ranges of countries which are sometimes the origins of attacks (e.g. China, Eastern Europe), because this is unfair and wrong, it will deny access to legit users from those areas, and attacks from other places will not be affected.
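
To make #7 concrete, here is a minimal sketch of the kind of cache-key derivation many CDNs use by default (the exact key composition is an assumption; real CDNs differ in details):

    from urllib.parse import urlsplit

    def cache_key(url):
        # Path plus the *entire* raw query string: any unknown parameter
        # produces a brand-new key, i.e. a guaranteed cache miss.
        parts = urlsplit(url)
        return (parts.path, parts.query)

    print(cache_key("/search?q=whatever"))
    print(cache_key("/search?q=whatever&_foo=1247895239"))
    # Different keys -> the second request bypasses the cache and hits
    # the slow backend, which is exactly what #7 and #8 exploit.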

So, what can I do to handle this situation? Is there a solution that I’ve not taken into consideration in my assumptions that could help?

How can I mitigate the possibility of SSL private key being copied without my knowledge?

OCSP stapling and Certificate Transparency logs seem to provide a pretty good defense against man-in-the-middle attacks if I discover that my private key has been stolen. I can revoke my old certificate and switch to a new one, and clients should be aware of the revocation.

However, what if I don’t know that my private key has been compromised? Are there any measures for detecting if my key is being used by an unauthorized party? Or ways to defend against MITM in such a scenario?
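
For context on the detection side: Certificate Transparency monitoring cannot see an attacker using the stolen key with my existing certificate, but it would at least reveal certificates for my names that I never requested. A minimal sketch against the public crt.sh interface (the endpoint and JSON field names are assumptions based on its current public API, and the known-serials set is a hypothetical placeholder):

    import json, urllib.request

    def ct_entries(domain):
        # crt.sh exposes CT log data as JSON for a given domain query.
        url = f"https://crt.sh/?q={domain}&output=json"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    known_serials = {"0123abcd"}  # hypothetical: serials of certs I issued
    for entry in ct_entries("example.com"):
        if entry["serial_number"] not in known_serials:
            print("unexpected certificate:", entry["id"], entry["not_before"])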