DDoS attack mitigation – is it enough to analyse only GET/POST requests?

I am developing a DoS attack recognition module for application-layer requests. The application has a backend consisting of several APIs, all connected through an API gateway (developed in Node.js). Every request is recorded to a database, and another server (written in Python/Flask) analyses the number of GET/POST requests every 20 seconds, calculates the entropy of the incoming requests, and blocks any suspicious IPs based on the entropy value.
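To make the entropy step concrete, here is a minimal sketch of the kind of per-window check I have in mind, in Python since the analyser is Flask-based; the threshold, the share cutoff and the function names are my own placeholders, not anything taken from the paper:

import math
from collections import Counter

ENTROPY_THRESHOLD = 1.5   # placeholder value; would be tuned against normal traffic
SHARE_CUTOFF = 0.3        # arbitrary: any IP sending >30% of a low-entropy window

def window_entropy(source_ips):
    # Shannon entropy of the source-IP distribution within one 20-second window
    counts = Counter(source_ips)
    total = len(source_ips)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def suspicious_ips(source_ips):
    # Low entropy means a few IPs dominate the window; flag the heavy hitters
    if not source_ips or window_entropy(source_ips) >= ENTROPY_THRESHOLD:
        return []
    counts = Counter(source_ips)
    total = len(source_ips)
    return [ip for ip, c in counts.items() if c / total > SHARE_CUTOFF]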

My question is: in order to defend against DoS attacks, do I have to consider other types of packets besides HTTP (e.g. ICMP)?

My backend APIs don't allow any user to continue without logging in. In that case, is it worth developing the DoS attack recognition module?

Architecture of the application

The research paper: https://www.sciencedirect.com/science/article/pii/S1877050915005086

Is CSRF protection required for sensitive GET requests with CORS enabled?

Based on other questions, it seems that protecting GET resources with CSRF tokens is useless. However, that quickly becomes untrue when CORS gets thrown into the mix.

I have a server domain server.com and a UI app at client.com. The server.com domain handles user auth and sets a session cookie under the server.com domain. It also has a REST-like endpoint that serves GET requests at /user_data and returns sensitive user data to users with a valid session. The third-party UI at client.com needs access to the user_data in an AJAX call, so CORS is enabled at the /user_data endpoint for the origin domain client.com via Access-Control-Allow-Origin.
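For concreteness, here is a minimal sketch of the endpoint as I understand it, written in Python/Flask purely for illustration (the real stack, session handling and payload are assumptions). The session cookie is only sent on the cross-origin AJAX call if Access-Control-Allow-Credentials is also set and the client makes the request with credentials included:

from flask import Flask, jsonify, session

app = Flask(__name__)
app.secret_key = "placeholder-secret"

ALLOWED_ORIGIN = "https://client.com"   # the third-party UI

@app.route("/user_data")
def user_data():
    if "user_id" not in session:                 # requires the server.com session cookie
        return jsonify(error="unauthorized"), 401
    resp = jsonify(email="user@example.com")     # the sensitive payload (made up)
    # CORS: let client.com read the response and send the cookie along
    resp.headers["Access-Control-Allow-Origin"] = ALLOWED_ORIGIN
    resp.headers["Access-Control-Allow-Credentials"] = "true"
    resp.headers["Vary"] = "Origin"
    return resp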

The endpoint in question has no side effects, although it serves sensitive data to a third party. Do I need to implement some CSRF token protection for the endpoint? Could the user_data be read by a compromised client.com webpage (via persistent XSS)? If so, can I use a query-parameter mechanism of CSRF token exchange? The way I understand it, that's the only option, because client.com cannot read CSRF tokens stored in server.com cookies. However, the OWASP guidelines state that:

Make sure that the token is not leaked in the server logs, or in the URL.

If that’s also a problem, how can I secure my application?

Am I being spied on? Requests to unpublished services

Just today I was migrating some of my webpages to a new server. After a couple of hours the services were ready, so I set up my hosts file with my domain example.com pointing to the server address, went to my browser, navigated to the domain, and everything worked OK. Just before making the new server public (by changing the DNS), I noticed two requests in the nginx log coming from IPs other than my own:

184.105.139.67
176.58.124.134

Out of curiosity I visited both addresses in my browser, and both say they belong to bots that scan services (tequilaboomboom and Shadowserver). OK, but how did they know the domain? The server is public and anyone can go to the IP address, but my nginx log is specific to my domain example.com, and I only have the React extension on my Windows 10 Chrome browser. How did they catch my request and learn what site to scan? Am I being spied on? Is my machine compromised? Is my network compromised?

Why are DNS requests visible with DNS over HTTPS enabled?

So, Firefox 73 rolled out today and with it comes a new DNS option called NextDNS. I thought of giving it a shot and clicked “Enable DNS over HTTPS” and selected NextDNS.

Now, my understanding of HTTPS is that it encrypts the traffic (to provide confidentiality) and prevents tampering (to check integrity). But, when I started snooping on my own traffic using tcpdump, I found entries such as these:

root@Sierra ~ % tcpdump dst port 53
00:16:18.598111 IP 192.168.1.102.57991 > 192.168.1.1.domain: 15871+ A? detectportal.firefox.com. (42)
00:16:18.601087 IP 192.168.1.102.55182 > 192.168.1.1.domain: 44174+ A? www.goodreads.com. (35)
00:16:18.602982 IP 192.168.1.102.57991 > 192.168.1.1.domain: 63750+ AAAA? detectportal.firefox.com. (42)
00:16:18.855488 IP 192.168.1.102.34760 > 192.168.1.1.domain: 7245+ A? mozilla.org. (29)
00:16:18.855976 IP 192.168.1.102.34570 > 192.168.1.1.domain: 17221+ A? mozilla.org. (29)
00:16:18.855998 IP 192.168.1.102.34570 > 192.168.1.1.domain: 24136+ AAAA? mozilla.org. (29)
00:16:18.856830 IP 192.168.1.102.42346 > 192.168.1.1.domain: 52531+ A? detectportal.firefox.com. (42)
00:16:24.097262 IP 192.168.1.102.35499 > 192.168.1.1.domain: 38286+ A? mozilla.org. (29)
00:16:24.097448 IP 192.168.1.102.35499 > 192.168.1.1.domain: 44461+ AAAA? mozilla.org. (29)
00:16:24.451349 IP 192.168.1.102.40330 > 192.168.1.1.domain: 60808+ A? s.gr-assets.com. (33)
00:16:24.456921 IP 192.168.1.102.48310 > 192.168.1.1.domain: 6906+ A? i.gr-assets.com. (33)
00:16:29.106318 IP 192.168.1.102.39619 > 192.168.1.1.domain: 54705+ AAAA? mozilla.org. (29)
00:16:33.269314 IP 192.168.1.102.43004 > 192.168.1.1.domain: 3958+ A? mozilla.org. (29)
00:16:42.515778 IP 192.168.1.102.53688 > 192.168.1.1.domain: 33887+ A? sync-580-us-west-2.sync.services.mozilla.com. (62)
00:16:42.516330 IP 192.168.1.102.59568 > 192.168.1.1.domain: 62418+ A? api.accounts.firefox.com. (42)
00:16:42.889225 IP 192.168.1.102.48174 > 192.168.1.1.domain: 41105+ A? sync-580-us-west-2.sync.services.mozilla.com. (62)
00:16:43.453717 IP 192.168.1.102.60703 > 192.168.1.1.domain: 44380+ A? d3cv4a9a9wh0bt.cloudfront.net. (47)

Apparently, this doesn’t look encrypted. When I changed my DNS server to Cloudflare, I could only see the entries for Cloudflare’s DNS server (which is what I expect from DoH). So, what’s wrong with NextDNS? How is NextDNS different from unencrypted DNS? And, am I missing something here?
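For comparison, this is roughly what a lookup looks like when it really does go over DoH: an ordinary HTTPS request to the resolver, so nothing shows up under a dst port 53 filter, only TLS to the resolver on port 443. A minimal sketch in Python against Cloudflare's public JSON endpoint (NextDNS uses a per-configuration URL, which I've left out here):

import requests

# A DNS-over-HTTPS lookup is just an HTTPS request, so tcpdump's
# "dst port 53" filter never sees it; only TLS to the resolver on 443.
resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "www.goodreads.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=5,
)
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])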

Should I make one request for the whole website or separate requests? Which is better?

Hi guys,
I've just learned that you can make just one request to the server for the whole website, using a trick like the one below:

<!doctype html>
<html lang="en">
#Head
#CSSfile
<body>
    #Navigation_main
    #MainContainer
    #Footer
    #JSfile
</body>
</html>

In this case, I send just one request for a simple HTML file to the server.
On the server, I replace a hashtag like #Footer with a block of code, like the one below (see the sketch after it):

<div id="footer">
    Footer stuff
</div>
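To illustrate, here is a minimal sketch of the replacement step in Python (the language is just for illustration; the placeholder-to-file mapping is made up):

from pathlib import Path

# Hypothetical mapping of placeholders to HTML fragment files
FRAGMENTS = {
    "#Head": "fragments/head.html",
    "#CSSfile": "fragments/css.html",
    "#Navigation_main": "fragments/nav.html",
    "#MainContainer": "fragments/main.html",
    "#Footer": "fragments/footer.html",
    "#JSfile": "fragments/js.html",
}

def render_page(template_path="template.html"):
    # Replace each #Placeholder in the template with its fragment's contents,
    # so the browser gets the whole page in a single response.
    html = Path(template_path).read_text()
    for placeholder, fragment_file in FRAGMENTS.items():
        html = html.replace(placeholder, Path(fragment_file).read_text())
    return html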

Now, I was wondering…

Should I make one request for the whole website or separate requests? Which is better?

Should I log 200 OK requests from Apache?

It appears that by default Apache only logs requests with a response code other than 200 OK. Considering that many failed or successful attack attempts will simply return a 200 OK, this seems like it would exclude a ton of potential security information from analysis. Is there any reason not to log all HTTP requests, other than log volume?

What technologies to use for the “front end” server to safely validate requests?

Imagine that I have a server built with technologies and a programming language that are not secure for some reason, such as legacy versions with known vulnerabilities or anything else that may give a hacker an opportunity to forge an invalid request and access more data or the internals of the system.
Now, to solve the security problem without rewriting the core, I would like to place a 2nd server in front of it. It should have libraries for accepting simple HTTP requests with headers, parsing/generating JSON, and doing the validation (to ensure it passes only valid JSON to the 1st server, i.e. recursively checking structure, sizes and encodings), and it should be quick, simple and safe enough that it rarely needs updating and can use any unsafe protocol for easier communication with the 1st server.
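To make the idea concrete, here is a minimal sketch of such a validating front server in Python/Flask (the language, schema, size limit and upstream URL are all just illustrative assumptions, not a recommendation):

from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

UPSTREAM = "http://legacy-core.internal/api"   # hypothetical 1st (unsafe) server
MAX_BODY_BYTES = 64 * 1024                     # arbitrary request-size limit

def valid_payload(obj):
    # Recursively check the expected structure, field sizes and types
    if not isinstance(obj, dict) or set(obj) != {"user", "action"}:
        return False
    return (isinstance(obj["user"], str) and len(obj["user"]) <= 64
            and isinstance(obj["action"], str) and len(obj["action"]) <= 32)

@app.route("/api", methods=["POST"])
def proxy():
    if request.content_length is None or request.content_length > MAX_BODY_BYTES:
        return jsonify(error="payload too large"), 413
    payload = request.get_json(silent=True)    # None if the body is not valid JSON
    if payload is None or not valid_payload(payload):
        return jsonify(error="invalid request"), 400
    upstream = requests.post(UPSTREAM, json=payload, timeout=5)
    return upstream.content, upstream.status_code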
What technologies should I use for the 2nd server? What programming language?

Why is a host making requests for WPAD file from external location?

In my customer's NGFW logs, I noticed requests being made to [REDACTED]/wpad.dat. The destination domain is registered on an external IP not related to the customer, and the user agent suggests that Windows AutoProxy is being used. I was able to download the wpad file myself and inspect its contents:

function FindProxyForURL(url, host) {
    return "DIRECT";
}

If I understand correctly, the traffic is not being routed through any rogue proxy server, so this doesn't appear to be a WPAD attack.

I'm trying to figure out what could have caused this traffic to take place to begin with. "Internet settings" changes (made by e.g. malware) on the hosts? And are there any other risks related to this traffic, aside from the fact that the wpad file can be changed by the server owner?

POST requests are bypassing PHP checks

I have a website with a contact form in PHP and a mail server. Emails are sent with the PHP mail function like so (skipping the validation code for brevity):

$name = $_POST["name"];
$email = $_POST["email"];
$message = $_POST['message'];

$headers = array(
    'From' => $name . '<' . $email . '>',
    'MIME-Version' => '1.0',
    'Content-type' => 'text/html; charset=iso-8859-1'
);
$result = mail($to, $subject, $message, $headers, '-r' . $sender);

Recently I've been attacked by a spammer who is posting emails with a From field value like this:

check@mydomain.com, this@mydomain.com, link@mydomain.com,  "US:http"@mydomain.com://www.somedomain.com/page <somename@mail.com> 

So I prohibited the @ character in the name field like so:

if (strpos($_POST["name"], "@") !== false)
    exit();

I've tried sending a POST request with a name like name@ from Postman, and it was rejected successfully, but I am still getting the same spam emails.

Any ideas on how the spammer is bypassing the validation check?