Implicitly allow requests in IIS from valid hostname

I have a few publicly accessible IIS servers and sites (personal and corporate). These hosts have their own domains/subdomains, and all legitimate access to these HTTPS sites happens through those domains.

Almost all HTTP application vulnerability scans from bots/rooted servers hit the servers by IP address, without a valid hostname, and when there is a hostname it is the default reverse-DNS host, not the actual domain of the site.

Is there a way in IIS to implicitly allow only requests with the proper hostname? The site's root app only has bindings for the hostname, but IIS still accepts the requests and responds with a 404. Ideally the request would time out in a similar fashion as if the site didn't have HTTP open at all.

I of course understand that this guarantees nothing security-wise; the scanner can still figure out the proper hostname in many ways, but it would still filter out 90% of the dumb scans.

An IPS in the firewall can probably do some of this, but in some cases I do not have that luxury. Is there a way in IIS? Redirect the HTTP request to oblivion? (That would probably just change the error to a proxy/gateway HTTP error?)
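
Something along the lines of this URL Rewrite rule in web.config is roughly what I have in mind; example.com stands in for the real hostname, and I'm assuming the URL Rewrite module is installed. As far as I understand, AbortRequest drops the connection without sending any response, which is close to the "timeout" behaviour I'm after:

<system.webServer>
  <rewrite>
    <rules>
      <!-- Drop any request whose Host header is not the site's real hostname -->
      <rule name="DropWrongHost" stopProcessing="true">
        <match url=".*" />
        <conditions>
          <add input="{HTTP_HOST}" pattern="^example\.com$" negate="true" />
        </conditions>
        <action type="AbortRequest" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>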

Handle multiple simultaneous requests in MySQL

Sorry for this noob question. I have an application that expects about 5,000 users accessing it simultaneously. My current database runs on RDS, and each request issues one query, which takes about 30 milliseconds to execute.

The main problem is that when we open many connections, the database CPU spikes to 100% and the app starts getting timeout errors.
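
Some back-of-the-envelope math, assuming every query is fully CPU-bound for its 30 ms and that a db.t3.2xlarge provides 8 vCPUs (which is my understanding):

1 s ÷ 0.030 s per query ≈ 33 queries per second per vCPU
8 vCPUs × 33 queries/s ≈ 266 queries per second for the whole instance
5000 simultaneous requests ≫ 266 queries/s, so the CPU saturates and requests queue up until they time out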

What solution would be possible to handle so many requests?

Architecture:

  • 3 EC2 db.r5.16xlarge (db.t3.large) running behind a load balancer
  • 1 RDS (db.t3.2xlarge) MySQL 5.7


Can I use scripts to block other scripts, or block/rewrite HTTP requests?

I'm building a personal website using this premade enterprise-class CMS because it has both the blogging and wiki/docs parts in one package. It's not as gorgeous as a WordPress site can get, but it has a ton of management tools and then some. Plus, I've already invested in it and got it looking pretty good for an enterprise CMS; actually it isn't that bad even compared against the blogging CMSes.

There's a big issue with it though: analytics. They are disabled on the backend, but the code still loads on every page, and I found out that HTTP POST requests are made to a REST endpoint. Fortunately they all stay within the domain (although this might be because my reverse proxy, HAProxy, injects Content-Security-Policy headers, so no requests outside of my domains are allowed). In the same proxy those REST calls are blocked, so they never make it to the server, and finally the server itself is blocked from connecting to the Internet on its own, so it can't ever phone home to upload anything.

Only by doing all of this do I feel confident about visitor (and my own) privacy, and I would leave it at that, except that those REST calls have the word "analytics" right in the URL, so privacy tools like uBlock Origin flag them on a site with an otherwise perfect privacy score.

The CMS allows putting some code into sections of the page. I'm already using code placed at the end of the body section to hide the login section up in the header, which isn't needed on a personal site. It's something like:

<script> jQuery('#sectionid').hide(); </script> 

So I'm thinking about using something like that to either block the loading of the analytics script's module (I guess that's what it's called), or do something similar to a CSP and forbid the page from making HTTP requests to that address, so uBlock Origin won't flag my site. I tried blocking the script from being requested altogether, but it's in some form of multiplexed request with other scripts (as you may tell by now, I know nothing about code), and they are loaded lumped together in batch.js files, so blocking it breaks the site. I found out about all of this (and the concept of minification) after a couple of hours of viewing logs and analyzing the code with the developer tools in different browsers. I didn't fix a thing, but at least I didn't break things*, and I got an idea of how to proceed.
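
Something along these lines is what I imagine, purely as a sketch: I have no idea whether the analytics module uses fetch or XMLHttpRequest, so both are wrapped here, and it would only work if it runs before the analytics code fires its first request:

<script>
// Sketch: refuse any request whose URL contains "analytics".
(function () {
  var isBlocked = function (url) {
    return String(url).indexOf('analytics') !== -1;
  };
  // Wrap fetch, if the browser has it.
  if (window.fetch) {
    var origFetch = window.fetch;
    window.fetch = function (resource) {
      var url = (resource && resource.url) ? resource.url : resource;
      if (isBlocked(url)) {
        return Promise.reject(new Error('analytics request blocked'));
      }
      return origFetch.apply(this, arguments);
    };
  }
  // Wrap XMLHttpRequest.open as well.
  var origOpen = XMLHttpRequest.prototype.open;
  XMLHttpRequest.prototype.open = function (method, url) {
    if (isBlocked(url)) {
      throw new Error('analytics request blocked');
    }
    return origOpen.apply(this, arguments);
  };
})();
</script>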

I also found this resource://gre thing:

[screenshot of the resource://gre entries]

…and I am praying that "gre" doesn't mean what it means in the networking world, y'know, a tunnel, because then I'd have to dump the CMS and start looking again. I'll leave that for later though.

Is there some code to block other code, or to block/rewrite requests? I have other servers from which I can easily serve the code if it can't be put inline. Any suggestion is welcome.

BTW, those last sentences sound a little like dev talk, at least to me, but it's only what I've learned from using a proxy; I really know no code.

*: actually I did break some stuff but thankfully virtualization saved me: I snapshoot (snapshotted?) back in time.

How to understand a single packet embedded with multiple requests?

When I read about "Multi VERB Single Request":

This Attack is also a variation of the Excessive Verb Attack strategy. The attacking BOT creates multiple HTTP requests, not by issuing them one after another during a single HTTP session, but by forming a single packet embedded with multiple requests.

How should I understand "a single packet embedded with multiple requests"?

Does it mean that one packet contains multiple HTTP requests?
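
For example, is it something like the following, where one TCP segment carries several complete requests back to back (victim.example is just a placeholder)?

GET / HTTP/1.1
Host: victim.example

GET / HTTP/1.1
Host: victim.example

GET / HTTP/1.1
Host: victim.example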

Unusual GET requests in my nodejs journal – has my nginx/node been hacked?

Saw this in the journalctl for a service I have:

jul 29 12:39:05 ubuntu-18 node[796]: GET http://www.123cha.com/ 200 147.463 ms - 8485
jul 29 12:39:10 ubuntu-18 node[796]: GET http://www.rfa.org/english/ - - ms - -
jul 29 12:39:10 ubuntu-18 node[796]: GET http://www.minghui.org/ - - ms - -
jul 29 12:39:11 ubuntu-18 node[796]: GET http://www.wujieliulan.com/ - - ms - -
jul 29 12:39:11 ubuntu-18 node[796]: GET http://www.epochtimes.com/ 200 133.357 ms - 8485
jul 29 12:39:14 ubuntu-18 node[796]: GET http://boxun.com/ - - ms - -

These GET requests are not coming from any code I’ve written.

"Correct" entries look like this:

jul 29 12:41:46 ubuntu-18 node[796]: GET / 304 128.329 ms - -
jul 29 12:41:47 ubuntu-18 node[796]: GET /stylesheets/bootstrap.min.css 304 0.660 ms - -
jul 29 12:41:47 ubuntu-18 node[796]: GET /stylesheets/font-awesome-4.7.0/css/font-awesome.min.css 304 0.508 ms - -
jul 29 12:41:47 ubuntu-18 node[796]: GET /img/250x250/deciduous_tree_5.thumb.png 304 0.548 ms - -
jul 29 12:41:47 ubuntu-18 node[796]: GET /stylesheets/style.css 304 7.087 ms - -
jul 29 12:41:47 ubuntu-18 node[796]: GET /img/logos/250x250/brf_masthugget.250x250.jpg 200 0.876 ms - 9945

The server is a Node.js instance (v8.10.0) running behind nginx v1.14.0 on an up-to-date Ubuntu Server 18.04.

The Ubuntu machine is a DigitalOcean droplet.

I've tried generating similar requests from a JavaScript console, but the browser blocks access to HTTP (not allowing mixed HTTP and HTTPS); if I try HTTPS I get a cross-origin error, which is good 🙂
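
From a terminal, though, I suppose a raw request with an absolute URI would reproduce that kind of log line; something like this, with my-droplet.example standing in for the droplet's address:

printf 'GET http://www.123cha.com/ HTTP/1.1\r\nHost: www.123cha.com\r\nConnection: close\r\n\r\n' | nc my-droplet.example 80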

I’m puzzled as to how these GET requests are being generated/sent?

How does the cache / memory know where to return results of read requests to?

The pipeline of a modern processor has many stages that may issue read requests to main memory, e.g. fetching the next instruction or loading some memory location into a register. How is the result of a read request returned to the right pipeline stage, given that there is more than one possible recipient? Since most CPUs access main memory via a cache hierarchy, the question becomes: how does the L1 cache know which part of the pipeline to return a result to?

I imagine that access to the L1 cache is queued, but each access presumably needs a ‘return address’. How is this typically handled?
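
Purely as my own mental model (not necessarily how any real CPU implements it), I picture each outstanding request carrying a small tag that identifies the requester, something like:

#include <stdint.h>

/* Hypothetical record for one outstanding read request. */
struct outstanding_read {
    uint64_t address;      /* memory location / cache line that was requested */
    uint8_t  requester_id; /* which pipeline unit asked: fetch, load/store, ... */
    uint8_t  dest_reg;     /* destination register, if the request is a load */
};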

What is the most restrictive way to allow IPv6 ICMP requests on iptables?

This is what I have so far but it is pretty open.

*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
-A INPUT -p ipv6-icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A OUTPUT -p ipv6-icmp -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
COMMIT

If you have time, explaining the rules would be amazing.
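
For context, the direction I was thinking of is matching individual ICMPv6 types instead of the whole protocol, something like the sketch below, but I'm not sure this list of types is the right or minimal one:

# Allow only specific ICMPv6 types inbound, instead of all of ipv6-icmp
-A INPUT -p ipv6-icmp --icmpv6-type destination-unreachable -j ACCEPT
-A INPUT -p ipv6-icmp --icmpv6-type packet-too-big -j ACCEPT
-A INPUT -p ipv6-icmp --icmpv6-type time-exceeded -j ACCEPT
-A INPUT -p ipv6-icmp --icmpv6-type parameter-problem -j ACCEPT
-A INPUT -p ipv6-icmp --icmpv6-type echo-request -j ACCEPT
-A INPUT -p ipv6-icmp --icmpv6-type neighbour-solicitation -j ACCEPT
-A INPUT -p ipv6-icmp --icmpv6-type neighbour-advertisement -j ACCEPT
-A INPUT -p ipv6-icmp --icmpv6-type router-advertisement -j ACCEPT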