Cancelling a request stops the new authentication cookie from reaching the browser, invalidating all further requests

I am trying to secure my login system using authentication cookies.

If the user tries to access a protected resource, they must provide an authentication cookie. If the cookie is valid, the request is authenticated and the resource is returned, along with a new auth cookie for the user.

I rotate the auth cookie as an extra protective measure: if anyone managed to steal it, it would only be valid until the user made their next request.
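To make the flow concrete, here is roughly what the server side does on every authenticated request (a simplified sketch, not my real code; the in-memory store and all names are invented):

```
// Simplified TypeScript sketch of the rotate-on-every-request flow
// (all names invented; my real store is a database table).
import { randomBytes } from "crypto";

interface Session {
  userId: string;
}

// Stand-in for the table of currently valid tokens.
const sessions = new Map<string, Session>();

// Returns the resource plus a fresh token, or null (-> 401).
function handleAuthenticatedRequest(
  cookieToken: string
): { body: string; newToken: string } | null {
  const session = sessions.get(cookieToken);
  if (!session) return null; // unknown or already-rotated token

  // Rotate: the old token stops working the moment the new one is issued.
  const newToken = randomBytes(32).toString("hex");
  sessions.delete(cookieToken);
  sessions.set(newToken, session);

  return { body: "the protected resource", newToken };
}
```

The failure mode I describe next lives entirely in the gap between deleting the old token and the browser actually storing the new one.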

However, if the user makes a request and the server authenticates it, but the browser is closed (or the request is cancelled) before the resource and the new cookie reach the client, then the cookie in the browser no longer matches the token in the database. Any further requests can’t be authenticated and the user is forced to log in again.

What’s the correct approach to this? Should I not send a new token with every response? Should the browser confirm that it received the new token?

DNS server on AWS: how to use iptables to redirect requests to it

I’m running my own DNS server on an AWS instance. I’ve modified my security group to accept UDP and TCP connections on port 53.

However, my server is running on port 8053, so I somehow need to redirect those outside requests arriving on port 53 to port 8053.

I’m pretty sure I need to update iptables, but I cannot figure out how. So far, the most promising commands are:

```
sudo iptables -t nat -A OUTPUT -o lo -p tcp --dport 53 -j REDIRECT --to-port 8053
sudo iptables -t nat -A OUTPUT -o lo -p udp --dport 53 -j REDIRECT --to-port 8053
sudo iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 53 -j REDIRECT --to-port 8053
sudo iptables -A PREROUTING -t nat -i eth0 -p udp --dport 53 -j REDIRECT --to-port 8053
```

Here’s what the result looks like:

```
Table: nat
Chain PREROUTING (policy ACCEPT)
num  target     prot opt source               destination
1    REDIRECT   tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:53 redir ports 8053
2    REDIRECT   udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:53 redir ports 8053

Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination
1    REDIRECT   udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:53 redir ports 8053
2    REDIRECT   tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:53 redir ports 8053

Chain POSTROUTING (policy ACCEPT)
num  target     prot opt source               destination
```

However, if I run nmap against this host I get this:

```
Nmap scan report for xxx.amazonaws.com (x.x.x.x)
Host is up (0.040s latency).
Not shown: 997 filtered ports
PORT   STATE  SERVICE
22/tcp open   ssh
53/tcp closed domain
80/tcp open   http
```

I know my DNS server is listening on port 8053. What’s going wrong?
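For completeness, this is how I’ve been checking what is actually bound and whether the NAT rules ever match (standard tools; nothing here is specific to my setup):

```
# Is the DNS server listening on all interfaces, or only on localhost?
sudo ss -ulnp | grep 8053
sudo ss -tlnp | grep 8053

# Do the PREROUTING rules' packet counters increment when I query from outside?
sudo iptables -t nat -L PREROUTING -v -n
```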

Throttling promise execution (API requests, etc.)

I was working on a client for a remote API and realized that I needed to throttle requests to fewer than 4 per second. I wrote a simple TypeScript class to do it and wanted to solicit a code review. It basically should accept a () => Promise and return a Promise that resolves once the queue executes it.

My prototype solution is below, and I’d just like feedback on it. Original code here: https://codesandbox.io/s/throttled-promises-3pscr

```
/**
 * Request throttle has an add() method which takes a () => Promise<string>
 * and queues the promise to be executed in order.
 * add() returns a promise that resolves with the original promise result.
 */
class RequestThrottle {
  stack = [];
  spacing = 1000;

  add: (req: () => Promise<string>) => Promise<string> = req => {
    let executor;
    const requestPromise: Promise<string> = new Promise((resolve, _reject) => {
      let localExecutor = () => {
        resolve(req());
      };
      executor = localExecutor;
    });

    this.stack.push(executor);
    return requestPromise;
  };

  pullAndExecute = () => {
    const op: () => Promise<string> = this.stack.shift();
    // if (op) console.log("throttle found:", op);
    if (op) op();
  };

  interval = setInterval(this.pullAndExecute, this.spacing);

  stop = () => clearInterval(this.interval);
}

const throttle = new RequestThrottle();

/**
 * Promise tester - to add a bit of extra async operations
 * (eg a network request to an api)
 */
const addChild: (c: number | string) => Promise<string> = count => {
  const list = document.getElementById("list");
  const node = document.createElement("LI");
  node.innerHTML = count.toString();
  list.appendChild(node);
  const promise: Promise<string> = new Promise(resolve =>
    setTimeout(() => {
      log(`added "${count}"`);
    }, 500)
  );
  return promise;
};

const log = (s: string) => {
  const list = document.getElementById("log");
  const node = document.createElement("pre");
  node.innerHTML = s;
  list.appendChild(node);
};

addChild("Starting a List").then(console.log);

const enqueue: (i: number | string) => () => Promise<string> = i => {
  console.log("enqueueing " + i);
  return () => {
    return addChild(i);
  };
};

for (var i in [1, 2, 3, 4, 5, 6]) {
  throttle.add(enqueue(i)).then(console.log);
}
```

Unable to route HTTP requests to the web application in nginx

I have a ReactJS application running on port 5000. I want to route requests from nginx to the web app.

I am getting the log entries below:

```
2019/06/20 04:30:10 [error] 17709#17709: *67 connect() failed (111: Connection refused) while connecting to upstream, client: 72.163.217.106, server: 159.65.123.84, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8000/", host: "example.com"
2019/06/20 04:30:10 [error] 17709#17709: *69 connect() failed (111: Connection refused) while connecting to upstream, client: 72.163.217.106, server: 159.65.123.84, request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:8000/favicon.ico", host: "example.com", referrer: "http://example.com/"
2019/06/20 04:30:10 [error] 17709#17709: *71 connect() failed (111: Connection refused) while connecting to upstream, client: 72.163.217.106, server: 159.65.123.84, request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:8000/favicon.ico", host: "example.com", referrer: "http://example.com/"
```

Here is my nginx config file at /etc/nginx/sites-available/default:

```
server {
    listen 0.0.0.0:80;
    server_name example.com; # or server_name subdomain.yourapp.com;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        # Enables WS support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
    }
}
```
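For what it’s worth, here’s how I checked from the server itself whether anything is listening on the port nginx proxies to, versus the port the app actually runs on:

```
curl -I http://127.0.0.1:8000/   # the upstream in the config above
curl -I http://127.0.0.1:5000/   # where the React app is running
```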

What could be the reason for this behavior, and how can I fix it?

How Can I Drop Repeated Long-Running Requests with NGINX and AWS ELB?

In our environment, we have:

  1. AWS Application Load Balancer shunting HTTPS to EC2 Instances
  2. EC2 Instances running NGINX that shunt requests to either PHP-FPM or Passenger (Ruby Server)
  3. The PHP and Rails applications at the end of the chain

We have some optimization problems with a few processes, and they can take a second or two to resolve. If someone gets clicky, they can cause the following sequence:

  1. Click a link. Start a request that makes it to the application server (PHP-FPM or Passenger) and starts processing.
  2. Click the same link. Start a second request that makes it to the application server. ELB and NGINX hang up the first request at this point (HTTP status 499), but the abort signal is ignored by PHP-FPM and Passenger. So a thread on the App Server is still processing the first request when the second one comes in. The second one also starts processing.
  3. Repeat Step 2 until all servers are busy responding to long-running processes.

We’ve been able to mitigate this somewhat in two ways:

  1. Stopping some of the requests at the source (i.e., shoehorning in JS to disable the button that was causing the request).
  2. Implementing NGINX rate limiting.

The problem is that at scale it doesn’t take much to bog down the system.

In plain English, all I want to do is:

If the same requester asks for the same thing three times in a short period, stop passing that through to the application for a time.

It appears that NGINX rate limiting doesn’t allow this (unless I’m missing something).

The AWS Web Application Firewall rate-based rules govern “the maximum number of requests from a single IP address that are allowed in a five-minute period,” with a minimum threshold of 2,000.

I’m looking more for “3 times in 10 seconds,” not “2000 times in 5 minutes.”
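For concreteness, the closest I can express with NGINX’s limit_req is a sketch like this (the zone name and location are placeholders; 18r/m works out to roughly 3 requests per 10 seconds):

```
# Sketch only: per-IP limit of ~3 requests per 10 s on a slow endpoint.
limit_req_zone $binary_remote_addr zone=slowstuff:10m rate=18r/m;

server {
    location /slow-report {
        limit_req zone=slowstuff burst=3 nodelay;
        # ... proxy_pass to PHP-FPM / Passenger as usual ...
    }
}
```

But that throttles all of an IP’s requests to that location, not “the same requester asking for the same thing three times,” which is what I actually want to catch.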

Perhaps it’s something we’ll need to include at the application layer.

Mostly I’m fishing for strategies. We can’t be the only ones to have this problem where long-running application processes chew up resources even though they’ve been cancelled.

Is there a silver bullet method for dropping these requests?

Is there a way to actually cancel the application’s processing after NGINX sends the abort signal?

Is there a way in NGINX or ELB/WAF to deny identical requests?

Thanks team.

New SPFx web part (1.4.1) requests WsaUpload.ashx, gets 403

I just generated a new web part using the Yeoman generator for SPFx 1.4.1, and I am debugging it in the hosted workbench. We are using SharePoint 2016, on-premises.

I am seeing console logging that looks like this:

Failed to load resource: the server responded with a status of 403 (Forbidden) [http://our-onprem-sharepoint.com/_layouts/15/WsaUpload.ashx] 

Any suggestions on:

  • the source of these requests? (I didn’t write them…)
  • how I can make them stop?

How to send all requests to a proxy server written in Python 3, using an Apache server

I have written a proxy server in Python 3 which listens on port 4444. I have the Damn Vulnerable Web Application (DVWA) running in my virtual machine. I started the apache2 server in the virtual machine, and I’m able to access DVWA from my host machine. Now what I want is this: the apache2 server should pass all requests to the proxy server, and should process only the requests that come from the proxy server.

Basically, I want to implement the following:

```
if (requester is not the proxy server)
then
    pass the request to the proxy server
else
    reply to the request
```

How can I configure the apache2 server to achieve this functionality, and which configuration file do I need to edit?
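The closest I’ve come up with is the untested sketch below, using mod_rewrite’s proxy flag (which, as far as I know, also needs mod_proxy and mod_proxy_http enabled). 127.0.0.1:4444 is where my proxy runs, and the RewriteCond is my guess at “requester is not the proxy server”:

```
# Untested sketch for the DVWA vhost (mod_rewrite, mod_proxy, mod_proxy_http assumed).
RewriteEngine On
# If the request did NOT come from the proxy itself...
RewriteCond %{REMOTE_ADDR} !=127.0.0.1
# ...hand it to the Python proxy; otherwise Apache serves DVWA normally.
RewriteRule ^/(.*)$ http://127.0.0.1:4444/$1 [P,L]
```

Is that the right direction, or is there a cleaner way to express it?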

Thanks in advance.