Unusual GET requests in my Node.js journal – has my nginx/node been hacked?

I saw this in journalctl for a service I run:

jul 29 12:39:05 ubuntu-18 node[796]: GET http://www.123cha.com/ 200 147.463 ms - 8485
jul 29 12:39:10 ubuntu-18 node[796]: GET http://www.rfa.org/english/ - - ms - -
jul 29 12:39:10 ubuntu-18 node[796]: GET http://www.minghui.org/ - - ms - -
jul 29 12:39:11 ubuntu-18 node[796]: GET http://www.wujieliulan.com/ - - ms - -
jul 29 12:39:11 ubuntu-18 node[796]: GET http://www.epochtimes.com/ 200 133.357 ms - 8485
jul 29 12:39:14 ubuntu-18 node[796]: GET http://boxun.com/ - - ms - -

These GET requests are not coming from any code I’ve written.

"Correct" entries look like this:

jul 29 12:41:46 ubuntu-18 node[796]: GET / 304 128.329 ms - -
jul 29 12:41:47 ubuntu-18 node[796]: GET /stylesheets/bootstrap.min.css 304 0.660 ms - -
jul 29 12:41:47 ubuntu-18 node[796]: GET /stylesheets/font-awesome-4.7.0/css/font-awesome.min.css 304 0.508 ms - -
jul 29 12:41:47 ubuntu-18 node[796]: GET /img/250x250/deciduous_tree_5.thumb.png 304 0.548 ms - -
jul 29 12:41:47 ubuntu-18 node[796]: GET /stylesheets/style.css 304 7.087 ms - -
jul 29 12:41:47 ubuntu-18 node[796]: GET /img/logos/250x250/brf_masthugget.250x250.jpg 200 0.876 ms - 9945

The server is a Node.js v8.10.0 instance behind nginx v1.14.0, running on an up-to-date Ubuntu Server 18.04.

The Ubuntu machine is a DigitalOcean droplet.

I’ve tried generating similar requests from a JavaScript console, but the browser blocks plain-http requests (mixed-content blocking); if I try https I get a cross-origin error – which is good 🙂

I’m puzzled: how are these GET requests being generated and sent?
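For reference, a request line like that can be generated outside a browser, where mixed-content and cross-origin rules don’t apply. A minimal sketch using Node’s http module (the host name is a placeholder for my droplet); it puts an absolute URI in the request line, the way a client probing for an open proxy would:

const http = require('http');

const req = http.request({
  host: 'my-droplet.example.com', // placeholder: the server being tested
  port: 80,
  method: 'GET',
  path: 'http://www.example.com/', // absolute URI in the request line, proxy-style
});
req.on('response', (res) => console.log('status:', res.statusCode));
req.on('error', (err) => console.error(err.message));
req.end();

A server that answers such a request with 200 and its own content is presumably just ignoring the absolute URI rather than actually proxying.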

How does the cache / memory know where to return results of read requests to?

The pipeline of a modern processor has many stages that may issue read requests to main memory, e.g. when fetching the next instruction or loading a memory location into a register. How is the result of a read request returned to the right pipeline stage, given that there is more than one possible recipient? Since most CPUs access main memory via a cache hierarchy, the question becomes: how does the L1 cache know which part of the pipeline to return a result to?

I imagine that access to the L1 cache is queued, but each access presumably needs a ‘return address’. How is this typically handled?
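To make the ‘return address’ idea concrete, here is a software toy model (an illustration only, not a claim about real hardware): each outstanding read carries a small tag, and a table maps tags back to requesters, roughly the role that miss status holding registers (MSHRs) and destination-register tags play in real designs.

// Toy model only: outstanding reads tagged with a "return address".
class OutstandingReads {
  constructor() {
    this.pending = new Map(); // tag -> { address, requester }
  }
  issue(tag, address, requester) {
    // remember who asked; the tag travels alongside the memory request
    this.pending.set(tag, { address, requester });
  }
  complete(tag, data) {
    // when the data comes back, the tag selects the right recipient
    const entry = this.pending.get(tag);
    this.pending.delete(tag);
    entry.requester(data);
  }
}

// Usage: the fetch stage and a load unit are just callbacks here.
const reads = new OutstandingReads();
reads.issue(1, 0x1000, (data) => console.log('fetch stage got', data));
reads.issue(2, 0x2000, (data) => console.log('load unit got', data));
reads.complete(2, 'line B'); // completions may return out of order
reads.complete(1, 'line A');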

What is the most restrictive way to allow IPv6 ICMP traffic with ip6tables?

This is what I have so far, but it is pretty open.

*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
-A INPUT -p ipv6-icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A OUTPUT -p ipv6-icmp -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
COMMIT

If you have time, explaining the rules would be amazing.
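For comparison, the direction I am considering is to replace the blanket ipv6-icmp accept with only the ICMPv6 types a simple host needs, roughly following RFC 4890. A sketch of the INPUT side (ip6tables type names, untested):

# Error and path-MTU messages a host must accept:
-A INPUT -p ipv6-icmp --icmpv6-type destination-unreachable -j ACCEPT
-A INPUT -p ipv6-icmp --icmpv6-type packet-too-big -j ACCEPT
-A INPUT -p ipv6-icmp --icmpv6-type time-exceeded -j ACCEPT
-A INPUT -p ipv6-icmp --icmpv6-type parameter-problem -j ACCEPT
# Neighbour discovery (IPv6's equivalent of ARP), required on the local link:
-A INPUT -p ipv6-icmp --icmpv6-type neighbour-solicitation -j ACCEPT
-A INPUT -p ipv6-icmp --icmpv6-type neighbour-advertisement -j ACCEPT
-A INPUT -p ipv6-icmp --icmpv6-type router-advertisement -j ACCEPT
# Ping, rate-limited rather than dropped:
-A INPUT -p ipv6-icmp --icmpv6-type echo-request -m limit --limit 5/sec -j ACCEPT

Is something along these lines reasonable, or does it break things I am not seeing?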

Solution to Users’ Initial HTTP Requests Being Unencrypted Despite HTTPS Redirection?

It is my understanding that requests from a client browser to a webserver initially follow the specified protocol, e.g. HTTPS, and default to HTTP if none is specified (tested in Firefox). On the server side it is desirable to enforce HTTPS for all connections, for the privacy of request headers, and so HTTPS redirects are used. The problem is that any initial request where the client does not explicitly request HTTPS is sent unencrypted. For example, the client instructs the browser with the URL below.

google.com/search?q=unencrypted-get

google.com will redirect the client browser to HTTPS, but the initial HTTP request and its GET parameters were already sent unencrypted, possibly compromising the client’s privacy. Obviously nothing the server does can be foolproof here, but:

  1. Could this misuse compromise the subsequent TLS security, possibly through a known-plaintext attack (KPA)?
  2. Are there any less obvious measures that could mitigate this, perhaps through some DNS-based solution?
  3. Would it be sensible for a future client standard to always attempt HTTPS first by default? (The closest mechanism deployed today is sketched below.)
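Regarding question 3: the closest deployed mechanism I know of is HSTS (HTTP Strict Transport Security). After one response carrying the header over HTTPS, the browser rewrites http:// URLs for that origin to https:// locally, before any bytes leave the machine. A minimal sketch of emitting the header, assuming an Express app behind a TLS terminator (the parameter values are typical, not prescriptive):

const express = require('express');
const app = express();

app.use((req, res, next) => {
  // one year, subdomains included; 'preload' opts in to browser preload lists
  res.set('Strict-Transport-Security', 'max-age=31536000; includeSubDomains; preload');
  next();
});

app.get('/', (req, res) => res.send('served over HTTPS'));
app.listen(8080); // TLS termination assumed to happen in front of this app

The preload list also covers the very first visit, which plain HSTS cannot protect.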

Hacking Attempt Requests Not Showing Up in Webserver Logs, but Google Analytics Shows Them

Which hacking tool makes a request that does not show up in webserver logs?

/en/latest/ has been requested over 116 times, and we don’t have this URL on the website at all!

Requests to that URL do not show up in the webserver logs. I set up Google Analytics to track ad-blockers by loading the script from a different URL that ad-blockers don’t know about, and ever since then it has caught lots of hacking requests to non-existent URLs.

How come Google Analytics captures these requests (the attackers presumably don’t know it is there), while they apparently never reach the webserver, since nothing appears in its logs?

In short: there are deliberate requests to a non-existent URL that don’t show up in the webserver logs, yet my hidden Google Analytics script captures them.
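For what it is worth, one documented way a pageview can appear in Analytics without any request to the website is the Universal Analytics Measurement Protocol: anyone who knows or guesses a tracking ID can send hits directly to Google. A sketch (the tracking ID and client ID are placeholders):

const https = require('https');
const querystring = require('querystring');

// A "ghost" pageview: nothing here ever touches the target website's server.
const body = querystring.stringify({
  v: '1',            // Measurement Protocol version
  tid: 'UA-XXXX-Y',  // placeholder tracking ID (guessed or scraped)
  cid: '555',        // arbitrary client ID
  t: 'pageview',
  dp: '/en/latest/', // the phantom page path
});

const req = https.request({
  host: 'www.google-analytics.com',
  path: '/collect',
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
}, (res) => console.log('GA responded', res.statusCode));
req.end(body);

Could something like this explain what I am seeing?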

Why are browsers making PUT requests for static assets on my site?

Our site hosts static assets under /assets/…. While debugging a font-related issue, I looked through our logs for unusual activity and found a bunch of requests like these:

method  path                          referer
PUT     /assets/js/40-8b8c.chunk.js   https://mysite.com
PUT     /assets/fonts/antique.woff2   https://mysite.com/assets/css/mobile-ef45.chunk.css

The requests come from lots of different IP addresses all over the world. I don’t see any pattern in the User-Agents. The only HTTP methods are HEAD (odd, but fine), GET (expected), and PUT (very suspicious).

I haven’t been able to identify any code in our system that would cause a browser to make PUT requests to these paths.

I have no evidence that this activity is malicious. It could certainly be a broken browser plugin.
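In the meantime, I am considering rejecting these with 405 while logging enough detail to investigate. A sketch, assuming the assets are served by an Express app:

const express = require('express');
const app = express();

// Log stray PUTs (path + headers) and refuse them, so the clients can be
// studied without ever letting a write-like method through.
app.use('/assets', (req, res, next) => {
  if (req.method !== 'PUT') return next();
  console.log('stray PUT:', req.originalUrl, JSON.stringify(req.headers));
  res.set('Allow', 'GET, HEAD').status(405).end();
});

app.use('/assets', express.static('assets')); // normal static file serving
app.listen(8080);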

Has anyone seen this sort of behavior?

What is the best practice for resetting multi-factor authentication when a user requests password recovery?

I am in the process of developing a web application which requires MFA on every login. (On the first login, before you can do anything, you are forced to set up MFA.) Due to monetary restrictions and development time constraints, the MFA chosen is a simple TOTP solution, but in the future I may include other providers such as Authy.

In the process of developing the password-recovery flow, I thought that if someone forgot their password, they most likely forgot or lost their MFA as well. In your experience, is this assumption correct? What is the “best practice” here? Do I reset the MFA along with the password on password recovery, or do I force the user to authenticate through another method in order to have their MFA reset?
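For concreteness, the stricter option as I understand it looks roughly like this (all helper functions here are hypothetical, named only for illustration): the emailed reset token alone may change the password, but the TOTP secret survives the reset.

// Sketch of the stricter flow: password recovery does NOT clear TOTP.
// verifyResetToken, verifyTotp and setPassword are hypothetical helpers.
async function completePasswordReset(user, resetToken, newPassword, totpCode) {
  if (!verifyResetToken(user, resetToken)) throw new Error('invalid reset token');
  // the second factor is still required, proving the account owner is present
  if (!verifyTotp(user.totpSecret, totpCode)) throw new Error('TOTP required');
  await setPassword(user, newPassword);
}

// Resetting the MFA itself would go through a separate, manually verified
// path, e.g. pre-generated backup codes or an identity check by support.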