How To Set Up Apache Virtual Hosts on CentOS 7

In this tutorial, we will start with an empty VPS with a freshly installed CentOS 7 and will end up with three different sites sharing the same IP address, hosted on the same machine.

The Apache web server analyzes HTTP request headers and connects each request to the appropriate directory structure inside the VPS. The technical term for “sites” inside VPS boxes is “virtual host”: the server is the “host” to many domains at the same time, hence they are not real, but only “virtual”.

Let’s see how it’s done.

What We Are Going To Cover

  • Creating a Non Root User
  • Installing Apache
  • Installing and setting up firewall
  • Defining File Locations For the Default Apache Page
  • Creating File Structure For Our Demo Virtual Hosts
  • Granting Permissions
  • Creating index.html Pages For Our Demo Sites
  • Creating Virtual Hosts Files
  • Turning On the New Virtual Host Files
  • Testing the Virtual Hosts
  • Securing Your Domains With Let’s Encrypt TLS certificates


We’ll use CentOS 7:

  • Starting with a clean VPS with
  • At least 512 MB of RAM and
  • 15 GB of free disk space
  • You will need root user access via SSH
  • Two domain names pointed to your server’s IP address using A records at your DNS service provider.
  • We use nano as our editor of choice, and you can install it with this command:
sudo yum install nano -y 

Step 1: Update the System

First, update your system to the latest stable version by running the following command:

sudo yum update -y
sudo reboot

It may take several minutes for the transaction to finish. You can always stop it by pressing Ctrl-C, but CentOS will then, before performing another transaction, ask you to complete this interrupted one first. Be patient.

Step 2: Create a Non-root User

sudo is already installed on CentOS. Add the user simpleuser and give it access to the sudo command:

adduser simpleuser 

Debian and Ubuntu will automatically ask for the new user’s password, but on CentOS you have to set it yourself:

passwd simpleuser 

Add simpleuser to the wheel group:

usermod -aG wheel simpleuser 

Step 3: Clean Up yum Cache

When installing packages, yum downloads them into its special cache. After a successful installation the packages should be deleted from the cache, but that is not always the case. The following command cleans all cached information:

sudo yum clean all 

It will also reclaim disk space and clear errors caused by corrupted metadata files.

This is a typical result:

Step 4: Install Apache

Apache’s installation files are in the yum repository, so a single command is enough to install it:

sudo yum -y install httpd 

Step 5: Install firewalld as Our Firewall

Apache uses standard ports 80 and 443 for HTTP and HTTPS traffic, respectively.

Let’s now install firewalld and its command-line front-end, firewall-cmd, on CentOS. firewalld supports both IPv4 and IPv6, firewall zones, bridges and ipsets; allows timed firewall rules in zones; logs denied packets; automatically loads kernel modules; and so on.

Install it in the usual manner, by using yum:

yum install firewalld 

Let us now start it, enable it to auto-start at system boot, and see its status:

systemctl start firewalld
systemctl enable firewalld
systemctl status firewalld

The firewall is now running:

Here is a list of ports and services to open; feel free to add any others that your host requires for the normal functioning of the system:

firewall-cmd --permanent --zone=public --add-service=ssh
firewall-cmd --zone=public --add-port=3000/tcp --permanent
firewall-cmd --zone=public --add-port=8080/tcp --permanent
firewall-cmd --permanent --zone=public --add-service=http
firewall-cmd --permanent --zone=public --add-service=https
firewall-cmd --reload

We have added ports 3000 and 8080 as examples only and will not use them in the rest of this text. You would do well, however, to find out which ports the applications on your VPS need and open them in this exact spot for later.

Always enable ssh, http, https and any other critical services in firewalld, otherwise you will NOT be able to log back into your VPS, nor see your site!

Now we can start Apache:

sudo systemctl start httpd 

We want Apache to always be there so we make sure that it starts at boot:

sudo systemctl enable httpd 

We can check the status of Apache:

sudo systemctl status httpd 

This would be a typical output for the status of Apache:

The following command, which stops Apache, is here for your reference only (do not execute it as part of this installation):

sudo systemctl stop httpd 

The real test of successful installation is whether you can access files from the server through your local browser. Navigate to this address:


You should see a welcome page for Apache on CentOS, which means that you now have Apache running on your VPS.

Step 6: Configure SELinux

You’ll also need to configure SELinux (the security subsystem in CentOS) to allow Apache to operate normally:

sudo setsebool -P httpd_unified 1 

Step 7: File Locations For the Default Apache Page

In CentOS, the document root is the directory /var/www/html, and if it contains a file index.html, its contents will be shown to the world. A fresh installation of CentOS does NOT contain such a file in that directory and is set up so that, in that case, it will show the contents of the file


That’s the page with the grey background and the Testing 1 2 3 headline that we have already seen.

Let us now install an index.html page into the proper document root, i.e. /var/www/html. Create the file by opening it in nano:

sudo nano /var/www/html/index.html 

and insert the following text:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "">
<html>
<head>
<title>Untitled Document</title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<style type="text/css">
<!--
.style1 {color: #66CC66}
-->
</style>
</head>
<body>
<h1><span class="style1">Centos 7 Root document index.html file</span></h1>
</body>
</html>

Enter the address of the site into the browser address line and get this:

The directories and the index.html files belong to the root user.

Under CentOS 7, all Apache files are stored in folder /etc/httpd:

ls -la /etc/httpd 

We shall now create the directories sites-available and sites-enabled, which will hold, respectively, the sites that exist and the sites that are enabled to be served to the Internet:

sudo mkdir /etc/httpd/sites-available /etc/httpd/sites-enabled 

You’ll also need to configure Apache to look for them by adding a line of configuration. Open the main config file:

sudo nano /etc/httpd/conf/httpd.conf 

Scroll down to the very end of the file and add the following line:

IncludeOptional sites-enabled/*.conf 

Save and close the file.

This is what you should see:

By adding more config files into /etc/httpd/sites-available and creating the corresponding symbolic links in /etc/httpd/sites-enabled, we can serve different pages to different domains. That is how we can host multiple independent sites off of one and the same IP address.

Each site, in Apache parlance, is called a “virtual host” and will have to reside in its own subdirectory. The plan for adding new virtual hosts boils down to:

  • creating file structure for each site,
  • populating HTML and other files in the site,
  • creating a new .conf file in /etc/httpd/sites-available, and
  • creating a new symbolic link in /etc/httpd/sites-enabled.

Apache will then do the rest, automatically.
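As a sketch, those four steps can be strung together in one shell sequence. This is a hedged illustration only: the domain example.tld is a hypothetical placeholder for your own, and everything runs in a scratch directory instead of /etc/httpd so it needs no root access. The real commands follow step by step below.

```shell
#!/bin/bash
# Sketch: stage a virtual host for a hypothetical domain "example.tld"
# inside a scratch directory standing in for the real filesystem.
set -e
root=$(mktemp -d)
mkdir -p "$root/var/www/example.tld/public_html" \
         "$root/etc/httpd/sites-available" \
         "$root/etc/httpd/sites-enabled"

# A minimal page for the site
echo '<h1>example.tld</h1>' > "$root/var/www/example.tld/public_html/index.html"

# A minimal virtual host definition in sites-available
cat > "$root/etc/httpd/sites-available/example.tld.conf" <<'EOF'
<VirtualHost *:80>
    ServerName example.tld
    DocumentRoot /var/www/example.tld/public_html
</VirtualHost>
EOF

# Enable the site by symlinking it into sites-enabled
ln -s "$root/etc/httpd/sites-available/example.tld.conf" \
      "$root/etc/httpd/sites-enabled/example.tld.conf"

ls "$root/etc/httpd/sites-enabled"   # prints: example.tld.conf
```

On the real server the same pattern targets /var/www and /etc/httpd directly, with sudo.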

Step 8: Creating File Structure For Our Demo Virtual Hosts

We will create two virtual hosts, one for each of the two domains that we previously pointed to the server using A records at our DNS service provider. (You should use your own domains instead.) If we navigated to either domain in a browser right now, the default index.html page with basic Apache information would be served, again.

Let’s now create directories to hold our sites. Apache on CentOS serves its HTML files from /var/www/html, so we could use it for our demo sites as well. One possibility is to make that folder the top one and to put the demo sites into it. With commands such as

cd /var/www/html
sudo mkdir
sudo mkdir
ls -la

we would have the following directory structure:

Here we choose the other possibility, which is to go one level up the directory tree and create the demo sites in /var/www. Inside each site’s folder we can create whatever directory structure we need, for example:

cd /var/www
sudo mkdir -p /var/www/{public_html,private,log,cgi-bin,backup}
sudo mkdir -p /var/www/{public_html,private,log,cgi-bin,backup}
ls /var/www/ -la

Step 9: Granting Permissions

From the image above we see that the root user still owns the public_html folders, from which our public files will be served to the Internet. We will now change the ownership so that the apache user, which the web server runs as, can access the public_html files. The commands are:

sudo chown -vR apache:apache /var/www/
sudo chown -vR apache:apache /var/www/

Also make sure that files in /var/www and its subfolders can be read correctly:

sudo chmod -R 755 /var/www 
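As an aside, mode 755 grants the owner read, write, and execute permission (execute means “enter”, for directories), while group and others get read and execute only. A quick sketch of what that looks like on disk, using a throwaway directory rather than /var/www:

```shell
# Create a scratch directory, apply mode 755, and read the mode back.
tmpdir=$(mktemp -d)
mkdir "$tmpdir/public_html"
chmod 755 "$tmpdir/public_html"
stat -c '%a %A' "$tmpdir/public_html"   # prints: 755 drwxr-xr-x
```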

Step 10: Create index.html Pages For Our Demo Sites

Our public files will live in each domain’s public_html folder under /var/www/. We’ll now create an index.html in each of these two folders so that we have something to see while browsing. Using nano, create the first index.html:

sudo nano /var/www/ 

We need only one line of text, preferably in H1 format for better readability. Insert the following text into nano, then save and close the file:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "">
<html>
<head>
<title> Title</title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<style type="text/css">
body,td,th {
    color: #FF3333;
}
</style>
</head>
<body>
<h1>This is </h1>
</body>
</html>

Create index.html for

sudo nano /var/www/ 

and paste this in:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "">
<html>
<head>
<title> Title</title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<style type="text/css">
.style1 {color: #3333FF}
</style>
</head>
<body>
<h1 class="style1">This is</h1>
</body>
</html>

Step 11: Create Virtual Hosts Files

We’ll now inform Apache that there are two new sites to be served by creating two new Virtual Hosts.

Step 11A: Create the First Virtual Hosts File

Create the first virtual host file:

sudo nano /etc/httpd/sites-available/ 

Paste in the following (and change the domain names to yours):

<VirtualHost *:80>
    ServerAdmin
    ServerName
    ServerAlias
    ServerAlias
    DocumentRoot /var/www/
    #ErrorLog ${APACHE_LOG_DIR}/error.log
    #CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Here is a detailed explanation of the above virtual host:

  • *:80: This virtual host will listen on port 80. You can change the port number through the file httpd.conf, which you open with this command:
sudo nano /etc/httpd/conf/httpd.conf 

You’ll find a Listen directive with port 80 specified; that is where you can customize it. Remember to restart Apache afterwards by running:

sudo systemctl restart httpd 
  • ServerAdmin: The email address to which Apache will send messages for the administrator in case of a system error. May be omitted.
  • ServerName: The server’s name; it should coincide with the domain name.
  • ServerAlias: Another name for the same server as above. You can have as many of these aliases as you like.
  • DocumentRoot: The absolute path of the site’s files on disk.

Step 11B: Create the Second Virtual Hosts File

Do the same for the other site/domain. Here are the commands:

sudo nano /etc/httpd/sites-available/ 

This is what you should paste into nano:

<VirtualHost *:80>
    ServerAdmin
    ServerName
    ServerAlias
    DocumentRoot /var/www/
    #ErrorLog ${APACHE_LOG_DIR}/error.log
    #CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Save and close the file.

Step 12: Turn On the New Virtual Host Files

You’ll have to create symbolic links in the sites-enabled directory pointing to the files in sites-available, so that Apache will start serving the virtual hosts. The commands are:

sudo ln -s /etc/httpd/sites-available/ /etc/httpd/sites-enabled/
sudo ln -s /etc/httpd/sites-available/ /etc/httpd/sites-enabled/

If you want to disable the default site, the command would be:

sudo mv /etc/httpd/conf.d/welcome.conf /etc/httpd/conf.d/welcome.conf.bak 

Then, restart Apache:

sudo systemctl restart httpd 

Be sure to clear the cache in your browser, or it may fool you with old values instead of the properly refreshed ones.

Step 13: Testing the Virtual Hosts

If you have left the default IP address activated, it will currently show the ‘Welcome to Apache’ screen. If you enter the first domain’s address, you will see this:

Entering the second domain into the browser changes the image to:

Step 14: Securing Your Domains With Let’s Encrypt SSL

We can use free TLS certificates from Let’s Encrypt to turn our HTTP traffic into HTTPS traffic, which will make connecting to your site secure.

Certbot for CentOS 7 comes from the EPEL repository, which you’ll need to install first:

sudo yum install epel-release -y 

Install Certbot:

sudo yum install certbot python2-certbot-apache -y 

Then, run Certbot:

sudo certbot --apache 

It will ask for your email address (used for urgent renewal and security notices), then two more questions that you may answer however you like, and then the most important question: which names would you like to activate HTTPS for?

Choose your domain(s) from the list.

The last question will be whether you want to redirect HTTP traffic to HTTPS. You do, so choose 2.

Restart Apache:

sudo systemctl restart httpd 

In your browser, go to address 

We have entered the HTTP address and Apache automatically redirects to HTTPS, as it should:

You’ll notice that the actual site address starts with HTTPS and that there is green padlock in the address bar, signifying a secure connection.

Let’s Encrypt certificates expire after 90 days. Certbot can renew them automatically, but it must be told to do so. The cron job below attempts renewal twice a day, first sleeping for a random interval of up to an hour so that renewal requests are spread out:

echo "0 0,12 * * * root python -c 'import random; import time; time.sleep(random.random() * 3600)' && certbot renew" | sudo tee -a /etc/crontab > /dev/null 

What Can You Do Next

We have shown how to share one IP address among two, three, dozens, or hundreds of independent sites. You can now use this knowledge to host all your sites on one low-cost VPS running CentOS 7.

Dusko Savic is a technical writer and programmer.

The post How To Set Up Apache Virtual Hosts on CentOS 7 appeared first on Low End Box.

git pull (ssh public key) from script CentOS

I’m trying to create a bash script (CentOS 7) to pull code from Bitbucket, like this:

#!/bin/bash
PRJ=$1
cd /var/$PRJ
git init
git remote add origin $PRJ.git
git pull origin master
npm install

and i get the error

Permission denied (publickey).
fatal: Could not read from remote repository.

As you can see, the remote uses SSH with a public key.

If I try “git pull” inside the directory, it works fine. Does running something SSH-related from a script require any special configuration, or am I missing something?

Varnish Latest Version Centos 6.1 Magento 2.2

Please, can someone clarify this for me? I’m very confused.

I want to install Varnish for a Magento 2.2 store on Linux CentOS 6.10 [server] v80.0.22.

I’m using these instructions to install:

For CentOS Users

yum install -y epel-release

yum update

yum install -y varnish

The output I get at the install stage is: Package varnish-2.1.5-6.el6.x86_64 already installed and latest version

I’m checking the Varnish website and see that the latest Varnish version is 6.2.0.

Can someone provide instructions to install it via CLI (SSH/PuTTY), or a link to a page with clear instructions? I want to install the latest version.

I have checked online all day and found some very different answers, but nothing that helps me install the latest version or update my Varnish to the latest.

Unable to ping to or rdp to KVM windows 7 guest but able to ping to other linux guest centos

I am unable to ping only the Windows guest; I am able to ping the other guests.

Host OS: Ubuntu 18.04.2

What I have tried so far: changing the network device to virtio, rtl8139, e1000.

The Internet is working on the guest.

The Windows 7 guest’s firewall is disabled.

I installed the virtio driver for Windows from -

From the host I am able to ping the CentOS guest in the same network, and the host itself.

Here is network configuration of host.

root@hp-e840-g1:/var/lib/libvirt/dnsmasq# ifconfig
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::3832:b6ff:fed3:b592  prefixlen 64  scopeid 0x20<link>
        ether 3a:32:b6:d3:b5:92  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1272  bytes 213475 (213.4 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

br0:avahi: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet  netmask  broadcast
        ether 3a:32:b6:d3:b5:92  txqueuelen 1000  (Ethernet)

enp0s25: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether d0:bf:9c:1f:6d:7b  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device interrupt 20  memory 0xd0700000-d0720000

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet  netmask
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 18229  bytes 1627449 (1.6 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 18229  bytes 1627449 (1.6 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

virbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet  netmask  broadcast
        ether 52:54:00:f6:8c:87  txqueuelen 1000  (Ethernet)
        RX packets 27291  bytes 1676922 (1.6 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 21479  bytes 53057319 (53.0 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vnet0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc54:ff:fe63:38bc  prefixlen 64  scopeid 0x20<link>
        ether fe:54:00:63:38:bc  txqueuelen 1000  (Ethernet)
        RX packets 404  bytes 44813 (44.8 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 358  bytes 23713 (23.7 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vnet1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc54:ff:fe05:1413  prefixlen 64  scopeid 0x20<link>
        ether fe:54:00:05:14:13  txqueuelen 1000  (Ethernet)
        RX packets 634  bytes 50098 (50.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8973  bytes 667052 (667.0 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

wlo1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet  netmask  broadcast
        inet6 fe80::2064:887d:4812:fd7d  prefixlen 64  scopeid 0x20<link>
        ether ac:fd:ce:00:c4:6d  txqueuelen 1000  (Ethernet)
        RX packets 105684  bytes 110034160 (110.0 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 70214  bytes 9358661 (9.3 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

any help really appreciated! Thanks

Azure Environment – Linux CentOS 7 VM with Apache Virtual Host [pending]

Good afternoon. I configured a virtual host in Apache on Linux CentOS 7 and ran tests by adding the IPs and names to the Windows hosts file; the browser reaches both systems without problems. Now I am trying to use Azure DNS to resolve the names. Tests with the nslookup command succeed, but the browser cannot find the IP. How can I solve this problem?

CentOS 7 LAMP Server Tutorial Part 6: Moving to NGINX

Welcome to the last installment in the CentOS 7 LAMP Server Tutorial: Modernized and Explained series! In this article we are going to turn our LAMP server into a LEMP server by removing Apache and installing NGINX (pronounced “Engine X”) in its place.

Why would we want to replace the Apache web server with NGINX? The answer is in concurrency. NGINX can handle more connections at the same time than Apache can without causing any extra work for the server. It can be used as a stand-alone web server, or it can be used as a “buffer” or “shock absorber” between Apache and the rest of the Internet. In this tutorial we’re going to replace the Apache web server with a standalone installation of NGINX. Let’s get started!

Preparation for the switch

If you’ve followed along with our series, you now have a WordPress website running on Apache with PHP 7.3 and a Let’s Encrypt SSL certificate. Apache serves the non-SSL site on port 80 and the SSL site on port 443. NGINX also wants to listen on ports 80 and 443, because these are the ports that HTTP and HTTPS always use. We cannot put NGINX on alternate ports, because then no web browser would ever find it. The solution is to disable or remove Apache, and then set up NGINX in its place.

In Part 2 we configured Apache with a VirtualHost that tells Apache where to serve our WordPress site from. NGINX will need a similar configuration so that it knows how to serve our website. We’ll also have to tell NGINX how to make use of the Let’s Encrypt SSL certificate.

A word of Caution!

We’re going to be making some drastic configuration changes in this tutorial, so a word of caution is in order: back up your data, and don’t do this on a production server. If you already have a working website that is bringing you income or is otherwise important to you, stop. It is in your best interest to get a second VPS, run through the tutorial in full, and then set up a test site. Once you’re sure it works for you, then migrate your site to your new NGINX-based server.


Removing Apache

Still with us? Great! This is going to be fun. As was previously mentioned, we need to get Apache out of the way so that NGINX can do its job. There are two ways to go about this: Stop and disable Apache, or just remove Apache. We are going to remove it. This is easily done with a single command:

yum -y remove httpd

Here’s how this looked on our low end VPS:

You’ll notice that Yum also removed mod_ssl (Apache’s SSL module) and the Let’s Encrypt SSL certbot for Apache. We’ll be replacing these with the NGINX equivalents in the next step.

Installing NGINX

NGINX can be installed directly from the base CentOS 7 Yum repositories, but it lacks some features that we’re going to need. We’re going to use a repository that is maintained by somebody with the username “error”. Don’t worry- there are no errors! We need to install that repository before installing NGINX. Here is the command to use:

curl -o /etc/yum.repos.d/nginx-error.repo \

What about Let’s Encrypt SSL certificates? In part 3 of this series we learned that certbot uses its Apache module to intelligently request and install Let’s Encrypt SSL certificates. But since we’re switching to NGINX, we need to install the similar NGINX module for certbot.

The following command will install the updated version of NGINX and the certbot NGINX module:

yum -y install nginx python2-certbot-nginx

Here’s how this looked on our VPS:



Now we’ll enable NGINX so that it’ll start at boot up, and start it for the first time:

systemctl enable nginx.service
systemctl start nginx

You may notice that we’re not doing anything with PHP. That’s because we previously configured PHP-FPM for PHP 7.3. PHP-FPM is independent of the web server; any web server can communicate with PHP-FPM via its Unix socket. We’ll be configuring that communication very soon. For now, you should see the following page when you go to your website:



If you don’t see that page, then go back and check your work- it’s probably something small.

Configuring NGINX to replace Apache

We originally configured Apache with a VirtualHost configuration so that it could serve files for our virtually hosted website. NGINX has the same feature, but NGINX calls them “server blocks” instead of “virtual hosts”. And much like Apache, we can create a directory that is just for configuration files that are specific to each hosted website.
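For orientation, a bare-bones server block looks like the sketch below. This is a hedged illustration only, with a hypothetical domain and document root, serving static files; the full, commented configuration we will actually use appears later in this section.

```nginx
# Minimal server block sketch (hypothetical names; static files only)
server {
    listen 80;
    server_name example.tld www.example.tld;
    root /var/www/example.tld/public_html;
    index index.html;
}
```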

Let’s first create the necessary directory and then back up the default nginx.conf file (NGINX’s main configuration file) before we create our own files.

mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.backup
mkdir /etc/nginx/sites-enabled

Now we’re going to use nano to create two files: /etc/nginx/nginx.conf and /etc/nginx/sites-enabled/lowend.conf. The nginx.conf file contains directives for the entire server, while lowend.conf contains the server block and configuration information specific to our website. Copy and paste these configuration files in. Be sure to read them in full, because they explain the entire configuration; the comments are very much part of this tutorial.

nano /etc/nginx/nginx.conf

Paste in the following configuration file:

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
        worker_connections 1024;
}

http {
        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
        '$status $body_bytes_sent "$http_referer" '
        '"$http_user_agent" "$http_x_forwarded_for"';
        access_log /var/log/nginx/access.log main;
        log_format cache_status '[$time_local] "$request" $upstream_cache_status';
        access_log /var/log/nginx/cache.log cache_status;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        keepalive_requests 1024;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        # Enable the NGINX fastcgi cache in /var/run/nginx-cache and configure cache keys
        fastcgi_cache_path /var/run/nginx-cache levels=1:2 keys_zone=nginx-cache:100m inactive=60m;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";

        # Add X-Cache headers to HTTP requests. This allows us to see if cached content is being served.
        # If cached content is being served, the header will contain the value "HIT", and if not "MISS".
        add_header X-Cache $upstream_cache_status;

        # Detect logged in WordPress user cookie
        # This is used in the Server Block to bypass the cache for logged in WordPress users.
        map $http_cookie $logged_in {
                default 0;
                ~SESS 1;
                ~wordpress_logged_in 1;
        }

        # Enable individual server blocks to be stored in /etc/nginx/sites-enabled
        # and still be included in the NGINX configuration.
        include /etc/nginx/sites-enabled/*.conf;
}

Now we’ll create the server block for our website:

nano /etc/nginx/sites-enabled/lowend.conf

Paste in the following configuration file:

server {
        # We'll start off by enabling GZIP compression for the server
        gzip on;
        gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/ application/x-font-ttf font/opentype image/svg+xml image/x-icon;
        gzip_proxied no-cache no-store private expired auth;
        gzip_min_length 1000;

        # This Server Block listens on port 80 for HTTP requests
        listen 80;

        # Define the hostnames for this server block, separated by a space
        server_name lowend-tutorial.tld www.lowend-tutorial.tld;

        # Define the document root that NGINX will serve pages from for the site.
        root /home/lowend/public_html;

        # Look for index.php. If you want to add index.html or any other default page, add it here.
        index index.php;

        # Define the access log file for the account
        access_log /home/lowend/logs/lowend-tutorial.tld-access.log main;

        # Don't log favicon.ico
        location = /favicon.ico {
                log_not_found off;
                access_log off;
        }

        # Don't log robots.txt
        location = /robots.txt {
                allow all;
                log_not_found off;
                access_log off;
        }

        location / {
                # include the "?$args" part so non-default permalinks don't break when using query strings
                try_files $uri $uri/ /index.php?$args;

                # Bypass cache for logged in WordPress users
                # This is important so that caching doesn't get in the way of development
                fastcgi_cache_bypass $logged_in;
                fastcgi_no_cache $logged_in;
        }

        location ~ \.php$ {
                try_files $uri =404;

                # Connect to PHP-FPM for PHP calls via fastcgi
                # This is the NGINX equivalent of the Apache fastcgi directives that were previously used
                fastcgi_pass unix:/var/run/php73-fpm/php73-fpm.lowend.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;

                # Make use of the nginx-cache that was defined in nginx.conf
                fastcgi_cache nginx-cache;
                fastcgi_cache_valid 200 60m;
        }

        # Set a max expiration time for various file types, and don't log 404's for them
        location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                expires max;
                log_not_found off;
        }

        # Configure cache purging. This will allow us to use a small shell script to hourly purge the cache.
        location ~ /purge(/.*) {
                allow;
                deny all;
                fastcgi_cache_purge nginx-cache "$scheme$request_method$host$1";
        }
}


Now we can restart NGINX to enable the new configuration:

systemctl restart nginx

NGINX and Let’s Encrypt

In previous steps, we removed the certbot-apache module and installed the certbot-nginx module. The new module does all of the same things for NGINX that certbot-apache did for Apache. We will use the certbot-nginx module to reinstall our certificate with the following command:

certbot --nginx

Since there’s already a certificate, we can select the option “1: Attempt to reinstall this existing certificate”

We also need to update our cron job for renewing certificates. Edit the crontab with the following command:

crontab -e

Edit the configuration so that it looks like the following line:

1 */12 * * * certbot --nginx renew

Check your Work

You should now be able to load your WordPress website in a browser. If not, then please double check all previous steps, and check the NGINX error logs.

With that restart, NGINX is fully configured for your WordPress site. Caching is enabled, and when you’re logged into WordPress, NGINX will bypass the cache so that you can easily see your changes.

Clearing the cache

We’ve configured a way to clear the NGINX cache by calling a URL with HTTP “PURGE” instead of “GET”. This is a great way to do it, and in fact we installed the custom version of NGINX just so that we could enable this feature. But there is a minor problem: at the time of writing, there are no NGINX cache plugins for WordPress that actually work when they flush the cache using the “PURGE” method. They require that PHP can access the NGINX cache files directly, which presents a security risk: all sites share the same cache directory, and each site’s PHP-FPM instance would have access to another site’s cache. That’s not secure at all!

To remedy this, we’re going to leverage wp-cli to get a list of all the posts and pages on your WordPress site, then pass that list to a “for” loop that calls curl to issue an HTTP request purging the cache of each post or page. We’re going to do this in a very simple shell script. The script should not go in the public_html directory, and should be owned and run by the website user. Our website username is “lowend”, so this file will go in /home/lowend/. We used the following command to create the file:

sudo -u lowend nano /home/lowend/

Paste in the following script. Be sure to correct the URL to match your site:

#!/bin/bash
# Use wp-cli to find all pages and posts, and use curl to issue an
# HTTP PURGE to flush the cache for each page and post.
for page in $(/usr/local/bin/wp --path=/home/lowend/public_html post list --post_type=page,post --format=csv --field=post_name)
do
    curl --silent --output /dev/null -X PURGE -I "https://lowend-tutorial.tld/$page"
done
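
If the command substitution inside `$(...)` looks dense, here is a hypothetical stand-in for the same loop, with printf playing the role of wp-cli (one post slug per line) and a counter playing the role of curl; the slugs are made up for illustration:

```shell
# printf stands in for wp-cli's list of post slugs, one per line.
slugs=$(printf '%s\n' hello-world sample-page contact)

purged=0
for page in $slugs
do
    # The real script sends the PURGE request at this point, e.g.:
    # curl --silent --output /dev/null -X PURGE "https://lowend-tutorial.tld/$page"
    purged=$((purged + 1))
done
echo "Purged $purged pages"
```

Each slug becomes one loop iteration, which is exactly how each post and page on the site gets its own PURGE request.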

Now make the script executable by setting its permissions to 755:

chmod 755 /home/lowend/
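
As a quick aside, 755 is octal for rwxr-xr-x: the owner may read, write, and execute, while everyone else may only read and execute. The throwaway demo below (using a temporary file, not your real script) shows the effect:

```shell
# Create a temporary file, give it 755 permissions, and read the
# mode back with GNU stat to confirm it.
demo=$(mktemp /tmp/perm-demo.XXXXXX)
chmod 755 "$demo"
mode=$(stat -c '%a' "$demo")
echo "Mode is $mode"
rm -f "$demo"
```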

We want to clear the cache regularly, so we’ll create a cron job that will run our script hourly:

crontab -e -u lowend

Paste in the following cron job, which runs the script at one minute past every hour:

1 * * * * /home/lowend/

Last but not least, log into your WordPress website and remove all caching plugins. Instead we’re going to install a plugin called “Redis Object Cache” by Till Krüss. Install and activate the plugin, and it’ll take care of object caching via Redis, while NGINX will take care of page caching. It’s a great setup.

And with that, we’re done! Our testing showed that we were able to serve about 30% more requests per second using this configuration. Depending on your VPS you may be able to do the same, and possibly much more.

The End is just the Beginning

This tutorial marks the conclusion of the LAMP Server series. We hope that you’ve found it informative! The things you’ve learned by building your own server from scratch will help you to understand and appreciate what goes on behind the scenes when you use a control panel that manages the configurations for you. They really do save a lot of work! But even more than that, you’ve learned how virtual hosting works, how PHP can be configured, and how to use basic Linux tools to administer your server.

What are some other things you could do? Instead of setting up NGINX as we did in this tutorial, why not leave Apache set up and learn how to set up NGINX as a caching proxy server? Learn how to create multiple websites by creating new users and new virtual hosts or server blocks (or both, as the case may be). Tired of building configuration files from scratch every time? Learn a tool such as Ansible that’ll do all the work for you. The possibilities are endless. This may be the end of this series, but it’s just the beginning for someone who’s ready to keep learning. Enjoy!

The post CentOS 7 LAMP Server Tutorial Part 6: Moving to NGINX appeared first on Low End Box.

CentOS 7 LAMP Server Tutorial Part 5: Speeding up WordPress with Redis

In the previous CentOS 7 LAMP Server Tutorials, we configured a LAMP stack, secured it with Let’s Encrypt SSL certificates, and installed WordPress with WP-CLI. Here in Part 5 we’re going to up our game just a bit more by installing server side caching with a program called Redis. Then we’ll configure WordPress to use Redis and check our performance to see if Redis actually made an impact. Let’s get started!

What is Redis?

According to the official Redis documentation, Redis is an in-memory “data structure store” that can be used as a database, cache, and message broker. Because it stores information (called “objects”) in memory, it spares the server the trouble of having to look things up in its database on every request. In this way it serves as a cache, and as such can help the server handle more traffic without having to work harder to do it. 
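
The caching idea can be sketched in a few lines of plain shell: a variable stands in for Redis and a function stands in for a slow MySQL query (all names here are invented for illustration). The first lookup misses the cache and queries the “database”; the second is served from memory:

```shell
db_queries=0
cached_value=""

slow_db_lookup() {
    # Imagine an expensive SQL query happening here.
    db_queries=$((db_queries + 1))
    cached_value="value-for-$1"
}

cached_lookup() {
    if [ -z "$cached_value" ]; then
        slow_db_lookup "$1"     # cache miss: do the real work once
    fi
    result=$cached_value        # cache hit: no database work at all
}

cached_lookup post-42
cached_lookup post-42
echo "Database queried $db_queries time(s) for 2 lookups"
```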

By default, WordPress doesn’t know how to use Redis. We’re going to enable WordPress to use Redis by installing a WordPress caching plugin called “W3 Total Cache”. First, we need to install Redis and get it started.

Installing Redis

Redis is available from the CentOS “Extra Packages for Enterprise Linux” (EPEL) software repository. If you’ve followed this tutorial series from the beginning, then the CentOS EPEL repository is already enabled. If not, then you’ll need to enable it now using the following command:

yum -y install epel-release

Now you’re ready to install Redis. It’s easy: just a quick yum install. Run the following command:

yum -y install redis

With Redis now installed, we need to start Redis for the first time and configure CentOS to launch Redis at startup. Run the following commands:

systemctl start redis
systemctl enable redis

Lastly, we need to make sure that Redis is actually started and that it is reachable. Much like PHP-FPM’s default configuration, Redis listens on the “localhost” IP (127.0.0.1) on a specific port (6379). Other programs can communicate with Redis by making a connection to that IP and port.

We’re going to test that communication now using the redis-cli program. Just like you can ping an IP with the “ping” command, you can also ping Redis with the “redis-cli ping” command. If Redis is running properly, it’ll simply reply with “PONG”. Give it a try:

redis-cli ping

Ours was running properly.

If you got “PONG”, then great! Redis is running and you’re ready to configure WordPress to take advantage of it. Almost. First, we need to get PHP up to speed.

Enabling Redis in PHP 7.3

Now that we know that Redis is working, we need to give PHP the ability to talk to Redis. Thankfully, this is as simple as installing the PHP 7.3 Redis extension and restarting PHP-FPM. If you’ve followed our tutorial, then the following commands will do the job:

yum -y install php73-php-redis
systemctl restart php73-php-fpm

Configuring WordPress to use Redis

As previously mentioned, WordPress doesn’t have the know-how to use Redis. It also doesn’t have any built-in caching. We will solve both of those problems by installing the W3 Total Cache plugin for WordPress. Log in to your WordPress Dashboard, and go to Plugins > Add New. Search for “W3 Total Cache” and click on “Install Now”.

Click on Activate to enable W3 Total Cache. You’ll see a new menu item in the WordPress Dashboard: “Performance”. Click on Performance > General to get into the W3 Total Cache settings.

Go to each cache setting and enable it, and choose “Redis” from the drop down if it is available. If Redis is not available on any of the dropdowns and is greyed out, then it’s likely that either Redis isn’t installed, running, or enabled in PHP. You’ll need to retrace the previous steps taken before continuing.

Once all of the caches are enabled and Redis is selected, click on “Save All Settings”.

Check the function of your site. If your site doesn’t look correct, you may need to change the Minification settings, as those can often cause problems with CSS and Javascript. It’ll take some trial and error. Once your site is working correctly, we can continue. 

Configuring WordPress for Redis really is that easy! But it doesn’t tell us whether it’s doing us any good. We want to know: Is Redis doing its job, and making an impact on speed? Let’s find out! 

Testing Redis

To know whether Redis is making a positive impact on your site, we’ll need to do some load testing. For that we used an online load-testing service whose free account allows up to 50 Virtual Users for 12 minutes at a time. We configured a test of 50 Virtual Users (VU) for 3 minutes using the “Stress” ramp-up setting.

Both tests were run with W3 Total Cache installed. The “Disk Caching” test was done with W3 Total Cache configured for “Disk” or “Disk: Enhanced”. The “Redis Caching” test was done with Redis configured as previously mentioned. The results were as follows.

With Disk caching enabled, our WordPress site served 27 requests per second with an average response time of 22ms. For a single-core VPS with 512MB of RAM, this is fine. 

When we reconfigured the site to use Redis, there was a very nice improvement. The server was able to serve 35 requests per second, with a slightly higher average response time of 34ms. The higher response time is likely due to the server being busy handling so many more requests. Still, 34ms is quite good and completely acceptable. 

On the other hand, 35 requests per second vs 27 is VERY good. It’s a 30% increase in performance, and all we had to do was install and configure Redis! 
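
For the curious, that figure comes straight from the two throughput numbers: (35 - 27) / 27 ≈ 0.296, or just under 30%. A quick integer-arithmetic check in shell:

```shell
before=27   # requests/second with disk caching
after=35    # requests/second with Redis caching
# Integer shell arithmetic truncates, so this reports 29 (~30%).
gain=$(( (after - before) * 100 / before ))
echo "${gain}% more requests per second"
```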

While we’ve covered the general installation and utilization of Redis, we’ve only really scratched the surface. Redis can be configured in countless different ways. Tuning for higher performance is possible, and fine-tuning it for your specific requirements may yield even better results. You can learn more in the official Redis documentation.

Are we done?

Not quite! There’s still more performance to be had. In the next installment of this series, we’re going to make even more use of caching by replacing Apache with a caching web server called NGINX (pronounced “Engine-X”). Check back soon! 

The post CentOS 7 LAMP Server Tutorial Part 5: Speeding up WordPress with Redis appeared first on Low End Box.

Connecting Remote Desktop CentOS 7 to Ubuntu 19.04 using Remmina

I have a fully upgraded CentOS 7 VPS on which I installed the XFCE desktop environment and xrdp, and I can easily log into its desktop using Windows 10’s built-in Remote Desktop application.

It’s a different story with Ubuntu 19.04: same computer, same connection, but Remmina just gives me a flashing black screen when I enter my credentials. No option seems to change the outcome.

How can I correctly access my CentOS desktop from Ubuntu? Please help me; I can’t believe I have to use Win10 just to access my Linux server.