403 on fresh install of Apache with PHP 7 on CentOS 7

I have a fresh install of Apache with PHP 7 on CentOS 7. I used Composer to install a web API, and Apache is giving me 403 errors when I try to access the PHP pages. I have configured nothing manually; this has all been done on a fresh install of CentOS 7 on an untouched VPS. I did this once before and had no issues, but when I decided to start again from scratch, Apache suddenly doesn't want to work. The PHP files and their parent directories are all set to 0755.

I have included a pastebin link to my httpd.conf file.

https://pastebin.com/0x7tjJRE

Docker | CentOS | $releasever is “” when running docker build

I am attempting to build a docker image from a CentOS parent image … and installing MongoDB. The error is happening when docker build is adding the MongoDB repo.

You can see that the error happens because $releasever is blank for some reason.

Dockerfile

FROM centos:latest
MAINTAINER "MyName" <myemail@gmail.com>
ENV container docker
RUN echo -e "\
[mongodb-org-4.0]\n\
name=MongoDB Repository\n\
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.0/x86_64/\n\
gpgcheck=1\n\
enabled=1\n\
gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc\n" >> /etc/yum.repos.d/mongodb-org-4.0.repo
RUN yum repolist all
RUN yum install -y mongodb-org

ERROR

https://repo.mongodb.org/yum/redhat//mongodb-org/4.0/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found 

CentOS 7 LAMP Server Tutorial Part 4: WordPress and wp-cli

Welcome to the fourth installment of the CentOS 7 LAMP Server Tutorial: Modernized and Explained series. In Part 1 and Part 2 we configured a LAMP server with PHP-FPM running PHP 7.3 and a modern version of MariaDB. Then we configured a single VirtualHost for hosting a website, and in Part 3 we configured that VirtualHost to run securely with a free Let’s Encrypt SSL certificate.

In this bit, we’re going to put our shiny new server to practical use by installing WordPress. We’re also going to learn how to use a tool that’ll allow us to install, configure, and maintain WordPress right from the command line.

The WordPress Command Line Interface

As its name clearly states, the WordPress Command Line Interface (or wp-cli) is a command line based administration tool for WordPress. Why would you want such a thing? Good question!

Linux server administrators are known for preferring to do things without having to leave the command line. Switching back and forth between a command line to a GUI or web interface is seen as inefficient at best. But there’s more to it than just basic efficiencies. The command line interface can be automated with shell scripts that allow us to do otherwise tedious tasks without ever having to click a mouse button.

Furthermore, there are tasks that can be done via wp-cli that would otherwise require editing the WordPress database directly or by using third party plugins. Password resets, database search and replaces, and much much more are available. Let’s get it installed!
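Before we do, here are two quick examples of the kind of one-liners wp-cli makes possible. These are illustrative sketches only: the user ID, URLs, and password below are placeholders, not values used elsewhere in this tutorial.

# Reset the password of the user with ID 1 (placeholder ID and password)
wp user update 1 --user_pass='A-New-Strong-Password'

# Preview a database-wide search and replace without changing anything
wp search-replace 'http://old-domain.tld' 'https://new-domain.tld' --dry-run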

Installing wp-cli

Before we install wp-cli, we need to satisfy one pre-requisite. In the first article in this LAMP tutorial series, we set up PHP 7.3 to work from the command line with

scl enable php73 bash

What we didn’t do was set up the associations needed so that PHP 7.3 would work the same from the command line as via Apache. In its current state, PHP cannot access MySQL databases. Let’s resolve that now. Paste in the following commands:

echo '#!/bin/bash' >> /etc/profile.d/php73.sh
echo source /opt/remi/php73/enable >> /etc/profile.d/php73.sh
echo export X_SCLS="`scl enable php73 'echo $X_SCLS'`" >> /etc/profile.d/php73.sh

Now we’re good to go. Installing wp-cli couldn’t be simpler. Log in to your server as root and paste in the following commands:

curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
chmod +x wp-cli.phar
mv wp-cli.phar /usr/local/bin/wp

You can test your new wp-cli installation by simply typing the command “wp” at the command line:

The help documentation will be loaded inside of a Linux program called less so that you can scroll through it with the up and down arrows on your keyboard. Go ahead and take a look at the page and get a feel for the capabilities that wp-cli has. Press q to exit.
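If you prefer a quick, non-interactive check, wp-cli also has an info flag that prints its version and environment details:

wp --info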

Installing WordPress on our VirtualHost

You’ll recall that we created a VirtualHost called lowend-tutorial.tld with the username “lowend”. We’re now going to create a database that matches that username for no other reason than to associate the two in our heads. We’re also going to create a MySQL user with a unique password so that the database can be accessed by a non-root user. This is an extremely important step. Your databases should never be accessed as root! You’ll want to substitute your own information, of course.

Creating the Database and User

To create the database, we’re going to need to switch to the MySQL (MariaDB) interface by simply typing

mysql -u root -p

We’re logging in as root here because only the root user can add databases and database users. Edit the following command to match your own username and chosen password, and paste it in:

create database lowend;
GRANT ALL PRIVILEGES ON lowend.* TO 'lowend'@'localhost' IDENTIFIED BY 'Your-DB-Password';
FLUSH PRIVILEGES;

In these three commands we created the database, created a user, and gave that user full access to the “lowend” database and all its tables. Then we flushed the privileges to enact the changes. This is how it looked on our VPS:

Make sure to keep track of the database name, username, and password. We’ll need it to install WordPress. Exit the MySQL interface by simply typing “exit” and pressing Enter.

Installing WordPress with wp-cli

The typical method of installing WordPress involves downloading the files to the public_html directory and then going to the website in a browser to finish the installation. The wp-cli program lets us skip all of that and perform the installation all from the command line.

First, switch to your VirtualHost’s user and then to its public_html directory.

su - lowend
cd ~/public_html/

This is the same public_html folder that we used to verify that PHP was working in Part 2 with the “test.php” file. If that file still exists, it would be good to remove it.

rm test.php

With the public_html directory now empty, we’ll begin the WordPress installation. First we need to use wp-cli to download the core WordPress files:

wp core download

If you now type “ls” you will see all of the core WordPress files. Now we need to configure WordPress. WordPress’ main configuration file is wp-config.php, and it contains all of the information that WordPress needs to talk to the database, and more. You’ll see that it doesn’t exist yet. We’re going to use wp-cli to create it and then change its permissions so that they are correct. Edit the following command to match your configuration, then paste it in:

wp core config --dbhost=localhost --dbname=lowend --dbuser=lowend --dbpass=Your-DB-Password
chmod 644 wp-config.php

Now WordPress knows how to talk to its database, but that database is still empty. WordPress now needs an identity and its first administrator. We’re going to configure those using wp-cli and set some permissions. Again, edit the following so that it matches your own configuration, and paste it in. Be sure to use your real email address and set a strong password! You can be sure that as soon as you get WordPress installed, there will be bots trying to log into it. It’s also recommended that you do not use “wordpress_admin” as your username for the administrative user. Choose something unique.

wp core install --url=https://lowend-tutorial.tld --title="LowEnd Tutorial Site" \
  --admin_name=wordpress_admin --admin_password=Your-DB-Password \
  --admin_email=your-email-address@gmail.com
chmod 755 wp-content/uploads

Here is how this looked on our server:

Now we can test that our new website is working by simply visiting it in a browser: https://lowend-tutorial.tld. Check it out and come back when you’re done.

Does it work? Great! If not, go back over the earlier steps and check them for typos or other errors.

It’s time to get logged in by going to https://lowend-tutorial.tld/wp-admin. Log in using the username and password you created with wp core install. Once you’re logged in, navigate to Settings > Permalinks > select Post name, and then Save Changes as shown below:

 

Changing the Permalinks structure (the way that WordPress maps a URL to a page or blog post) finishes things off by writing the .htaccess file that’s needed for many other WordPress functions. Reconfiguring the permalinks can be done via wp-cli, but when we tested it, it didn’t write the .htaccess file. Your WordPress installation is now ready to be turned into an awesome website. Well done!
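For reference, the wp-cli route mentioned above looks roughly like this. It updates the permalink setting in the database, but as noted, in our testing it did not write the .htaccess file, so the dashboard method is still recommended:

wp rewrite structure '/%postname%/'
wp rewrite flush --hard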

Wait, that’s it?

Yep, you did it! You built a LAMP server with PHP 7.3, secured it with an automatically renewing Let’s Encrypt SSL certificate, and installed WordPress. Be assured that we aren’t just stopping here, however.

A few more things…

In this series, you installed one user, one VirtualHost, one SSL certificate, and one website. But, if you create a second user, a second VirtualHost, and a second PHP-FPM configuration file, you can host a second website on your server! And a third, and a fourth, and… as many as your server can handle! The process can even be automated with some simple shell scripts.

Next up we’ll be doing fun things like configuring Redis and then switching over to Nginx instead of Apache. Stay tuned!

The post CentOS 7 LAMP Server Tutorial Part 4: WordPress and wp-cli appeared first on Low End Box.

cobbler_web unavailable on CentOS 7.6

When I try to load the cobbler 2.8.4 web page I see the below error in /var/log/httpd/ssl_error_log

Traceback (most recent call last):
  File "/usr/share/cobbler/web/cobbler.wsgi", line 26, in application
    _application = get_wsgi_application()
  File "/usr/lib/python2.7/site-packages/django/core/wsgi.py", line 13, in get_wsgi_application
    django.setup(set_prefix=False)
  File "/usr/lib/python2.7/site-packages/django/__init__.py", line 22, in setup
    configure_logging(settings.LOGGING_CONFIG, settings.LOGGING)
  File "/usr/lib/python2.7/site-packages/django/conf/__init__.py", line 56, in __getattr__
    self._setup(name)
  File "/usr/lib/python2.7/site-packages/django/conf/__init__.py", line 41, in _setup
    self._wrapped = Settings(settings_module)
  File "/usr/lib/python2.7/site-packages/django/conf/__init__.py", line 110, in __init__
    mod = importlib.import_module(self.SETTINGS_MODULE)
  File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/usr/share/cobbler/web/settings.py", line 89, in <module>
    from django.conf.global_settings import TEMPLATE_CONTEXT_PROCESSORS
ImportError: cannot import name TEMPLATE_CONTEXT_PROCESSORS

python2-django-1.11.20-1.el7.noarch
cobbler-2.8.4-4.el7.x86_64
cobbler-web-2.8.4-4.el7.noarch

CentOS 7 LAMP Server Tutorial Part 3: Let’s Encrypt SSL

Welcome to the third installment of the CentOS 7 LAMP Server Tutorial: Modernized and Explained series. This tutorial builds on the work done in Part 1 and Part 2, so if you haven’t checked them out, now’s a good time.

In this installment we’re going to secure our new Virtual Host (lowend-tutorial.tld) with a Let’s Encrypt SSL certificate. We’ll be installing WordPress in Part 4, and it’ll be good to have an SSL certificate installed before we do.

Let’s Encrypt, Shall We?

We’re going to look at how the Let’s Encrypt SSL certificate gets installed and how we can make use of the certificate. Let’s get started!

If you’re not familiar with Let’s Encrypt, take a moment to browse on over to their website at https://letsencrypt.org/. They are a Certificate Authority that offers free SSL certificates to anyone who can prove that they own the domain they are requesting a certificate for.

The way they do this is via the ACME protocol. You can read more about it on their site, but it works like so: A program on the server (we’ll talk about Certbot in a moment) puts a code inside a file at http://lowend-tutorial.tld/somefilename. Then it tells Let’s Encrypt’s servers where that file is, and they go looking for it. If the URL exists and loads the coded message, then they know that the request came from the real lowend-tutorial.tld server, and they issue a certificate.

That means that http://lowend-tutorial.tld needs to be a working website before Let’s Encrypt will issue a certificate. In the last installment we had a working site even though it had no content. That will work fine for this purpose. As mentioned, the program that controls all of this is called Certbot. It’s an amazing bit of software that makes this entire process look incredibly simple. Let’s install Certbot!

Installing Certbot on CentOS 7

For CentOS 7 we need to install both Certbot and the python module that Certbot uses for integrating with Apache. Use the following command:

yum -y install certbot python2-certbot-apache

Before we can run Certbot and get a Let’s Encrypt SSL certificate, we need to do a little bit more configuration. HTTPS (SSL) connections happen on port 443 (vs port 80 for unsecured HTTP connections) and so we need to allow port 443 through the firewall. Firewalld knows about the association between port 443 and https, so we can just enable “https” in Firewalld. Paste in the following commands:

firewall-cmd --zone=public --add-service=https --permanent
firewall-cmd --reload

Certbot is smart and knows that we’re running the Apache web server, and what’s more it’s smart enough to know how we’re running Apache. It actually reads the configuration files and reacts accordingly. You’ll recall that we created a new Apache VirtualHost in /etc/httpd/sites-enabled/lowend-tutorial.tld.conf. This configuration file is responsible for mapping http://lowend-tutorial.tld to /home/lowend/public_html and making PHP work.

The first line of /etc/httpd/sites-enabled/lowend-tutorial.tld.conf looks like this:

<VirtualHost *:80>

This VirtualHost is specific to port 80. But SSL happens on port 443, so there will need to be a new VirtualHost for port 443. What do we need to do to configure it all? Let Certbot work its magic! At the command line, run certbot with the following command:

certbot

You’re going to need to answer some questions. If you want your website to automatically redirect to https:// you can configure that here, or you can do it manually later in the website’s own configuration. Here’s how it looked on our VPS:

What just Happened?

If you look in /etc/httpd/sites-enabled, you’ll see a new file, lowend-tutorial.tld-le-ssl.conf. An examination will show that the VirtualHost directive defines a VirtualHost on port 443 and that the entire VirtualHost file is wrapped in <IfModule mod_ssl.c> tags. At the bottom are some new lines pertaining to the SSL certificates. Here are the additions and changes:

<IfModule mod_ssl.c>
<VirtualHost *:443>
    ... skipping original VirtualHost content for brevity ...
    Include /etc/letsencrypt/options-ssl-apache.conf
    SSLCertificateFile /etc/letsencrypt/live/lowend-tutorial.cf/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/lowend-tutorial.cf/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/lowend-tutorial.cf/chain.pem
</VirtualHost>
</IfModule>

You can see how the configuration is SSL specific. The SSL configuration is loaded and the paths to the SSL certificate files are now included. Certbot did all of this for us, and it even restarted Apache to enact the changes. Thanks, Certbot!

Let’s see if it all worked. Load your site in a browser, then change the URL to https://. It should still load. If it doesn’t, then check carefully for ACME errors, and make sure the site loaded with http:// originally. Also be sure that DNS is pointing at the server correctly. These things account for most errors.
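If you also want to verify the certificate from the command line, a one-liner along these lines works on a stock CentOS 7 install (openssl is assumed to be present, which it is by default):

echo | openssl s_client -connect lowend-tutorial.tld:443 -servername lowend-tutorial.tld 2>/dev/null | openssl x509 -noout -issuer -dates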

Nothing Lasts Forever

Like most good things, Let’s Encrypt SSL certificates don’t last forever. They last 90 days and need to be renewed. If we tell Certbot to run regularly, it’ll automagically renew any SSL certificate that is less than 29 days away from expiration. For that, let’s use a cron job.

Cron jobs are automated tasks that run on a schedule that we define. These schedules happen in a tabulated file called a “crontab”. Linux has a built in feature for modifying crontabs, but it relies on using your own text editor. We prefer nano for its ease of use vs vim (feel free to disagree, we don’t mind!) and so we’re going to set that as our editor before we start editing things:

echo "export VISUAL=nano"

Since we want this to be the case every time we log in, let’s go ahead and add it to /root/.bash_profile. The .bash_profile file is a script that gets run every time its user logs in:

echo "export VISUAL=nano" >> ~/.bash_profile

Now let’s edit the crontab and add a job that will run every 12 hours:

crontab -e

With nano open, paste in the following

1 */12 * * * certbot renew

That entry tells cron to run the “certbot renew” command on the first minute of every 12th hour of every day. If there are any certificates that need renewing, it’ll renew them for us as long as ACME is able to verify the domain again.
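If you want to confirm that renewal will work without waiting for a certificate to approach expiration, Certbot has a dry-run mode that exercises the whole renewal process against Let’s Encrypt’s staging environment without touching your real certificates:

certbot renew --dry-run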

Next up: WordPress

And with that, we’re done. You’ve just installed Certbot, which installed a Let’s Encrypt SSL certificate on your CentOS 7 LAMP server. For more information, go check out all of the official documentation for Let’s Encrypt and Certbot. They are a treasure trove of information, especially if you need to troubleshoot things:

https://letsencrypt.org/
https://certbot.eff.org/docs/

In the next installment we’re going to install WordPress on our new LAMP server and learn how to administer it without even leaving the command line. Stay tuned!

The post CentOS 7 LAMP Server Tutorial Part 3: Let’s Encrypt SSL appeared first on Low End Box.

SonarQube 7.7 Does not start on CentOS Linux

The title says everything. The same error happens with SonarQube 6.7.7 LTS, but I only describe the 7.7 issues here. I have no restriction on which version I should use.

I downloaded SonarQube 7.7 from your website. Configured limits as following:

# cat /etc/security/limits.conf | grep sonar
sonarqube   -   nofile   65536
sonarqube   -   nproc    4096

In sysctl.conf I set proper values for things:

vm.max_map_count=262144
fs.file-max=65536

I also applied these values after configuring.

After unpacking the zip to /opt/sonarqube/sonarqube-7.7 I chowned the whole directory to the newly created user sonarqube:

chown -R sonarqube:sonarqube /opt/sonarqube 

After that, I configured sonar.properties as follows (comments removed from the config for better readability):

sonar.jdbc.username=sonarq
sonar.jdbc.url=jdbc:postgresql://localhost/sonarq_production
http.proxyHost=aproxy.some.where
http.proxyUser=proxyuser
http.proxyPassword=proxypass

PostgreSQL has the specified SQL user and database, and I triple-checked that I could log in with these details.

After trying to start sonarqube with

sudo -Hu sonarqube ./bin/linux-x86-64/sonar.sh console  

I got the following output

Running SonarQube...
wrapper  | --> Wrapper Started as Console
wrapper  | Launching a JVM...
jvm 1    | Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
jvm 1    |   Copyright 1999-2006 Tanuki Software, Inc.  All Rights Reserved.
jvm 1    |
jvm 1    | 2019.06.12 16:51:43 INFO  app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/sonarqube-7.7/temp
jvm 1    | 2019.06.12 16:51:43 INFO  app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
jvm 1    | 2019.06.12 16:51:43 INFO  app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/opt/sonarqube/sonarqube-7.7/elasticsearch]: /opt/sonarqube/sonarqube-7.7/elasticsearch/bin/elasticsearch
jvm 1    | 2019.06.12 16:51:44 INFO  app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
jvm 1    | 2019.06.12 16:51:44 INFO  app[][o.e.p.PluginsService] no modules loaded
jvm 1    | 2019.06.12 16:51:44 INFO  app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
jvm 1    | 2019.06.12 16:51:58 INFO  app[][o.s.a.SchedulerImpl] Process[es] is up
jvm 1    | 2019.06.12 16:51:58 INFO  app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='web', ipcIndex=2, logFilenamePrefix=web]] from [/opt/sonarqube/sonarqube-7.7]: /usr/java/jdk1.8.0_212-amd64/jre/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/opt/sonarqube/sonarqube-7.7/temp -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Dhttp.proxyHost=aproxy.some.where -Dhttps.proxyHost=aproxy.some.where -cp ./lib/common/*:/opt/sonarqube/sonarqube-7.7/lib/jdbc/postgresql/postgresql-42.2.5.jar org.sonar.server.app.WebServer /opt/sonarqube/sonarqube-7.7/temp/sq-process5107517876487134023properties
jvm 1    | 2019.06.12 16:52:03 INFO  app[][o.s.a.SchedulerImpl] Process [web] is stopped
jvm 1    | 2019.06.12 16:52:03 INFO  app[][o.s.a.SchedulerImpl] Process [es] is stopped
jvm 1    | 2019.06.12 16:52:03 WARN  app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 143
jvm 1    | 2019.06.12 16:52:03 INFO  app[][o.s.a.SchedulerImpl] SonarQube is stopped
wrapper  | <-- Wrapper Stopped

es.log says

2019.06.12 16:51:47 INFO  es[][o.e.e.NodeEnvironment] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [44gb], net total_space [50.5gb], types [rootfs]
2019.06.12 16:51:47 INFO  es[][o.e.e.NodeEnvironment] heap size [495.3mb], compressed ordinary object pointers [true]
2019.06.12 16:51:47 INFO  es[][o.e.n.Node] node name [sonarqube], node ID [qT3jesniTNOvQuFTc150Lg]
2019.06.12 16:51:47 INFO  es[][o.e.n.Node] version[6.6.2], pid[10153], build[default/tar/3bd3e59/2019-03-06T15:16:26.864148Z], OS[Linux/3.10.0-957.5.1.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_212/25.212-b10]
2019.06.12 16:51:47 INFO  es[][o.e.n.Node] JVM arguments [-XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/opt/sonarqube/sonarqube-7.7/temp/es6, -XX:ErrorFile=/opt/sonarqube/sonarqube-7.7/logs/es_hs_err_pid%p.log, -Xms512m, -Xmx512m, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/opt/sonarqube/sonarqube-7.7/elasticsearch, -Des.path.conf=/opt/sonarqube/sonarqube-7.7/temp/conf/es, -Des.distribution.flavor=default, -Des.distribution.type=tar]
2019.06.12 16:51:48 INFO  es[][o.e.p.PluginsService] loaded module [analysis-common]
2019.06.12 16:51:48 INFO  es[][o.e.p.PluginsService] loaded module [lang-painless]
2019.06.12 16:51:48 INFO  es[][o.e.p.PluginsService] loaded module [mapper-extras]
2019.06.12 16:51:48 INFO  es[][o.e.p.PluginsService] loaded module [parent-join]
2019.06.12 16:51:48 INFO  es[][o.e.p.PluginsService] loaded module [percolator]
2019.06.12 16:51:48 INFO  es[][o.e.p.PluginsService] loaded module [reindex]
2019.06.12 16:51:48 INFO  es[][o.e.p.PluginsService] loaded module [repository-url]
2019.06.12 16:51:48 INFO  es[][o.e.p.PluginsService] loaded module [transport-netty4]
2019.06.12 16:51:48 INFO  es[][o.e.p.PluginsService] no plugins loaded
2019.06.12 16:51:52 WARN  es[][o.e.d.c.s.Settings] [http.enabled] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
2019.06.12 16:51:53 INFO  es[][o.e.d.DiscoveryModule] using discovery type [zen] and host providers [settings]
2019.06.12 16:51:54 INFO  es[][o.e.n.Node] initialized
2019.06.12 16:51:54 INFO  es[][o.e.n.Node] starting ...
2019.06.12 16:51:54 INFO  es[][o.e.t.TransportService] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2019.06.12 16:51:57 INFO  es[][o.e.c.s.MasterService] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {sonarqube}{qT3jesniTNOvQuFTc150Lg}{5wBh0CdCRxqzg0TXK1029Q}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}
2019.06.12 16:51:57 INFO  es[][o.e.c.s.ClusterApplierService] new_master {sonarqube}{qT3jesniTNOvQuFTc150Lg}{5wBh0CdCRxqzg0TXK1029Q}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}, reason: apply cluster state (from master [master {sonarqube}{qT3jesniTNOvQuFTc150Lg}{5wBh0CdCRxqzg0TXK1029Q}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
2019.06.12 16:51:57 INFO  es[][o.e.n.Node] started
2019.06.12 16:51:57 INFO  es[][o.e.g.GatewayService] recovered [0] indices into cluster_state
2019.06.12 16:52:03 INFO  es[][o.e.n.Node] stopping ...
2019.06.12 16:52:03 INFO  es[][o.e.n.Node] stopped
2019.06.12 16:52:03 INFO  es[][o.e.n.Node] closing ...
2019.06.12 16:52:03 INFO  es[][o.e.n.Node] closed

Call me blind, but I cannot see any error explaining why ES does not keep running. I tried setting the ES JVM arguments to more memory (1G, 2G), but with no success. I checked the system logs; there is nothing in them about ES or any other Java-based process.

Note: I have tried to read all the documentation about how to install SonarQube and followed different guides with proper restarts, without success. Please do not ask me whether I have tried to reinstall it or rebuild the whole machine. I’m a newbie with SonarQube but not with Linux administration. If I need to do something, please explain why you think I need to do it, because in the end I have to write system documentation about this and I have to fully understand what’s going on.

How To Create a DNS Server On CentOS 7

The DNS or Domain Name System is the distributed database that allows zone records, such as IP addresses, to be associated with domain names. When a computer, such as your laptop or phone, needs to communicate with a remote computer, such as a web server, over the internet, they use each other’s IP addresses. People are not very good at remembering IP addresses, but they are good at remembering the words and phrases in domain names. The DNS system allows people to use domain names when they interface with computers whilst still allowing computers to use IP addresses when they communicate with each other.

In this guide, we will examine how you can install and configure a DNS server that will be the authoritative DNS server for your domain names. This will allow you to have complete control over your DNS information and make immediate changes to your DNS records whenever you need to make them.

Requirements

In order to follow this guide you will need:

  • A CentOS 7 server.
  • A domain name.
  • A non-root sudo enabled user on the server.

In order to begin this guide, you must log into your server as the non-root user.

Installation

The DNS server that we will use in this guide is BIND. BIND is the most deployed and one of the oldest DNS servers in use on the internet.

Before we install BIND you should ensure that your server is up to date with the latest packages:

sudo yum update
sudo yum upgrade

BIND is available from the default CentOS repositories and is installed with the following command:

sudo yum install bind bind-utils 

BIND is now installed so we can move on to configuring it.

Global BIND Settings

Making BIND function as a DNS server falls into two parts. The first is setting the global parameters which will make BIND function in the manner we desire. The second is to create the domain-specific DNS information that BIND will serve. This information is known as “zone information” or “zone records”.

In this section, we will configure the global parameters.

The first configuration file that we will edit is located at /etc/named.conf and configures how BIND will operate. Open this file with your favorite text editor; here nano is used:

sudo nano /etc/named.conf 

Edit /etc/named.conf so that it looks like the following. You do not need to make any changes to this template:

options {
        listen-on port 53 { any; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        secroots-file   "/var/named/data/named.secroots";
        allow-query     { any; };
        recursion no;

        dnssec-enable yes;
        dnssec-validation yes;

        bindkeys-file "/etc/named.iscdlv.key";
        managed-keys-directory "/var/named/dynamic";

        pid-file "/run/named/named.pid";
        session-keyfile "/run/named/session.key";
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "." IN {
        type hint;
        file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

There are quite a few options that are set here. However, the important ones for an authoritative nameserver are the following:

  • listen-on port 53 { any; }; – This sets the port that BIND will listen on for incoming DNS requests. Port 53 is the default DNS port. The any option is used here instead of an IP address. This instructs BIND to attach to all available interfaces, private and public.
  • recursion no – This option configures BIND to only respond with information about domains that it has configuration files for. If this is set to yes, BIND becomes a recursive DNS server, which means it will look up any request it receives, like Google’s recursive server at 8.8.8.8. For security reasons, this should always be set to no when BIND is configured to respond to requests from any IP, as we have set it up above, because an open recursive resolver can be abused for DNS amplification attacks or other nefarious purposes.

Next, we need to tell BIND where to read the zone information files that we will create with the DNS information for your domain. Use the following example and add these two zone sections to the bottom of /etc/named.conf:

zone "example.com" {
        type master;
        file    "/var/named/forward.example.com";
};

zone "10.100.51.198.in-addr.arpa" {
        type master;
        file    "/var/named/reverse.example.com";
};

The lines in this file mean as follows:

  • zone – This is the domain name or IP address that BIND will answer requests for.
  • type master – BIND will read the zone information from the local storage and provides authoritative information for the domain listed on the zone line.
  • file – The file that contains the zone information.

As you can see there are two sections to this file that have the same syntax. The first section lists the domain (example.com) and is the so-called, forward DNS record. This means that it will convert domain information to IP addresses.

The second is the reverse or PTR record of the server’s IP address. This converts in the opposite direction, i.e. IP addresses to domain names. The zone line for the reverse record looks a little strange because the IP address is in reverse. The IP address that this is the reverse record for is 198.51.100.10.

Reverse records are important to have because many security systems such as spam filters will be less likely to accept mail sent from an IP address that has no reverse record.

Now that BIND’s global configuration is set we can create the zone files that will hold the forward and reverse DNS information.

Zone File Configuration

The first zone file that we will create is the forward information for the domain name. Open and create the file with a text editor:

sudo nano /var/named/forward.example.com 

And use the following as your template:

$TTL 1d
@               IN      SOA     dns1.example.com.    hostmaster.example.com. (
                1        ; serial
                6h       ; refresh after 6 hours
                1h       ; retry after 1 hour
                1w       ; expire after 1 week
                1d )     ; minimum TTL of 1 day
;
;
;Name Server Information
@               IN      NS      ns1.example.com.
ns1             IN      A       198.51.100.10
;
;
;Mail Server Information
example.com.    IN      MX      10      mail.example.com.
mail            IN      A       198.51.100.20
;
;
;Additional A Records:
www             IN      A       198.51.100.30
site            IN      A       198.51.100.30
;
;
;Additional CNAME Records:
slave           IN      CNAME   www.example.com.

The first configuration block, beginning $TTL 1d, has only a single line that you need to edit, by changing example.com to your domain:

example.com.    IN      SOA     dns1.example.com.    hostmaster.example.com. ( 

This line means from left to right:

  • @ – This is replaced with the domain from the zone statement in /etc/named.conf, i.e. example.com.
  • IN – The type of record, in this case, INternet records.
  • SOA – The record is the Start Of Authority record. This is the authoritative record for this domain.
  • dns1.example.com. – The nameserver where the DNS records are found.
  • hostmaster.example.com. – The email address of the administrator of the nameserver. The @ symbol is replaced with a dot.

The rest of the lines here set values such as Time To Live’s which you can copy from the example as they are functional values.

You should note the dots at the end of the domains and hostnames, e.g. example.com. — the final dot stops the zone’s domain name from being appended automatically. Where it is omitted, as with the www in the line www IN A 198.51.100.30, the domain is appended, so that entry resolves www.example.com to the IP address.

The next section – Name Server Information – is mandatory and should be edited to use the hostname of this nameserver and its IP address. It is customary to label the first nameserver ns1.domain.com but you can choose any hostname you want.

The remaining sections are optional and are included as examples. The first of these, Mail Server Information, is an example of how an email will get sent to an email server at the IP 198.51.100.20. MX records should always resolve to hostnames so the required A record for mail.example.com is included in the mail records section for ease of understanding.

The final two sections are further examples of A and CNAME records.

Next, we need to create a reverse zone file. Open and create the file with a text editor:

sudo nano /var/named/reverse.example.com 

Use the following example as your template:

$TTL 1d
@               IN      SOA     dns1.example.com.    hostmaster.example.com. (
                1        ; serial
                6h       ; refresh after 6 hours
                1h       ; retry after 1 hour
                1w       ; expire after 1 week
                1d )     ; minimum TTL of 1 day
;
;
;Name Server Information
@               IN      NS      ns1.example.com.
ns1             IN      A       198.51.100.10
;
;
;Reverse IP Information
10.100.51.198.in-addr.arpa.      IN      PTR       ns1.example.com.
20.100.51.198.in-addr.arpa.      IN      PTR       mail.example.com.
30.100.51.198.in-addr.arpa.      IN      PTR       www.example.com.

The first two sections are the same as the forward zone file. The last section configures the IP to domain name records.

The IP is listed in the reversed format, with the hostname you want it to resolve to at the end of the line. Here the reverse maps are set for all three IP addresses used in the forward zone file as examples.

Check Your Configuration For Errors

BIND provides a pair of tools to check that its configuration files do not contain any errors that would prevent BIND from starting.

The first checks the global configuration files and is used as follows:

sudo named-checkconf /etc/named.conf 

The second tool will check the zone files and is used as follows:

sudo named-checkzone <DOMAIN-NAME> <ZONE-FILE>

e.g.

sudo named-checkzone example.com /var/named/forward.example.com

When you have finished editing these files and the checks no longer report any errors, BIND must be started and enabled so that it starts on boot:

sudo systemctl enable named.service
sudo systemctl start named.service

Configure Systemd To Keep BIND Running

When you start using your own nameservers for your domain it is critical that BIND is always running. If it stops, then anything that uses your domain, e.g. email, website etc., will stop working. Systemd is the program that, amongst other services, starts and stops programs like BIND on your server. In addition to starting and stopping, it can be configured to ensure that a program is restarted if it stops for any reason.

First, make a copy of the BIND systemd service file that we will edit:

sudo cp /lib/systemd/system/named.service /etc/systemd/system/ 

This will ensure that the edits will not be lost in future system updates. Next, open the file in an editor:

sudo nano /etc/systemd/system/named.service 

And add the following two lines to the [Service] section:

Restart=always
RestartSec=3

Then prompt Systemd to reload all its service files:

sudo systemctl daemon-reload 

And restart BIND:

sudo systemctl restart named.service 

Now, if BIND stops running for any reason, systemd will restart it automatically.
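If you want to confirm that systemd is using your edited copy of the service file rather than the original, you can ask it to print the unit file it has loaded (a quick sanity check, not a required step):

systemctl cat named.service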

Testing The DNS Server

Before you begin using your new DNS server you need to test that it works correctly i.e. BIND serves the correct DNS information for your domain.

The DNS inspection tool dig was included with the packages we installed at the beginning of this guide. dig is one of the most powerful and flexible DNS testing and investigation command line tools available on Linux and should be your go-to tool for looking up DNS records.

dig has the ability to ignore the system configured resolvers (set in /etc/resolv.conf) and request DNS information directly from a nameserver i.e. the DNS server you have just created.

The syntax of a dig query is as follows:

dig @<DNS SERVER> -t <RECORD TYPE> <HOST> 

If we replace this information with the details of the example server in this guide we get:

dig @198.51.100.10 -t A www.example.com. 

This will return quite a bit of information. The result that we are interested in is always contained in the ANSWER SECTION e.g.:

;; ANSWER SECTION:
example.com.         86400   IN      A       198.51.100.30

We can also check the reverse map record by using the -x flag:

dig @198.51.100.10 -x 198.51.100.10 

Which will produce the result:

;; ANSWER SECTION:
10.100.51.198.in-addr.arpa.      IN      PTR       ns1.example.com.

You can perform similar queries against all of the zone records you have created for your domain. When they all return the correct information you are ready to start using your DNS server.
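For example, the MX and NS records defined in the forward zone file can be checked with queries along these lines:

dig @198.51.100.10 -t MX example.com.
dig @198.51.100.10 -t NS example.com.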

Conclusion

You have now successfully installed, configured and tested your own DNS server, and you are ready to start using it. In order to do this, you will need to point your domain at your DNS server. This is done with the company that registered your domain for you. When you log into their site you will find, somewhere in their control panel, an option to delegate the domain to new authoritative nameservers.

Some companies require that a domain has more than one authoritative nameserver. In this guide, we only created one, i.e. ns1.example.com. If an additional nameserver is required then you need to obtain a second virtual machine and copy the configuration substituting ns2 for ns1.

Alternatively, you can request a second IP address for your existing server and duplicate the ns1 records changing them to ns2.

 

The post How To Create a DNS Server On CentOS 7 appeared first on Low End Box.

How To Set Up a Node.js Application for Production on a CentOS 7 VPS

In this tutorial, we will create a simple Node.js application and put it into a production ready environment. We are going to install and use the following pieces of software:

  • Nginx as a reverse proxy. It will make the app accessible from your browser, and in case you run several sites from the same server, it could serve as a load balancer as well.
  • Certbot will let us install Let’s Encrypt certificates. Access to the site will be secure as only the HTTPS requests will be honored.
  • NPM package called PM2 will turn a node.js app into a service. The app will run in the background, even after system crashes or reboots.

What We Are Going To Cover

  • Install Nginx
  • Install firewall-cmd and enable rules for Nginx
  • Install the latest version of Node.js
  • Add NPM packages for the app that we are making
  • Create the example app to show all characters in upper case
  • Configure Nginx as a reverse proxy
  • Install Let’s Encrypt certificates to serve HTTPS requests
  • Access the app from the browser
  • Install PM2, a production process manager for Node.js applications with a built-in traffic Load Balancer
  • Use PM2 to restart the Node.js app on every restart or reboot of the system

Prerequisites

We use Centos 7:

  • Starting with a clean VPS with
  • At least 512Mb of RAM and
  • 15Gb of free disk space.
  • You will need root user access via SSH
  • A domain name pointed to your server’s IP address (it can also be a subdomain) using A records at your DNS service provider
  • We use nano as our editor of choice, and you can install it with this command:
yum install nano 

Step 1: Install Nginx

After you have logged in as a root user, you will install Nginx. Add the CentOS 7 EPEL repository with this command:

yum install epel-release 

Next, install Nginx:

yum install nginx 

Press ‘y’ twice and the installation will be finished. Enable Nginx service to start at server boot:

systemctl enable nginx 

Step 2: Change Firewall Rules to Enable Nginx

Let’s now install firewall-cmd, the command line front-end for firewalld (firewalld daemon), for CentOS. It supports both IPv4 and IPv6, firewall zones, bridges and ipsets, allows for timed firewall rules in zones, logs denied packets, automatically loads kernel modules, and so on.

Install it in the usual manner, by using yum:

yum install firewalld 

Let us now start it, enable it to auto-start at system boot, and see its status:

systemctl start firewalld
systemctl enable firewalld
systemctl status firewalld

Node.js apps require a port that is not used by the system, but is dedicated to that one app only. In our examples, we might use ports such as 3000, 8080 and so on, so we need to declare them explicitly, otherwise the app won’t run.

Here is a list of ports and feel free to add any other that your host requires for the normal functioning of the system:

firewall-cmd --permanent --zone=public --add-service=ssh
firewall-cmd --zone=public --add-port=3000/tcp --permanent
firewall-cmd --zone=public --add-port=8080/tcp --permanent
firewall-cmd --permanent --zone=public --add-service=http
firewall-cmd --permanent --zone=public --add-service=https
firewall-cmd --reload

Let us now start Nginx:

systemctl start nginx 

With HTTP functioning, we can visit this address in the browser:

http://YOUR_DOMAIN/ 

and verify that Nginx is running:

Step 3: Install Latest Node.js

We’ll now install the latest release of Node.js. First, install development tools to build native add-ons (make, gcc-c++) and then enable Node.js yum repository from the Node.js official website:

yum install -y gcc-c++ make
curl -sL https://rpm.nodesource.com/setup_12.x | sudo -E bash -

Now, the repository is added to your VPS and we can install the Node.js package. NPM, the package manager for Node.js, will also be installed, as well as many other dependent packages in the system.

yum install nodejs 

Press ‘y’ twice to finish the installation. Show the version of Node.js that is installed:

node -v 

It shows v12.3.1, which was the actual version at the time of this writing. If it shows an error, double check the commands you entered against the ones shown above.

Step 4: Adding NPM Packages

We of course know what packages our Node.js app will need, so we install the required npm packages in advance. Since our app will turn any input into uppercase letters, we first install a package for that:

npm install upper-case 

Most Node.js apps will now use Express.js, so let’s install that as well:

npm install --save express 

Execute this command as well:

npm install -g nginx-generator 

It will globally install an NPM package to generate the reverse proxy config for Nginx. We will apply it after the app is running on port 8080.

Step 5: Creating The App

Open a file named uppercase-http.js for editing:

nano uppercase-http.js 

Add the following lines:

var http = require('http');
var uc = require('upper-case');

console.log('starting...');

http.createServer(function (req, res) {
  console.log('received request for url: ' + req.url);
  res.writeHead(200, {'Content-Type': 'text/html'});
  res.write(uc(req.url + '\n'));
  res.end();
}).listen(8080);

Save and close the file.

The HTTP server will listen to port 8080. You can specify any other port that you like, provided that it will be free when the app is running (and that you have previously opened access to it in firewalld).
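If you did pick a different port, say 3001 (a hypothetical example, not used elsewhere in this tutorial), you would open it in the firewall the same way as before:

firewall-cmd --zone=public --add-port=3001/tcp --permanent
firewall-cmd --reload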

Run the app:

node uppercase-http.js 

You will see the following message:

starting... 

Node.js app starting

To test it, fire up another terminal, connect to your VPS as root via SSH and curl localhost:8080:

curl localhost:8080/test 

The program correctly converted path to uppercase. The server app shows a status message for the request:

received request for url: /test 

Now we have two terminal windows, one with the app running and the other which we used to test the app. The first window is blocked as long as the app is running and we can press Ctrl-C from keyboard to stop it. If we do so, the app won’t be running later when we access it from the browser. The solution is to either activate the app again or — much cleaner — enter further commands only into the second terminal window for the rest of this tutorial.

Step 6: Configure Nginx as Reverse Proxy

Nginx for CentOS comes without folders for available and enabled sites, as is the custom on Ubuntu Linux. You’ll need to create them:

mkdir /etc/nginx/sites-available
mkdir /etc/nginx/sites-enabled

Then, edit Nginx global configuration to load config files from these folders:

nano /etc/nginx/nginx.conf 

Find line

include /etc/nginx/conf.d/*.conf; 

and insert these lines:

include /etc/nginx/sites-enabled/*;
server_names_hash_bucket_size 64;

Save and close the file. Now Nginx will read the contents of the “enabled” sites.

For the sake of completeness, our nginx.conf file looks like this:

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
    server_names_hash_bucket_size 64;
}

You may want to copy it and paste it.

With NPM package nginx-generator we generate files that will tell Nginx to act as a reverse proxy. In the command line, execute the following:

nginx-generator \
      --name site_nginx \
      --domain YOUR_DOMAIN \
      --type proxy \
      --var host=localhost \
      --var port=8080 \
      /etc/nginx/sites-enabled/site_nginx

Replace YOUR_DOMAIN with your actual domain before running this command.

That command creates a file called site_nginx and puts it into the directory /etc/nginx/sites-enabled/. (We could have used any other name instead of site_nginx for the file.)

We can see it with this command:

sudo nano /etc/nginx/sites-enabled/site_nginx 
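The generated file should contain a fairly standard reverse proxy server block, roughly along these lines. This is only a sketch for orientation; the exact output of nginx-generator may differ:

server {
    listen 80;
    server_name YOUR_DOMAIN;

    location / {
        # Pass the original host and client address through to the Node.js app
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:8080;
    }
}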

Test the configuration:

sudo nginx -t 

and if everything is OK, restart Nginx:

systemctl restart nginx 

Run the app again:

node uppercase-http.js 

You will see the following message:

starting... 

In your browser, go to address

http://YOUR_DOMAIN/test 

and the result should be

/TEST 

Bad Gateway Case No. 1 – The App is Not Active

Instead of the proper result, which in this particular case would be text printed in uppercase letters, it is all too easy to get the message Bad Gateway at this point.

The main reason is that we were using one terminal window both to run the app and to enter other commands. When you start the app with node uppercase-http.js, it will block the entire window, and when you move on to the next command in the installation process, the app will stop running. One way to deal with this is to start the app again each time, as we have done in this tutorial.

Another way would be to open two terminal windows, start the app in one of them and then proceed with further commands in the second terminal window, exclusively.

Bad Gateway Case No. 2 – SELinux Is Active

If SELinux is enabled, it can block Nginx from making outbound connections.

You can check this with:

getenforce 

If you get Enforcing as the result, SELinux is active. Run this command to let Nginx serve as a reverse proxy:

setsebool -P httpd_can_network_connect true 

Step 7: Securing Your Site To Serve Only HTTPS

We want to serve our app via HTTPS. If you have a domain name and DNS records properly set up to point to your VPS, you can use Certbot to generate Let’s Encrypt certificates. This means that you will always access the app, as well as the rest of your domain, via HTTPS.

We will follow the original documentation to install Let’s Encrypt. Choose Nginx for Software and CentOS/RHEL 7 for System – it should look like this:


Certbot is packaged in EPEL (Extra Packages for Enterprise Linux). To use Certbot, you must first enable the EPEL repository. On CentOS, you must also enable the optional channel, by issuing the following commands:

yum -y install yum-utils
yum-config-manager --enable rhui-REGION-rhel-server-extras rhui-REGION-rhel-server-optional

Now install Certbot by executing this:

yum install certbot python2-certbot-nginx 

It will compute the dependencies needed and ask you to let it proceed with the installation.

Press ‘y’ when asked.

Finally, run Certbot:

certbot --nginx 

If you are installing certificates for the first time, Certbot will ask for an email address for urgent renewal and security notices, then several less important questions, and finally – do you want to redirect all HTTP traffic to HTTPS? Select 2 to confirm this redirection, and you’re all set!

Restart Nginx as you normally would after each change to its configuration:

systemctl restart nginx 

To verify that redirection is working, go to the same address in your browser:

http://YOUR_DOMAIN/test 

Note that this address starts with http://, but it ends up as https://.

Step 8: Install PM2

PM2 is a production process manager for Node.js applications. With PM2, we can monitor applications, their memory and CPU usage. It also provides easy commands to stop/start/restart all apps or individual apps.

Once the app is started through PM2, it will always be restarted after system crashes or restarts. In effect, it will “always be there”.

Use NPM to install PM2:

npm install pm2@latest -g 

Option -g tells it to install pm2 globally, so it can run from all paths in the system.

Let’s now run our application under PM2:

pm2 start uppercase-http.js 

The output of PM2 can be spectacular when run for the first time, but we really need to concentrate on the rows about the app:

PM2 will use the app name, and then show the ID number of the app, mode, status, CPU usage and memory. If two or more apps are running in the background, they will all be presented in this table.

We can list the processes like this:

pm2 list 

The following command

pm2 show 0 

will show details of the app with ID of 0:

It is also possible to monitor CPU usage in real time:

pm2 monit 

Other useful PM2 commands are stop, restart, delete.
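One caveat worth knowing: for the “always be there” behaviour across full system reboots, PM2 normally also needs a startup hook and a saved process list. The standard PM2 commands for that are shown below; run pm2 startup first and follow any instructions it prints, then save the current process list:

pm2 startup systemd
pm2 save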

What Can You Do Next

Now you have a Node.js app in a production environment, using the HTTPS protocol for safety, Nginx for speed and as a reverse proxy, and running as a service in the background. We have just installed one such app and one such site, while you may run several Node.js apps and sites from the same server. We used the root user throughout for ease of installation, while for multiple sites you would need multiple non-root users for safety.

Dusko Savic is a technical writer and programmer.

duskosavic.com

The post How To Set Up a Node.js Application for Production on a CentOS 7 VPS appeared first on Low End Box.

CentOs 7 firewall, without iptables

A client of mine purchased a GoDaddy dedicated server, which is bad enough on its own. But they are providing a malfunctioning product, and refuse to fix it.

When CentOS 7 is running under a Virtuozzo or OpenVZ virtual container without full netfilter support, iptables refuses to start. journalctl -xe states iptables: Applying firewall rules: iptables-restore: line 14 failed, which, if you dig further, indicates that the ebtables module is not available: The kernel doesn't support the ebtables 'nat' table.

So in short, iptables doesn’t work, cannot be made operable, and the hosting environment cannot be changed to support it.

My question is: is there any alternative, non-iptables-based firewall software that can be used as a replacement?
Right now the ports cannot be blocked, and services cannot be restricted to an IP whitelist for safety. It’s wide open.

Ref: https://www.centos.org/forums/viewtopic.php?f=51&t=54469&start=20

CentOS: run a single site on startup (browser GUI) – without a desktop environment

What I want is that when I start my CentOS 7 server (minimal install), it automatically shows the user this site, youtube.com, and the user can just browse it and do nothing else.

So there will be no desktop, just the YouTube site the user can surf.

One suggestion I got was:

sudo xinit firefox --new-tab "http://youtube.com" 

or to use Nativefier to make a YouTube-like app and launch it, but I don't know what I should install on CentOS to run this (I don't want a desktop environment to be installed).