pgpool-II and Postgres Docker image: automated failover and online recovery via RSA key

I’ve been following this documentation for pgpool-II: https://www.pgpool.net/docs/latest/en/html/example-cluster.html

I’m having a hard time setting up RSA keys on my Postgres streaming cluster built from the official Docker image https://hub.docker.com/_/postgres.

I was able to get streaming replication working; now I’m on the part about setting up failover.

Part of the documentation says:

To use the automated failover and online recovery of Pgpool-II, the settings that allow passwordless SSH to all backend servers between Pgpool-II execution user (default root user) and postgres user and between postgres user and postgres user are necessary. Execute the following command on all servers to set up passwordless SSH. The generated key file name is id_rsa_pgpool.  
    [all servers]# cd ~/.ssh
    [all servers]# ssh-keygen -t rsa -f id_rsa_pgpool
    [all servers]# ssh-copy-id -i id_rsa_pgpool.pub postgres@server1
    [all servers]# ssh-copy-id -i id_rsa_pgpool.pub postgres@server2
    [all servers]# ssh-copy-id -i id_rsa_pgpool.pub postgres@server3

    [all servers]# su - postgres
    [all servers]$ cd ~/.ssh
    [all servers]$ ssh-keygen -t rsa -f id_rsa_pgpool
    [all servers]$ ssh-copy-id -i id_rsa_pgpool.pub postgres@server1
    [all servers]$ ssh-copy-id -i id_rsa_pgpool.pub postgres@server2
    [all servers]$ ssh-copy-id -i id_rsa_pgpool.pub postgres@server3

Is it possible to set this up inside a container based on Postgres’s official image? I would like to get an idea of how to do it from some samples or an existing solution.
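One way this is commonly done is to extend the official image with an SSH server and the key the pgpool docs expect. A minimal sketch, assuming a Debian-based `postgres` image; the `postgres:12` tag, package names, and paths are assumptions, not taken from the pgpool documentation:

```dockerfile
FROM postgres:12

# openssh provides sshd inside the container; pgpool's recovery scripts ssh in
# as the "postgres" user.
RUN apt-get update \
 && apt-get install -y --no-install-recommends openssh-server openssh-client \
 && rm -rf /var/lib/apt/lists/*

# Generate the key pair the pgpool docs expect (id_rsa_pgpool) for the postgres
# user. In practice you would COPY pre-generated keys or mount them as secrets
# instead of baking them into image layers, where anyone with the image can read them.
USER postgres
RUN mkdir -p ~/.ssh \
 && ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa_pgpool \
 && cat ~/.ssh/id_rsa_pgpool.pub >> ~/.ssh/authorized_keys \
 && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
USER root

# sshd refuses to start without its runtime directory.
RUN mkdir -p /run/sshd
```

Because every container built from this image shares the same key pair, each node’s public key ends up in every other node’s `authorized_keys`, which is the “passwordless SSH between all servers” the documentation asks for. A wrapper entrypoint then needs to launch `/usr/sbin/sshd` before handing off to the stock `docker-entrypoint.sh`.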

Moreover, since I can’t do the RSA setup at the moment, I decided to create a script on my pgpool server that runs a psql command against the new master:

    #!/bin/bash
    # This script is run by failover_command.

    set -e

    # Special values:
    #   %d = failed node id
    #   %h = failed node hostname
    #   %p = failed node port number
    #   %D = failed node database cluster path
    #   %m = new master node id
    #   %H = new master node hostname
    #   %M = old master node id
    #   %P = old primary node id
    #   %r = new master port number
    #   %R = new master database cluster path
    #   %N = old primary node hostname
    #   %S = old primary node port number
    #   %% = '%' character

    FAILED_NODE_ID="$1"
    FAILED_NODE_HOST="$2"
    FAILED_NODE_PORT="$3"
    FAILED_NODE_PGDATA="$4"
    NEW_MASTER_NODE_ID="$5"
    NEW_MASTER_NODE_HOST="$6"
    OLD_MASTER_NODE_ID="$7"
    OLD_PRIMARY_NODE_ID="$8"
    NEW_MASTER_NODE_PORT="$9"
    NEW_MASTER_NODE_PGDATA="${10}"
    OLD_PRIMARY_NODE_HOST="${11}"
    OLD_PRIMARY_NODE_PORT="${12}"

    #set -o xtrace
    #exec > >(logger -i -p local1.info) 2>&1

    new_master_host=$NEW_MASTER_NODE_HOST

    ## If there's no master node anymore, skip failover.
    if [ "$NEW_MASTER_NODE_ID" -lt 0 ]; then
        echo "All nodes are down. Skipping failover."
        exit 0
    fi

    ## Promote standby node.
    echo "Primary node is down, promoting standby node ${NEW_MASTER_NODE_HOST}."

    PGPASSWORD=postgres psql -h "${NEW_MASTER_NODE_HOST}" -p 5432 -U postgres <<-EOSQL
        select pg_promote();
    EOSQL

    #logger -i -p local1.info failover.sh: end: new_master_node_id=$NEW_MASTER_NODE_ID started as the primary node
    #exit 0

The above script works if I simulate that my primary is down.
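For reference, a script like this is wired into pgpool via `failover_command` in `pgpool.conf`; pgpool expands the placeholders before running the script, and their order must match the positional parameters the script reads. A sketch (the script path is an assumption):

```
failover_command = '/etc/pgpool-II/failover.sh %d %h %p %D %m %H %M %P %r %R %N %S'
```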

However, this is the log on my new primary:

    2020-10-07 20:25:31.924 UTC [1165] LOG:  archive command failed with exit code 1
    2020-10-07 20:25:31.924 UTC [1165] DETAIL:  The failed archive command was: cp pg_wal/00000002.history /archives/00000002.history
    cp: cannot create regular file '/archives/00000002.history': No such file or directory
    2020-10-07 20:25:32.939 UTC [1165] LOG:  archive command failed with exit code 1
    2020-10-07 20:25:32.939 UTC [1165] DETAIL:  The failed archive command was: cp pg_wal/00000002.history /archives/00000002.history
    2020-10-07 20:25:32.939 UTC [1165] WARNING:  archiving write-ahead log file "00000002.history" failed too many times, will try again later
    cp: cannot create regular file '/archives/00000002.history': No such file or directory
    2020-10-07 20:26:33.003 UTC [1165] LOG:  archive command failed with exit code 1
    2020-10-07 20:26:33.003 UTC [1165] DETAIL:  The failed archive command was: cp pg_wal/00000002.history /archives/00000002.history
    cp: cannot create regular file '/archives/00000002.history': No such file or directory
    2020-10-07 20:26:34.012 UTC [1165] LOG:  archive command failed with exit code 1
    2020-10-07 20:26:34.012 UTC [1165] DETAIL:  The failed archive command was: cp pg_wal/00000002.history /archives/00000002.history
    cp: cannot create regular file '/archives/00000002.history': No such file or directory
    2020-10-07 20:26:35.026 UTC [1165] LOG:  archive command failed with exit code 1
    2020-10-07 20:26:35.026 UTC [1165] DETAIL:  The failed archive command was: cp pg_wal/00000002.history /archives/00000002.history
    2020-10-07 20:26:35.026 UTC [1165] WARNING:  archiving write-ahead log file "00000002.history" failed too many times, will try again later
    cp: cannot create regular file '/archives/00000002.history': No such file or directory

It is still trying to archive the WAL files.
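The `cp` error in that log just means the promoted node has no `/archives` directory for `archive_command` to write into. A minimal sketch of the fix, to be applied on every node; `/tmp/demo-archives` is a stand-in path so the sketch runs anywhere, and the `chown` line is what a real node would need:

```shell
# Stand-in for /archives; on a real node use the directory named in
# archive_command and make it writable by the postgres user.
ARCHIVE_DIR=${ARCHIVE_DIR:-/tmp/demo-archives}
mkdir -p "$ARCHIVE_DIR"
# chown postgres:postgres "$ARCHIVE_DIR"   # needed on a real node

# Simulate what the failing archive_command does:
printf 'history' > /tmp/00000002.history
cp /tmp/00000002.history "$ARCHIVE_DIR/00000002.history" && echo "archive ok"
```

Once the directory exists and is writable, the archiver retries on its own and the backlog of `.history` and WAL segments drains without a restart.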

Moreover, my other standby is still looking for the old master:

    2020-10-07 20:29:07.818 UTC [1365] FATAL:  could not connect to the primary server: could not translate host name "pg-1" to address: Name or service not known
    2020-10-07 20:29:12.827 UTC [1367] FATAL:  could not connect to the primary server: could not translate host name "pg-1" to address: Name or service not known
    2020-10-07 20:29:17.832 UTC [1369] FATAL:  could not connect to the primary server: could not translate host name "pg-1" to address: Name or service not known
    2020-10-07 20:29:22.835 UTC [1371] FATAL:  could not connect to the primary server: could not translate host name "pg-1" to address: Name or service not known

Dealing with this is, I think, more complicated than setting up the RSA part, which would let me use the existing failover_command script that pgpool provides.
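For what it’s worth, the remaining standby can also be repointed at the promoted node over psql, without SSH. A sketch that only builds and prints the statement; the hostname `pg-2` and the `replicator` user are hypothetical, not from the original setup:

```shell
# Build the ALTER SYSTEM statement that rewrites the standby's primary_conninfo.
NEW_PRIMARY=pg-2   # hypothetical hostname of the promoted node
SQL="ALTER SYSTEM SET primary_conninfo = 'host=${NEW_PRIMARY} port=5432 user=replicator'"
echo "$SQL"
# On the standby you would run this with psql, then:
#   SELECT pg_reload_conf();  -- enough on PostgreSQL 13+; older versions need a restart
```

Changing `primary_conninfo` only takes effect on reload from PostgreSQL 13 onward; on 12 and earlier the standby must be restarted after the change.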

Thanks for the response.

Is it possible to run commands that exist only on the host inside a Docker container?

We would like to harden our Docker image and remove redundant software from it. Our Devs and Ops have asked to keep some Linux tools used for debugging in the containers running in our Kubernetes prod environment.

I’ve read this post: https://www.digitalocean.com/community/tutorials/how-to-inspect-kubernetes-networking

And it made me wonder: is it possible to run commands that exist only on the host inside a container (from which those commands have been removed)?

If so, is there a difference between commands that have been removed from the container and commands that the user doesn’t have permission to run?

P.S. How do the tools in the above-mentioned post work?
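From what I understand, the tools in that post work by attaching host binaries to the container’s namespaces with `nsenter`, so the command itself lives on the host filesystem and never needs to exist in the image. A sketch; the container name `mycontainer` is hypothetical:

```shell
# Every process's namespaces are exposed as files under /proc/<pid>/ns;
# nsenter attaches to exactly these handles.
ls -l /proc/$$/ns/net

# Given a container's init PID from docker, the host's own `ip` binary can be
# run inside the container's network namespace, even if `ip` was removed from
# the image (requires root on the host):
#   PID=$(docker inspect --format '{{.State.Pid}}' mycontainer)
#   nsenter --target "$PID" --net ip addr
```

This also answers the permissions half of the question: a command missing from the image simply isn’t there to run, while entering namespaces from outside is gated by host privileges, not by anything inside the container.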

Thanks for the help! 🙂

Host filesystem manipulation from Docker vs. a virtual machine

When reading about docker, I found a part of the documentation describing the attack surface of the docker daemon. From what I was able to understand, part of the argument is that it is possible to share (basically arbitrary) parts of the host filesystem with the container, which can then be manipulated by a privileged user in the container. This seems to be used as an argument against granting unprivileged users direct access to the docker daemon (see also this Security SE answer).

Would the same be possible from a virtual machine, e.g. in VirtualBox, which on the host is run as an unprivileged user?

A quick test, in which I tried to read /etc/sudoers on a Linux host from a Linux guest running in VirtualBox, did produce a permission error; however, I would not consider myself an expert in this regard, nor was the testing very exhaustive.

Security loopholes when mapping the .kube/config file into Docker as a volume

I have a scenario where I have to install Kubernetes on a public cloud and access the Kubernetes via kubectl from a VM on my laptop.

Kubectl reads .kube/config to connect to the K8S API server and perform the required operations.

There is an application running as a Docker container inside the VM that connects to K8S using the .kube/config that is mapped as a volume, that is, -v $HOME/.kube:/home/test/.kube.

Are there any security loopholes I should be aware of?

GitLab CI (self-hosted), Docker, access to other containers

Even if I’m not allowed to access a specific repo (or if I have low permissions and can’t see CI/CD variables), I can still create a repo of my own and run something like:

    variables:
      USER: gitlab
    build:
      stage: build
      image: docker:latest
      script:
        - docker ps -a
        - docker images

Then, once I have what I need, I can run:

    variables:
      USER: gitlab
    build:
      stage: build
      image: docker:latest
      script:
        - docker exec <container> /bin/cat /var/www/html/config.php
        - docker exec <container> /usr/bin/env

How can I prevent this kind of thing?

PS: This is on a self-hosted GitLab server.

PS2: Originally posted on Stack Overflow, but I’m asking here since I didn’t get any answer there.
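Jobs can only do this when the runner hands them the host’s Docker socket (or runs them privileged). A hedged sketch of a runner `config.toml` without that exposure; all values other than the `volumes` line are placeholders:

```
[[runners]]
  name = "docker-runner"          # placeholder
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = false
    # Do NOT list "/var/run/docker.sock:/var/run/docker.sock" here; jobs that
    # need image builds can use a docker:dind service or kaniko instead.
    volumes = ["/cache"]
```

With no socket mounted, `docker ps` inside a job talks to nothing on the host, so neighbouring containers are out of reach.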

Can malicious applications running inside a docker container still be harmful?

I am very new to Docker (and don’t usually program at a ‘systems’ level). I will be working on an open source project with complete strangers over the web for the next couple of months. I trust them, but I prefer not having to trust people (meant in the best possible way).

I would like to know: if I download various repositories from GitHub or elsewhere and run them inside a Docker container, is it possible for them to harm my laptop in any way?

In case it’s relevant, the repositories will mostly be web applications (think Django, Node) that will likely use databases (Postgres etc.) and otherwise operate as regular locally hosted web applications. It is possible (as with anything from GitHub or the web) that some apps contain malicious code. I am curious whether running such an app (containing malicious code) inside a Docker container prevents that code from harming anything outside of the container (i.e. my laptop).

Redis docker container has been hacked, next steps?

I accidentally left the port of my Redis container open and noticed today that it kept crashing.

Now the mounted volume is full of files like red2.so, admin, root, www, apache, backup.db.

I closed the port, deleted the files, and rebuilt the Docker container. Is there a risk that my server outside the container has been infected?

There are no new or altered entries in crontab or the .ssh/authorized_keys file, but I’m not sure what else I should check.
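Beyond crontab and authorized_keys, one quick extra check is to look for files changed recently in the places such droppers usually touch. A sketch, assuming GNU find; it uses /tmp so it runs anywhere, and the listed host paths are examples, not an exhaustive list:

```shell
# Create a fresh file so the sketch has something to find.
touch /tmp/recent-demo-file

# Files modified in the last 2 days on this filesystem. On the host, run the
# same search over /etc, /etc/cron*, /root, /usr/local/bin, and each user's
# ~/.ssh, and compare installed package files against their checksums too.
find /tmp -xdev -type f -mtime -2 -name 'recent-demo-file' 2>/dev/null
```

If the attacker only ever had code execution inside the container and no volume reached sensitive host paths, host compromise is unlikely but not impossible (kernel exploits exist), which is why a timestamp sweep on the host is worth the few minutes.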

Do Docker images have the same root password?

If two people pull the same Docker image (say debian:10.4), they will obtain the same "files" (layers) from the Docker registry.

So, from what I understand, launching a Docker image is not exactly like a fresh install; it is more like a preinstalled OS. I guess two debian:10.4 containers launched on two separate hosts should be as similar as possible to avoid behavioural differences from one host to another.

Considering this, I am wondering whether the root password is always the same in every debian:10.4 image.

I don’t know whether we know the root password of this image or only its hash. But if someone could find a preimage of this hash, would they be able to log in to every SSH server based on debian:10.4?

Or is there some minimal randomness applied at the start of a Docker instance to ensure the dispersion of security constants (root password, id_rsa key, …)?
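For what it’s worth, the official Debian images ship with the root account locked rather than sharing a common password: the password field in /etc/shadow is "*", a value that no hash can ever match. A sketch of what that field looks like; the shadow line below is a sample, not copied from the real image:

```shell
# A locked account has "*" (or "!") in the second field of /etc/shadow,
# meaning no password can ever hash to it, so password login is impossible.
line='root:*:18474:0:99999:7:::'
field=$(printf '%s' "$line" | cut -d: -f2)
[ "$field" = '*' ] && echo "root password is locked"
```

The image also does not run an SSH daemon by default, so even an unlocked password would not be reachable over SSH unless someone installed and configured sshd themselves.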

Is running software in Docker an allowable way to bypass FIPS 140-2 issues?

Someone has a service that uses a FIPS-non-approved hash in a protocol signature. When FIPS 140-2 compliance is enabled on the hosts, the service crashes (because the hash signature is not allowed by the security configuration of the host). A way to get around this is to put the service in a Docker container on the FIPS-compliant host. It works, but is it OK from a FIPS compliance point of view? If not, why not?

How do I share secret key files with Docker containers following 12 Factor App?

I am building an API and trying to follow the 12 Factor App methodology. Using Docker, the methodology says containers must be disposable.

Assuming the API will have high traffic, multiple docker containers will be running with the same app, connecting to the same database.

Certain fields in the database are encrypted and stored with a reference to the file containing the passphrase. This is done so the passphrase can be updated while old data can still be decrypted.

With a Docker container and following 12 Factor App, how should I provide the key files to each of the containers?

Am I correct in assuming I would need a separate server to handle the creating of new key files and distributing them over the network?

Is there secure software, or are there protocols or services, that already do this, or would I need a custom solution?
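A common 12-factor-friendly pattern is to keep key files out of the image entirely and mount them at runtime (Docker/Swarm secrets, Kubernetes Secrets, or a vault agent), with the app locating the file via an environment variable. A sketch of the container-side half; the `PASSPHRASE_FILE` variable and the demo path are assumptions, not part of the question:

```shell
# Demo: create a stand-in secret file. In a real deployment the orchestrator
# mounts it, e.g. Docker secrets appear under /run/secrets/<name>.
demo=/tmp/demo_api_passphrase
printf 'correct horse battery staple' > "$demo"

# The app only knows an env var pointing at a file, per 12-factor config rules,
# so the same image works in every environment and containers stay disposable.
PASSPHRASE_FILE="${PASSPHRASE_FILE:-$demo}"
PASSPHRASE=$(cat "$PASSPHRASE_FILE")
echo "loaded ${#PASSPHRASE}-char passphrase from $PASSPHRASE_FILE"
```

For the distribution half, you generally do not need a custom server: HashiCorp Vault and the cloud providers’ secret managers already handle creating, versioning, and handing out key material, and keeping old key versions addressable matches the stated need to decrypt old data after a passphrase rotation.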