How to set up a MySQL Docker container with existing ib* files?

Related to a previous issue, I’m trying to recover some data from the existing ib* files of a crashed server. The database version is 5.1.69, so quite old. One of the responses was to install MySQL 5.5, because it should still be able to import 5.1 data, but even that version is too old for current systems, and one just runs into ever deeper compatibility issues.

In the same issue, “NBK” suggested in a comment using a Docker image with an older version of MySQL. I decided to try this approach. I was able to install Docker and pull the vsamov/mysql-5.1.73 image, but I’m now stuck on how to get the ib* files into the Docker container.

I think I need the image to run so that it has a container ID, but if it’s running, then the ib* files are locked, so I’m not sure how to proceed. If anyone has experience with this, or can provide a reference, it would be much appreciated.
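
For reference, a common way around the locking problem is to never copy the files into a running container at all, but to bind-mount the host directory holding the recovered files as the container’s data directory. A minimal sketch, assuming the recovered ib* files (and the per-database subdirectories) sit in /srv/recovered-datadir on the host, and that this image keeps its data in /var/lib/mysql (worth verifying, e.g. with docker inspect):

    # Create the container with the host directory mounted over the
    # image's MySQL data directory; InnoDB crash recovery runs on startup.
    docker run -d --name mysql51 \
        -v /srv/recovered-datadir:/var/lib/mysql \
        -p 3306:3306 \
        vsamov/mysql-5.1.73

    # Watch the recovery output for errors.
    docker logs -f mysql51

docker cp should also work for copying files into a stopped container, but the bind mount sidesteps the file-locking question entirely and keeps the recovered files on the host.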

Cannot connect to MariaDB Columnstore database on Docker

I’m running MariaDB 10.5 Columnstore on Docker as described here. This works fine when I connect to the database from a command line inside the container itself. I can connect and create a database and tables.

But when I try to connect with an SQL client (such as DBeaver) from localhost on port 3306, I get the following error:

Host '_gateway' is not allowed to connect to this MariaDB server 

I tried to fix that by changing the /etc/my.cnf.d/server.cnf file. I tried all of the options below, but the container always fails after a restart, no matter which option I use.

How to make a MariaDB Columnstore container accept connections from localhost and/or other IP addresses?

These are the options that didn’t work:

    [server]
    bind-address = ::

    [mysqld]
    skip-networking
    bind-address = 0.0.0.0

    [mysqld]
    bind-address = ::

    [mysqld]
    skip-networking
    bind-address = ::

    [mysqld]
    skip-networking=0
    skip-bind-address
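
For what it’s worth, the error names the client host (_gateway) rather than complaining about the bind address, which suggests the server is already reachable and is rejecting the connection because no account is allowed to connect from that host. In that case the fix is a grant rather than a bind-address change. A hedged sketch, assuming the container is named mcs1 and root can log in without a password from inside the container, as in the linked setup; the account name and password are placeholders:

    # Create an account that may connect from any host and grant it access.
    docker exec -it mcs1 mysql -u root -e "
        CREATE USER 'app'@'%' IDENTIFIED BY 'change-me';
        GRANT ALL PRIVILEGES ON *.* TO 'app'@'%';
        FLUSH PRIVILEGES;"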

pgpool-II and Postgres Docker image: automated failover and online recovery via RSA key

I’ve been following this documentation for Pgpool-II: https://www.pgpool.net/docs/latest/en/html/example-cluster.html

I’m having a hard time setting up the RSA keys on my Postgres streaming cluster, which is built on the official Docker image (https://hub.docker.com/_/postgres).

I was able to get streaming replication working; now I’m at the part about setting up failover.

Part of the documentation says:

To use the automated failover and online recovery of Pgpool-II, the settings that allow passwordless SSH to all backend servers between Pgpool-II execution user (default root user) and postgres user and between postgres user and postgres user are necessary. Execute the following command on all servers to set up passwordless SSH. The generated key file name is id_rsa_pgpool.  
    [all servers]# cd ~/.ssh
    [all servers]# ssh-keygen -t rsa -f id_rsa_pgpool
    [all servers]# ssh-copy-id -i id_rsa_pgpool.pub postgres@server1
    [all servers]# ssh-copy-id -i id_rsa_pgpool.pub postgres@server2
    [all servers]# ssh-copy-id -i id_rsa_pgpool.pub postgres@server3

    [all servers]# su - postgres
    [all servers]$ cd ~/.ssh
    [all servers]$ ssh-keygen -t rsa -f id_rsa_pgpool
    [all servers]$ ssh-copy-id -i id_rsa_pgpool.pub postgres@server1
    [all servers]$ ssh-copy-id -i id_rsa_pgpool.pub postgres@server2
    [all servers]$ ssh-copy-id -i id_rsa_pgpool.pub postgres@server3

Is it possible to set this up inside a container built from Postgres’s official image? I would like to get an idea of how to do it from some samples or an existing solution.
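
It should be possible, with the caveat that the official postgres image ships no SSH server, so anything configured by hand disappears when the container is recreated unless it is baked into a derived image or kept on a volume. A rough sketch of the container-side steps, assuming a Debian-based postgres image and containers that can reach each other by hostname (pg-1, pg-2, and so on are placeholders):

    # Install and start sshd (run in each container, or bake into an image):
    apt-get update && apt-get install -y openssh-server openssh-client
    mkdir -p /run/sshd && /usr/sbin/sshd

    # Generate the key Pgpool-II expects, as the postgres user:
    su - postgres -c 'mkdir -p ~/.ssh && ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa_pgpool'

    # Distribute the public key to every backend (repeat per target host):
    su - postgres -c 'ssh-copy-id -i ~/.ssh/id_rsa_pgpool.pub postgres@pg-2'

Note that ssh-copy-id needs password authentication to work once, so the postgres system user needs a password set (or the key appended to ~/.ssh/authorized_keys manually).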

Moreover, since I can’t do the RSA setup at the moment, I decided to create a script on my Pgpool server that runs a psql command against the new master:

#!/bin/bash
# This script is run by failover_command.

set -e

# Special values:
#   %d = failed node id
#   %h = failed node hostname
#   %p = failed node port number
#   %D = failed node database cluster path
#   %m = new master node id
#   %H = new master node hostname
#   %M = old master node id
#   %P = old primary node id
#   %r = new master port number
#   %R = new master database cluster path
#   %N = old primary node hostname
#   %S = old primary node port number
#   %% = '%' character

FAILED_NODE_ID="$1"
FAILED_NODE_HOST="$2"
FAILED_NODE_PORT="$3"
FAILED_NODE_PGDATA="$4"
NEW_MASTER_NODE_ID="$5"
NEW_MASTER_NODE_HOST="$6"
OLD_MASTER_NODE_ID="$7"
OLD_PRIMARY_NODE_ID="$8"
NEW_MASTER_NODE_PORT="$9"
NEW_MASTER_NODE_PGDATA="${10}"
OLD_PRIMARY_NODE_HOST="${11}"
OLD_PRIMARY_NODE_PORT="${12}"

#set -o xtrace
#exec > >(logger -i -p local1.info) 2>&1

new_master_host=$NEW_MASTER_NODE_HOST

## If there's no master node anymore, skip failover.
if [ "$NEW_MASTER_NODE_ID" -lt 0 ]; then
    echo "All nodes are down. Skipping failover."
    exit 0
fi

## Promote the standby node.
echo "Primary node is down, promoting standby node ${NEW_MASTER_NODE_HOST}."

PGPASSWORD=postgres psql -h "${NEW_MASTER_NODE_HOST}" -p 5432 -U postgres <<-EOSQL
select pg_promote();
EOSQL

#logger -i -p local1.info failover.sh: end: new_master_node_id=$NEW_MASTER_NODE_ID started as the primary node
#exit 0

The above script works if I simulate my primary going down.

However, this is the log on my new primary:

2020-10-07 20:25:31.924 UTC [1165] LOG:  archive command failed with exit code 1
2020-10-07 20:25:31.924 UTC [1165] DETAIL:  The failed archive command was: cp pg_wal/00000002.history /archives/00000002.history
cp: cannot create regular file '/archives/00000002.history': No such file or directory
2020-10-07 20:25:32.939 UTC [1165] LOG:  archive command failed with exit code 1
2020-10-07 20:25:32.939 UTC [1165] DETAIL:  The failed archive command was: cp pg_wal/00000002.history /archives/00000002.history
2020-10-07 20:25:32.939 UTC [1165] WARNING:  archiving write-ahead log file "00000002.history" failed too many times, will try again later
cp: cannot create regular file '/archives/00000002.history': No such file or directory
2020-10-07 20:26:33.003 UTC [1165] LOG:  archive command failed with exit code 1
2020-10-07 20:26:33.003 UTC [1165] DETAIL:  The failed archive command was: cp pg_wal/00000002.history /archives/00000002.history
cp: cannot create regular file '/archives/00000002.history': No such file or directory
2020-10-07 20:26:34.012 UTC [1165] LOG:  archive command failed with exit code 1
2020-10-07 20:26:34.012 UTC [1165] DETAIL:  The failed archive command was: cp pg_wal/00000002.history /archives/00000002.history
cp: cannot create regular file '/archives/00000002.history': No such file or directory
2020-10-07 20:26:35.026 UTC [1165] LOG:  archive command failed with exit code 1
2020-10-07 20:26:35.026 UTC [1165] DETAIL:  The failed archive command was: cp pg_wal/00000002.history /archives/00000002.history
2020-10-07 20:26:35.026 UTC [1165] WARNING:  archiving write-ahead log file "00000002.history" failed too many times, will try again later
cp: cannot create regular file '/archives/00000002.history': No such file or directory
2020-10-07 20:27:35.096 UTC [1165] LOG:  archive command failed with exit code 1
2020-10-07 20:27:35.096 UTC [1165] DETAIL:  The failed archive command was: cp pg_wal/00000002.history /archives/00000002.history
cp: cannot create regular file '/archives/00000002.history': No such file or directory
2020-10-07 20:27:36.110 UTC [1165] LOG:  archive command failed with exit code 1
2020-10-07 20:27:36.110 UTC [1165] DETAIL:  The failed archive command was: cp pg_wal/00000002.history /archives/00000002.history
cp: cannot create regular file '/archives/00000002.history': No such file or directory
2020-10-07 20:27:37.123 UTC [1165] LOG:  archive command failed with exit code 1
2020-10-07 20:27:37.123 UTC [1165] DETAIL:  The failed archive command was: cp pg_wal/00000002.history /archives/00000002.history
2020-10-07 20:27:37.123 UTC [1165] WARNING:  archiving write-ahead log file "00000002.history" failed too many times, will try again later
cp: cannot create regular file '/archives/00000002.history': No such file or directory
2020-10-07 20:28:37.177 UTC [1165] LOG:  archive command failed with exit code 1
2020-10-07 20:28:37.177 UTC [1165] DETAIL:  The failed archive command was: cp pg_wal/00000002.history /archives/00000002.history
cp: cannot create regular file '/archives/00000002.history': No such file or directory
2020-10-07 20:28:38.221 UTC [1165] LOG:  archive command failed with exit code 1
2020-10-07 20:28:38.221 UTC [1165] DETAIL:  The failed archive command was: cp pg_wal/00000002.history /archives/00000002.history
cp: cannot create regular file '/archives/00000002.history': No such file or directory
2020-10-07 20:28:39.230 UTC [1165] LOG:  archive command failed with exit code 1
2020-10-07 20:28:39.230 UTC [1165] DETAIL:  The failed archive command was: cp pg_wal/00000002.history /archives/00000002.history
2020-10-07 20:28:39.230 UTC [1165] WARNING:  archiving write-ahead log file "00000002.history" failed too many times, will try again later

It is still trying to archive the WAL history file.
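
The cp error itself points at the immediate problem: /archives does not exist on the promoted node, so archive_command fails; presumably the directory was only created (or only mounted as a volume) on the old primary. A minimal fix, assuming the new primary’s container is named pg-2 (a placeholder):

    # Create the directory archive_command expects and hand it to postgres;
    # ideally make it a mounted volume so it survives container rebuilds.
    docker exec -u root pg-2 mkdir -p /archives
    docker exec -u root pg-2 chown postgres:postgres /archives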

Moreover, my other standby is still looking for the old master:

2020-10-07 20:29:07.818 UTC [1365] FATAL:  could not connect to the primary server: could not translate host name "pg-1" to address: Name or service not known
2020-10-07 20:29:12.827 UTC [1367] FATAL:  could not connect to the primary server: could not translate host name "pg-1" to address: Name or service not known
2020-10-07 20:29:17.832 UTC [1369] FATAL:  could not connect to the primary server: could not translate host name "pg-1" to address: Name or service not known
2020-10-07 20:29:22.835 UTC [1371] FATAL:  could not connect to the primary server: could not translate host name "pg-1" to address: Name or service not known
2020-10-07 20:29:27.826 UTC [1373] FATAL:  could not connect to the primary server: could not translate host name "pg-1" to address: Name or service not known
2020-10-07 20:29:32.836 UTC [1375] FATAL:  could not connect to the primary server: could not translate host name "pg-1" to address: Name or service not known
2020-10-07 20:29:37.836 UTC [1377] FATAL:  could not connect to the primary server: could not translate host name "pg-1" to address: Name or service not known
2020-10-07 20:29:42.850 UTC [1379] FATAL:  could not connect to the primary server: could not translate host name "pg-1" to address: Name or service not known
2020-10-07 20:29:47.857 UTC [1381] FATAL:  could not connect to the primary server: could not translate host name "pg-1" to address: Name or service not known
2020-10-07 20:29:52.855 UTC [1383] FATAL:  could not connect to the primary server: could not translate host name "pg-1" to address: Name or service not known

Dealing with this, I think, is more complicated than setting up the RSA part so that I could use the existing failover_command script that Pgpool-II provides.
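
For completeness, what the stuck standby needs is for primary_conninfo to be repointed at the promoted node, followed by a restart; this is exactly what Pgpool-II’s follow_primary_command automates once the SSH setup works. A manual sketch, assuming PostgreSQL 12 or later (where primary_conninfo is an ordinary configuration parameter) and placeholder container names pg-2 (new primary) and pg-3 (standby):

    # Repoint replication on the lagging standby at the new primary ...
    docker exec -u postgres pg-3 psql -c \
        "ALTER SYSTEM SET primary_conninfo = 'host=pg-2 port=5432 user=postgres password=postgres'"

    # ... then restart so the WAL receiver reconnects.
    docker restart pg-3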

Thanks for the response.

Is it possible to run commands that exist only on the host in a docker container?

We would like to harden our Docker image and remove redundant software from it, but our Dev and Ops teams have asked to keep some Linux tools used for debugging in the containers running in our Kubernetes production environment.

I’ve read this post: https://www.digitalocean.com/community/tutorials/how-to-inspect-kubernetes-networking

And it made me wonder: is it possible to run commands that exist only on the host inside a container (from which those commands have been removed)?

If so, is there a difference between commands that have been removed from the container and ones that the user doesn’t have permission to run?

P.S. How do the tools in the above-mentioned post work?

Thanks for the help! 🙂
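
Regarding the P.S.: the tools in that post rely on nsenter, run as root on the host. The binary is loaded from the host’s filesystem, but it executes inside the container’s namespaces (here, the network namespace), so it never needs to exist in the container image at all. A sketch, with my-container as a placeholder name:

    # Find the container's main process ID as seen from the host:
    PID=$(docker inspect --format '{{.State.Pid}}' my-container)

    # Run the host's ip and netstat binaries inside the container's
    # network namespace:
    nsenter --target "$PID" --net ip addr
    nsenter --target "$PID" --net netstat -tulpn

This also hints at the difference asked about above: a removed command is simply absent from the container’s mount namespace, while a permission-denied command is present but blocked; nsenter sidesteps both, because the executable is resolved on the host and only the selected namespaces are borrowed from the container.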

Host filesystem manipulation from docker vs. virtual machine

When reading about docker, I found a part of the documentation describing the attack surface of the docker daemon. From what I was able to understand, part of the argument is that it is possible to share (basically arbitrary) parts of the host filesystem with the container, which can then be manipulated by a privileged user in the container. This seems to be used as an argument against granting unprivileged users direct access to the docker daemon (see also this Security SE answer).
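
To make that argument concrete: anyone who can talk to the Docker daemon can ask it to bind-mount any host path into a container, in which they are root by default. A deliberately harmless illustration of the idea:

    # As an unprivileged user in the "docker" group: mount the host's /etc
    # into a throwaway container and read a file only root may read.
    docker run --rm -v /etc:/host-etc alpine cat /host-etc/sudoers

This is why membership in the docker group is usually treated as equivalent to root on the host.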

Would the same be possible from a virtual machine, e.g. in VirtualBox, which on the host is run as an unprivileged user?

A quick test, in which I tried to read /etc/sudoers on a Linux host from a Linux guest running in VirtualBox, did produce a permission error, but I would not consider myself an expert in this area in any way, nor was the testing very exhaustive.

Security loopholes when mapping the .kube/config file onto Docker as a volume

I have a scenario where I have to install Kubernetes on a public cloud and access it via kubectl from a VM on my laptop.

kubectl reads .kube/config to connect to the K8S API server and perform the required operations.

There is an application running as a Docker container inside the VM that connects to K8S using the .kube/config that is mapped as a volume, i.e. -v $HOME/.kube:/home/test/.kube

Are there any security loopholes I should be aware of?
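
One cheap hardening step, for what it’s worth, is to mount the kubeconfig read-only, so the container can use the credentials but not tamper with the file on the VM, and to point it at an identity with the narrowest RBAC role the application needs rather than at a cluster-admin config. A sketch (the container and image names are placeholders):

    # The :ro flag makes the mount read-only inside the container.
    docker run -d --name test-app \
        -v "$HOME/.kube:/home/test/.kube:ro" \
        my-app-image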

GitLab CI (self-hosted), Docker, access to other containers

Even if I’m not allowed to access a specific repo (or if I have low permissions and can’t see CI/CD variables), I can still create one and do something like:

    variables:
      USER: gitlab

    build:
      stage: build
      image: docker:latest
      script:
        - docker ps -a
        - docker images

Then, when I have what I need, I can:

    variables:
      USER: gitlab

    build:
      stage: build
      image: docker:latest
      script:
        - docker exec <container> /bin/cat /var/www/html/config.php
        - docker exec <container> /usr/bin/env

How can I prevent this kind of thing?

PS: This is on a self-hosted GitLab server.

PS2: Originally posted on Stack Overflow, but I’m asking here since I didn’t get any answers.
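
For context on the usual cause: jobs can typically only reach other containers when the runner uses the shell executor as a user with Docker access, or when it bind-mounts /var/run/docker.sock into job containers. Removing that access (and using Docker-in-Docker, rootless Docker, or a tool like Kaniko when jobs genuinely need to build images) closes the hole; restricting shared runners to trusted projects helps too. A quick check on the runner host, assuming the default config path:

    # Look for a docker.sock bind-mount or privileged mode in the runner config.
    grep -nE 'docker\.sock|privileged' /etc/gitlab-runner/config.toml

    # Offending lines typically look like:
    #   privileged = true
    #   volumes = ["/var/run/docker.sock:/var/run/docker.sock"]
    # After removing them, restart the runner:
    gitlab-runner restart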

Can malicious applications running inside a docker container still be harmful?

I am very new to Docker (and don’t usually program at a ‘systems’ level). I will be working on an open source project with complete strangers over the web for the next couple of months. I trust them, but I prefer not to have to trust people (meant in the best possible way).

I would like to know, if I download various repositories from github or elsewhere, and run them inside a docker container, is it possible for them to cause harm to my laptop in any way?

In case it’s relevant, the repositories will mostly be web applications (think Django, Node), will likely use databases (Postgres etc.), and will otherwise operate as regular locally hosted web applications. It is possible (as with anything from GitHub or the wider web) that some apps could contain malicious code. I am curious to know whether running such an app (containing malicious code) inside a Docker container prevents that code from harming anything outside of the Docker container (i.e., my laptop).
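
Broadly, escaping a container requires a kernel or runtime vulnerability, or a misconfiguration such as --privileged, a mounted Docker socket, or sensitive host paths mounted into the container. When running untrusted code, the attack surface can be shrunk further with standard flags; a hedged sketch (the image name is a placeholder):

    # Drop all capabilities, forbid privilege escalation, cap resources,
    # and publish the port only on loopback.
    docker run --rm \
        --cap-drop ALL \
        --security-opt no-new-privileges \
        --memory 512m --pids-limit 256 \
        -p 127.0.0.1:8000:8000 \
        untrusted-app-image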

Redis docker container has been hacked, next steps?

I accidentally left the port of my Redis container open and noticed that it kept crashing today.

Now the mounted volume is full of files like red2.so, admin, root, www, apache, backup.db.

I closed the port, deleted the files, and rebuilt the Docker container. Is there a risk that my server outside of the container has been infected?

There are no new or altered entries in crontab or the .ssh/authorized_keys file, but I’m not sure what else I should check.
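
For what it’s worth, files like red2.so match the widely reported pattern of attacks that load a malicious module into an exposed Redis instance; if Redis ran as a non-root user inside the container and nothing sensitive was mounted, the damage is plausibly confined to the container and the volume. Still, a few host-side checks beyond crontab and your own authorized_keys are cheap (non-exhaustive):

    # Cron entries for all users and the system-wide locations:
    cat /etc/crontab; ls -la /etc/cron.* /var/spool/cron* 2>/dev/null

    # authorized_keys for every account, not just your own:
    find / -name authorized_keys -not -path '/proc/*' 2>/dev/null

    # Preloaded libraries, listeners, and odd processes:
    cat /etc/ld.so.preload 2>/dev/null
    ss -tulpn
    ps auxf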