pgpool-II and Postgres Docker image: automated failover and online recovery via RSA key

I’ve been following this documentation for Pgpool-II: https://www.pgpool.net/docs/latest/en/html/example-cluster.html

I’m having a hard time setting up the RSA keys on my PostgreSQL streaming cluster, which is built from the official Docker image: https://hub.docker.com/_/postgres.

I was able to get streaming replication working; now I’m at the part about setting up failover.

Part of the documentation says:

To use the automated failover and online recovery of Pgpool-II, the settings that allow passwordless SSH to all backend servers between Pgpool-II execution user (default root user) and postgres user and between postgres user and postgres user are necessary. Execute the following command on all servers to set up passwordless SSH. The generated key file name is id_rsa_pgpool.  
    [all servers]# cd ~/.ssh
    [all servers]# ssh-keygen -t rsa -f id_rsa_pgpool
    [all servers]# ssh-copy-id -i id_rsa_pgpool.pub postgres@server1
    [all servers]# ssh-copy-id -i id_rsa_pgpool.pub postgres@server2
    [all servers]# ssh-copy-id -i id_rsa_pgpool.pub postgres@server3

    [all servers]# su - postgres
    [all servers]$ cd ~/.ssh
    [all servers]$ ssh-keygen -t rsa -f id_rsa_pgpool
    [all servers]$ ssh-copy-id -i id_rsa_pgpool.pub postgres@server1
    [all servers]$ ssh-copy-id -i id_rsa_pgpool.pub postgres@server2
    [all servers]$ ssh-copy-id -i id_rsa_pgpool.pub postgres@server3

Is it possible to set this up inside a container based on Postgres’s official image? I would like to get an idea of how to do it from some samples or an existing solution.
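In case it helps frame the question: the official image does not ship an SSH server, so one approach I’ve seen is to extend the image. This is a minimal sketch only, under my own assumptions (the `postgres:13` tag, a local `ssh/` directory in the build context holding pre-generated keys, and a `start.sh` wrapper you would write yourself; none of this is provided by the official image):

```dockerfile
# Sketch only: extends the official postgres image with an SSH server so
# pgpool's failover/online-recovery scripts can reach the container.
FROM postgres:13

RUN apt-get update \
    && apt-get install -y --no-install-recommends openssh-server openssh-client \
    && rm -rf /var/lib/apt/lists/* \
    && mkdir -p /var/run/sshd

# Assumption: an ssh/ directory in the build context holds id_rsa_pgpool,
# id_rsa_pgpool.pub and authorized_keys, generated once on the host with:
#   ssh-keygen -t rsa -f id_rsa_pgpool -N ""
COPY --chown=postgres:postgres ssh/ /var/lib/postgresql/.ssh/
RUN chmod 700 /var/lib/postgresql/.ssh \
    && chmod 600 /var/lib/postgresql/.ssh/*

# Assumption: start.sh starts sshd in the background and then execs the
# official entrypoint, e.g.:
#   /usr/sbin/sshd
#   exec docker-entrypoint.sh postgres "$@"
COPY start.sh /usr/local/bin/start.sh
ENTRYPOINT ["/usr/local/bin/start.sh"]
```

Since every node would run the same image, copying the same key pair and `authorized_keys` into all of them gives the all-to-all passwordless SSH the documentation asks for; whether baking private keys into an image is acceptable is a separate security decision.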

Moreover, since I can’t do the RSA setup at the moment, I created a script on my Pgpool server that runs a psql command against the new master:

    #!/bin/bash
    # This script is run by failover_command.

    set -e

    # Special values:
    #   %d = failed node id
    #   %h = failed node hostname
    #   %p = failed node port number
    #   %D = failed node database cluster path
    #   %m = new master node id
    #   %H = new master node hostname
    #   %M = old master node id
    #   %P = old primary node id
    #   %r = new master port number
    #   %R = new master database cluster path
    #   %N = old primary node hostname
    #   %S = old primary node port number
    #   %% = '%' character

    FAILED_NODE_ID="$1"
    FAILED_NODE_HOST="$2"
    FAILED_NODE_PORT="$3"
    FAILED_NODE_PGDATA="$4"
    NEW_MASTER_NODE_ID="$5"
    NEW_MASTER_NODE_HOST="$6"
    OLD_MASTER_NODE_ID="$7"
    OLD_PRIMARY_NODE_ID="$8"
    NEW_MASTER_NODE_PORT="$9"
    NEW_MASTER_NODE_PGDATA="${10}"
    OLD_PRIMARY_NODE_HOST="${11}"
    OLD_PRIMARY_NODE_PORT="${12}"

    #set -o xtrace
    #exec > >(logger -i -p local1.info) 2>&1

    new_master_host=$NEW_MASTER_NODE_HOST

    ## If there's no master node anymore, skip failover.
    if [ "$NEW_MASTER_NODE_ID" -lt 0 ]; then
        echo "All nodes are down. Skipping failover."
        exit 0
    fi

    ## Promote the standby node.
    echo "Primary node is down, promoting standby node ${NEW_MASTER_NODE_HOST}."

    PGPASSWORD=postgres psql -h "${NEW_MASTER_NODE_HOST}" -p 5432 -U postgres <<EOSQL
    select pg_promote();
    EOSQL

    #logger -i -p local1.info failover.sh: end: new_master_node_id=$NEW_MASTER_NODE_ID started as the primary node
    #exit 0

The above script works if I simulate that my primary is down.

However, this is the log on my new primary:

    2020-10-07 20:25:31.924 UTC [1165] LOG:  archive command failed with exit code 1
    2020-10-07 20:25:31.924 UTC [1165] DETAIL:  The failed archive command was: cp pg_wal/00000002.history /archives/00000002.history
    cp: cannot create regular file '/archives/00000002.history': No such file or directory
    2020-10-07 20:25:32.939 UTC [1165] LOG:  archive command failed with exit code 1
    2020-10-07 20:25:32.939 UTC [1165] DETAIL:  The failed archive command was: cp pg_wal/00000002.history /archives/00000002.history
    2020-10-07 20:25:32.939 UTC [1165] WARNING:  archiving write-ahead log file "00000002.history" failed too many times, will try again later
    cp: cannot create regular file '/archives/00000002.history': No such file or directory
    [... the same LOG/DETAIL/WARNING cycle repeats roughly every minute through 20:28:39 ...]

It is still trying to archive the WAL.
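For what it’s worth, the `cp` failure suggests `/archives` simply doesn’t exist (or isn’t mounted) in the promoted standby’s container; that node never archived anything while it was a standby, so nothing created the directory. One hedged fix, taking the path from the log above (the `test -d` guard is my own addition, not from the documentation), is to give every node that can be promoted the same archive volume and make the command fail cleanly when the directory is missing:

```
# postgresql.conf on every node that can become primary
archive_command = 'test -d /archives && cp %p /archives/%f'
```

Alternatively, running `mkdir -p /archives && chown postgres:postgres /archives` inside the container, or mounting a shared volume such as `-v archives:/archives` on all nodes, would let the existing command succeed as-is.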

Moreover, my other standby is still looking for the old master:

    2020-10-07 20:29:07.818 UTC [1365] FATAL:  could not connect to the primary server: could not translate host name "pg-1" to address: Name or service not known
    2020-10-07 20:29:12.827 UTC [1367] FATAL:  could not connect to the primary server: could not translate host name "pg-1" to address: Name or service not known
    [... the same FATAL message repeats every 5 seconds through 20:29:52 ...]

Dealing with this is, I think, more complicated than setting up the RSA part, which would let me utilize the existing failover_command script that Pgpool-II ships with.
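For completeness: the standby keeps dialing pg-1 because its primary_conninfo still points at the failed node; this repointing is what Pgpool-II’s follow_primary_command normally automates. A rough sketch of what that amounts to on a surviving standby (the `pg-2` host name, the `replicator` user, and the data directory path are assumptions from my setup; the lines needing a live cluster are commented out):

```shell
#!/bin/bash
# Sketch: repoint a surviving standby at the newly promoted primary.
# NEW_PRIMARY_HOST would come from pgpool's %H placeholder.
NEW_PRIMARY_HOST="${1:-pg-2}"

# Build the new connection string (user 'replicator' is an assumption).
CONNINFO="host=${NEW_PRIMARY_HOST} port=5432 user=replicator"
SQL="ALTER SYSTEM SET primary_conninfo = '${CONNINFO}'"
echo "$SQL"

# On the standby itself (PostgreSQL 12+), apply the setting and restart:
#   psql -U postgres -c "$SQL"
#   pg_ctl -D /var/lib/postgresql/data restart
```

On PostgreSQL 12 and later, ALTER SYSTEM writes the setting to postgresql.auto.conf, so a restart (or reload, for primary_conninfo on 13+) picks up the new primary.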

Thanks for the response.

SQL Server database stuck in "In Recovery" state after restart

SQL Server was restarted by mistake; when it came back online, the database was in "In Recovery" mode.

Checking the error log, it says: "Recovery of database ‘DB1’ (5) is 8% complete (approximately 27146 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required."

It says it will take about 8 hours to bring this 2 TB database online.

Is there any quick way to fix this? We didn’t have any open transactions in the log files, so even if they are ignored, there’s no impact.

We want to bring this database ONLINE quickly.

What’s the security risk in password recovery attempts?

Over the last few days I’ve received multiple password recovery attempts for a WordPress user. The user didn’t initiate these attempts.

I’m blocking the IPs on the server, but I don’t see what the attacker’s goal is. I checked the mails the user receives, and they contain a valid password reset link (so it’s not a phishing attempt).

So I don’t really understand what the attacker is trying to achieve with these password recovery requests. Or are they just probing that page for vulnerabilities?

What is the name of the crypto system used by ICANN’s DNS recovery system?

I once heard about a type of cryptosystem that behaves in the following way: there are x secret keys that work together to decrypt messages encrypted with a public key d. If I bring at least n of the x secret keys together, I can decrypt messages encrypted with d in their entirety. If I have anything fewer than n, I get no information about those messages.

When this system was being described to me, I was told that an example of this system in the real world is ICANN’s system for recovering the DNS registry in the event of some catastrophic failure. In their case, x = 7.

I heard about this a little while ago, and I don’t remember exactly what the system is called. I have tried to research it on the ICANN website, but I can’t seem to find an actual name for the system that I could then use to do a deeper dive. Does anyone know the name of the system I just described? Also, since I am trying to dive rather deeply into this, I would appreciate any resources (research papers, open-source implementations, additional real-world examples, etc.) you could list.

Thank you!

Recovery possibilities with Zero knowledge encryption

I have some understanding of encryption; however, I fail to get my head around the following scenarios. I would like to know whether they are possible with a zero-knowledge encryption system.

What the system can or can’t do can be added to the answer, for example:

  • The system needs to keep an encrypted copy of the key.
  • The user has to have the key on a USB stick.

In the end, all scenarios ask the same questions:

  • Can the user access his data?
  • Does the system know about his data?

Scenario 1: User logs in on a new computer. Does not have the key with him.

Scenario 2: User logs in on a new computer. Does have the key with him (e.g. USB stick).

Scenario 3: User lost his password. His identity has been verified and approved.

Scenario 4: New sub-users are assigned to the same resource.

Questions on Android data recovery

I’ve been looking around a lot and can’t seem to find a basic, clear answer. My understanding is that if I delete photo001.jpg, this deletes the index entry and marks the space as free to be overwritten. This brings me to my next scenarios.

If I have a device with 8 GB of storage, and it has been filled to its capacity: is it likely that photo001.jpg has been overwritten if it was deleted, say, a month beforehand and I then replaced that data with junk, or with a video that maxes out the storage?

Wouldn’t this be a more secure way of replacing data than a “wipe free space” feature that writes zeros?

With the method above, how likely is it that the file can be retrieved after:

1 year? 6 months? 3 months? 1 week? 1 day?

I understand that bad sectors can sometimes retain data, and that the advertised storage space is often larger than what is available without root, so how much storage space does my 32 GB Android actually offer in total? Would this come up in a forensic analysis of the device? If so, what level of forensics would be required, and what situation would warrant that level?

If I use my phone near max capacity, say by storing 7.9 GB of music and leaving only 100 MB free, isn’t that a good way to ensure deleted data is overwritten more frequently, i.e. the remaining 100 MB fills up and deleted data gets overwritten quicker because there is only 100 MB available? If I now delete photo001.jpg, does Android choose to overwrite it instantly, or does it avoid doing so until it has no other choice? Or is it really random when the device chooses to overwrite?

These are hypotheticals, and I understand that the only sure way to delete something is encryption followed by wiping. I use full-disk encryption on my mobile devices and laptops. I’m just curious about the capabilities of computer forensics and Android data storage.

I understand there are a lot of questions here, but I feel this was the best place to ask them, as people seem friendly and helpful here.