Unable to replicate Postgres tables using logical replication in RDS

Tables in the destination database are empty even though the subscription was created successfully. I am extremely new to logical replication within Postgres. I am trying to replicate three tables from the source database to the destination database. These two databases are running on different RDS instances.

I followed the steps laid out on this link https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Replication.Logical.html

I have enabled replication across both the database instances by changing the parameter rds.logical_replication to 1 (enabled). The parameter groups are in sync after the restart.
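To double-check that the setting actually took effect after the reboot, this is the kind of sanity check I can run on both instances (my own idea, not a step from the AWS guide):

SHOW rds.logical_replication;  -- should report on
SHOW wal_level;                -- should report logical once the parameter is applied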

On the source database, I created a publication

create publication master_data_publication for table tbl1, tbl2, tbl3;

On the destination I created a table with the same name and same columns and then created a subscription

create subscription master_data_sub CONNECTION 'host=sourcedbhost.some-code.some-region.rds.amazonaws.com port=5432 dbname=sourcedb user=sourceuser password=userpassword' publication master_data_publication;

If I run select * from pg_catalog.pg_stat_subscription;, the subscription shows up and looks healthy (screenshot not reproduced here). Based on this, I would assume that replication should be working.

But the tables are shown to be completely empty.

On Source

select count(*) from tbl_1;
 count
-------
   332

On Destination

select count(*) from tbl_1;
 count
-------
     0

I am now stuck on how to proceed from here.
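The next thing I plan to check, based on my reading of the catalogs (my own diagnostic idea, not something from the AWS page), is the per-table synchronization state of the subscription on the destination:

-- srsubstate: 'i' = initialize, 'd' = data copy in progress, 's' = synchronized, 'r' = ready
select srrelid::regclass as table_name, srsubstate, srsublsn
from pg_subscription_rel;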

Will database replication increase the size of the replicated database?

First, thank you for your time. We have a SQL Server database that was set up for replication. The original size of the database was 5 GB for the data file (not the log file). After replication was configured, the data file is about four times bigger, around 19 GB. Any pointers on what could have happened would be appreciated. I know SQL as a developer and a little administration, but nothing about replication. The performance of the database has degraded badly, so I am trying to figure out what needs to be done to improve it. I apologize for my English as well.
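In case it helps, this is the kind of query I was planning to run to see where the extra space went, comparing allocated versus used space per file (my own guess at a useful check, not something I was told to use):

-- Allocated vs. used space for each file in the replicated database
SELECT name,
       type_desc,
       size * 8 / 1024 AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb
FROM sys.database_files;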

MySQL Cluster issue: shutting down a data node properly breaks replication

I faced an issue on MySQL Cluster (5.7.28). I shut down vm6 (a data node) properly, and the MySQL replication broke. I'm trying to make the link between the data node going down and the replication breaking, but I still can't find the reason (relevant log below). Can someone help me find the link?

Slave: Got error 4009 'Cluster Failure' from NDB Error_code: 1296
[Warning] Slave: Can't lock file (errno: 157 - Could not connect to storage engine) Error_code: 1015
Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START".
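For completeness, this is how I check the data nodes' state from a SQL node (assuming the standard ndbinfo schema is available):

-- One row per node; status should be STARTED for healthy data nodes
SELECT node_id, status, start_phase FROM ndbinfo.nodes;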

mySQL Group Replication Error “ERROR 3092 (HY000)”

The full error (on 2nd node) is:

mysql> START GROUP_REPLICATION;
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.

This is after starting the 1st server without error. The log shows:

2021-02-27T19:05:45.079426Z 16 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''.
2021-02-27T19:06:10.874606Z 16 [System] [MY-013587] [Repl] Plugin group_replication reported: 'Plugin 'group_replication' is starting.'
2021-02-27T19:06:10.875454Z 17 [System] [MY-011565] [Repl] Plugin group_replication reported: 'Setting super_read_only=ON.'
2021-02-27T19:06:10.878134Z 16 [Warning] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Automatically adding IPv4 localhost address to the allowlist. It is mandatory that it is added.'
2021-02-27T19:06:10.878182Z 16 [Warning] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Automatically adding IPv6 localhost address to the allowlist. It is mandatory that it is added.'
2021-02-27T19:06:10.881114Z 18 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2021-02-27T19:06:13.300659Z 0 [ERROR] [MY-011516] [Repl] Plugin group_replication reported: 'There is already a member with server_uuid 6227f63c-dd97-11ea-8989-86fbcb300464. The member will now exit the group.'

I am following the instructions here: https://www.digitalocean.com/community/tutorials/how-to-configure-mysql-group-replication-on-ubuntu-16-04

The instructions say that the UUID should be the same for all servers.
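To compare the two different UUIDs involved, this is what I can query on each node (my own check; I'm not sure which one the tutorial means):

SELECT @@server_uuid;                    -- comes from auto.cnf; should differ between servers
SELECT @@group_replication_group_name;   -- should be identical on every member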

My droplet is a LAMP Ubuntu 20.04 droplet running MySQL 8.0.23.

Anyone run into this? Any thoughts on what’s going wrong? I’ve checked and double-checked the procedure and made sure I followed it as best I could.

How to change MySql Master-Master Replication to Master-Slave Replication

I currently have MySQL master-master replication set up, with read_only enabled on Master2. There were a lot of sync issues, so I've stopped replication from Master2 to Master1 by stopping the slave on Master1. Master1 is currently replicating to Master2 with no issues. Is this enough, or is there a better way to revert to master-slave replication? Should I run RESET SLAVE on Master1 to completely stop replication from Master2 to Master1?
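For clarity, the sequence I'm considering on Master1 is roughly this (RESET SLAVE ALL is my guess at the statement that also discards the old connection settings):

STOP SLAVE;
RESET SLAVE ALL;  -- clears relay logs and forgets the replication connection to Master2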

SQL Server Transactional Replication between SQL server 2019 Standard and SQL server 2016 Web Edition

We currently have SQL Server 2016 Web Edition in production. Since this edition supports replication as a Subscriber only, I set up a new server with SQL Server 2019 Standard Edition. I want to configure SQL Server 2019 as the Publisher and SQL Server 2016 as the Subscriber.

To initialize the data for the Publisher database on SQL Server 2019, I created a backup on SQL Server 2016 and restored it on SQL Server 2019. Since our database is very large, I then tried to initialize the replication from a backup, so I did the reverse backup-restore: I created a backup of the Publisher on SQL Server 2019 and tried to restore it to the Subscriber on SQL Server 2016. But this did not work, because SQL Server 2019 backups cannot be restored by any earlier version of SQL Server (https://docs.microsoft.com/en-us/sql/relational-databases/databases/copy-databases-with-backup-and-restore?view=sql-server-ver15). A sketch of the subscription call I mean is below.

Could you please tell me what the best method is for initializing the replication in this case? Thank you very much for reading my question!
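Roughly, the 'initialize with backup' subscription would be created on the Publisher like this (the names and backup path are placeholders; the parameter names are as I understand them from the docs):

EXEC sp_addsubscription
    @publication      = N'MyPublication',      -- placeholder
    @subscriber       = N'Subscriber2016',     -- placeholder
    @destination_db   = N'SubscriberDb',       -- placeholder
    @sync_type        = N'initialize with backup',
    @backupdevicetype = N'disk',
    @backupdevicename = N'D:\Backups\PublisherDb.bak';  -- placeholder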

Failover in MySQL Chain Replication

We have a 4-server setup: 1 master and 3 daisy-chained slaves, arranged as follows:

A (master) -> B (slave) -> C (slave) -> D (slave)

(the servers B and C and D are running with log-slave-updates)

In normal operation everything works as expected: if we add new data to A, we see it show up quickly in B and C and D

Now we want to test a failure scenario: we shut down A and want to make B the new master:

B (master) -> C (slave) -> D (slave)

It seems like what we want to do is fairly simple: switch B from Slave to Master.

We are trying to follow the documentation "Switching Sources During Failover" https://dev.mysql.com/doc/refman/8.0/en/replication-solutions-switch.html

The doc says: "On the replica Replica 1 being promoted to become the source, issue STOP REPLICA | SLAVE and RESET MASTER."

So if we’re reading correctly, to switch B from Slave to Master all we have to do is run:

STOP SLAVE;
RESET MASTER;

Running "STOP SLAVE" causes no issues, but running "RESET MASTER" breaks the replication to the downstream slaves C and D. This is the error on C:

Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Client requested master to start replication from position > file size'
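For reference, these are the checks I ran afterwards to compare coordinates (my own diagnostics, not something from the doc):

-- On B (the new master): which binary log file and position it reports after RESET MASTER
SHOW MASTER STATUS;

-- On C: which master log file and position the replication threads are requesting
SHOW SLAVE STATUS\G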

So what is the point of "RESET MASTER", and why does it break the chain? Is there any harm in omitting it, and how does one properly do a failover in MySQL chain replication?

Recovery with Postgresql logical replication

I’m using Postgresql 13 on Debian 10.6 and learning about logical replication.

I’ve set up logical replication with one publisher and one subscriber of one table. I’m wondering what my options are for recovering data (or rolling back) when, for example, someone accidentally does something on the publisher side like updating all the data in the table with the wrong value, or even deleting everything from a table. With logical replication these unintentional changes will of course be applied to the subscriber.

I’ve relentlessly searched online but have had no luck finding out what my options are. I read about PITR but I’m thinking that’s suited more for physical replication, whereas I want to test rolling back changes on a specific database on a server.
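The only mitigation I've come up with so far on the subscriber side is disabling the subscription before the bad change gets applied (my own idea, and obviously useless once the change has already been replicated):

-- Pause applying changes from the publisher (subscription name is a placeholder)
ALTER SUBSCRIPTION my_sub DISABLE;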

Is SQL Server AlwaysOn AG a multi-site active-active replication technology?

Scenario FCI: I understand multi-site active-active replication in SQL Server can be implemented with an FCI (Failover Cluster Instance). The replication part occurs at the storage level with a connected SAN. So it's considered active-active, even though only one node owns the FCI at any time; the data is replicated between sites at the storage level.

Scenario AG: I'm searching for a widely accepted definition of active-active replication. In Microsoft's documentation, the secondary replica in an AG (Availability Group) is considered active if it can be used for read-only queries. So an AlwaysOn AG could be used to get replication between the two sites, with the primary active for read-write and the secondary active for read-only.

So the difference is:

Scenario   Main Site    DR Site
FCI        read-write   read-write
AG         read-write   read-only

I've seen FCI with synchronous storage-level replication described as active-active replication. But I've also seen that a setup is considered active-active only when you can read and write at both sites. So, my questions are:

  1. Is there a widely accepted definition of active-active replication?
  2. Are scenario FCI and scenario AG considered active-active replication?
  3. What's the difference between active-active replication and master-master replication?

MongoDB cross datacenter replication without elections or high data availability

I want to replicate a MongoDB database to another node, located in another datacenter. This is to help guard against data loss in the case of a hardware failure.

We don’t want/need high availability or elections; just a ‘close to live’ read-only copy of the database, located in another DC.

Everything I read says that you need an odd number of nodes, due to elections, but this isn’t something we need/want and I can’t find anything related to just having one primary, and one secondary (I might be being blind).

Is this something we can achieve with MongoDB, and if so are there any ‘gotchas’ or serious downsides we should consider?