Replication from AWS RDS to MySQL fails because of unknown character set?

I have an RDS instance running MySQL 5.7, which I have configured as the replication master, and a MySQL 5.7 server running on an EC2 instance (Ubuntu 20.04) as the slave. Replication breaks instantly with:

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
...
                   Last_Errno: 22
                   Last_Error: Coordinator stopped because there were error(s) in the worker(s). The most recent failure being: Worker 1 failed executing transaction 'ANONYMOUS' at master log mysql-bin-changelog.000003, end_log_pos 7106130. See error log and/or performance_schema.replication_applier_status_by_worker table for more details about this failure or others, if any.
...

Looking in performance_schema.replication_applier_status_by_worker:

mysql> select * from replication_applier_status_by_worker\G
*************************** 1. row ***************************
         CHANNEL_NAME:
            WORKER_ID: 1
            THREAD_ID: NULL
        SERVICE_STATE: OFF
LAST_SEEN_TRANSACTION: ANONYMOUS
    LAST_ERROR_NUMBER: 22
   LAST_ERROR_MESSAGE: Worker 1 failed executing transaction 'ANONYMOUS' at master log mysql-bin-changelog.000003, end_log_pos 7106130; Error 'Character set '#255' is not a compiled character set and is not specified in the '/usr/share/mysql/charsets/Index.xml' file' on query. Default database: ''. Query: 'CREATE USER 'repl'@'%' IDENTIFIED WITH 'mysql_native_password' AS '*7D3FD36543D422D3527933EB1921129B72BC7433''
 LAST_ERROR_TIMESTAMP: 2021-10-22 15:22:05

So, for some reason the master sends something with an unknown character set. Is there any way to fix this? At first I thought it was about collations, and there is indeed no collation with ID 255, but the message clearly says ‘character set’. Any ideas?
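For reference, this is roughly how one can verify whether the slave knows any collation under that ID (a sketch; the query just inspects information_schema):

```sql
-- Check whether this server has any collation registered under ID 255
-- (an empty result means the ID is unknown to this server).
SELECT id, collation_name, character_set_name
FROM information_schema.collations
WHERE id = 255;
```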

Show more details on Postgres logical replication errors

I wonder whether there is a way to add more detail (such as the column name and database) to logical replication errors in the case of missing columns. I get log entries like this:

2021-09-16 14:47:37.149 CDT [32910] ERROR:  logical replication target relation "public.users" is missing some replicated columns 

I could not find anything related in the documentation. I am trying to detect these kinds of errors in order to trigger an alert or something like that. The only idea I have is to watch the logs for entries like the one above. Any ideas are welcome!
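An alternative to log watching could be a periodic check that compares the target relation's columns on both sides; a sketch (the schema and table names are taken from the error above, and the query is just a generic catalog lookup):

```sql
-- Run on both publisher and subscriber, then diff the two result sets
-- to see which replicated columns are missing on the target relation.
SELECT column_name
FROM information_schema.columns
WHERE table_schema = 'public'
  AND table_name = 'users'
ORDER BY ordinal_position;
```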

Best way to start Master – Slave replication

I’m trying to get replication set up between two servers with the eventual goal of switching the slave over to be the master (I’m trying to upgrade the hardware stack with as little interruption as possible as well as upgrading from 5.7 to 8).

Current server is not configured as a master and does not have binary logging enabled.

I’ve been reading about different methods of doing this, and the best I’ve come up with is:

  1. Stop the server
  2. Export the database
  3. Restart the server with Binlog enabled
  4. Import the database to new server
  5. Restart with Master-slave relationship enabled once the full database is imported

My understanding is that the slave will be able to use the binlog to catch up to the current state, and I’m hoping this will lead to minimal downtime. Having never tried it before, I just wanted to know whether there’s a better way to accomplish this and/or whether it will work as expected (especially with regard to the binlog).
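Step 5 would presumably boil down to something like this on the new server once the import finishes (a sketch; the host, credentials, and binlog coordinates are placeholders to be filled from the export / SHOW MASTER STATUS output):

```sql
-- Point the new server at the old one, starting from the binlog
-- position captured when the export was taken (MySQL 5.7 syntax).
CHANGE MASTER TO
  MASTER_HOST = 'old-server.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = '***',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;
```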

Does Azure MariaDB/MySQL data-in replication support certificate chains?

I would like to migrate from AWS RDS MySQL to an Azure MariaDB managed database.

For this purpose, I would set up data-in replication using TLS, roughly following the provided guide.

AWS provides certificate chains for RDS servers. Does Azure data-in replication support certificate chains?

PostgreSQL 13 sync replication implementation question and pgpool failback question

In a Pgpool-II 4.2 and PostgreSQL 13 environment: 3 servers (each hosting a Pgpool-II and a PostgreSQL instance).

I understand we can set the parameter below for different durability levels: synchronous_commit = off, local, on, remote_write, or remote_apply.

When I set it to something beyond "local", the documentation implies that the WAL information will not even be sent to the slaves until the local flush completes.

My question is: why don’t the two steps below start in parallel?

  1. local flush starts
  2. sending the WAL info to the slaves

Doing these in parallel would save some time, correct? Or is my understanding off?
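For context, the setting in question lives in postgresql.conf on the primary; a minimal fragment (the standby names are placeholders for this 3-node setup):

```
# postgresql.conf on the primary (sketch; node names are placeholders)
synchronous_commit = on        # off | local | on | remote_write | remote_apply
synchronous_standby_names = 'FIRST 1 (node2, node3)'
```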

The other question: does Pgpool-II simplify PostgreSQL failover and failback, and is it reliable? Are there real-life examples of such setups, and comments on them? Thanks!

Difference Between CD Duplication and CD Replication

Lybrodisc has specialized in the production of music playing equipment for many years and has a wealth of experience.
When you first hear the words duplicate and replicate, can you think of any differences between the two? For most people, one word seems to be synonymous with the other, but this is not the case at all when you talk about CD duplication and CD replication.
In simple terms, CD duplication is the process that most computer owners use for their data or music files. With CD duplication, the information is burned onto a disc. All you need is software and a CD burner that lets you burn the information onto a CD automatically; if you want several copies of discs containing the same data, the information has to be burned again for each one. That is essentially how the process of CD duplication works.
CD replication, on the other hand, can be referred to as ‘professional CD burning’. Instead of burning the data onto each individual CD, the CD is molded to be an exact copy of the original ‘master copy’. This is the process used to produce the CDs sold on the market; just imagine how tedious it would be if the songs on the thousands of CDs released had to be burned individually.
So, what are the other key differences between CD duplication and CD replication? CD duplication is more appropriate for personal use. It is actually inexpensive, and convenient for individuals who have computers at home. CD replication is more appropriate for commercial use, and the professional process of inputting the data onto the disk is a more reliable one. CD replication also offers a quicker, more convenient and high-quality way of replicating the data or songs from the master copy to individual disks.

We offer fast and friendly trade CD and DVD duplication and replication services directly to businesses. We can handle any quantity of CD or DVD duplication and replication, no matter how large or small, and offer FREE ASSEMBLY and PACKING. Our aim is to take the pressure off you and deliver on time, every time, with quality you will be proud of.
Full color ‘On Body’ printing is available on all quantities ensuring that your CD-ROM or DVD looks as good as it performs, and as you’d expect we provide an impressive range of packaging options.
*Low cost high quality trade center
*Wide range of packaging options
*Full color print on disc
*Fast turnaround
*Disc artwork design center
*Friendly and reliable

In the CD and DVD duplication process, blank recordable compact discs (CD-R) are used. A burner or duplicator is used to copy your data onto the blank CD-R. A CD-R or DVD-R with a printed label looks virtually identical to a replicated disc, but with one difference: blank recordable DVDs contain an additional element, a laser-sensitive dye that allows the DVD to be “burned” with the video or data from your computer or DVD recorder.
The CD and DVD duplication processes are perfect for quick turnaround and small-run capability, or for instances when the disc needs to be writable. We use professional-quality CD-Rs and can produce tens of thousands of burned and printed CDs in a matter of days. Once the burning is complete, labels are applied to the discs for a finished look, and each disc is packed in a plastic sleeve to protect it from scratches. The whole procedure is economical, stays within budget, and is faster than replication.

The first recorded sound was Thomas Edison’s voice, captured on a phonograph in 1877 reciting part of the nursery rhyme song “Mary Had a Little Lamb.”
10 years later, Emile Berliner created the first device that recorded and played back sound using a flat disc, the forerunner of the modern record.
Over the course of the next six decades, records and record players were improved and standardized, with the 33 and 45 RPM records supplanting most other formats in the post-WWII years.
By the 1970s, record player technology had evolved to the point where it has changed little in the intervening half century. In that time, cassette tapes came and went. CDs came and are going. And MP3 players were replaced by phones, as were cameras, pocket planners, and our social lives, more or less.
This year, 2020, marks the first year in more than a generation in which record sales (that is to say, physical vinyl records) have surpassed CD sales. The reasons for this are twofold: CD sales have dropped dramatically in recent years, while sales of vinyl records are actually up this year. And while you might think it’s nostalgic Boomers or Gen Xers behind the renaissance of records, surveys in fact show it’s millennial consumers driving the rising trend in vinyl sales.
The way most people listen to music has changed. “You hear music when you’re in the coffee shop, in the car, in the gym, just walking down the street sometimes; we hear it everywhere,” says Scott Hagen, CEO of Victrola. “In every store we go into, we hear it, and we’re consuming more music than ever before, but not in the same way. The ability to stop and sit and listen to an album from beginning to end, that’s something that always has been and always will be relevant.”
At some point a band, songwriter or home recordist may wish to have cassette copies made of their songs. These days record companies and publishers prefer cassette, but broadcast radio stations still prefer ¼” reel-to-reel tape or disc (if your songs are to be played on the air), as the quality is that much better. There are three methods of cassette duplication available: Loop-Bin, High Speed and Real Time.

Loop-Bin
Loop-Bin is a high speed form of duplicating where a 1″ or ½” master tape is first made from your ¼” master tape. It is then put on a machine which runs at 32 or 64 times normal speed along with slave units which copy reels of cassette tape. The cassette tape is then fed into empty cassette shells; this method is used for producing anything from 500 to 100,000 copies and is mostly used by independent and major record companies.

High Speed
A master cassette or ¼” reel is run at 8 or 16 times its normal speed along with slave cassette units. These slave units copy both sides simultaneously in stereo or mono and there can be one slave unit copying one cassette, or many slaves copying many cassettes at once. High Speed duplicating can cater for short runs (100+) to runs of thousands.

Real Time
A ¼” reel or cassette master is played at normal speed (which could be 15ips or 7½ips for reels or 1⅞ips if it’s a cassette). A bank of 5 to 50 cassette decks all run together to copy at normal speed. Generally, real time duplication caters mainly for runs of 10 to around 1000.

Noise Reduction
Most cassette duplicators can encode your cassettes with Dolby B and some can encode with Dolby C noise reduction. However, if you use a high speed duplicator and you want Dolby on your copies, then make sure your master cassette is recorded initially with Dolby on it. You should then be able to have the copies reproduced with Dolby. High Speed or Real Time duplicating are most likely to suit the home recordist, band, songwriter or small label.

Your Master Tape
This should be ¼” reel-to-reel running at 15ips or 7½ips stereo half track or quarter track. You can use cassette masters (from the studio), but they are not as good quality as reel-to-reel. Do remember also, that if your songs are not in the right running order, then a Duplicating Suite can re-edit the tape, but there may be an additional charge. If you choose the Loop-Bin or High Speed methods be prepared for a charge for making their copy master which is necessary for each of these processes.

Tape Types
When you telephone or go to see a duplicator, ask what tape he uses (for example Ferric, Chrome or Metal), and also find out what brand it is. Named brands like Agfa, BASF, TDK or Maxell are all pretty much a safe bet. If he uses a name you do not know, then listen to a copy, preferably of your master, and compare the quality with other tape brands. You may decide to use your own bought tapes instead of those supplied by the duplicator, in which case there will be a charge per hour to copy onto your own tape, which can be anywhere from £5.00 to £10.00 per hour plus VAT.

Because CD replication involves quite a bit of setup, it’s usually done for larger runs.
Most manufacturers do it on orders of a thousand or more. We replicate CDs in quantities as low as 300.
However, what do you do if you need fewer than three hundred discs?

The SQL Server replication subscriber cannot start: ‘No agent status information is available.’

I am trying to run merge replication of a database between two servers (both SQL Server version 15). Both servers are in the domain. On both I am logged in using the same domain account (with all server roles enabled, such as sysadmin, serveradmin, setupadmin).

I created the publication using the wizard and selected the following options:

  • ‘Merge publication’
  • ‘SQL Server 2008 or later’
  • Then I selected one table as object to publish
  • No filters
  • I unchecked ‘Schedule the Snapshot Agent to run at the following times’
  • Snapshot Agent: ‘Run under the following windows account’ – I entered my domain account + password
  • Connect to the publisher: ‘By impersonating the process account’

Publication has been created successfully.

I created the subscription using the wizard and selected the following options:

  • Publication dialog – I have selected publication which I have just created
  • ‘Run each agent at its Subscriber (pull subscription)’
  • I have selected the database on the destination server (I created it before running subscription wizard)
  • Merge agent security dialog: again I have entered my domain credentials, Connect to Distributor: ‘by impersonating…’, Connect to Subscriber: ‘by impersonating…’
  • Synchronization schedule dialog: ‘Subscriber’, ‘Run continuously’
  • Initialize Synchronizations dialog: ‘Initialize’, ‘At first synchronization’
  • Subscription Type: ‘Client’, ‘First to Publisher wins’

The subscription was created successfully.

But when I right-click on the subscriber and select ‘View synchronization status’, this message appears:

'No agent status information is available.' 

and when I click the ‘Start’ button, this error message appears:

'The agent could not be started. SQLServerAgent Error: Request to run job DEV-SQL1\DB1-ag_publ5-ag_publication_5_2-DEV-SQL2\DB1-ag_subs5- 0 (from User sa) refused because the job is already running from a request by User sa. Changed database context to 'ag_subs5'. (Microsoft SQL Server, Error: 22022) ' 

I created snapshots on the publisher side, stopped the related job on the subscriber side, and then pressed the ‘Start’ button in the ‘View synchronization status’ dialog, but the result is always as described above.

Any suggestions on how I can start my subscriber are kindly appreciated.
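For the record, this is roughly how the stuck job on the subscriber side can be inspected and stopped before retrying (a sketch; the job name placeholder stands for the full name shown in the 22022 error message):

```sql
-- List currently executing SQL Server Agent jobs to find the merge agent
-- that is reported as "already running".
EXEC msdb.dbo.sp_help_job @execution_status = 1;

-- Stop the stuck job by name (placeholder for the full job name from
-- the error text), then press 'Start' again in the synchronization dialog.
EXEC msdb.dbo.sp_stop_job @job_name = N'<merge-agent-job-name>';
```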

Efficiently DROP huge table with cascading replication in PostgreSQL

What I have:

Database: PostgreSQL 9.3

Table T,

  • Structure: 10 integers/bools and 1 text field
  • Size: Table 89 GB / Toast 1046 GB
  • Usage: about 10 inserts / minute
  • Other: reltuples 59913608 / relpages 11681783

Running cascading replication: Master -> Slave 1 -> Slave 2

  • Replication Master -> Slave 1 is quite fast, a good channel.
  • Replication Slave 1 -> Slave 2 is slow, cross-continent, about 10 Mbit/s.

This is a live, used database with about 1.5TB more data in it.


What needs to be done:

  • Drop all data to start with a fresh setup (so we can do constant cleanups and not allow it to grow this big).

Question: What would be the most efficient way to achieve this:

  • without causing huge lags between Master and Slave 1
  • without causing Slave 2 to get irreversibly lagged to a state where catching up is not possible

As I see it:

  • Safe way – make a copy, swap places, then DELETE the data while constantly watching the lag
  • Other way – make a copy, swap places, then DROP the table – but this would produce an enormous amount of data at once, and Slave 2 would get lost?
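The "safe way" delete could be batched, checking standby lag between batches; a sketch (the table name, key column, and batch size are placeholders, and pg_stat_replication on the master only shows the directly attached Slave 1):

```sql
-- Repeat until no rows remain: each small transaction keeps the WAL
-- volume per step bounded for the slow Slave 1 -> Slave 2 link.
DELETE FROM t
WHERE id IN (SELECT id FROM t ORDER BY id LIMIT 10000);

-- Between batches, check how far the directly attached standby lags
-- (9.3 column names; run on the master).
SELECT client_addr, state, sent_location, replay_location
FROM pg_stat_replication;
```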

After setting up master-slave replication on MySQL 8.0.24, if the master database goes down, how does the slave switch over to become the master?

After setting up master-slave replication on MySQL 8.0.24, if the master database goes down, how does the slave database switch over to become the master? If it can switch, what are the exact steps?

If switching is not possible, what mode can be used so that when the primary database goes down, another database that is synchronized with it in real time can keep operating normally and accept both reads and writes?
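If a manual switch is acceptable, the promotion usually amounts to something like this on the replica (a sketch; STOP/RESET REPLICA is the 8.0.22+ statement spelling, and the read_only handling depends on your configuration):

```sql
-- On the replica, after confirming all relayed events have been applied:
STOP REPLICA;
RESET REPLICA ALL;          -- forget the old source's coordinates
SET GLOBAL read_only = OFF; -- allow writes on the promoted server
```

Clients (or a proxy in front of the databases) then need to be repointed at the promoted server.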