Resolving the MySQL Active-Active Replication Dilemma


Active-active replication has been a challenge in the MySQL ecosystem for years, until truly dedicated solutions were introduced – first Galera (and with it Percona XtraDB Cluster (PXC)) replication, around 2011, and then Group Replication (first GA in 2016).

Now, with both multi-writer technologies available, do we still need traditional asynchronous replication set up in an active-active topology? Apparently yes, there are still valid use cases. And you may need it not only when Galera/PXC or GR are not suitable for some reason, but also when you actually use them. The most typical case, of course, is a second cluster in a different geographic location acting as a Disaster Recovery site. If you still wonder why you would need one, just recall the news from a few weeks ago about the OVH incident, which showed how a whole data center can disappear.

So, a DR site needs to replicate online from the primary cluster and be able to take over the workload very quickly if needed. But it is also expected to allow switching back effortlessly, hence very often the asynchronous replication channels are set up in both directions.
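
Such a bidirectional setup is usually nothing more than two ordinary asynchronous channels pointing at each other. A minimal sketch, assuming GTID auto-positioning is enabled, could look like the following (host names, the replication user, and channel names are placeholders to adapt to your environment):

MySQL
-- on the DR node, replicate from the main site:
CHANGE REPLICATION SOURCE TO SOURCE_HOST='main1', SOURCE_PORT=3306,
  SOURCE_USER='repl', SOURCE_PASSWORD='...', SOURCE_AUTO_POSITION=1
  FOR CHANNEL "from_main";
START REPLICA FOR CHANNEL "from_main";
-- and the mirror-image channel on the main node:
CHANGE REPLICATION SOURCE TO SOURCE_HOST='dr1', SOURCE_PORT=3306,
  SOURCE_USER='repl', SOURCE_PASSWORD='...', SOURCE_AUTO_POSITION=1
  FOR CHANNEL "from_dr";
START REPLICA FOR CHANNEL "from_dr";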

A very good writeup about this can be found here: How Not to do MySQL High Availability: Geographic Node Distribution with Galera-Based Replication Misuse.

Now, after seeing active-active setups fail in the same ways over and over for years, I thought there are still too few warnings out there about the risks, so I decided to add one more little stone to the stack.

Before I continue, I have to mention this great webinar presented last year by my colleague Sveta. You should definitely watch it if you are interested in the subject: How Safe is Asynchronous Master-Master Setup in MySQL?.


Failure Test

So, let me demonstrate a simple test case, which may be an eye-opener to some.

First, let’s use the great dbdeployer tool to launch two MySQL instances with active-active replication in just one command:

Shell
$ dbdeployer deploy replication --topology=all-masters --nodes=2 --concurrent 8.0.23
all-masters directory installed in $HOME/sandboxes/all_masters_msb_8_0_23
run 'dbdeployer usage multiple' for basic instructions
$HOME/sandboxes/all_masters_msb_8_0_23/initialize_ms_nodes
# server: 1 
# server: 2
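
A quick sanity check with the sandbox's use_all helper (assuming the default dbdeployer layout above) confirms that both servers are up, have distinct server_id values, and run with GTIDs enabled:

Shell
$ $HOME/sandboxes/all_masters_msb_8_0_23/use_all -e "select @@server_id, @@gtid_mode"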

Now, create a very simple table with one example data row:

MySQL
node1 (test) > create table t1 (id int primary key auto_increment, a int);
Query OK, 0 rows affected (0.04 sec)
node1 (test) > insert into t1 set a=500;
Query OK, 1 row affected (0.01 sec)

So, at this point both nodes have the same data:

MySQL
node1 (test) > select * from test.t1;
+----+------+
| id | a    |
+----+------+
|  1 | 500  |
+----+------+
1 row in set (0.00 sec)
node2 (test) > select * from test.t1;
+----+------+
| id | a    |
+----+------+
|  1 | 500  |
+----+------+
1 row in set (0.00 sec)

In the next step, let's simulate a small replication lag by introducing a delay of one second, as it will allow us to reproduce the problem at will:

MySQL
node1 (test) > stop replica sql_thread; CHANGE REPLICATION SOURCE TO SOURCE_DELAY=1 FOR CHANNEL "node2"; start replica sql_thread;
Query OK, 0 rows affected (0.01 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
node2 (test) > stop replica sql_thread; CHANGE REPLICATION SOURCE TO SOURCE_DELAY=1 FOR CHANNEL "node1"; start replica sql_thread;
Query OK, 0 rows affected (0.01 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
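
To double-check that the delay is in place, the SQL_Delay field reported by SHOW REPLICA STATUS should now show 1 for the given channel (shown here for node1; node2 is analogous):

MySQL
node1 (test) > show replica status for channel "node2"\G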

OK, so what happens if we send an update to both nodes, very close in time, setting the same row to a different value on each:

Shell
$ all_masters_msb_8_0_23/use_all -e "update test.t1 set a=@@server_id where id=1"
# server: 1 
# server: 2

As a result, the two nodes end up with different values for the same row!

MySQL
node1 (test) > select * from test.t1;
+----+------+
| id | a    |
+----+------+
|  1 | 200  |
+----+------+
1 row in set (0.00 sec)
node2 (test) > select * from test.t1;
+----+------+
| id | a    |
+----+------+
|  1 | 100  |
+----+------+
1 row in set (0.00 sec)
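
A quick manual way to spot such a divergence is to compare the table checksums on both nodes; the returned Checksum values will no longer match here:

MySQL
node1 (test) > checksum table test.t1;
node2 (test) > checksum table test.t1;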

Is Replication Broken?

You may think that replication is now broken and that some error will alert you about the situation? Not at all!

MySQL
node1 (test) > show replica status\G
*************************** 1. row ***************************
             Replica_IO_Running: Yes
            Replica_SQL_Running: Yes
        Seconds_Behind_Source: 0
Source_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
            Executed_Gtid_Set: 00023824-1111-1111-1111-111111111111:1-3,
00023825-2222-2222-2222-222222222222:1
node2 (test) > show replica status\G
*************************** 1. row ***************************
             Replica_IO_Running: Yes
            Replica_SQL_Running: Yes
        Seconds_Behind_Source: 0
Source_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
            Executed_Gtid_Set: 00023824-1111-1111-1111-111111111111:1-3,
00023825-2222-2222-2222-222222222222:1
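
Nothing suspicious shows up in performance_schema either; for example, the applier status tables report no errors on either node (a quick check, output omitted here):

MySQL
node1 (test) > select channel_name, last_error_number, last_error_message from performance_schema.replication_applier_status_by_worker;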

Or, did you hope enabling the GTID feature would prevent inconsistency from happening? Well, see the Executed_Gtid_Set on both nodes – it is identical, yet the nodes have different data.

What happens next? Well, it depends: maybe replication will eventually fail with an error if the same row is modified again, but it is also possible that the inconsistency will spread further without you even noticing it!
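
To see it actually break in this sandbox, it is enough, for example, to make both nodes insert a row with the same explicit primary key within the delay window: each node applies its local insert first and then fails on the replicated one, so both SQL threads stop with a duplicate-key error (errno 1062):

Shell
$ all_masters_msb_8_0_23/use_all -e "insert into test.t1 values (2, @@server_id)"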

Lesson learned?

I hope this simple example emphasizes the need for extra care when dealing with multi-primary replication topologies. The remedy, though, is usually quite simple:

1 – Make sure to allow only one node at a time for writes (super_read_only is here to help you).

2 – Check the data consistency regularly with appropriate tools (MySQL replication primer with pt-table-checksum and pt-table-sync); see the sketch of both remedies right below.
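
A minimal sketch of both remedies, assuming the dbdeployer sandbox used above (default msandbox credentials, with node1 listening on port 23824 as its GTID UUID suggests); adjust host, port, and credentials to your environment:

MySQL
node2 (test) > set global super_read_only = ON;

Shell
$ pt-table-checksum --databases=test --replicate=percona.checksums h=127.0.0.1,P=23824,u=msandbox,p=msandbox

Enabling super_read_only also implies read_only, so even privileged accounts can no longer write on that node, and any differences reported by pt-table-checksum can then be repaired with pt-table-sync.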

