
InnoDB ClusterSet Deployment With MySQLRouter

source link: https://www.percona.com/blog/innodb-clusterset-deployment-with-mysqlrouter/

August 17, 2023

Anil Joshi

This blog post covers the basic setup of an InnoDB ClusterSet environment, which provides disaster tolerance for InnoDB Cluster deployments by associating a primary InnoDB Cluster with one or more replica clusters in alternate locations/different data centers. InnoDB ClusterSet automatically manages replication from the primary cluster to the replica clusters via a dedicated ClusterSet asynchronous replication channel. If the primary cluster becomes inaccessible due to a loss of network connectivity or a data center issue, you can make a replica cluster active in its place.

Now, let’s see in detail how exactly we can configure the topology.

InnoDB ClusterSet Deployment

We have used the sandbox environments available via the MySQL Shell (mysqlsh) utility for this setup.

Environment

Cluster1:
         127.0.0.1:3308
         127.0.0.1:3309
         127.0.0.1:3310
Cluster2:
         127.0.0.1:3311
         127.0.0.1:3312
         127.0.0.1:3313
Router:
         127.0.0.1:6446/6447

Let’s set up the first cluster (“cluster1”)

  •  Deploying the sandboxes.
MySQL JS > dba.deploySandboxInstance(3308)
MySQL JS > dba.deploySandboxInstance(3309)
MySQL JS > dba.deploySandboxInstance(3310)
  •  Then, we need to perform some pre-checks before initiating the cluster.
### Connecting to the nodes one by one.
MySQL JS > shell.connect('root@localhost:3308')
MySQL localhost:3308 ssl JS > shell.connect('root@localhost:3309')
MySQL localhost:3309 ssl JS > shell.connect('root@localhost:3310')
###The below commands check whether each instance satisfies the InnoDB Cluster requirements (Group Replication settings) and fix any missing requirements automatically. Here, we also configure a new user "iroot" for cluster administration.
MySQL localhost:3308 ssl JS > dba.checkInstanceConfiguration('root@localhost:3308')
MySQL localhost:3308 ssl JS > dba.configureInstance('root@127.0.0.1:3308',{clusterAdmin: 'iroot', clusterAdminPassword: 'Iroot@1234'})
MySQL localhost:3309 ssl JS > dba.checkInstanceConfiguration('root@localhost:3309')
MySQL localhost:3309 ssl JS > dba.configureInstance('root@127.0.0.1:3309',{clusterAdmin: 'iroot', clusterAdminPassword: 'Iroot@1234'})
MySQL localhost:3310 ssl JS > dba.checkInstanceConfiguration('root@localhost:3310')
MySQL localhost:3310 ssl JS > dba.configureInstance('root@127.0.0.1:3310',{clusterAdmin: 'iroot', clusterAdminPassword: 'Iroot@1234'})

Once all the instances are prepared, we can create the cluster from the seed node. The createCluster() command performs all the hidden steps of initializing group replication; later on, the other nodes join the group via distributed recovery (incremental or clone).
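As a side note, when adding instances to the cluster later in this walkthrough, the recovery method can also be pinned explicitly instead of relying on auto-detection; a hedged example (the recoveryMethod option accepts 'incremental', 'clone', or 'auto'):

MySQL localhost:3308 ssl JS > cluster1.addInstance('iroot@localhost:3309', {recoveryMethod: 'clone', password: 'Iroot@1234'})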

InnoDB Cluster is built on top of Group Replication, which provides automatic membership management, fault tolerance, and automatic failover. It gives us an easy interface to deploy and manage complex topologies with DR support.
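Since it is plain Group Replication underneath, the membership can be verified at any time directly from SQL once the cluster below is up; performance_schema.replication_group_members is the standard Group Replication view:

mysql> SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE, MEMBER_ROLE FROM performance_schema.replication_group_members;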

  • We will bootstrap the cluster with an initial node ("localhost:3308").
MySQL localhost:3310 ssl JS > shell.connect('iroot@localhost:3308')
MySQL localhost:3308 ssl JS > cluster1 = dba.createCluster('Cluster1')
MySQL localhost:3308 ssl JS > cluster1 = dba.getCluster()

Output:

MySQL localhost:3308 ssl JS > cluster1.status()
{
    "clusterName": "Cluster1",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3308",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "127.0.0.1:3308": {
                "address": "127.0.0.1:3308",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.31"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "127.0.0.1:3308"
}
  • Here, we have successfully bootstrapped the first node. Next, the other nodes will join the cluster using the CLONE Plugin.
MySQL localhost:3308 ssl JS > cluster1.addInstance("iroot@localhost:3309",{password:'Iroot@1234'})

Output:

* Waiting for clone to finish...
NOTE: 127.0.0.1:3309 is being cloned from 127.0.0.1:3308
** Stage DROP DATA: Completed
** Clone Transfer 
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
NOTE: 127.0.0.1:3309 is shutting down...
* Waiting for server restart... ready
* 127.0.0.1:3309 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 73.66 MB transferred in about 1 second (~73.66 MB/s)
State recovery already finished for '127.0.0.1:3309'
The instance '127.0.0.1:3309' was successfully added to the cluster.
MySQL localhost:3308 ssl JS > cluster1.addInstance("iroot@localhost:3310",{password:'Iroot@1234'})

Output:

* Waiting for clone to finish...
NOTE: 127.0.0.1:3310 is being cloned from 127.0.0.1:3309
** Stage DROP DATA: Completed
** Clone Transfer 
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
NOTE: 127.0.0.1:3310 is shutting down...
* Waiting for server restart... ready
* 127.0.0.1:3310 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 73.66 MB transferred in about 1 second (~73.66 MB/s)
State recovery already finished for '127.0.0.1:3310'
The instance '127.0.0.1:3310' was successfully added to the cluster.
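The clone progress can also be tracked from SQL on the joining node; the clone plugin exposes the performance_schema.clone_status and clone_progress tables. A quick check:

mysql> SELECT STAGE, STATE, END_TIME FROM performance_schema.clone_progress;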
  • At this stage, our first cluster is ready with all three nodes.
MySQL localhost:3308 ssl JS > cluster1.status()

Output:

    "clusterName": "Cluster1",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3308",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "127.0.0.1:3308": {
                "address": "127.0.0.1:3308",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.31"
            "127.0.0.1:3309": {
                "address": "127.0.0.1:3309",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.31"
            "127.0.0.1:3310": {
                "address": "127.0.0.1:3310",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.31"
        "topologyMode": "Single-Primary"
    "groupInformationSourceMember": "127.0.0.1:3308"

Let’s now proceed with the second cluster (“cluster2”) setup

  • Deploying the sandboxes via MySQL Shell.
MySQL JS > dba.deploySandboxInstance(3311)
MySQL JS > dba.deploySandboxInstance(3312)
MySQL JS > dba.deploySandboxInstance(3313)
  •  Similarly, perform some pre-checks as we did for “cluster1” nodes.
# Connecting to the nodes one by one.
MySQL  JS > shell.connect('root@localhost:3311')
MySQL  JS > shell.connect('root@localhost:3312')
MySQL  JS > shell.connect('root@localhost:3313')
# The below commands check whether each instance satisfies the InnoDB Cluster requirements (Group Replication settings) and fix any missing requirements automatically. Here, we also configure a new user "iroot" for cluster administration.
MySQL  localhost:3308 ssl  JS > dba.checkInstanceConfiguration('root@localhost:3311')
MySQL  localhost:3308 ssl  JS > dba.configureInstance('root@127.0.0.1:3311',{clusterAdmin: 'iroot', clusterAdminPassword: 'Iroot@1234'})
MySQL  localhost:3308 ssl  JS > dba.checkInstanceConfiguration('root@localhost:3312')
MySQL  localhost:3308 ssl  JS > dba.configureInstance('root@127.0.0.1:3312',{clusterAdmin: 'iroot', clusterAdminPassword: 'Iroot@1234'})
MySQL  localhost:3308 ssl  JS > dba.checkInstanceConfiguration('root@localhost:3313')
MySQL  localhost:3308 ssl  JS > dba.configureInstance('root@127.0.0.1:3313',{clusterAdmin: 'iroot', clusterAdminPassword: 'Iroot@1234'})
  •  Next, we will create the ClusterSet topology. Node (127.0.0.1:3311) will be the primary node of "cluster2"; it first syncs with the existing "cluster1", and the rest of the nodes then join it via the clone/incremental process.
1) First, connect to “cluster1” node.
MySQL localhost:3308 ssl JS > \c iroot@127.0.0.1:3308
MySQL 127.0.0.1:3308 ssl JS > cluster1 = dba.getCluster()
2) Here, "cluster1" joins the ClusterSet topology:
MySQL 127.0.0.1:3308 ssl JS > myclusterset = cluster1.createClusterSet('firstclusterset')

Output:

ClusterSet successfully created. Use ClusterSet.createReplicaCluster() to add Replica Clusters to it.
<ClusterSet:firstclusterset>
3) Verifying the status.
MySQL 127.0.0.1:3308 ssl JS > myclusterset.status({extended: 1})

Output:

    "clusters": {
        "Cluster1": {
            "clusterRole": "PRIMARY",
            "globalStatus": "OK",
            "primary": "127.0.0.1:3308",
            "status": "OK",
            "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
            "topology": {
                "127.0.0.1:3308": {
                    "address": "127.0.0.1:3308",
                    "memberRole": "PRIMARY",
                    "mode": "R/W",
                    "status": "ONLINE",
                    "version": "8.0.31"
                "127.0.0.1:3309": {
                    "address": "127.0.0.1:3309",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                "127.0.0.1:3310": {
                    "address": "127.0.0.1:3310",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
            "transactionSet": "39c28b63-285a-11ee-a411-5254004d77d3:1-4,59d8e60a-285d-11ee-bb44-5254004d77d3:1-85,59d8f3a6-285d-11ee-bb44-5254004d77d3:1-5"
    "domainName": "firstclusterset",
    "globalPrimaryInstance": "127.0.0.1:3308",
    "metadataServer": "127.0.0.1:3308",
    "primaryCluster": "Cluster1",
    "status": "HEALTHY",
    "statusText": "All Clusters available."
 4) Now, node ("127.0.0.1:3311") will sync with the existing "cluster1" via an asynchronous replication process.
MySQL  127.0.0.1:3308 ssl  JS > \c iroot@127.0.0.1:3311
MySQL  127.0.0.1:3311 ssl  JS > cluster2 = myclusterset.createReplicaCluster("127.0.0.1:3311", "cluster2", {recoveryProgress: 1, timeout: 10})

Output:

... Replica Cluster 'cluster2' successfully created on ClusterSet 'firstclusterset'. ...
5) Next, the other nodes join "cluster2" via the clone process.
MySQL  127.0.0.1:3311 ssl  JS > cluster2.addInstance("iroot@127.0.0.1:3312",{password:'Iroot@1234'})
MySQL  127.0.0.1:3311 ssl  JS > cluster2.addInstance("iroot@127.0.0.1:3313",{password:'Iroot@1234'})
6) Finally, checking the status of our clusterset environment.
MySQL  127.0.0.1:3311 ssl  JS > myclusterset = dba.getClusterSet()
MySQL  127.0.0.1:3311 ssl  JS > myclusterset.status({extended: 1})

Output:

    "clusters": {
        "Cluster1": {
            "clusterRole": "PRIMARY",
            "globalStatus": "OK",
            "primary": "127.0.0.1:3308",
            "status": "OK",
            "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
            "topology": {
                "127.0.0.1:3308": {
                    "address": "127.0.0.1:3308",
                    "memberRole": "PRIMARY",
                    "mode": "R/W",
                    "status": "ONLINE",
                    "version": "8.0.31"
                "127.0.0.1:3309": {
                    "address": "127.0.0.1:3309",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                "127.0.0.1:3310": {
                    "address": "127.0.0.1:3310",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
            "transactionSet": "39c28b63-285a-11ee-a411-5254004d77d3:1-4,59d8e60a-285d-11ee-bb44-5254004d77d3:1-124,59d8f3a6-285d-11ee-bb44-5254004d77d3:1-5"
        "cluster2": {
            "clusterRole": "REPLICA",
            "clusterSetReplication": {
                "applierStatus": "APPLIED_ALL",
                "applierThreadState": "Waiting for an event from Coordinator",
                "applierWorkerThreads": 4,
                "receiver": "127.0.0.1:3311",
                "receiverStatus": "ON",
                "receiverThreadState": "Waiting for source to send event",
                "source": "127.0.0.1:3308"
            "clusterSetReplicationStatus": "OK",
            "globalStatus": "OK",
            "status": "OK",
            "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
            "topology": {
                "127.0.0.1:3311": {
                    "address": "127.0.0.1:3311",
                    "memberRole": "PRIMARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                "127.0.0.1:3312": {
                    "address": "127.0.0.1:3312",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                "127.0.0.1:3313": {
                    "address": "127.0.0.1:3313",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
            "transactionSet": "2e71122e-2862-11ee-b81c-5254004d77d3:1-5,39c28b63-285a-11ee-a411-5254004d77d3:1-4,59d8e60a-285d-11ee-bb44-5254004d77d3:1-124,59d8f3a6-285d-11ee-bb44-5254004d77d3:1-5",
            "transactionSetConsistencyStatus": "OK",
            "transactionSetErrantGtidSet": "",
            "transactionSetMissingGtidSet": ""
    "domainName": "firstclusterset",
    "globalPrimaryInstance": "127.0.0.1:3308",
    "metadataServer": "127.0.0.1:3308",
    "primaryCluster": "Cluster1",
    "status": "HEALTHY",
    "statusText": "All Clusters available."

Here, the ClusterSet topology is now ready with all six nodes.
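Behind the scenes, the replica cluster receives changes over the dedicated asynchronous channel named clusterset_replication mentioned earlier. It can be observed on the primary node of cluster2 (127.0.0.1:3311) with a standard replication query, for example:

mysql> SELECT CHANNEL_NAME, SERVICE_STATE FROM performance_schema.replication_connection_status WHERE CHANNEL_NAME = 'clusterset_replication';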

In the next phase, we will bootstrap MySQL Router against our newly created ClusterSet environment:

  • First, we will create a dedicated user for MySQL Router monitoring/management.
MySQL  127.0.0.1:3311 ssl  JS > \c iroot@localhost:3308
MySQL  localhost:3308 ssl  JS > cluster1 = dba.getCluster();
MySQL  localhost:3308 ssl  JS > cluster1.setupRouterAccount('router_usr')

Output:

Missing the password for new account router_usr@%. Please provide one.
Password for new account: **********
Confirm password: **********
Creating user router_usr@%.
Account router_usr@% was successfully created.
  • Bootstrap the router with the user ("router_usr") and the router name ("Router1").
[vagrant@localhost ~]$ sudo mysqlrouter --bootstrap iroot@127.0.0.1:3308 --account=router_usr --name='Router1' --user root --force

We are using --force here because, without it, mysqlrouter won't recognize the ClusterSet; --force reconfigures the existing router deployment.

Here, we see some useful information that is later required to connect to the database or manage the service.

# MySQL Router 'Router1' configured for the ClusterSet 'firstclusterset'
After this MySQL Router has been started with the generated configuration
    $ /etc/init.d/mysqlrouter restart
or
    $ systemctl start mysqlrouter
or
    $ mysqlrouter -c /etc/mysqlrouter/mysqlrouter.conf
ClusterSet 'firstclusterset' can be reached by connecting to:
## MySQL Classic protocol
- Read/Write Connections: localhost:6446
- Read/Only Connections:  localhost:6447
## MySQL X protocol
- Read/Write Connections: localhost:6448
- Read/Only Connections:  localhost:6449
  • Finally, start the mysqlrouter service:
sudo mysqlrouter -c /etc/mysqlrouter/mysqlrouter.conf &

Validating the connection route

  •  Connect to the router port “6446” and create some demo table/data:
shell> mysql -h 127.0.0.1 -u root -pRoot@1234 -P 6446 -e "create database sbtest;use sbtest;create table sbtest1 (id int(10) not null auto_increment primary key, user varchar(50));insert into sbtest1(user) values('test');"
  •  Connect to the router port "6447" for reads. By default, these connections are balanced among the nodes of the primary cluster (cluster1).
[vagrant@localhost ~]$ mysql -h 127.0.0.1 -u root -pRoot@1234 -P 6447 -e "use sbtest;select * from sbtest1;select @@server_id;"
+----+------+
| id | user |
+----+------+
|  1 | test |
+----+------+
+-------------+
| @@server_id |
+-------------+
|   194452202 |
+-------------+
[vagrant@localhost ~]$ mysql -h 127.0.0.1 -u root -pRoot@1234 -P 6447 -e "use sbtest;select * from sbtest1;select @@server_id;"
+----+------+
| id | user |
+----+------+
|  1 | test |
+----+------+
+-------------+
| @@server_id |
+-------------+
|  2376678236 |
+-------------+
[vagrant@localhost ~]$ mysql -h 127.0.0.1 -u root -pRoot@1234 -P 6447 -e "use sbtest;select * from sbtest1;select @@server_id;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+----+------+
| id | user |
+----+------+
|  1 | test |
+----+------+
+-------------+
| @@server_id |
+-------------+
|   194452202 |
+-------------+

So, by default, all the connections are routed to the default "Primary" cluster, which in our case is "Cluster1"; however, we can change the primary component based on the requirement.

Changing ClusterSet topology

MySQL  localhost:3308 ssl  JS > myclusterset=dba.getClusterSet()
MySQL  localhost:3308 ssl  JS > myclusterset.status({extended:1})

Output:

<ClusterSet:firstclusterset>
    "clusters": {
        "Cluster1": {
            "clusterRole": "PRIMARY",
            "globalStatus": "OK",
            "primary": "127.0.0.1:3308",
            "status": "OK",
            "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
            "topology": {
                "127.0.0.1:3308": {
                    "address": "127.0.0.1:3308",
                    "memberRole": "PRIMARY",
                    "mode": "R/W",
                    "status": "ONLINE",
                    "version": "8.0.31"
                "127.0.0.1:3309": {
                    "address": "127.0.0.1:3309",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                "127.0.0.1:3310": {
                    "address": "127.0.0.1:3310",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
            "transactionSet": "39c28b63-285a-11ee-a411-5254004d77d3:1-4,59d8e60a-285d-11ee-bb44-5254004d77d3:1-143,59d8f3a6-285d-11ee-bb44-5254004d77d3:1-5"
        "cluster2": {
            "clusterRole": "REPLICA",
            "clusterSetReplication": {
                "applierStatus": "APPLIED_ALL",
                "applierThreadState": "Waiting for an event from Coordinator",
                "applierWorkerThreads": 4,
                "receiver": "127.0.0.1:3311",
                "receiverStatus": "ON",
                "receiverThreadState": "Waiting for source to send event",
                "source": "127.0.0.1:3308"
            "clusterSetReplicationStatus": "OK",
            "globalStatus": "OK",
            "status": "OK",
            "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
            "topology": {
                "127.0.0.1:3311": {
                    "address": "127.0.0.1:3311",
                    "memberRole": "PRIMARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                "127.0.0.1:3312": {
                    "address": "127.0.0.1:3312",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                "127.0.0.1:3313": {
                    "address": "127.0.0.1:3313",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
            "transactionSet": "2e71122e-2862-11ee-b81c-5254004d77d3:1-5,39c28b63-285a-11ee-a411-5254004d77d3:1-4,59d8e60a-285d-11ee-bb44-5254004d77d3:1-143,59d8f3a6-285d-11ee-bb44-5254004d77d3:1-5",
            "transactionSetConsistencyStatus": "OK",
            "transactionSetErrantGtidSet": "",
            "transactionSetMissingGtidSet": ""
    "domainName": "firstclusterset",
    "globalPrimaryInstance": "127.0.0.1:3308",
    "metadataServer": "127.0.0.1:3308",
    "primaryCluster": "Cluster1",
    "status": "HEALTHY",
    "statusText": "All Clusters available."
  • Changing the Primary cluster from "cluster1" to "cluster2":
MySQL localhost:3308 ssl JS > myclusterset.setPrimaryCluster('cluster2')

Output:

Switching the primary cluster of the clusterset to 'cluster2'
* Verifying clusterset status
** Checking cluster cluster2
  Cluster 'cluster2' is available
** Checking cluster Cluster1
  Cluster 'Cluster1' is available
* Reconciling 5 internally generated GTIDs
* Refreshing replication account of demoted cluster
* Synchronizing transaction backlog at 127.0.0.1:3311
** Transactions replicated  ############################################################  100%
* Updating metadata
* Updating topology
** Changing replication source of 127.0.0.1:3309 to 127.0.0.1:3311
** Changing replication source of 127.0.0.1:3310 to 127.0.0.1:3311
** Changing replication source of 127.0.0.1:3308 to 127.0.0.1:3311
* Acquiring locks in replicaset instances
** Pre-synchronizing SECONDARIES
** Acquiring global lock at PRIMARY
** Acquiring global lock at SECONDARIES
* Synchronizing remaining transactions at promoted primary
** Transactions replicated  ############################################################  100%
* Updating replica clusters
Cluster 'cluster2' was promoted to PRIMARY of the clusterset. The PRIMARY instance is '127.0.0.1:3311'
  • If we check the output again, we can observe that "clusterRole: PRIMARY" has shifted to "cluster2".
MySQL  localhost:3308 ssl  JS > myclusterset.status({extended:1})

Output:

    "clusters": {
        "Cluster1": {
            "clusterRole": "REPLICA",
            "clusterSetReplication": {
                "applierStatus": "APPLIED_ALL",
                "applierThreadState": "Waiting for an event from Coordinator",
                "applierWorkerThreads": 4,
                "receiver": "127.0.0.1:3308",
                "receiverStatus": "ON",
                "receiverThreadState": "Waiting for source to send event",
                "source": "127.0.0.1:3311"
            "clusterSetReplicationStatus": "OK",
            "globalStatus": "OK",
            "status": "OK",
            "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
            "topology": {
                "127.0.0.1:3308": {
                    "address": "127.0.0.1:3308",
                    "memberRole": "PRIMARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                "127.0.0.1:3309": {
                    "address": "127.0.0.1:3309",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                "127.0.0.1:3310": {
                    "address": "127.0.0.1:3310",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
            "transactionSet": "2e71122e-2862-11ee-b81c-5254004d77d3:1-5,39c28b63-285a-11ee-a411-5254004d77d3:1-4,59d8e60a-285d-11ee-bb44-5254004d77d3:1-145,59d8f3a6-285d-11ee-bb44-5254004d77d3:1-5",
            "transactionSetConsistencyStatus": "OK",
            "transactionSetErrantGtidSet": "",
            "transactionSetMissingGtidSet": ""
        "cluster2": {
            "clusterRole": "PRIMARY",
            "globalStatus": "OK",
            "primary": "127.0.0.1:3311",
            "status": "OK",
            "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
            "topology": {
                "127.0.0.1:3311": {
                    "address": "127.0.0.1:3311",
                    "memberRole": "PRIMARY",
                    "mode": "R/W",
                    "status": "ONLINE",
                    "version": "8.0.31"
                "127.0.0.1:3312": {
                    "address": "127.0.0.1:3312",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                "127.0.0.1:3313": {
                    "address": "127.0.0.1:3313",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
            "transactionSet": "2e71122e-2862-11ee-b81c-5254004d77d3:1-5,39c28b63-285a-11ee-a411-5254004d77d3:1-4,59d8e60a-285d-11ee-bb44-5254004d77d3:1-145,59d8f3a6-285d-11ee-bb44-5254004d77d3:1-5"
    "domainName": "firstclusterset",
    "globalPrimaryInstance": "127.0.0.1:3311",
    "metadataServer": "127.0.0.1:3311",
    "primaryCluster": "cluster2",
    "status": "HEALTHY",
    "statusText": "All Clusters available."

So, we have changed the Primary component from cluster1 to cluster2, but the routing is still set for cluster1. In order to send traffic to cluster2, we also have to change the routing option.

MySQL localhost:3308 ssl JS > myclusterset.listRouters()

Output:

{
    "domainName": "firstclusterset",
    "routers": {
        "localhost.localdomain::Router1": {
            "hostname": "localhost.localdomain",
            "lastCheckIn": "2023-07-22 02:47:42",
            "roPort": "6447",
            "roXPort": "6449",
            "rwPort": "6446",
            "rwXPort": "6448",
            "targetCluster": "primary",
            "version": "8.0.32"
        }
    }
}
  • Changing the connection target from “cluster1” to “cluster2”:
MySQL localhost:3308 ssl JS > myclusterset.setRoutingOption('localhost.localdomain::Router1', 'target_cluster', 'cluster2')
MySQL localhost:3308 ssl JS > myclusterset.listRouters()

Output:

{
    "domainName": "firstclusterset",
    "routers": {
        "localhost.localdomain::Router1": {
            "hostname": "localhost.localdomain",
            "lastCheckIn": "2023-07-22 02:47:42",
            "roPort": "6447",
            "roXPort": "6449",
            "rwPort": "6446",
            "rwXPort": "6448",
            "targetCluster": "cluster2",
            "version": "8.0.32"
        }
    }
}

Verifying the routing policy in the existing ClusterSet

MySQL localhost:3308 ssl JS > myclusterset.routingOptions()

Output:

    "domainName": "firstclusterset",
    "global": {
        "invalidated_cluster_policy": "drop_all",
        "stats_updates_frequency": 0,
        "target_cluster": "primary"
    "routers": {
        "localhost.localdomain::Router1": {
            "target_cluster": "cluster2"

There are situations when the primary cluster is not available or reachable. The immediate solution in such situations would be to perform an emergency failover in order to avoid an application outage.

An emergency failover switches the InnoDB ClusterSet deployment over to a selected replica cluster, which takes the place of the primary InnoDB Cluster. During an emergency failover, data consistency is not assured due to asynchronous replication lag and other network factors, so for safety, the original primary cluster is marked as invalidated during the failover process.

So, if by any chance the original primary cluster remains online, it should be shut down. Later, the invalidated primary cluster can rejoin the ClusterSet via a rejoin/repair process, as sketched after the failover commands below.

Perform emergency failover

myclusterset.forcePrimaryCluster("cluster2")
myclusterset.setRoutingOption('localhost.localdomain::Router1', 'target_cluster', 'cluster2')
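Once the failed cluster is healthy and reachable again, it can be brought back into the topology as a replica; a minimal sketch using the AdminAPI rejoin command:

MySQL 127.0.0.1:3311 ssl JS > myclusterset = dba.getClusterSet()
MySQL 127.0.0.1:3311 ssl JS > myclusterset.rejoinCluster('Cluster1')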

Summary

With the help of the ClusterSet implementation, deploying DR support over different regions is no longer a complex challenge. MySQL Shell and InnoDB Cluster tackle all the configuration and syncing processes behind the scenes. Disaster recovery and failover time can be minimized with the help of the AdminAPI/MySQL Shell commands.

There is one caveat with the ClusterSet functionality: it does not support high availability/auto-promotion of a new primary cluster if the existing one goes down. We must take care of that with manual intervention or via some external automated process, for example the watchdog sketched below.
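A minimal external watchdog could poll the ClusterSet status and alert an operator when it is no longer HEALTHY. This is only a sketch, assuming MySQL Shell's AdminAPI command-line integration (available from 8.0.27 or so), stored credentials via mysql_config_editor, and a hypothetical alert address:

#!/bin/bash
# Hypothetical watchdog sketch: mail an operator if the ClusterSet is degraded.
OUT=$(mysqlsh iroot@127.0.0.1:3311 -- clusterset status 2>/dev/null)
echo "$OUT" | grep -q '"status": "HEALTHY"' || \
    echo "$OUT" | mail -s "ClusterSet degraded" [email protected]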

Percona Distribution for MySQL is the most complete, stable, scalable, and secure open source MySQL solution available, delivering enterprise-grade database environments for your most critical business applications… and it’s free to use!

Try Percona Distribution for MySQL today!
