Backup and Restore Using MySQL Shell
source link: https://www.percona.com/blog/backup-and-restore-using-mysql-shell/
June 27, 2023
MySQL Shell is an advanced client and code editor for MySQL. In addition to SQL functionality similar to that of the mysql client, MySQL Shell provides scripting capabilities for JavaScript and Python and includes APIs for working with MySQL. The X DevAPI enables you to work with both relational and document data, and MySQL Shell 8.0 is highly recommended for use with MySQL Server 8.0 and 5.7.
MySQL Shell includes utilities for working with MySQL. To access the utilities from within MySQL Shell, use the util global object, which is available in JavaScript and Python modes, but not SQL mode. These are the utilities for taking and restoring backups; let’s see some basic commands.
- util.dumpTables – Dump one or more tables from a single database
- util.dumpSchemas – Dump one or more databases
- util.dumpInstance – Dump full instance
- util.loadDump – Restore dump
1. Single table dump
The below command takes a dump of the table sbtest1 from the sysbench database and stores the backup in the destination directory sysbench_dumps. The utility creates the directory if it does not exist. By default, compression and chunking are enabled. When chunking is enabled, the table dump is split into multiple files based on size. The option dialect:"csv" sets the format and extension of the dump file; by default, the data file is created with the .tsv (tab-separated values) extension.
MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1"], "sysbench_dumps", {"compression":"none", "threads":6, chunking:false, dialect:"csv"})
Acquiring global read lock
Global read lock acquired
Initializing - done
1 tables and 0 views will be dumped.
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Running data dump using 6 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done
Writing DDL - done
Writing table metadata - done
Starting data dump
101% (1000.00K rows / ~986.40K rows), 317.96K rows/s, 63.91 MB/s
Dump duration: 00:00:03s
Total duration: 00:00:04s
Schemas dumped: 1
Tables dumped: 1
Data size: 198.89 MB
Rows written: 1000000
Bytes written: 198.89 MB
Average throughput: 60.96 MB/s
MySQL localhost JS >
These are the files created for the above dump command.
[root@centos12 sysbench_dumps]# ls -lrth
total 190M
-rw-r-----. 1 root root  869 Jun 21 13:08 @.json
-rw-r-----. 1 root root  240 Jun 21 13:08 @.sql
-rw-r-----. 1 root root  240 Jun 21 13:08 @.post.sql
-rw-r-----. 1 root root  231 Jun 21 13:08 sysbench.json
-rw-r-----. 1 root root  638 Jun 21 13:08 [email protected]
-rw-r-----. 1 root root  474 Jun 21 13:08 sysbench.sql
-rw-r-----. 1 root root  789 Jun 21 13:08 [email protected]
-rw-r-----. 1 root root 1.5K Jun 21 13:08 [email protected]
-rw-r-----. 1 root root 190M Jun 21 13:08 [email protected]
-rw-r-----. 1 root root  233 Jun 21 13:08 @.done.json
[root@centos12 sysbench_dumps]# pwd
/home/vagrant/sysbench_dumps
@.json | Complete information about the dump: options, server name, username, binlog file and position, etc.
@.sql, @.post.sql | Server version and dump version details.
sysbench.json | Database and table details included in the dump.
sysbench@sbtest1.json | Details about the table sbtest1, including column names, indexes, triggers, character set, and partitions.
sysbench.sql | CREATE statement for the database sysbench.
sysbench@sbtest1.sql | CREATE statement for the table sbtest1.
@.done.json | End time of the dump and dump file size.
sysbench@sbtest1.csv | Table data dump.
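Since @.done.json is written only at the end of a successful dump, its presence can be used to sanity-check a dump directory before restoring it. A minimal sketch in plain Python (the helper name and the demo directory are assumptions for illustration, not part of MySQL Shell):

```python
import json
import tempfile
from pathlib import Path

def dump_finished(dump_dir):
    """Return the dump end time if @.done.json exists (dump completed), else None."""
    done = Path(dump_dir) / "@.done.json"
    if not done.exists():
        return None
    return json.loads(done.read_text()).get("end")

# Demo with a fake dump directory (for illustration only)
tmp = Path(tempfile.mkdtemp())
assert dump_finished(tmp) is None  # no @.done.json yet -> dump incomplete
(tmp / "@.done.json").write_text(json.dumps({"end": "2023-06-21 13:08:45"}))
print(dump_finished(tmp))  # prints the recorded end time
```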
2. Backup only table structure
The option ddlOnly:true dumps only the table structures. The below command takes the table structure of sbtest1 from the sysbench database and stores it in the sysbench_dumps path.
MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1"], "sysbench_dumps", {"compression":"none", "threads":6, chunking:false, ddlOnly:true})
3. Dump only table data
The option dataOnly:true dumps only the data. The below command takes the table data of sbtest1 from the sysbench database and stores it in the sysbench_dumps path.
MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1"], "sysbench_dumps", {"compression":"none", "threads":6, chunking:false, dataOnly:true})
4. Dump only selected data
The option "where": {"databasename.tablename": "condition"} is used to dump only selected data. The below command takes a dump of the rows of table sbtest1 with id 1 to 10.
MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1"], "sysbench_dumps", {"compression":"none", "threads":6, chunking:false, dataOnly:true, "where" : {"sysbench.sbtest1": "id between 1 and 10"}})
It’s also possible to take a dump of multiple tables with their conditions in a single command.
Syntax:
"where" : {"databasename1.tablename1": "condition for databasename1.tablename1", "databasename2.tablename2": "condition for databasename2.tablename2"}
The below command takes a dump of table sbtest1 rows with id 1 to 10 and of sbtest2 rows with id 100 to 110.
MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1", "sbtest2"], "sysbench_dumps", {"compression":"none", "threads":6, chunking:false, dataOnly:true, "where" : {"sysbench.sbtest1": "id between 1 and 10", "sysbench.sbtest2": "id between 100 and 110"}})
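These option maps get hard to read when written inline. One way to keep them manageable is to assemble the call text from Python data first; the helper below is hypothetical (it is not a MySQL Shell API, it only formats the command string), but the output is valid for pasting into the shell because JSON literals are also valid JavaScript:

```python
import json

def dump_tables_cmd(schema, tables, outdir, **options):
    """Format a util.dumpTables() call string from Python data (formatting helper only)."""
    return (f"util.dumpTables({json.dumps(schema)}, {json.dumps(tables)}, "
            f"{json.dumps(outdir)}, {json.dumps(options)})")

cmd = dump_tables_cmd(
    "sysbench", ["sbtest1", "sbtest2"], "sysbench_dumps",
    threads=6, chunking=False, dataOnly=True,
    where={"sysbench.sbtest1": "id between 1 and 10",
           "sysbench.sbtest2": "id between 100 and 110"},
)
print(cmd)  # a ready-to-paste util.dumpTables(...) call
```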
5. Dump data from partitions
The option partitions takes a dump of only the selected partitions.
Syntax:
"partitions" : {"db1.table1": ["list of partitions"], "db2.table1": ["list of partitions"]}
The below command takes a dump of table sbtest1 from only partitions p1 and p2, and of table sbtest2 from partitions p4 and p5.
MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1", "sbtest2"], "sysbench_dumps", {"compression":"none", "threads":6, chunking:false, dataOnly:true, "partitions" : {"sysbench.sbtest1": ["p1", "p2"], "sysbench.sbtest2": ["p4", "p5"]}})
6. Taking schema dumps
When taking a schema dump, events, triggers, and routines are included by default. Those are stored in the database_name.sql file. The below command takes a dump of the percona and sakila databases.
MySQL localhost JS > util.dumpSchemas(["percona", "sakila"], "schema_dump", {"compression":"none", "threads":6, chunking:false})
The below command is to skip the events, routines, and triggers.
MySQL localhost JS > util.dumpSchemas(["percona", "sakila"], "schema_dump", {events:false, routines:false, triggers:false, "compression":"none", "threads":6, chunking:false})
We can also use these options to include and exclude events, routines, and triggers.
Syntax:
includeEvents : [db1.include_event1, db2.include_event2, ...]
includeRoutines : [db1.include_procedure1, db2.include_function2, ...]
includeTriggers : [db1.include_trigger1, db2.include_trigger2, ...]
excludeEvents : [db1.exclude_event1, db2.exclude_event2, ...]
excludeTriggers : [db1.exclude_trigger1, db2.exclude_trigger2, ...]
excludeRoutines : [db1.exclude_procedure1, db2.exclude_function2, ...]
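All of these filters expect schema-qualified names of the form db.object, and an unqualified name is an easy mistake to make. A small validation sketch (a hypothetical helper, not a MySQL Shell API):

```python
def check_qualified(names):
    """Ensure every filter entry is schema-qualified, i.e. 'db.object'."""
    bad = [n for n in names if n.count(".") != 1 or not all(n.split("."))]
    if bad:
        raise ValueError(f"not schema-qualified: {bad}")
    return names

check_qualified(["percona.users", "sakila.actor"])  # OK: both are db.object
try:
    check_qualified(["actor"])                      # missing the schema part
except ValueError as e:
    print(e)  # reports the offending entry
```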
7. Taking specified tables from different databases
Sometimes we may need to take selected tables from different schemas. We can achieve this using the option includeTables.
Syntax:
includeTables : ["db1.table1", "db2.table2", ...]
Below is the command to take a dump of table users from the percona database and a dump of the actor table from the sakila database.
MySQL localhost JS > util.dumpSchemas(["percona", "sakila"], "schema_dump", {includeTables:["percona.users", "sakila.actor"], "compression":"none", "threads":6, chunking:false})
8. Instance dump
The command util.dumpInstance takes a dump of the complete instance and stores it in the /backup/instance_dump path. The system databases (mysql, sys, information_schema, performance_schema) are excluded. By default, all the users from the instance are dumped and stored in the file @.users.sql, which contains the CREATE and GRANT statements for all the users.
MySQL localhost JS > util.dumpInstance("/backup/instance_dump", {"compression":"none", "threads":6, chunking:false})
Some more options for the instance dump:
users: false - Skip the users dump
excludeUsers : ["user1","user2"] - Exclude particular users
includeUsers : ["user1","user2"] - Include particular users
excludeSchemas : ["db1","db2"] - Exclude particular schemas
includeSchemas : ["db1","db2"] - Include particular schemas
excludeTables : ["db1.table1","db2.table2"] - Exclude particular tables
includeTables : ["db1.table1","db2.table2"] - Include particular tables
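The schema filters above can be reasoned about as a simple pipeline: system schemas are always skipped, then includeSchemas narrows the selection, then excludeSchemas removes from it. The sketch below is a simplified model of that behavior for illustration, not MySQL Shell's actual implementation:

```python
SYSTEM_SCHEMAS = {"mysql", "sys", "information_schema", "performance_schema"}

def schemas_to_dump(all_schemas, include=None, exclude=None):
    """Simplified model of dumpInstance schema filtering (illustration only)."""
    selected = [s for s in all_schemas if s not in SYSTEM_SCHEMAS]
    if include:
        selected = [s for s in selected if s in include]
    if exclude:
        selected = [s for s in selected if s not in exclude]
    return selected

result = schemas_to_dump(["mysql", "sys", "percona", "sakila", "sysbench"],
                         exclude=["sysbench"])
print(result)  # system schemas and sysbench are skipped
```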
9. Restore the dump into a single database
The command util.loadDump is used to restore the dumps. The variable local_infile should be enabled to load the dumps.
Syntax:
util.loadDump("/path/of/the/dump", {options})
The below command restores the dump into the database test_restore. When we need to restore into a different schema, we have to use the option schema:"test_restore"; otherwise, the data is restored into the source schema it was dumped from.
MySQL localhost SQL > create database test_restore;
Query OK, 1 row affected (0.3658 sec)
MySQL localhost SQL > \js
Switching to JavaScript mode...
MySQL localhost JS > util.loadDump("/home/vagrant/schema_dump", {schema:"test_restore", progressFile: "progress.json", threads: 8, showProgress:true, resetProgress:true})
10. Restore the full instance dump and configure replication
Here, we just loaded the full instance dump from /home/vagrant/instance_dump path with eight parallel threads.
MySQL localhost JS > util.loadDump("/home/vagrant/instance_dump", {progressFile: "progress.json", threads: 8, showProgress:true, resetProgress:true})
Loading DDL and Data from '/home/vagrant/instance_dump' using 8 threads.
Opening dump...
Target is MySQL 8.0.33-25. Dump was produced from MySQL 8.0.32-24
NOTE: Load progress file detected for the instance but 'resetProgress' option was enabled. Load progress will be discarded and the whole dump will be reloaded.
Scanning metadata - done
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL - done
Executing view DDL - done
Starting data load
8 thds loading / 100% (19.18 MB / 19.18 MB), 541.36 KB/s, 6 / 23 tables and partitions done
Recreating indexes - done
Executing common postamble SQL
23 chunks (100.00K rows, 19.18 MB) for 11 tables in 2 schemas were loaded in 11 sec (avg throughput 2.53 MB/s)
0 warnings were reported during the load.
MySQL localhost JS >
I got the binlog file and position from the file @.json and configured the replication.
[root@centos12 instance_dump]# cat @.json | grep -i binlog
    "binlogFile": "centos12-bin.000006",
    "binlogPosition": 760871466,
[root@centos12 instance_dump]#
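Instead of grepping, the coordinates can be pulled out of @.json programmatically and turned into the replication statement. A short sketch (the inline JSON sample mirrors the fields shown above; the host and user values are placeholders from this article's setup):

```python
import json

# Sample of the relevant fields from @.json, inlined for illustration
meta = json.loads('{"binlogFile": "centos12-bin.000006", "binlogPosition": 760871466}')

stmt = (
    "CHANGE REPLICATION SOURCE TO "
    "SOURCE_HOST='192.168.33.12', SOURCE_USER='bhuvan', "
    f"SOURCE_LOG_FILE='{meta['binlogFile']}', "
    f"SOURCE_LOG_POS={meta['binlogPosition']};"
)
print(stmt)  # the statement to run on the replica
```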
MySQL localhost SQL > CHANGE REPLICATION SOURCE TO SOURCE_HOST="192.168.33.12", SOURCE_USER="bhuvan", SOURCE_PASSWORD="Bhuvan@123", SOURCE_LOG_FILE='centos12-bin.000006', SOURCE_LOG_POS=760871466;
Query OK, 0 rows affected, 2 warnings (0.1762 sec)
MySQL localhost SQL > START REPLICA;
Query OK, 0 rows affected (0.1156 sec)

MySQL localhost SQL > show replica status\G
*************************** 1. row ***************************
Replica_IO_State: Waiting for source to send event
Source_Host: 192.168.33.12
Source_User: bhuvan
Source_Port: 3306
Connect_Retry: 60
Source_Log_File: centos12-bin.000006
Read_Source_Log_Pos: 823234119
Relay_Log_File: centos11-relay-bin.000002
Relay_Log_Pos: 1152129
Relay_Source_Log_File: centos12-bin.000006
Replica_IO_Running: Yes
Replica_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Source_Log_Pos: 762023266
Relay_Log_Space: 62363195
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Source_SSL_Allowed: No
Source_SSL_CA_File:
Source_SSL_CA_Path:
Source_SSL_Cert:
Source_SSL_Cipher:
Source_SSL_Key:
Seconds_Behind_Source: 718
Source_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Source_Server_Id: 100
Source_UUID: f46a1600-045e-11ee-809f-0800271333ce
Source_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Replica_SQL_Running_State: Waiting for dependent transaction to commit
Source_Retry_Count: 86400
Source_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Source_SSL_Crl:
Source_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
Replicate_Rewrite_DB:
Channel_Name:
Source_TLS_Version:
Source_public_key_path:
Get_Source_public_key: 0
Network_Namespace:
1 row in set (0.1470 sec)
I hope the above examples help in understanding backup and restore using MySQL Shell. It has many advantages over native mysqldump. I personally feel we are missing the insert statements here, as we used to see them in dump files; apart from that, it looks good. A logical backup is good only when the dataset is small. When the dataset is big, the logical backup takes longer, and we have to go for a physical backup using Percona XtraBackup.
Percona XtraBackup is a free, open source, complete online backup solution for all versions of Percona Server for MySQL and MySQL. It performs online non-blocking, tightly compressed, highly secure backups on transactional systems so that applications remain fully available during planned maintenance windows.