
Backing up and Restoring to AWS S3 With Percona Kubernetes Operators

source link: https://www.percona.com/blog/backing-up-and-restoring-to-aws-s3-with-percona-kubernetes-operators/


February 22, 2024

Edith Puclla

In our last post, we looked into the lifecycle of applications in Kubernetes. We saw that Kubernetes doesn’t handle database backups itself.

This is where Kubernetes Operators come into action. They add additional functions to Kubernetes, enabling it to set up, configure, and manage complex applications like databases within a Kubernetes environment for the user.

In this blog post, we will focus on the Percona Operator for MySQL. This operator is built on the Percona XtraDB Cluster and shows how operators can make certain tasks easier for us. We will go through the steps of backing up our database to Amazon S3 and restoring it together.

Prerequisites: a Kubernetes cluster with the Percona Operator for MySQL (based on Percona XtraDB Cluster) already deployed, plus a local clone of the operator repository:

Shell
git clone https://github.com/percona/percona-xtradb-cluster-operator.git
cd percona-xtradb-cluster-operator

We will cover four steps for this demo:

  1. Connect to the MySQL instance in the Percona XtraDB Cluster.
  2. Add sample data to the database.
  3. Set up and carry out a backup to Amazon S3.
  4. Restore the database.

1. Connect to the MySQL instance in Percona XtraDB Cluster

To connect to the Percona XtraDB Cluster, we need the password from the root user. This password is kept in the Secrets object. Let’s list the Secrets objects with kubectl:

Shell
kubectl get secrets
NAME               TYPE     AGE
cluster1-secrets   Opaque   69h

Now, let’s get the password from the root user by using the following commands:

Shell
kubectl get secret cluster1-secrets -o jsonpath='{.data.root}' | base64 --decode && echo
^J$X0n[(IbQT$Q7*  # <<=== This is the output, which is the password of the user "root"

Now, we start a container with the MySQL tool and connect its console output to your terminal. Use this command to do it, and name the new Pod percona-client:

Shell
kubectl run -i --rm --tty percona-client --image=percona:8.0 --restart=Never -- bash -il

Once we are inside the pod, to connect to the Percona XtraDB Cluster, run the MySQL tool in the ‘percona-client’ command shell. Use your cluster name and the password you got from the secret.

Shell
[mysql@percona-client /]$ mysql -h cluster1-haproxy -uroot -p'^J$X0n[(IbQT$Q7*'
mysql>

2. Add sample data to the database

Now that we have set up Percona XtraDB Cluster with MySQL, let’s create a new database and insert some data for our experiments:

MySQL
mysql> CREATE DATABASE mydb;
Query OK, 1 row affected (0.02 sec)

mysql> USE mydb;
Database changed
mysql> CREATE TABLE extraordinary_gentlemen (
    id int NOT NULL AUTO_INCREMENT,
    name varchar(255) NOT NULL,
    occupation varchar(255),
    PRIMARY KEY (id)
);
Query OK, 0 rows affected (0.03 sec)
MySQL
mysql> INSERT INTO extraordinary_gentlemen (name, occupation)
VALUES
("Allan Quartermain","hunter"),
("Nemo","fish"),
("Dorian Gray", NULL),
("Tom Sawyer", "secret service agent");
Query OK, 4 rows affected (0.00 sec)
Records: 4  Duplicates: 0  Warnings: 0

Let’s see what we have in our extraordinary_gentlemen table:

MySQL
mysql> SELECT * FROM extraordinary_gentlemen;
+----+-------------------+----------------------+
| id | name              | occupation           |
+----+-------------------+----------------------+
|  1 | Allan Quartermain | hunter               |
|  4 | Nemo              | fish                 |
|  7 | Dorian Gray       | NULL                 |
| 10 | Tom Sawyer        | secret service agent |
+----+-------------------+----------------------+
4 rows in set (0.00 sec)

3. Set up and make a backup

For this demo, ensure you have a bucket in Amazon S3. Additionally, your AWS user account should have the ‘AmazonS3FullAccess’ permission to fully manage S3 buckets.

This is what my S3 configuration looks like on Amazon.


In the repository we cloned earlier, there is a folder named ‘deploy’. Inside this, we will find a file called cr.yaml. Now, we will use this file to set up the backup.

Find the ‘storages’ section in the file, under the ‘backup’ key.

storages:
  s3-demo:              # <<=== Name of your preference for your storage
    type: s3            # <<=== Type for AWS S3 backups
    verifyTLS: true
    s3:
      bucket: community-bucket-demo                 # <<=== Name of your S3 bucket
      credentialsSecret: my-cluster-name-backup-s3  # <<=== Name of your credentials Secret
      region: us-east-1                             # <<=== Region where your bucket is located

After setting up the cr.yaml, let’s move on to configuring the backup files.

Navigate to the deploy/backup directory in the same repository we cloned. Here, we find the necessary files for backing up and restoring.


Begin with the backup-secret-s3.yaml file, which holds your AWS access keys and secret keys data. Here’s an example of what it should look like:

apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-name-backup-s3 # <<== This name is referenced in cr.yaml
type: Opaque
data:
  AWS_ACCESS_KEY_ID: QUtJVEJNBLAHDSJHFBSJDH=
  AWS_SECRET_ACCESS_KEY: K3dlksdjaBLALLUHUYSHSeFQ4RVl6T2EydTVHLw==

Replace the values in backup-secret-s3.yaml with your own access key and secret access key. Before adding them to this file, make sure to Base64-encode them using these commands:

# Use this on macOS
echo -n 'YOUR_AWS_ACCESS_KEY_ID' | base64
echo -n 'YOUR_AWS_SECRET_ACCESS_KEY' | base64
# Use this on Linux
echo -n 'YOUR_AWS_ACCESS_KEY_ID' | base64 --wrap=0
echo -n 'YOUR_AWS_SECRET_ACCESS_KEY' | base64 --wrap=0
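As a sanity check, you can round-trip the encoding locally before pasting the values into the Secret. This sketch uses a made-up placeholder key, not a real credential:

```shell
# Encode a placeholder access key (replace with your real value).
ACCESS_KEY='AKIAEXAMPLEKEY'
ENCODED=$(printf '%s' "$ACCESS_KEY" | base64)
echo "$ENCODED"

# Decoding must return the original string exactly; if it does not,
# a stray newline probably slipped into the encoded value.
printf '%s' "$ENCODED" | base64 --decode
```

Using printf instead of echo avoids accidentally encoding a trailing newline, which would make the credentials in the Secret invalid.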

Now, let’s proceed with the backup process itself. In the same directory, find the file named backup.yaml. 

We can modify the fields: name, pxcCluster, and storageName in the backup.yaml file. Make sure ‘storageName’ matches the data you previously entered in the cr.yaml file.

apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterBackup
metadata:
  name: demo-backup-1    # <<=== This is the name of your backup
spec:
  pxcCluster: cluster1   # <<=== Keep this as the name of your cluster
  storageName: s3-demo   # <<=== This is the same as the storage name in cr.yaml
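One-off backups like this are not the only option: the operator can also run backups on a schedule, configured in the same cr.yaml. A hedged sketch of the backup.schedule section, reusing the s3-demo storage defined earlier (the schedule name and cron expression here are illustrative, not from the original demo):

```yaml
backup:
  schedule:
    - name: daily-s3-backup   # illustrative name for this schedule
      schedule: "0 0 * * *"   # cron syntax: every day at midnight
      keep: 5                 # how many backups to retain before pruning
      storageName: s3-demo    # must match a storage defined under backup.storages
```

With this in place, the operator creates the PerconaXtraDBClusterBackup objects for you, so you only apply backup.yaml manually for ad-hoc backups.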

After configuring these three files, let’s apply them using the kubectl command in the following order.

Shell
kubectl apply -f backup-secret-s3.yaml
kubectl apply -f cr.yaml
kubectl apply -f backup.yaml

To confirm if our backup was successful, we can list the backups. 

Shell
kubectl get pxc-backup
NAME              CLUSTER     STORAGE     DESTINATION                                                    STATUS        COMPLETED       AGE
demo-backup-1     cluster1    s3-demo     s3://community-bucket-demo/cluster1-2024-01-28-17:06:41-full   Succeeded     118s            2m39s
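If you want to inspect the uploaded backup with the AWS CLI, you can split the DESTINATION value into a bucket and an object prefix using plain shell parameter expansion. The destination string below is the one from the listing above:

```shell
# Destination as reported by 'kubectl get pxc-backup':
DEST='s3://community-bucket-demo/cluster1-2024-01-28-17:06:41-full'

# Strip the s3:// scheme, then keep everything up to the first slash: the bucket.
BUCKET=${DEST#s3://}
BUCKET=${BUCKET%%/*}

# Everything after the bucket name is the object prefix of this backup.
PREFIX=${DEST#s3://*/}

echo "$BUCKET"   # community-bucket-demo
echo "$PREFIX"   # cluster1-2024-01-28-17:06:41-full

# You could then list the backup files with, for example:
# aws s3 ls "s3://$BUCKET/$PREFIX/"
```

This is pure POSIX shell, so it works anywhere kubectl does, without extra tooling.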

If the status shows Succeeded, it means our backup was successful! Now, let’s take a look at our bucket on Amazon S3. Woohoo!! We have a backup ready in S3!


4. Restore the database

Restoring is quite straightforward. In the same directory, we can find a file named restore.yaml. Open this file and update the backupName field. Our backup was named demo-backup-1, so let’s change it to that.

apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterRestore
metadata:
  name: restore1
spec:
  pxcCluster: cluster1       # <<=== Keep this as the name of your cluster
  backupName: demo-backup-1  # <<=== Name of our backup

For testing purposes, you can first delete the database we created earlier by running DROP DATABASE mydb; and then confirm that the restore brings it back.

After your file is ready, let’s apply the changes:

Shell
kubectl apply -f restore.yaml

This will take the backup from S3 and restore our data to its previous state.

To verify if the restore was successful, let’s use this command:

Shell
kubectl get pxc-restore
NAME       CLUSTER    STATUS      COMPLETED   AGE
restore1   cluster1   Succeeded   22s         4m37s

If we list our databases again, we should see our database along with all the data we initially inserted. It means our restore worked perfectly!

MySQL
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)

Conclusion

Kubernetes operators are incredible for automating database operations, tasks that are often complex and not directly handled by Kubernetes itself. Operators make it easy for us; they take care of essential tasks like backups and restores, which are crucial in database management.

Did this demo work well for you? If you have any issues, please reach out on our community forum at Forums.percona.com. We’re always here to help.

If you are new to Kubernetes Operators, read how Cluster Status works in Percona Kubernetes Operators.

And if you are looking for a version with a graphical interface, we have Percona Everest, our cloud-native database platform, currently in Alpha stage.

See you in our next blog post!

The Percona Kubernetes Operators let you easily create and manage highly available, enterprise-ready MySQL, PostgreSQL, and MongoDB clusters on Kubernetes. Experience hassle-free database management and provisioning without the need for manual maintenance or custom in-house scripts.

Learn More About Percona Kubernetes Operators
