MySQL on Kubernetes with GitOps

The GitOps workflow was introduced by Weaveworks as a way to implement Continuous Deployment for cloud-native applications. The technique quickly found its way into DevOps engineers’ and developers’ hearts, as it greatly simplifies the application delivery pipeline: a change to the manifests in the Git repository is reflected in Kubernetes right away. With GitOps there is no need to give developers direct access to the cluster, as all actions are executed by the operator.

This blog post is a guide on how to deploy Percona Distribution for MySQL on Kubernetes with Flux, a GitOps operator that keeps your cluster state in sync with the Git repository.

Percona Distribution for MySQL on Kubernetes with Flux

In a nutshell, the flow is the following:

  1. A developer pushes a change to the GitHub repository
  2. The Flux Operator:
    1. detects the change
    2. deploys the Percona Distribution for MySQL Operator
    3. creates the Custom Resource, which triggers the creation of Percona XtraDB Cluster and HAProxy pods

The result is a fully working MySQL service deployed without talking to the Kubernetes API directly.

Preparation

Prerequisites:

  • Kubernetes cluster
  • GitHub account
    • For this blog post, I used the manifests from the spron-in/blog-data repository (the gitops-mysql folder)

It is a good practice to create a separate namespace for Flux:

Shell
$ kubectl create namespace gitops

Installing and managing Flux is easier with fluxctl. On Ubuntu, I use snap to install tools; for other operating systems, please refer to the fluxctl installation manual.

Shell
$ sudo snap install fluxctl --classic

Install the Flux operator into your Kubernetes cluster, pointing it at the Git repository (replace <your-email> with your own):

Shell
$ fluxctl install --git-email=<your-email> --git-url=git@github.com:spron-in/blog-data.git --git-path=gitops-mysql --manifest-generation=true --git-branch=master --namespace=gitops | kubectl apply -f -

GitHub Sync

As configured, Flux will continuously monitor the changes in the spron-in/blog-data repository and sync the state. For this, Flux needs to be granted access to the repository.

Get the public key that was generated during the installation:

Shell
$ fluxctl identity --k8s-fwd-ns gitops

Copy the key and add it as a deploy key with write access in GitHub: go to Settings -> Deploy keys -> Add deploy key:

(Screenshot: adding the deploy key in GitHub)

Action

All set. The Flux reconcile loop checks the state for changes every five minutes. To trigger synchronization right away, run:

Shell
$ fluxctl sync --k8s-fwd-ns gitops

In my case, I have two YAML manifests in the repo:

  • bundle.yaml – installs the Operator and creates the Custom Resource Definitions (CRDs)
  • cr.yaml – deploys the Percona XtraDB Cluster (PXC) and HAProxy pods (a trimmed excerpt follows below)

Flux is going to deploy them both.
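For reference, the relevant part of cr.yaml looks roughly like this. It is a trimmed sketch only: the field names follow the Percona XtraDB Cluster Operator Custom Resource, but the apiVersion, image tags, storage, and other required settings in the actual repository may differ and are omitted here.

YAML
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
  name: cluster1
spec:
  pxc:
    size: 3        # three Percona XtraDB Cluster (PXC) pods
  haproxy:
    enabled: true
    size: 2        # two HAProxy pods to start with

Once Flux applies both manifests, the Operator and the database pods come up: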

Shell
$ kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
cluster1-haproxy-0                                 2/2     Running   0          26m
cluster1-haproxy-1                                 2/2     Running   0          25m
cluster1-pxc-0                                     1/1     Running   0          26m
cluster1-pxc-1                                     1/1     Running   0          25m
cluster1-pxc-2                                     1/1     Running   0          23m
percona-xtradb-cluster-operator-79966668bd-95plv   1/1     Running   0          26m

Now let’s add one more HAProxy Pod by changing spec.haproxy.size from 2 to 3 in cr.yaml (the edited section is sketched below), then commit and push the change. In a production-grade scenario the change would go through a pull request and a thorough review; in my case, I push directly to the master branch.
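The edited HAProxy section of cr.yaml now looks like this (again, a trimmed sketch):

YAML
spec:
  haproxy:
    enabled: true
    size: 3        # increased from 2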

Shell
$ git commit cr.yaml -m 'increase haproxy size from 2 to 3'
$ git push
Enumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 2 threads
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 385 bytes | 385.00 KiB/s, done.
Total 4 (delta 2), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (2/2), completed with 2 local objects.
To https://github.com/spron-in/blog-data
   e1a27b8..d555c77  master -> master

Either trigger the sync with the fluxctl sync command or wait approximately five minutes for the Flux reconcile loop to detect the change. In the logs of the Flux operator, you will see the event:

Shell
ts=2021-06-15T12:59:08.267469963Z caller=loop.go:134 component=sync-loop event=refreshed url=ssh://git@github.com/spron-in/blog-data.git branch=master HEAD=d555c77c19ea9d1685392680186e1491905401cc
ts=2021-06-15T12:59:08.270678093Z caller=sync.go:61 component=daemon info="trying to sync git changes to the cluster" old=e1a27b8a81e640d3bee9bc2e2c31f9c4189e898a new=d555c77c19ea9d1685392680186e1491905401cc
ts=2021-06-15T12:59:08.844068322Z caller=sync.go:540 method=Sync cmd=apply args= count=9
ts=2021-06-15T12:59:09.097835721Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=253.684342ms err=null output="serviceaccount/percona-xtradb-cluster-operator unchanged\nrole.rbac.authorization.k8s.io/percona-xtradb-cluster-operator unchanged\ncustomresourcedefinition.apiextensions.k8s.io/perconaxtradbbackups.pxc.percona.com configured\ncustomresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterbackups.pxc.percona.com unchanged\ncustomresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterrestores.pxc.percona.com unchanged\ncustomresourcedefinition.apiextensions.k8s.io/perconaxtradbclusters.pxc.percona.com unchanged\nrolebinding.rbac.authorization.k8s.io/service-account-percona-xtradb-cluster-operator unchanged\ndeployment.apps/percona-xtradb-cluster-operator unchanged\nperconaxtradbcluster.pxc.percona.com/cluster1 configured"
ts=2021-06-15T12:59:09.099258988Z caller=daemon.go:701 component=daemon event="Sync: d555c77, default:perconaxtradbcluster/cluster1" logupstream=false
ts=2021-06-15T12:59:11.387525662Z caller=loop.go:236 component=sync-loop state="tag flux" old=e1a27b8a81e640d3bee9bc2e2c31f9c4189e898a new=d555c77c19ea9d1685392680186e1491905401cc
ts=2021-06-15T12:59:12.122386802Z caller=loop.go:134 component=sync-loop event=refreshed url=ssh://git@github.com/spron-in/blog-data.git branch=master HEAD=d555c77c19ea9d1685392680186e1491905401cc

The log indicates that the main CR was configured: perconaxtradbcluster.pxc.percona.com/cluster1 configured 

Now we have three HAProxy Pods:

Shell
$ kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
cluster1-haproxy-0                                 2/2     Running   1          50m
cluster1-haproxy-1                                 2/2     Running   0          48m
cluster1-haproxy-2                                 2/2     Running   0          4m45s

It is important to note that GitOps maintains the sync between Kubernetes and GitHub. This means that if a user manually changes an object in Kubernetes, Flux (or any other GitOps operator) will revert the change and bring the cluster back in sync with GitHub.

GitOps also comes in handy when users want to take a backup or perform a restore. To do that, the user just commits a YAML manifest to the GitHub repo, Flux creates the corresponding Kubernetes object, and the database Operator does the rest (see the sketch below).
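For example, an on-demand backup could be requested with a manifest along these lines. This is a minimal sketch: the object name is arbitrary, and storageName must match a storage defined under spec.backup.storages in cr.yaml (the s3-us-west name here is only an assumption).

YAML
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterBackup
metadata:
  name: backup1
spec:
  pxcCluster: cluster1       # the cluster deployed above
  storageName: s3-us-west    # assumed; must exist in cr.yaml

Once this file is committed and pushed, Flux applies it and the Operator takes the backup; a restore works the same way with a PerconaXtraDBClusterRestore object.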

Conclusion

GitOps is a simple approach to deploying and managing applications on Kubernetes:

  • Change management is provided by Git version control and code reviews
  • Direct access to the Kubernetes API is limited, which increases security
  • Infrastructure-as-Code is built in; there is no need to integrate Terraform, Ansible, or any other tool

All Percona Operators can be deployed and managed with GitOps. As a result, you will get a production-grade MySQL, MongoDB, or PostgreSQL cluster that just works.

