
Using Percona Kubernetes Operators With K3s Part 1: Installation


Recently Peter provided an extensive tutorial on how to use Percona Kubernetes Operators on minikube: Exploring MySQL on Kubernetes with Minikube. Minikube is a great option for local deployment and for getting familiar with Kubernetes on a single developer’s box.

But what if we want to get experience with setups that are closer to production Kubernetes and use multiple servers for deployments?

There is an alternative that is also easy to set up and to get started with: K3s. In fact, it is so easy that this first part will be very short; there is just one requirement we need to resolve, and it is also easy.

So let’s assume we have four servers we want to use for our Kubernetes deployments: one master and three worker nodes. In my case:

beast-node7-ubuntu
beast-node8-ubuntu (master)
beast-node5-ubuntu
beast-node6-ubuntu

For step one, on the master we execute:

Shell
curl -sfL https://get.k3s.io | sh -
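
To confirm the server came up (a quick check, assuming the default systemd-based install), we can query the k3s service and use the kubectl that the installer bundles on the master:

Shell
# The install script registers k3s as a systemd unit on the master
sudo systemctl status k3s --no-pager
# K3s bundles kubectl; at this point only the master should be listed
sudo k3s kubectl get nodes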

For step two, we need to prepare a script that takes two parameters: the IP address of the master and the master's token. Finding the token is probably the most complicated part of this setup: it is stored on the master in the file /var/lib/rancher/k3s/server/node-token.
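
For example, the token can be read on the master (sudo is needed because the file is readable only by root), and the master's IP address can be taken from its network interfaces:

Shell
# Print the join token generated by the K3s server
sudo cat /var/lib/rancher/k3s/server/node-token
# List the master's IP addresses to pick one reachable from the worker nodes
hostname -I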

With these parameters, the script for the other nodes is:

Shell
k3s_url="https://10.30.2.34:6443"
k3s_token="K109a7b255d0cf88e75f9dcb6b944a74dbca7a949ebd7924ec3f6135eeadd6624e9::server:5bfa3b7e679b23c55c81c198cc282543"
curl -sfL https://get.k3s.io | K3S_URL=${k3s_url} K3S_TOKEN=${k3s_token} sh -
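
On the worker nodes, this installer registers a k3s-agent service, so a quick way to check that the agent started (assuming the default systemd-based install) is:

Shell
# The agent install creates a k3s-agent systemd unit on each worker node
sudo systemctl status k3s-agent --no-pager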

After executing this script on the other nodes, we will have our Kubernetes cluster running:

Shell
kubectl get nodes
NAME                 STATUS   ROLES                  AGE    VERSION
beast-node7-ubuntu   Ready    <none>                 125m   v1.24.6+k3s1
beast-node8-ubuntu   Ready    control-plane,master   23h    v1.24.6+k3s1
beast-node5-ubuntu   Ready    <none>                 23h    v1.24.6+k3s1
beast-node6-ubuntu   Ready    <none>                 23h    v1.24.6+k3s1
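
As a side note, on the master K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml (root-readable by default), so if a standalone kubectl is installed, or if you copy the file to a workstation and adjust the server address, it can be used as well:

Shell
# Use the K3s-generated kubeconfig with a standalone kubectl
sudo kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get nodes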

This is sufficient for a basic Kubernetes setup, but our Kubernetes Operators also need Dynamic Volume Provisioning, because the Operators request volumes to store data.

As it turns out, no extra step is required: K3s ships with a default local-path provisioner, which the Operators will use and which satisfies their requirements.
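
To double-check this on your own cluster, you can list the storage classes and create a throwaway PersistentVolumeClaim against the default one (a quick verification sketch; the claim name test-claim is just an example):

Shell
# K3s ships the local-path provisioner and marks it as the default StorageClass
kubectl get storageclass
# Create a small test claim against the default class
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF
# local-path binds volumes on first use, so the claim stays Pending until a pod mounts it
kubectl get pvc test-claim
# Clean up
kubectl delete pvc test-claim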

After this, the K3s cluster should be ready to deploy our Operators, which we will review in the next part of this series. Stay tuned!

