How to create Regional Persistent Disks in Google Kubernetes Engine (GKE)

Setting up stateful applications in Kubernetes requires good architecture and careful deployment to ensure that the data produced by the pods is kept persistently across the cluster and can be accessed long after the pods are stopped or decommissioned. Google Kubernetes Engine clusters can span multiple zones in a region, and the best way to provision stateful applications there is to place their data on disks that are replicated across the region. This is what we are going to explore in this guide with a dash of flair.

Borrowing from Google’s documentation: regional persistent disks are multi-zonal resources that replicate data between two zones in the same region and can be used similarly to zonal persistent disks. In the event of a zonal outage, or if cluster nodes in one zone become unschedulable, Kubernetes can fail over workloads that use the volume to the other zone. You can use regional persistent disks to build highly available solutions for stateful workloads on GKE.

As with zonal persistent disks, regional persistent disks can be dynamically provisioned as needed (for example, using the Compute Engine persistent disk CSI driver) or manually provisioned in advance by the cluster administrator, although dynamic provisioning is recommended. The Google Compute Engine Persistent Disk CSI Driver (used in this example) is a CSI specification-compliant driver used by container orchestrators to manage the lifecycle of Google Compute Engine Persistent Disks.

With this, we will only need to create a StorageClass and the driver will handle the creation and management of Google Compute Engine Persistent Disks for us.

To enable the driver at cluster creation time (i.e., when creating a new cluster), run the following command:

$ gcloud container clusters create geeks-cluster-a \
    --addons=GcePersistentDiskCsiDriver \
    --cluster-version=1.21

To enable the driver on an existing cluster, run:

$ gcloud container clusters update geeks-cluster-a \
   --update-addons=GcePersistentDiskCsiDriver=ENABLED

Remember to replace “geeks-cluster-a” with your cluster name.
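
If you want to confirm that the driver is registered before moving on, you can check for its CSIDriver object, which carries the same name as the provisioner we will use in the StorageClass below (this is an optional sanity check, not a required step):

$ kubectl get csidriver pd.csi.storage.gke.io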

Once that is done, we can go ahead and create a regional StorageClass, similar to the one shown below:

$ vim regional-sclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: geeks-regional-storageclass
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.gke.io/zone
    values:
    - us-central1-a
    - us-central1-b

The manifest above describes a StorageClass named geeks-regional-storageclass that uses standard persistent disks and replicates data between the us-central1-a and us-central1-b zones.

Apply the manifest file as usual:

$ kubectl apply -f regional-sclass.yaml

Then check the StorageClass that was created:

$ kubectl get storageclass
NAME                                      PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
geeks-regional-storageclass               pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   false                  5d7h

One thing to note is that you can leave allowedTopologies unspecified if you are using a regional cluster. In that case, when you create a Pod that consumes a PersistentVolumeClaim using this StorageClass, a regional persistent disk is provisioned with two zones: one is the zone that the Pod is scheduled in, and the other is picked randomly from the zones available to the cluster (source: Google documentation).

When using a zonal cluster, allowedTopologies must be set.
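
For reference, if you are on a regional cluster, the same StorageClass can be trimmed down to something like the sketch below (the name is just an illustrative variant; with allowedTopologies omitted, the provisioner picks the zone pair for you as described above):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: geeks-regional-storageclass-auto
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer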

How to use the StorageClass with a StatefulSet

We will launch a simple redis StatefulSet to showcase how a persistent disk can be created from the StorageClass we have just configured.
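
The StatefulSet below lives in a redis namespace and mounts a ConfigMap named redis-configuration containing master.conf and slave.conf files, neither of which is shown in the original walkthrough. Here is a minimal sketch you can adapt; the Redis directives inside are only illustrative:

$ kubectl create namespace redis

$ vim redis-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-configuration
  namespace: redis
data:
  master.conf: |
    # Example master configuration
    bind 0.0.0.0
    port 6379
  slave.conf: |
    # Example replica configuration; redis-0.redis resolves via the
    # headless Service shown further below
    bind 0.0.0.0
    port 6379
    replicaof redis-0.redis 6379

$ kubectl apply -f redis-configmap.yaml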

Create the StatefulSet as shown below:

$ vim redis.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
  namespace: redis
spec:
  serviceName: "redis"
  replicas: 2
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      initContainers:
      - name: init-redis
        image: redis:6.2.6
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Determine the pod ordinal index from the hostname.
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          # Copy the appropriate redis config file from the config-map: pod 0 is the master, the rest are replicas.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/master.conf /etc/redis-config.conf
          else
            cp /mnt/slave.conf /etc/redis-config.conf
          fi
        volumeMounts:
        - name: redis-claim
          mountPath: /etc
        - name: config-map
          mountPath: /mnt/
      containers:
      - name: redis
        image: redis:6.2.6
        ports:
        - containerPort: 6379
          name: redis-stateful
        command:
          - redis-server
          - "/etc/redis-config.conf"
        volumeMounts:
        - name: redis-data
          mountPath: /data
        - name: redis-claim
          mountPath: /etc
      volumes:
      - name: config-map
        configMap:
          name: redis-configuration 
      - name: redis-claim
        emptyDir: {}                  
  volumeClaimTemplates:
  - metadata:
      name: redis-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "geeks-regional-storageclass"
      resources:
        requests:
          storage: 5Gi
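
Note that spec.serviceName above points at a headless Service called redis, which the original article does not include. A minimal sketch of such a Service (names taken from the StatefulSet above) would be:

$ vim redis-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: redis
spec:
  clusterIP: None   # headless: gives each pod a stable DNS record
  selector:
    app: redis
  ports:
  - name: redis
    port: 6379

$ kubectl apply -f redis-service.yaml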

Apply the StatefulSet

$ kubectl apply -f redis.yaml
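
You can watch the pods come up in the redis namespace while the volumes are provisioned (the namespace matches the manifest above):

$ kubectl get pods -n redis -w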

Then we can confirm that the PersistentVolumeClaims were created and bound:

$ kubectl get pvc -n redis
NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                   AGE
redis-data-redis-0    Bound    pvc-416d6957-b1e6-4ac4-a71f-6d24c8d28460   5Gi        RWO            geeks-regional-storageclass    4d9h
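
To double-check that the underlying disk is indeed regional, you can also list the Compute Engine disks; the newly provisioned disk should show up with the us-central1 region (rather than a single zone) as its location:

$ gcloud compute disks list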

And we should be good to go with the confidence that our data is going to be replicated across the zones we specified in the StorageClass manifest.

Closing Medley

The Compute Engine persistent disk CSI driver provides a lot of benefits, such as automatic deployment and management of the persistent disk driver without having to set it up manually, which reduces errors and makes administrators' lives as smooth as possible. It is now time to enjoy setting up your databases and any other stateful applications in your cluster without second thoughts that keep you half asleep at night. We hope the guide was as informative as we would like and that you enjoyed the meal. As always, we celebrate you all who continue to send us wonderful feedback and encouraging generosity. It goes a long way.
