[csi] A Brief Look at the ceph-csi Components

Author: 我是一个平民 · Source: https://www.cnblogs.com/acommoners/p/15988974.html

Description

  ceph-csi extends Kubernetes with volume-management capabilities for a third-party storage backend, integrating Ceph's operations with the k8s storage system. It calls Ceph's interfaces or commands to provide the concrete implementations of creating/deleting and mounting/unmounting Ceph-backed volumes. The components analyzed earlier all delegate volume create/delete and mount/unmount to ceph-csi, which in turn invokes the commands or APIs that Ceph provides to carry out the final operation.

ceph-csi service composition

  ceph-csi contains three major service types: rbdType, cephfsType, and livenessType. rbdType performs rbd operations to interact with Ceph; cephfsType performs cephfs operations to interact with Ceph; livenessType periodically probes the csi endpoint for liveness (it sends a probe request to the specified socket address) and reports the result as a Prometheus metric (a minimal sketch of that probe loop follows).
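A liveness probe of that kind is just a gRPC Probe call against the plugin's unix socket. Below is a minimal sketch of such a loop using the CSI spec's Go bindings; the socket path and interval are illustrative assumptions, not the component's actual defaults.

package main

import (
	"context"
	"log"
	"time"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Dial the CSI plugin's unix socket (the path is an assumption for illustration).
	conn, err := grpc.Dial("unix:///csi/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	identity := csi.NewIdentityClient(conn)
	for range time.Tick(60 * time.Second) { // probe interval: assumed
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		if _, err := identity.Probe(ctx, &csi.ProbeRequest{}); err != nil {
			log.Printf("csi probe failed: %v", err) // livenessType would bump a failure metric here
		}
		cancel()
	}
}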

  • ControllerServer: handles the controller-side operations, such as creating and deleting cephfs/rbd storage.
  • NodeServer: deployed on every node in the k8s cluster; handles the node-local cephfs/rbd operations, such as mounting storage onto the node and unmounting it again.
  • IdentityServer: returns information about the service itself, such as its identity (name, version, etc.), the capabilities it supports, and a liveness probe endpoint that other components/services can use to check whether this service is alive (a minimal IdentityServer sketch follows this list).
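These three services map one-to-one onto the gRPC service interfaces in the CSI spec's Go bindings (csi.ControllerServer, csi.NodeServer, csi.IdentityServer). As a reference point, here is a minimal sketch of the IdentityServer side; the version string is illustrative.

package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// identityServer sketches the smallest of the three services: it reports
// who the driver is and answers liveness probes.
type identityServer struct {
	csi.UnimplementedIdentityServer
}

// GetPluginInfo returns the name/version that driver-registrar later hands to kubelet.
func (identityServer) GetPluginInfo(ctx context.Context, req *csi.GetPluginInfoRequest) (*csi.GetPluginInfoResponse, error) {
	return &csi.GetPluginInfoResponse{Name: "rbd.csi.ceph.com", VendorVersion: "v3.x"}, nil
}

// Probe is the endpoint the livenessType service calls periodically.
func (identityServer) Probe(ctx context.Context, req *csi.ProbeRequest) (*csi.ProbeResponse, error) {
	return &csi.ProbeResponse{}, nil
}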
Deploying ceph-csi and related components
  • Deployment steps
    For the deployment itself, see my article "k8s基于csi使用rbd存储" (using rbd storage in k8s via csi).

  • Components deployed (using rbd csi as the example)
    csi-rbdplugin-provisioner.yaml deploys six containers: csi-provisioner, csi-snapshotter, csi-attacher, csi-resizer, csi-rbdplugin, and liveness-prometheus. Their roles are as follows.

    • csi-provisioner: actually the external-provisioner component. On create pvc it takes part in creating the storage and the pv object: after observing the pvc-create event it assembles a request and calls the ceph-csi component's (i.e. the csi-rbdplugin container's) CreateVolume method to create the storage, and once that succeeds it creates the pv object. On delete pvc it takes part in deleting the storage and the pv object: when the pvc is deleted, the pv controller moves the bound pv from Bound to Released; csi-provisioner observes that pv update, calls the ceph-csi component's (csi-rbdplugin container's) DeleteVolume method to delete the storage, and then deletes the pv object.
    • csi-snapshotter: actually the external-snapshotter component; handles storage-snapshot operations.
    • csi-attacher: actually the external-attacher component; it only manipulates VolumeAttachment objects and never touches the storage itself.
    • csi-resizer: actually the external-resizer component; handles storage-expansion operations.
    • csi-rbdplugin: actually the ceph-csi component, serving the rbdType ControllerServer/IdentityServer roles. On create pvc, the external-provisioner component (the csi-provisioner container) observes the pvc-create event, assembles a request, and calls this container's CreateVolume method to create the storage; on delete pvc, the pv moves from Bound to Released, external-provisioner observes that update, assembles a request, and calls this container's DeleteVolume method to delete the storage. A sketch of that call follows this list.
    • liveness-prometheus: actually the ceph-csi component, running the livenessType service; probes the csi-rbdplugin service and reports whether it is alive.
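To make the provisioner-to-plugin handoff concrete, here is a sketch of the CreateVolume gRPC call that external-provisioner issues over the plugin socket. The field values mirror the PVC, StorageClass, and logs shown later in this article; the client wiring itself is assumed.

package provisioner

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// provisionRBD sketches the CreateVolume call external-provisioner makes
// against the csi-rbdplugin socket; the values mirror the logs below.
func provisionRBD(ctx context.Context, cc csi.ControllerClient) (*csi.Volume, error) {
	resp, err := cc.CreateVolume(ctx, &csi.CreateVolumeRequest{
		Name: "pvc-4e52c163-a593-4cc1-af59-23367d1e7573", // generated PV name
		CapacityRange: &csi.CapacityRange{
			RequiredBytes: 2 << 30, // 2Gi, from the PVC
		},
		VolumeCapabilities: []*csi.VolumeCapability{{
			// volumeMode: Block with accessMode RWO
			AccessType: &csi.VolumeCapability_Block{Block: &csi.VolumeCapability_BlockVolume{}},
			AccessMode: &csi.VolumeCapability_AccessMode{
				Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER,
			},
		}},
		Parameters: map[string]string{ // copied from the csi-rbd-sc StorageClass
			"clusterID":     "4a9e463a-4853-4237-a5c5-9ae9d25bacda",
			"pool":          "kubernetes",
			"imageFeatures": "layering",
		},
	})
	if err != nil {
		return nil, err
	}
	return resp.Volume, nil // resp.Volume.VolumeId becomes the PV's volumeHandle
}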

    csi-rbdplugin.yaml deploys three containers: driver-registrar, csi-rbdplugin, and liveness-prometheus. Their roles are as follows.

    • driver-registrar: registers the csi-rbdplugin container's service with kubelet, passing it the socket address the container serves on, the version information, and the driver name (e.g. rbd.csi.ceph.com).
    • csi-rbdplugin: actually the ceph-csi component, serving the rbdType NodeServer/IdentityServer roles. When a pod claiming a pvc is created, kubelet calls this container to mount the already-created storage from the ceph cluster onto the pod's node and then into the pod's volume directory; when such a pod is deleted, kubelet calls the corresponding methods to unmount the storage from the pod directory and then from the node (the call order is sketched after this list).
    • liveness-prometheus: actually the ceph-csi component, running the livenessType service; probes the csi-rbdplugin service and reports whether it is alive.
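The two-step mount described above corresponds to two NodeServer RPCs driven by kubelet. A sketch of the call order, with paths abbreviated and required fields such as VolumeCapability omitted for brevity:

package nodeops

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// attachForPod sketches the order kubelet drives the node plugin in:
// first stage the volume onto the node, then publish it into the pod dir.
func attachForPod(ctx context.Context, nc csi.NodeClient, volID string) error {
	staging := "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/<pv-name>/globalmount"
	target := "/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<pv-name>/mount"

	// NodeStageVolume: rbd map + mkfs/mount to the node-global staging path.
	if _, err := nc.NodeStageVolume(ctx, &csi.NodeStageVolumeRequest{
		VolumeId:          volID,
		StagingTargetPath: staging,
	}); err != nil {
		return err
	}
	// NodePublishVolume: bind-mount from staging into the pod's volume dir.
	_, err := nc.NodePublishVolume(ctx, &csi.NodePublishVolumeRequest{
		VolumeId:          volID,
		StagingTargetPath: staging,
		TargetPath:        target,
	})
	return err
	// Teardown runs in reverse: NodeUnpublishVolume, then NodeUnstageVolume.
}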
How csi creates and deletes a pvc

# Environment info

$ kubectl get pod 
NAME                                         READY   STATUS    RESTARTS   AGE
csi-rbdplugin-9tfnm                          3/3     Running   0          26h
csi-rbdplugin-provisioner-5cc9f558c7-d2stz   7/7     Running   0          26h
$ kubectl get pvc,pv
NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/raw-block-pvc   Bound    pvc-4e52c163-a593-4cc1-af59-23367d1e7573   2Gi        RWO            csi-rbd-sc     9m47s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
persistentvolume/pvc-4e52c163-a593-4cc1-af59-23367d1e7573   2Gi        RWO            Delete           Bound    default/raw-block-pvc   csi-rbd-sc              9m47s

# Create operation: logs from creating a pvc
$ kubectl apply -f raw-block-pvc.yaml
$ kubectl logs -f --tail=100 csi-rbdplugin-provisioner-5cc9f558c7-d2stz -c csi-provisioner
$ kubectl logs -f --tail=100 csi-rbdplugin-provisioner-5cc9f558c7-d2stz -c csi-rbdplugin

# csi-provisioner log: receives the pvc-create event and issues the create call
I0310 09:21:35.323991       1 connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
I0310 09:21:35.390997       1 controller.go:777] create volume rep: {CapacityBytes:2147483648 VolumeId:0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-7b50e081-a053-11ec-b2dd-fa163ed7971b VolumeContext:map[clusterID:4a9e463a-4853-4237-a5c5-9ae9d25bacda csi.storage.k8s.io/pv/name:pvc-4e52c163-a593-4cc1-af59-23367d1e7573 csi.storage.k8s.io/pvc/name:raw-block-pvc csi.storage.k8s.io/pvc/namespace:default imageFeatures:layering imageName:csi-vol-7b50e081-a053-11ec-b2dd-fa163ed7971b journalPool:kubernetes pool:kubernetes] ContentSource:<nil> AccessibleTopology:[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0310 09:21:35.391046       1 controller.go:861] successfully created PV pvc-4e52c163-a593-4cc1-af59-23367d1e7573 for PVC raw-block-pvc and csi volume name 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-7b50e081-a053-11ec-b2dd-fa163ed7971b

# csi-rbdplugin log: receives the create-storage call and successfully creates the volume via rbd
I0310 09:21:35.350755       1 rbd_journal.go:482] ID: 1609 Req-ID: pvc-4e52c163-a593-4cc1-af59-23367d1e7573 generated Volume ID (0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-7b50e081-a053-11ec-b2dd-fa163ed7971b) and image name (csi-vol-7b50e081-a053-11ec-b2dd-fa163ed7971b) for request name (pvc-4e52c163-a593-4cc1-af59-23367d1e7573)
I0310 09:21:35.350822       1 rbd_util.go:352] ID: 1609 Req-ID: pvc-4e52c163-a593-4cc1-af59-23367d1e7573 rbd: create kubernetes/csi-vol-7b50e081-a053-11ec-b2dd-fa163ed7971b size 2048M (features: [layering]) using mon 172.20.163.52:6789,172.20.163.52:6789,172.20.163.52:6789
I0310 09:21:35.374165       1 controllerserver.go:666] ID: 1609 Req-ID: pvc-4e52c163-a593-4cc1-af59-23367d1e7573 created image kubernetes/csi-vol-7b50e081-a053-11ec-b2dd-fa163ed7971b backed for request name pvc-4e52c163-a593-4cc1-af59-23367d1e7573
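The long VolumeId in these logs is ceph-csi's composite volume handle. As far as I can tell it encodes [4-hex version]-[4-hex clusterID length]-[clusterID]-[16-hex pool ID]-[image UUID], and the rbd image name is the UUID prefixed with csi-vol-; that layout matches the values above. A hedged decoding sketch (informational only, not a stable API):

package main

import (
	"fmt"
	"strconv"
)

// decodeVolumeID splits a ceph-csi composite volume handle. The layout is
// inferred from ceph-csi's ID encoding and checked against the logs above;
// errors are ignored for brevity.
func decodeVolumeID(id string) {
	version := id[0:4]                                // "0001"
	clusterLen, _ := strconv.ParseInt(id[5:9], 16, 0) // "0024" -> 36 chars
	clusterID := id[10 : 10+clusterLen]               // ceph cluster fsid
	rest := id[10+clusterLen+1:]                      // "<pool-id>-<uuid>"
	poolID, _ := strconv.ParseInt(rest[:16], 16, 64)  // "0000000000000005" -> 5
	uuid := rest[17:]
	fmt.Println(version, clusterID, poolID, "csi-vol-"+uuid)
}

func main() {
	decodeVolumeID("0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-7b50e081-a053-11ec-b2dd-fa163ed7971b")
	// -> 0001 4a9e463a-4853-4237-a5c5-9ae9d25bacda 5 csi-vol-7b50e081-a053-11ec-b2dd-fa163ed7971b
}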

# Delete operation: logs from deleting the pvc
$ kubectl delete -f raw-block-pvc.yaml
$ kubectl logs -f --tail=100 csi-rbdplugin-provisioner-5cc9f558c7-d2stz -c csi-provisioner
$ kubectl logs -f --tail=100 csi-rbdplugin-provisioner-5cc9f558c7-d2stz -c csi-rbdplugin

# csi-provisioner log: receives the pvc-delete request and issues the call to delete the pv
I0310 09:11:52.301723       1 controller.go:1413] delete "pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee": started
I0310 09:11:52.306652       1 connection.go:183] GRPC call: /csi.v1.Controller/DeleteVolume
I0310 09:11:52.306671       1 connection.go:184] GRPC request: {"secrets":"***stripped***","volume_id":"0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b"}
I0310 09:11:53.088151       1 connection.go:186] GRPC response: {}
I0310 09:11:53.088204       1 connection.go:187] GRPC error: <nil>
I0310 09:11:53.088220       1 controller.go:1428] delete "pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee": volume deleted
I0310 09:11:53.098260       1 controller.go:1478] delete "pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee": persistentvolume deleted
I0310 09:11:53.098290       1 controller.go:1483] delete "pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee": succeeded
I0310 09:11:54.915543       1 leaderelection.go:278] successfully renewed lease default/rbd-csi-ceph-com


# csi-rbdplugin log: receives the delete-storage call and successfully deletes the volume via rbd
I0310 09:11:52.390569       1 rbd_util.go:644] ID: 1598 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b rbd: delete csi-vol-b8f108b8-a022-11ec-b2dd-fa163ed7971b-temp using mon 172.20.163.52:6789,172.20.163.52:6789,172.20.163.52:6789, pool kubernetes
I0310 09:11:52.394786       1 controllerserver.go:947] ID: 1598 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b deleting image csi-vol-b8f108b8-a022-11ec-b2dd-fa163ed7971b
I0310 09:11:52.394815       1 rbd_util.go:644] ID: 1598 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b rbd: delete csi-vol-b8f108b8-a022-11ec-b2dd-fa163ed7971b using mon 172.20.163.52:6789,172.20.163.52:6789,172.20.163.52:6789, pool kubernetes
ask to remove image "kubernetes/csi-vol-b8f108b8-a022-11ec-b2dd-fa163ed7971b" with id "11cbf1b83337" from trash
I0310 09:11:53.087702       1 omap.go:123] ID: 1598 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b removed omap keys (pool="kubernetes", namespace="", name="csi.volumes.default"): [csi.volume.pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee]
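The last line shows the journal ceph-csi keeps in RADOS omap: the csi.volumes.default object in the pool maps each PV name to its image UUID, which is how create/delete requests stay idempotent. To inspect that mapping yourself, a sketch using the go-ceph bindings (the ceph user and pool names are taken from this cluster's logs and are otherwise assumptions):

package main

import (
	"fmt"
	"log"

	"github.com/ceph/go-ceph/rados"
)

func main() {
	// Connect as the same ceph user the csi driver uses (assumption: "kubernetes").
	conn, err := rados.NewConnWithUser("kubernetes")
	if err != nil {
		log.Fatal(err)
	}
	if err := conn.ReadDefaultConfigFile(); err != nil { // /etc/ceph/ceph.conf
		log.Fatal(err)
	}
	if err := conn.Connect(); err != nil {
		log.Fatal(err)
	}
	defer conn.Shutdown()

	ioctx, err := conn.OpenIOContext("kubernetes") // the rbd pool from the logs
	if err != nil {
		log.Fatal(err)
	}
	defer ioctx.Destroy()

	// List the csi.volume.* omap keys ceph-csi maintains on csi.volumes.default.
	vals, err := ioctx.GetOmapValues("csi.volumes.default", "", "csi.volume.", 100)
	if err != nil {
		log.Fatal(err)
	}
	for pv, uuid := range vals {
		fmt.Printf("%s -> csi-vol-%s\n", pv, uuid)
	}
}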

How csi handles a pod's rbd storage on pod create and delete

# Create operation: mount rbd storage into a pod and follow the log [viewed on the node]
$ kubectl create -f raw-block-pod.yaml
$ kubectl logs -f --tail=10 csi-rbdplugin-9tfnm -c csi-rbdplugin

# csi creates the block device via rbd map, mapping the image to /dev/rbd0 on the host
I0310 08:31:22.769929   11099 cephcmds.go:63] ID: 1900 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b command succeeded: rbd [--id kubernetes -m 172.20.163.52:6789,172.20.163.52:6789,172.20.163.52:6789 --keyfile=***stripped*** map kubernetes/csi-vol-b8f108b8-a022-11ec-b2dd-fa163ed7971b --device-type krbd --options noudev]
I0310 08:31:22.769972   11099 nodeserver.go:391] ID: 1900 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b rbd image: kubernetes/csi-vol-b8f108b8-a022-11ec-b2dd-fa163ed7971b was successfully mapped at /dev/rbd0

# Format the /dev/rbd0 block device as ext4 and successfully mount it for the pod
I0310 08:31:22.823277   11099 mount_linux.go:376] Checking for issues with fsck on disk: /dev/rbd0
I0310 08:31:22.894904   11099 mount_linux.go:477] Attempting to mount disk /dev/rbd0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee/globalmount/0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b
I0310 08:31:22.894960   11099 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o _netdev,discard,defaults /dev/rbd0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee/globalmount/0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b)
I0310 08:31:22.997909   11099 resizefs_linux.go:124] ResizeFs.needResize - checking mounted volume /dev/rbd0
I0310 08:31:23.000412   11099 resizefs_linux.go:128] Ext size: filesystem size=2147483648, block size=4096
I0310 08:31:23.000433   11099 resizefs_linux.go:140] Volume /dev/rbd0: device size=2147483648, filesystem size=2147483648, block size=4096
I0310 08:31:23.000502   11099 nodeserver.go:351] ID: 1900 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b rbd: successfully mounted volume 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b to stagingTargetPath /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee/globalmount/0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b
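The mount_linux.go and resizefs_linux.go lines come from the k8s.io/mount-utils library that ceph-csi uses. The fsck, format-if-empty, then mount sequence in the log corresponds roughly to the following call (staging path abbreviated):

package main

import (
	"log"

	mount "k8s.io/mount-utils"
	utilexec "k8s.io/utils/exec"
)

func main() {
	// SafeFormatAndMount runs fsck, formats the device only if it has no
	// filesystem yet, then mounts it: the behaviour visible in the log.
	mounter := &mount.SafeFormatAndMount{
		Interface: mount.New(""), // default mount(8) wrapper
		Exec:      utilexec.New(),
	}
	staging := "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/<pv-name>/globalmount/<volume-id>"
	opts := []string{"_netdev", "discard", "defaults"} // mount options from the log
	if err := mounter.FormatAndMount("/dev/rbd0", staging, "ext4", opts); err != nil {
		log.Fatal(err)
	}
}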

# Delete operation: delete a pod with rbd storage mounted and follow the log [viewed on the node]
$ kubectl delete pod pod-with-raw-block-volume
$ kubectl logs -f --tail=10 csi-rbdplugin-9tfnm -c csi-rbdplugin

I0310 07:57:38.477003   11099 mount_linux.go:294] Unmounting /var/lib/kubelet/pods/b81968d7-1f46-4076-8f90-36c2b1e2ea86/volumes/kubernetes.io~csi/pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee/mount
# umount the volume from the pod directory
I0310 07:57:38.485433   11099 nodeserver.go:864] ID: 1862 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b rbd: successfully unbound volume 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b from /var/lib/kubelet/pods/b81968d7-1f46-4076-8f90-36c2b1e2ea86/volumes/kubernetes.io~csi/pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee/mount
# rbd unmap the block device on the host
I0310 07:57:38.777236   11099 cephcmds.go:63] ID: 1864 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b command succeeded: rbd [unmap kubernetes/csi-vol-b8f108b8-a022-11ec-b2dd-fa163ed7971b --device-type krbd --options noudev]
I0310 07:57:38.777270   11099 nodeserver.go:977] ID: 1864 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b successfully unmapped volume (0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b)
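Teardown is the mirror image of staging. A sketch of the unstage side, assuming k8s.io/mount-utils for the unmount and shelling out to rbd unmap exactly as the log shows:

package main

import (
	"log"
	"os/exec"

	mount "k8s.io/mount-utils"
)

func main() {
	staging := "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/<pv-name>/globalmount/<volume-id>"

	// Unmount the staging path and clean it up if it is a mount point.
	if err := mount.CleanupMountPoint(staging, mount.New(""), false); err != nil {
		log.Fatal(err)
	}
	// Then release the kernel rbd device, mirroring the "rbd unmap" in the log.
	out, err := exec.Command("rbd", "unmap",
		"kubernetes/csi-vol-b8f108b8-a022-11ec-b2dd-fa163ed7971b",
		"--device-type", "krbd", "--options", "noudev").CombinedOutput()
	if err != nil {
		log.Fatalf("rbd unmap: %v: %s", err, out)
	}
}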


