
Kubernetes (k8s) Controllers (II): DaemonSet


1. System Environment

Server OS version                      Docker version            Kubernetes (k8s) cluster version   CPU architecture
CentOS Linux release 7.4.1708 (Core)   Docker version 20.10.12   v1.21.9                            x86_64

Kubernetes cluster architecture: k8scloude1 is the master node; k8scloude2 and k8scloude3 are worker nodes.

Server                       OS version                             CPU architecture   Processes                                                                                              Role
k8scloude1/192.168.110.130   CentOS Linux release 7.4.1708 (Core)   x86_64             docker,kube-apiserver,etcd,kube-scheduler,kube-controller-manager,kubelet,kube-proxy,coredns,calico   k8s master node
k8scloude2/192.168.110.129   CentOS Linux release 7.4.1708 (Core)   x86_64             docker,kubelet,kube-proxy,calico                                                                       k8s worker node
k8scloude3/192.168.110.128   CentOS Linux release 7.4.1708 (Core)   x86_64             docker,kubelet,kube-proxy,calico                                                                       k8s worker node

Using a DaemonSet requires a working Kubernetes cluster. For installing and deploying a Kubernetes (k8s) cluster, see the blog post 《Centos7 安装部署Kubernetes(k8s)集群》 at https://www.cnblogs.com/renshengdezheli/p/16686769.html.

3. DaemonSet Overview

A DaemonSet ensures that all (or some) nodes run a copy of a Pod. When nodes join the cluster, a Pod is added for them; when nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet deletes all the Pods it created.

Typical DaemonSet use cases:

  • running a cluster daemon on every node
  • running a log collection daemon on every node
  • running a monitoring daemon on every node

The simplest usage is one DaemonSet, covering all nodes, for each type of daemon. A slightly more complex setup runs multiple DaemonSets for the same kind of daemon, each with different flags and with different memory and CPU requests for different hardware types.
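As a rough sketch of that second pattern (all names, labels, and resource numbers below are illustrative assumptions, not taken from this article), two DaemonSets for the same daemon might differ only in their node selector and resource requests:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent-standard
spec:
  selector:
    matchLabels:
      app: node-agent
      tier: standard
  template:
    metadata:
      labels:
        app: node-agent
        tier: standard
    spec:
      #assumed node label marking ordinary nodes
      nodeSelector:
        hardware: standard
      containers:
      - image: nginx        #placeholder image, as in the examples below
        name: agent
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent-highmem
spec:
  selector:
    matchLabels:
      app: node-agent
      tier: highmem
  template:
    metadata:
      labels:
        app: node-agent
        tier: highmem
    spec:
      #assumed node label marking high-memory nodes
      nodeSelector:
        hardware: highmem
      containers:
      - image: nginx        #placeholder image
        name: agent
        resources:
          requests:
            cpu: 500m
            memory: 1Gi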

DaemonSets come up frequently when deploying software onto a Kubernetes (k8s) cluster.

A Deployment controller normally requires a replica count. A DaemonSet (ds for short) needs no replica count: it adapts to the number of nodes and automatically creates one pod on each node. For details on the Deployment controller, see the blog post 《Kubernetes(k8s)控制器(一):deployment》.

4. Creating a DaemonSet

4.1 Create a DaemonSet that runs a pod on every worker node in the k8s cluster

Create a directory for the DaemonSet YAML files and a namespace:

[root@k8scloude1 ~]# mkdir daemonset

[root@k8scloude1 ~]# cd daemonset/

[root@k8scloude1 daemonset]# pwd
/root/daemonset

[root@k8scloude1 daemonset]# kubectl create ns daemonset
namespace/daemonset created

Switch to the new namespace:

[root@k8scloude1 daemonset]# kubens daemonset
Context "kubernetes-admin@kubernetes" modified.
Active namespace is "daemonset".

[root@k8scloude1 daemonset]# kubectl get pod
No resources found in daemonset namespace.
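
kubens is a helper from the kubectx project; if it is not installed, the same namespace switch can be done with plain kubectl:

kubectl config set-context --current --namespace=daemonset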

A DaemonSet cannot be generated directly from a kubectl create command, so we generate a Deployment YAML file and modify it into a DaemonSet.

Generate the Deployment YAML file:

[root@k8scloude1 daemonset]# kubectl create deployment ds --image=nginx --dry-run=client -o yaml >ds.yaml

[root@k8scloude1 daemonset]# cat ds.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: ds
  name: ds
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ds
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ds
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}

To turn the Deployment YAML file into a DaemonSet YAML file:

  1. change the kind;
  2. a DaemonSet has no replica count, so remove replicas;
  3. strategy sets a Deployment's rolling-update strategy and is not needed either, so remove it;
  4. remove status.

Edit the file to produce the DaemonSet configuration:

[root@k8scloude1 daemonset]# vim ds.yaml 

[root@k8scloude1 daemonset]# cat ds.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  creationTimestamp: null
  labels:
    app: ds
  name: ds
spec:
  selector:
    matchLabels:
      app: ds
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ds
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - image: nginx
        name: nginx
        #Image pull policy: IfNotPresent means the image is not pulled if it already exists locally
        imagePullPolicy: IfNotPresent
        resources: {}

Create the DaemonSet:

[root@k8scloude1 daemonset]# kubectl apply -f ds.yaml 
daemonset.apps/ds created

Check the DaemonSet; it reports 2 pods:

[root@k8scloude1 daemonset]# kubectl get ds
NAME   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ds     2         2         2       2            2           <none>          20s

[root@k8scloude1 daemonset]# kubectl get ds -o wide
NAME   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES   SELECTOR
ds     2         2         2       2            2           <none>          24s   nginx        nginx    app=ds

Check the pods: the DaemonSet created one pod on each worker node, but none on the master node, because the master node carries a taint (see the toleration sketch after the output below).

[root@k8scloude1 daemonset]# kubectl get pod -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
ds-hvn89   1/1     Running   0          47s   10.244.251.222   k8scloude3   <none>           <none>
ds-qfq58   1/1     Running   0          47s   10.244.112.160   k8scloude2   <none>           <none>
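
If the DaemonSet should also run on the master, a toleration for the master taint can be added to the pod template. A minimal sketch (verify the actual taint key on your master with kubectl describe node k8scloude1 | grep -i taint; the keys below are the usual kubeadm ones):

spec:
  template:
    spec:
      tolerations:
      #tolerate the standard master taint (key used in k8s v1.21-era clusters)
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      #newer clusters taint the master with the control-plane key instead
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule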

Delete the DaemonSet; its pods are deleted along with it:

[root@k8scloude1 daemonset]# kubectl delete ds ds 
daemonset.apps "ds" deleted

[root@k8scloude1 daemonset]# kubectl get pod -o wide
No resources found in daemonset namespace.

4.2 Create a DaemonSet that runs Pods only on certain nodes

Check the node labels; --show-labels displays the labels:

[root@k8scloude1 daemonset]# kubectl get nodes --show-labels
NAME         STATUS   ROLES                  AGE   VERSION   LABELS
k8scloude1   Ready    control-plane,master   17d   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8scloude2   Ready    <none>                 17d   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude2,kubernetes.io/os=linux,taint=T
k8scloude3   Ready    <none>                 17d   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude3,kubernetes.io/os=linux

Only the k8scloude3 node matches the label kubernetes.io/hostname=k8scloude3:

[root@k8scloude1 daemonset]# kubectl get node -l kubernetes.io/hostname=k8scloude3
NAME         STATUS   ROLES    AGE   VERSION
k8scloude3   Ready    <none>   17d   v1.21.0
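
The built-in hostname label works, but a custom label is often more flexible; the disktype=ssd key below is only an illustrative example, not a label that exists on this cluster:

#label one or more nodes with a custom key
kubectl label nodes k8scloude3 disktype=ssd

#then select on that key in the DaemonSet pod template instead of the hostname
nodeSelector:
  disktype: ssd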

Configure the DaemonSet with a node selector so its pods run only on nodes labeled kubernetes.io/hostname=k8scloude3:

[root@k8scloude1 daemonset]# vim ds.yaml 

#Make the pods run only on nodes labeled kubernetes.io/hostname=k8scloude3
[root@k8scloude1 daemonset]# cat ds.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  creationTimestamp: null
  labels:
    app: ds
  name: ds
spec:
  selector:
    matchLabels:
      app: ds
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ds
    spec:
      terminationGracePeriodSeconds: 0
      #nodeSelector restricts the pods to nodes labeled kubernetes.io/hostname=k8scloude3
      nodeSelector:
        kubernetes.io/hostname: k8scloude3
      containers:
      - image: nginx
        name: nginx
        imagePullPolicy: IfNotPresent
        resources: {}

Apply the updated DaemonSet:

[root@k8scloude1 daemonset]# kubectl apply -f ds.yaml 
daemonset.apps/ds configured

[root@k8scloude1 daemonset]# kubectl get ds
NAME   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                       AGE
ds     1         1         1       1            1           kubernetes.io/hostname=k8scloude3   2m59s

The pod runs only on the k8scloude3 node:

[root@k8scloude1 daemonset]# kubectl get pod -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
ds-2l5hr   1/1     Running   0          24s   10.244.251.230   k8scloude3   <none>           <none>

Delete the DaemonSet:

[root@k8scloude1 daemonset]# kubectl delete ds ds 
daemonset.apps "ds" deleted

[root@k8scloude1 daemonset]# kubectl get ds
No resources found in daemonset namespace.

List the DaemonSets in the kube-system namespace:

[root@k8scloude1 daemonset]# kubectl get ds -n kube-system
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
calico-node   3         3         3       3            3           kubernetes.io/os=linux   17d
kube-proxy    3         3         3       3            3           kubernetes.io/os=linux   17d

You can also dump the kube-proxy DaemonSet configuration for inspection:

[root@k8scloude1 daemonset]# kubectl get ds kube-proxy -n kube-system -o yaml > kube-proxy.yaml

[root@k8scloude1 daemonset]# vim kube-proxy.yaml 
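
Unlike the ds example above, these system DaemonSets also run a pod on the master node, because their pod templates typically carry tolerations for the master taint; this can be checked with a jsonpath query (output varies by cluster):

kubectl get ds kube-proxy -n kube-system -o jsonpath='{.spec.template.spec.tolerations}{"\n"}'
kubectl get ds calico-node -n kube-system -o jsonpath='{.spec.template.spec.tolerations}{"\n"}'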
