Deploying Apache ZooKeeper on Kubernetes - DockOne.io

source link: http://dockone.io/article/2434321

With the broad shift toward cloud native, our infrastructure components also need to move onto Kubernetes step by step. Apache ZooKeeper, currently the most popular distributed coordination component, plays the role of the registry in our microservice architecture. Running a ZooKeeper cluster on Kubernetes makes a lot of sense, because we can take advantage of Kubernetes' native elastic scaling and high-availability features.

Deploying ZooKeeper with a StatefulSet

The Kubernetes documentation provides an official way to run ZooKeeper with a StatefulSet. It creates a headless Service, a client (cluster) Service, a PodDisruptionBudget, and a StatefulSet.
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: Always
        image: "k8s.gcr.io/kubernetes-zookeeper:1.0-3.4.10"
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi

Apply this manifest with kubectl apply; after a short while, the Pods and Services have all been created successfully.
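A minimal sketch of applying and verifying it, assuming the manifest above is saved as zookeeper.yaml (the file name is just an example):
kubectl apply -f zookeeper.yaml
kubectl get pods -l app=zk
kubectl get svc -l app=zk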

Let's check the status of the ZooKeeper nodes:
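One way to do this is a quick loop over the Pods (a sketch, assuming zkServer.sh is on the container's PATH in this image; otherwise use its full path under /opt/zookeeper/bin):
for i in 0 1 2; do kubectl exec zk-$i -- zkServer.sh status; done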

One big advantage of running ZooKeeper on Kubernetes is that scaling in and out is easy. Taking scaling out to 4 nodes as an example: run kubectl edit sts zk and change replicas to 4 as well as --servers=3 to --servers=4 in the container command. After the rolling update completes, the cluster has grown to 4 nodes.
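The replica count can also be changed non-interactively; a sketch using kubectl patch (the --servers flag in the start command still has to be edited to match, e.g. via kubectl edit sts zk):
kubectl patch sts zk --type='json' -p='[{"op": "replace", "path": "/spec/replicas", "value": 4}]'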

Deploying ZooKeeper with a Kubernetes Operator

Besides the StatefulSet approach, we can also deploy with a Kubernetes Operator. For now we can use the Operator provided by pravega as a reference.

First, create the custom resource definition (CRD) ZookeeperCluster:
kubectl create -f deploy/crds

Next, create the RBAC-related objects, including the ServiceAccount, Role, and RoleBinding. Note that rbac.yaml needs a small change: if your current namespace is not default, remove the namespace: default field, otherwise authorization will fail.
kubectl create -f deploy/default_ns/rbac.yaml
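For illustration, the part to adjust is the RoleBinding's subjects block; an excerpt along these lines (the exact contents may differ by operator version):
subjects:
- kind: ServiceAccount
  name: zookeeper-operator
  # namespace: default   # remove this line when installing into another namespace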

Then create a Deployment for the Operator:
kubectl create -f deploy/default_ns/operator.yaml

We can see that the Operator has been created:
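A quick check, assuming the Deployment defined in operator.yaml is named zookeeper-operator:
kubectl get deployment zookeeper-operator
kubectl get pods | grep zookeeper-operator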

Next, we just write a CR of our own:
apiVersion: zookeeper.pravega.io/v1beta1
kind: ZookeeperCluster
metadata:
  name: zookeeper
spec:
  replicas: 3
  image:
    repository: pravega/zookeeper
    tag: 0.2.9
  storageType: persistence
  persistence:
    reclaimPolicy: Delete
    spec:
      storageClassName: "rbd"
      resources:
        requests:
          storage: 8Gi

Here storageClassName is set to rbd to match our self-hosted cluster. A short while after applying the CR, we can see that the ZooKeeper cluster has been created.
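To confirm, the CR and its Pods can be listed (the zk short name is the one used by the CRD; the pod label here is an assumption based on the cluster name and may differ by operator version):
kubectl get zk zookeeper
kubectl get pods -l app=zookeeper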

Scaling is also very convenient. Again taking scaling out to 4 nodes as an example, just patch the CR we created:
kubectl patch zk zookeeper --type='json' -p='[{"op": "replace", "path": "/spec/replicas", "value":4}]'

Deploying ZooKeeper with Kubernetes KUDO

KUDO (Kubernetes Universal Declarative Operator) is a toolkit for assembling Kubernetes Operators, and it is also an officially recommended approach.

First, install KUDO; on macOS:
brew install kudo

After installation, initialize it:
kubectl kudo init

At this point we can see that the KUDO controller has been installed:
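For example (kubectl kudo init installs its controller into the kudo-system namespace by default):
kubectl get pods -n kudo-system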

Then simply install ZooKeeper (KUDO ships a built-in ZooKeeper operator); note that here we likewise set the storage class to rbd.
kubectl kudo install zookeeper --instance=zookeeper-instance -p STORAGE_CLASS=rbd
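The deployment progress can then be followed with KUDO's plan status command, for example:
kubectl kudo plan status --instance=zookeeper-instance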


Scaling is just as convenient:
kubectl kudo update --instance=zookeeper-instance -p NODE_COUNT=4


Original article (in Chinese): https://fredal.xin/deploy-zk-with-k8s
