How to Install K8S on CentOS 8

source link: http://ocdman.github.io/2020/08/24/CentOS8%E4%B8%8B%E5%A6%82%E4%BD%95%E5%AE%89%E8%A3%85K8S/

What is K8S

K8S is short for Kubernetes, Greek for "helmsman". It is a system for managing large container clusters. Its predecessor is Google's internal Borg system, so K8S can be thought of as an open-source version of Borg; Google open-sourced it in 2014.

What is K8S for

K8S automates the deployment, scaling, and maintenance of container clusters, and is commonly used to manage Docker containers.

K8S Architecture

A K8S system is usually called a K8S cluster. The cluster consists of two parts: Master nodes and Node nodes. Master nodes handle management and control; Node nodes carry the workloads and run the containers.

Users reach the Master node through the API (the UI is essentially an API client) or the command line. Based on the requests it receives, the Master adds, updates, and deletes containers on the Nodes.

The environment used in this guide:

  • CentOS 8
  • Kernel version 4.18.0-193.6.3.el8_2.x86_64
  • OS version CentOS Linux release 8.2.2004 (Core)

For convenience, the commands below are run as the root user; if you are not root, prefix them with sudo.
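If you wrap these steps in a setup script, a small guard at the top can catch the non-root case early; a minimal sketch:

```shell
# warn early when the shell is not root; the commands in this guide assume root
if [ "$(id -u)" -ne 0 ]; then
    echo "not running as root: prefix the commands below with sudo" >&2
fi
```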

Check the system version

cat /etc/centos-release

Disable Swap

Turn off swap and comment out the swap entry in /etc/fstab

swapoff -a
vim /etc/fstab

# /etc/fstab
# Created by anaconda on Tue Mar 31 22:44:34 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cl-root / xfs defaults 0 0
UUID=5fecb240-379b-4331-ba04-f41338e81a6e /boot ext4 defaults 1 2
/dev/mapper/cl-home /home xfs defaults 0 0
#/dev/mapper/cl-swap swap swap defaults 0 0
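To confirm that swap is really off and will stay off after a reboot, you can check both the kernel and fstab; a quick sketch (the grep pattern is just one way to spot an uncommented swap entry):

```shell
# no output from swapon means no swap device is currently active
swapon --show 2>/dev/null || true

# a swap line in fstab that is NOT commented out will re-enable swap on reboot
fstab=/etc/fstab
if grep -qE '^[^#].*[[:space:]]swap[[:space:]]' "$fstab" 2>/dev/null; then
    echo "warning: active swap entry still present in $fstab" >&2
fi
```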

Configure the Hostname

vim /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
# add this line
192.168.20.238 master01

Make the hostname take effect (logging out and back in also works)

hostnamectl set-hostname master01
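You can then verify both pieces together; a quick check, assuming the master01 name used above:

```shell
# confirm the new hostname took effect
hostname

# confirm master01 resolves via /etc/hosts
grep -w master01 /etc/hosts || echo "warning: master01 is not in /etc/hosts" >&2
```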

Install Docker-CE

Given download speeds, it is recommended to replace the stock repositories with the Aliyun mirror

cd /etc/yum.repos.d/
mv CentOS-Base.repo CentOS-Base.repo.bak
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo

Install the utility packages and add the Docker-CE repository

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Run the docker-ce install command

yum -y install docker-ce

If you hit the following error during installation

Error:
Problem: package docker-ce-3:19.03.8-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed

  • cannot install the best candidate for the job
  • package containerd.io-1.2.10-3.2.el7.x86_64 is excluded
  • package containerd.io-1.2.13-3.1.el7.x86_64 is excluded
  • package containerd.io-1.2.2-3.3.el7.x86_64 is excluded
  • package containerd.io-1.2.2-3.el7.x86_64 is excluded
  • package containerd.io-1.2.4-3.1.el7.x86_64 is excluded
  • package containerd.io-1.2.5-3.1.el7.x86_64 is excluded
  • package containerd.io-1.2.6-3.3.el7.x86_64 is excluded
    (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

This happens because docker-ce-3:19.03.8-3.el7.x86_64 requires containerd.io >= 1.2.2-3. Install a newer containerd.io manually

wget https://download.docker.com/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
yum install containerd.io-1.2.6-3.3.el7.x86_64.rpm

Then run the docker-ce install command again and it will succeed.

Run Docker

systemctl enable docker.service && systemctl start docker.service

Install Kubeadm

Use the Aliyun kubernetes repository

curl -s https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg > rpm-package-key.gpg

Create a new kubernetes.repo file

vim /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Install and start the kubernetes components

setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet

Open Firewall Ports

The default ports kubernetes needs are 443, 8443, 6443 and 10250; if a firewall is running, open them

firewall-cmd --permanent --zone=public --add-port=443/tcp
firewall-cmd --permanent --zone=public --add-port=8443/tcp
firewall-cmd --permanent --zone=public --add-port=6443/tcp
firewall-cmd --permanent --zone=public --add-port=10250/tcp
firewall-cmd --reload
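The four firewall-cmd calls can also be written as a loop over the port list, which is easier to extend later (e.g. for NodePort ranges); a sketch using the same ports as above, guarded so it is a no-op on hosts without firewalld:

```shell
# open each kubernetes port in the public zone, then reload the firewall
if command -v firewall-cmd >/dev/null 2>&1; then
    for port in 443 8443 6443 10250; do
        firewall-cmd --permanent --zone=public --add-port="${port}/tcp"
    done
    firewall-cmd --reload
fi
```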

For testing inside an internal network you can simply stop the firewall (do not do this in production)

systemctl stop firewalld.service

Set Docker's cgroup driver

Set Docker's cgroup driver to the recommended systemd

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

Restart the Docker service

systemctl daemon-reload
systemctl restart docker
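After the restart, you can confirm that Docker picked up the new driver; a quick check (guarded so it is a no-op where Docker is not installed):

```shell
# print the active cgroup driver; it should now report "systemd"
if command -v docker >/dev/null 2>&1; then
    docker info --format '{{.CgroupDriver}}'
fi
```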

Initialize the K8S Cluster

Generate K8S's default initialization configuration

kubeadm config print init-defaults > ./init-defaults.yaml

Edit the defaults and replace the default image repository k8s.gcr.io under imageRepository with the Aliyun mirror registry.cn-hangzhou.aliyuncs.com/google_containers

sed -i "s/k8s.gcr.io/registry.cn-hangzhou.aliyuncs.com\/google_containers/g" ./init-defaults.yaml

Change the localAPIEndpoint address to the Master node's IP, assumed here to be 192.168.20.238

sed -i "s/1.2.3.4/192.168.20.238/g" ./init-defaults.yaml
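Both sed edits can be verified with grep before running kubeadm; the snippet below demonstrates them against a minimal stand-in for init-defaults.yaml containing only the two fields the substitutions touch:

```shell
# minimal stand-in for the two fields the sed commands change
cat > /tmp/init-defaults.yaml <<'EOF'
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
imageRepository: k8s.gcr.io
EOF

# same substitutions as in the guide
sed -i "s/k8s.gcr.io/registry.cn-hangzhou.aliyuncs.com\/google_containers/g" /tmp/init-defaults.yaml
sed -i "s/1.2.3.4/192.168.20.238/g" /tmp/init-defaults.yaml

# show the rewritten fields
grep -E 'advertiseAddress|imageRepository' /tmp/init-defaults.yaml
```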

First run in dry-run mode to check that everything is ready

kubeadm init --config init-defaults.yaml --dry-run

If there are no errors, run the real initialization

kubeadm init --config init-defaults.yaml

On success, the output shows the commands needed the first time you use kubernetes

# To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubeadm also generates a command for joining Node nodes to this Master; write it down

kubeadm join 192.168.20.238:6443 --token h72we2.i3dzt7i6i9bjyko7 \
--discovery-token-ca-cert-hash sha256:c512028981b8503576e9d94bd76c026d5a71155b1615f3f8a5035ed0b9691bb2
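The token in the join command expires after 24 hours by default; if you lose the command or the token expires, `kubeadm token create --print-join-command` on the master prints a fresh one. The snippet below is only a sketch of pulling the token and CA hash back out of a saved join line (using the example values from above):

```shell
# on the master: regenerate the full join command when the token has expired
# kubeadm token create --print-join-command

# extract the token and CA cert hash from a saved join command
join_cmd='kubeadm join 192.168.20.238:6443 --token h72we2.i3dzt7i6i9bjyko7 --discovery-token-ca-cert-hash sha256:c512028981b8503576e9d94bd76c026d5a71155b1615f3f8a5035ed0b9691bb2'
token=$(echo "$join_cmd" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
ca_hash=$(echo "$join_cmd" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')
echo "token=$token"
echo "ca_hash=$ca_hash"
```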

If the initialization fails partway (e.g. missing dependencies), fix the errors and then continue with

kubeadm init --ignore-preflight-errors=all --config init-defaults.yaml

To reinstall kubeadm, run

kubeadm reset

When installing K8S on a cloud server, kubeadm init failed with

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:

- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

Check the error log with

journalctl -u kubelet

Dec 17 07:23:06 master01 kubelet[8677]: E1217 07:23:06.438404 8677 kubelet.go:2267] node "master01" not found
Dec 17 07:23:08 master01 kubelet[8677]: W1217 07:23:08.920952 8677 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Dec 17 07:23:09 master01 kubelet[8677]: E1217 07:23:09.668733 8677 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin

Adding --v=6 to the kubeadm init command produces more detailed logs

[kubelet-check] Initial timeout of 40s passed.
I1217 08:39:21.852678 20972 round_trippers.go:443] GET https://xx.xx.xx.xx:6443/healthz?timeout=32s in 30000 milliseconds

So the cause was that the port on the public IP could not be reached. Even after stopping the firewall and adding an inbound rule for port 6443 to the virtual machine's network, it still failed. The final fix: in init-defaults.yaml, change the cloud server's public IP to its private (internal) IP.

Check Node Status

To list the status of all nodes, run

kubectl get nodes

NAME       STATUS   ROLES    AGE     VERSION
master01   Ready    master   7d17h   v1.18.6

If the node's STATUS is Ready, the node is healthy.
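The STATUS column can also be checked from a script; the awk sketch below parses the sample output shown above (stored in a variable so it runs without a cluster; in a live cluster pipe the real kubectl get nodes into awk instead):

```shell
# sample output from 'kubectl get nodes'
nodes_output='NAME       STATUS   ROLES    AGE     VERSION
master01   Ready    master   7d17h   v1.18.6'

# print any node whose STATUS is not Ready
echo "$nodes_output" | awk 'NR > 1 && $2 != "Ready" { print $1 " is " $2 }'

# exit non-zero when at least one node is not Ready
echo "$nodes_output" | awk 'NR > 1 && $2 != "Ready" { bad = 1 } END { exit bad }' \
    && echo "all nodes Ready"
```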

To see the master node's details, run

kubectl describe node master01

If a node's STATUS is NotReady, the same command shows the details. The situation I ran into was

runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

The solution here is to deploy the weave network plugin.

sysctl net.bridge.bridge-nf-call-iptables=1
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Once the deployment finishes, check the node status again (the node may take a little while to become Ready)

kubectl get nodes

Install Kubernetes-dashboard

kubernetes-dashboard is the official web management UI for K8S; under the hood it still reads K8S state through the API.

First download the config file

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

Edit the config file to expose the Kubernetes-dashboard service via NodePort.

vim recommended.yaml

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  # add NodePort
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      # specify the nodePort
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard

Install kubernetes-dashboard from the config file

kubectl create -f recommended.yaml

List all Services and Pods

kubectl get svc --all-namespaces
kubectl get pod --all-namespaces

List the Services and Pods in the kubernetes-dashboard namespace

kubectl get svc -n kubernetes-dashboard
kubectl get pod -n kubernetes-dashboard

Because kubernetes-dashboard uses RBAC for access control, we must create a user and bind it to a suitable role before we can log in to kubernetes-dashboard.

Here we create a config file named k8s-dashboard-admin-user.yaml

vim k8s-dashboard-admin-user.yaml

# file contents
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kube-system

This config file creates a user named admin-user and binds it to the cluster-admin ClusterRole. The namespace: kube-system entries can also be changed to namespace: kubernetes-dashboard; I have verified both and they work.

Finally, logging in requires a token, which you can obtain with

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Note that the -n kube-system in the command above matches the namespace: kube-system in k8s-dashboard-admin-user.yaml. If you change the namespace to kubernetes-dashboard, change the command accordingly.

To change the kubernetes-dashboard configuration, edit recommended.yaml first and then run

kubectl apply -f recommended.yaml

To remove kubernetes-dashboard, run

kubectl delete -f recommended.yaml

Sometimes the config file has been deleted by accident, and when you delete the corresponding service and pod the pod keeps coming back, which means some dependent resources were not removed. In that case delete them one by one

# list everything related to kubernetes-dashboard in a namespace (here kube-system)
kubectl get secret,sa,role,rolebinding,services,deployments --namespace=kube-system | grep dashboard
# delete the dependencies one by one
kubectl delete deployment kubernetes-dashboard --namespace=kube-system
kubectl delete service kubernetes-dashboard --namespace=kube-system
kubectl delete role kubernetes-dashboard-minimal --namespace=kube-system
kubectl delete rolebinding kubernetes-dashboard-minimal --namespace=kube-system
kubectl delete sa kubernetes-dashboard --namespace=kube-system
kubectl delete secret kubernetes-dashboard-certs --namespace=kube-system
kubectl delete secret kubernetes-dashboard-key-holder --namespace=kube-system
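The per-resource deletes can be collapsed into a loop; a sketch assuming the same kube-system namespace and resource names as above ('|| true' keeps the loop going when a resource is already gone):

```shell
# delete each dashboard-related resource in turn
ns=kube-system
for res in deployment/kubernetes-dashboard service/kubernetes-dashboard \
           role/kubernetes-dashboard-minimal rolebinding/kubernetes-dashboard-minimal \
           sa/kubernetes-dashboard secret/kubernetes-dashboard-certs \
           secret/kubernetes-dashboard-key-holder; do
    kubectl delete "$res" --namespace="$ns" || true
done
```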
