Kubernetes on Raspberry Pi, Part 2: Building the Cluster with kubeadm

The previous article covered the preparation for building a k8s cluster out of four Raspberry Pis. With that groundwork done, everything is now in place.

kubeadm is the cluster bootstrapping tool officially recommended by Kubernetes. Compared with installing from binaries, kubeadm takes care of a great deal for us, such as environment (preflight) checks, certificate generation, and container creation. It dramatically reduces the manual work and makes it possible to create a highly available Kubernetes cluster with just a few commands. As of this writing (2020-06-30), kubeadm is considered ready for production use.
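
kubeadm's own init output (shown later in this post) even hints at one of those conveniences: the control-plane images can be pre-pulled so that init itself does not stall on slow downloads. A minimal sketch, assuming the initConfig.yaml generated in the next step:

kubeadm config images pull --config initConfig.yaml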

(I read a lot of documentation and tutorials beforehand. Since the k8s ecosystem is so vast, what follows only records the steps I took; it does not explain every configuration option and its underlying principles in detail.)

Initializing the Master Node

With limited hardware, I am building a single-master cluster. This carries a major risk: if the master node goes down, the whole cluster goes down with it, so this setup is not suitable for production.

Generate the default configuration file for kubeadm init:

kubeadm config print init-defaults > initConfig.yaml
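
kubeadm can also print the defaults for component configs; a quick way to see every KubeProxyConfiguration field before editing (the flag below is a standard kubeadm flag, not specific to this setup):

kubeadm config print init-defaults --component-configs KubeProxyConfiguration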

Edit initConfig.yaml; the main changes are:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.10.100 # the master node's IP
  bindPort: 6443 # default port 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: rpi-0 # node name
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: rpi-kubernetes # cluster name
controlPlaneEndpoint: rpi-k8s-endpoint:6443 # a hostname; resolve it to the master IP in /etc/hosts on every Pi
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.21.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16 # set the pod CIDR
scheduler: {}

# add a KubeProxyConfiguration to put kube-proxy in ipvs mode
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
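
Since controlPlaneEndpoint uses the hostname rpi-k8s-endpoint, every Pi must be able to resolve it. A minimal sketch, assuming 192.168.10.100 is the master IP as configured above (run on each Pi):

echo "192.168.10.100 rpi-k8s-endpoint" | sudo tee -a /etc/hosts

Using a hostname here instead of a raw IP leaves the door open to pointing the endpoint at a load balancer later, should the cluster ever gain extra control-plane nodes.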

Switch to the root user:

sudo su

Run kubeadm init --config initConfig.yaml --upload-certs to initialize the master node. Output similar to the following, with no errors, means the initialization succeeded:

root@RPi-0:/home/pi# kubeadm init --config initConfig.yaml --upload-certs
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
[WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local rpi-0 rpi-k8s-endpoint] and IPs [10.96.0.1 192.168.10.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost rpi-0] and IPs [192.168.10.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost rpi-0] and IPs [192.168.10.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 31.007002 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
5447c6ef959029b4064c89a998c6b65b0b1d3cca226523937ffef37bfb7089bb
[mark-control-plane] Marking the node rpi-0 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node rpi-0 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

kubeadm join rpi-k8s-endpoint:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:7cb4517d71659610973e226a5360f438d919e2a1135e43bbd9f2db8f86e27220 \
--control-plane --certificate-key 5447c6ef959029b4064c89a998c6b65b0b1d3cca226523937ffef37bfb7089bb

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join rpi-k8s-endpoint:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:7cb4517d71659610973e226a5360f438d919e2a1135e43bbd9f2db8f86e27220

Press Ctrl+D to exit the root shell and return to the pi user, then run the following so that a regular user can operate the cluster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
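
To confirm kubectl can now reach the API server, a quick sanity check (a standard kubectl command):

kubectl cluster-info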

Run kubectl get nodes to check the state of the cluster nodes:

pi@RPi-0:~ $ kubectl get nodes
NAME    STATUS   ROLES                  AGE     VERSION
rpi-0   Ready    control-plane,master   7m15s   v1.21.2
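
Since the init config switched kube-proxy to ipvs mode, it is worth verifying that the setting was picked up. Two ways to check (the ConfigMap named kube-proxy is created by kubeadm; ipvsadm has to be installed separately):

kubectl -n kube-system get configmap kube-proxy -o yaml | grep mode
sudo ipvsadm -Ln   # should list virtual servers once services exist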

Deploying a CNI Plugin

Before joining the worker nodes, a CNI plugin must be installed. There are many to choose from: Calico, Flannel, Weave, and so on. Calico is the most feature-rich, but in my testing its CPU usage on the Raspberry Pi was too high, so I went with the simpler Flannel.

First, download Flannel's YAML manifest:

wget https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml

Edit kube-flannel.yml. The changes are as follows:

...
  net-conf.json: |
    {
      "Network": "10.244.0.0/16", # must match the podSubnet set when initializing the master
      "Backend": {
        "Type": "host-gw" # all four Pis are on the same LAN, so Flannel's host-gw mode gives better network performance
      }
    }
...
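
host-gw works by programming plain kernel routes on every node, one per peer node's pod subnet, with the peer's LAN address as the gateway; there is no VXLAN encapsulation, which is why it is faster but requires all nodes to share the same L2 network. A sketch of what to expect once Flannel is running (the subnets and worker IPs below are illustrative, not taken from actual output):

ip route | grep 10.244
# 10.244.1.0/24 via 192.168.10.101 dev eth0
# 10.244.2.0/24 via 192.168.10.102 dev eth0
# 10.244.3.0/24 via 192.168.10.103 dev eth0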

Run kubectl apply -f ./kube-flannel.yml to deploy the Flannel network plugin. After a short wait, check the pod status:

pi@RPi-0:~ $ kubectl get pods -n kube-system
NAME                            READY   STATUS    RESTARTS   AGE
coredns-558bd4d5db-g759s        1/1     Running   2          22h
coredns-558bd4d5db-wzh68        1/1     Running   2          22h
etcd-rpi-0                      1/1     Running   2          22h
kube-apiserver-rpi-0            1/1     Running   2          22h
kube-controller-manager-rpi-0   1/1     Running   2          22h
kube-flannel-ds-4g46g           1/1     Running   1          21h
kube-flannel-ds-dwd5j           1/1     Running   1          21h
kube-flannel-ds-swvvr           1/1     Running   1          21h
kube-flannel-ds-t4h22           1/1     Running   3          21h
kube-proxy-4p546                1/1     Running   1          21h
kube-proxy-9vv9p                1/1     Running   1          21h
kube-proxy-l5lgs                1/1     Running   1          21h
kube-proxy-lhlql                1/1     Running   2          22h
kube-scheduler-rpi-0            1/1     Running   2          22h

Joining the Worker Nodes

With the network plugin in place, the remaining three Raspberry Pis can be added to the cluster as worker nodes.

On each of those Pis, switch to the root user and run the kubeadm join command (the complete command can be found in the output from initializing the master):

kubeadm join rpi-k8s-endpoint:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:7cb4517d71659610973e226a5360f438d919e2a1135e43bbd9f2db8f86e27220
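
Note that the bootstrap token was configured with ttl: 24h0m0s, so it expires after a day. If a node is joined later than that, a fresh token and join command can be generated on the master (a standard kubeadm command):

kubeadm token create --print-join-command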

With that, the other three Raspberry Pis have joined the Kubernetes cluster as worker nodes. On the master, list the nodes:

pi@RPi-0:~ $ kubectl get nodes
NAME    STATUS   ROLES                  AGE   VERSION
rpi-0   Ready    control-plane,master   22h   v1.21.2
rpi-1   Ready    <none>                 22h   v1.21.2
rpi-2   Ready    <none>                 22h   v1.21.2
rpi-3   Ready    <none>                 22h   v1.21.2

The worker nodes' ROLES show as <none>. You can set a role on them manually:

kubectl label nodes rpi-1 kubernetes.io/role=worker
kubectl label nodes rpi-2 kubernetes.io/role=worker
kubectl label nodes rpi-3 kubernetes.io/role=worker
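
kubectl derives the ROLES column from labels of the form kubernetes.io/role=<role> (or node-role.kubernetes.io/<role>), so after labeling, listing the nodes again should show worker for rpi-1 through rpi-3:

kubectl get nodes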

With all of the above done, a single-master, three-worker Kubernetes cluster is now running on the four Raspberry Pis. The basic environment is complete; the next step is to deploy applications on it and put the cluster to real use.
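
As a quick smoke test before moving on, a minimal sketch that schedules a few pods across the workers (the deployment name and image are just examples; the official nginx image has arm builds, so it runs on the Pi):

kubectl create deployment nginx --image=nginx --replicas=3
kubectl get pods -o wide            # the pods should land on rpi-1/2/3 and reach Running
kubectl delete deployment nginx     # clean up afterwards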