Background knowledge:
Official docs on renewing certificates: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/
Static Pods: https://kubernetes.io/zh-cn/docs/concepts/workloads/pods/#static-pods
How to create a static Pod: https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/static-pod/#static-pod-creation
The Kubernetes control plane: https://kubernetes.io/zh-cn/docs/concepts/overview/components/#control-plane-components
Meaning of the fileCheckFrequency value: https://kubernetes.io/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/
The PKI certificates involved: https://kubernetes.io/zh-cn/docs/setup/best-practices/certificates/
Check the kubeadm version
# Current version: v1.28.2
[root@k8s-master-01 manifests]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.2", GitCommit:"89a4ea3e1e4ddd7f7572286090359983e0387b2f", GitTreeState:"clean", BuildDate:"2023-09-13T09:34:32Z", GoVersion:"go1.20.8", Compiler:"gc", Platform:"linux/amd64"}
Check certificate expiration
# This command shows the expiration date / residual time of the client certificates in the /etc/kubernetes/pki directory and of the client certificates embedded in the kubeconfig files used by kubeadm (admin.conf, controller-manager.conf and scheduler.conf).
[root@k8s-master-01 ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Apr 18, 2025 02:53 UTC   364d            ca                      no
apiserver                  Apr 18, 2025 02:53 UTC   364d            ca                      no
apiserver-etcd-client      Apr 18, 2025 02:53 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Apr 18, 2025 02:53 UTC   364d            ca                      no
controller-manager.conf    Apr 18, 2025 02:53 UTC   364d            ca                      no
etcd-healthcheck-client    Apr 18, 2025 02:53 UTC   364d            etcd-ca                 no
etcd-peer                  Apr 18, 2025 02:53 UTC   364d            etcd-ca                 no
etcd-server                Apr 18, 2025 02:53 UTC   364d            etcd-ca                 no
front-proxy-client         Apr 18, 2025 02:53 UTC   364d            front-proxy-ca          no
scheduler.conf             Apr 18, 2025 02:53 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Apr 16, 2034 02:53 UTC   9y              no
etcd-ca                 Apr 16, 2034 02:53 UTC   9y              no
front-proxy-ca          Apr 16, 2034 02:53 UTC   9y              no
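If kubeadm is unavailable, or you want to cross-check a single certificate, the expiry can also be read straight from the certificate file with openssl. A minimal sketch; the helper name is my own, and the path in the usage comment assumes the default kubeadm layout under /etc/kubernetes/pki:

```shell
# cert_days_left: print how many whole days remain before a PEM certificate
# expires, computed from the certificate's own notAfter field.
cert_days_left() {
  local end
  end=$(openssl x509 -noout -enddate -in "$1" | cut -d= -f2)
  echo $(( ($(date -d "$end" +%s) - $(date +%s)) / 86400 ))
}

# On a control-plane node, for example:
# cert_days_left /etc/kubernetes/pki/apiserver.crt
```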
Since my cluster was only just created, the residual time of my certificates is:
364d
Back up the certificates before renewing them
In a multi-node control plane you only need to back up the certificates on the primary node; the other control-plane nodes share the same certificates.
If the cluster was created with kubeadm init, the certificates live in /etc/kubernetes/pki by default, plus the client certificates embedded in the kubeconfig files admin.conf, controller-manager.conf and scheduler.conf.
# Back up the certificates
[root@k8s-master-01 ~]# cp -r /etc/kubernetes /etc/kubernetes.bak
It is best to back up the etcd database as well; by default it is stored on the node at /var/lib/etcd:
cp -rp /var/lib/etcd /var/lib/etcd.bak
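A plain file copy of /var/lib/etcd can be inconsistent while etcd is still writing; a snapshot taken through etcdctl is safer. A hedged sketch only, and the helper name is my own; the endpoint and certificate paths below are the kubeadm defaults and may differ in your cluster:

```shell
# snapshot_etcd: take a consistent etcd snapshot via etcdctl.
# Endpoint and certificate paths are the kubeadm defaults; adjust as needed.
snapshot_etcd() {
  ETCDCTL_API=3 etcdctl snapshot save "${1:-/var/lib/etcd-snapshot.db}" \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key
}

# On a control-plane node:
# snapshot_etcd /var/lib/etcd-snapshot.db
```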
Renew the certificates
If you run an HA cluster, this command must be executed on all control-plane nodes.
kubeadm certs renew
can renew any individual certificate, or renew all of them at once with the all subcommand, as shown below:
[root@k8s-master-01 ~]# kubeadm certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
Restart the control-plane Pods
Official documentation: https://kubernetes.io/zh-cn/docs/concepts/overview/components/#control-plane-components
The control-plane Pods are:
- kube-apiserver
- etcd
- kube-scheduler
- kube-controller-manager
The official documentation at https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/ explains that after running:
kubeadm certs renew all
you must restart the control-plane Pods. This is required because dynamic certificate reload is currently not supported by all components and certificates. Static Pods are managed by the local kubelet rather than the API server, so kubectl cannot be used to delete or restart them. To restart a static Pod, you can temporarily move its manifest file out of /etc/kubernetes/manifests/ and wait 20 seconds (see the fileCheckFrequency value in the KubeletConfiguration struct). If the Pod is no longer in the manifest directory, the kubelet terminates it. After another fileCheckFrequency period you can move the file back; the kubelet then recreates the Pod, and the component's certificate renewal is complete.
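The 20 seconds mentioned above comes from the kubelet's fileCheckFrequency setting. On a kubeadm-provisioned node the kubelet configuration usually lives at /var/lib/kubelet/config.yaml; a fragment like the following (the values shown are the defaults) controls the static Pod path and the check interval:

```yaml
# Fragment of KubeletConfiguration (kubeadm default location: /var/lib/kubelet/config.yaml)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: /etc/kubernetes/manifests   # directory the kubelet watches for static Pod manifests
fileCheckFrequency: 20s                    # how often the kubelet re-checks that directory
```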
How to identify a static Pod: the Pod name carries the node's hostname as a suffix, joined with a hyphen.
[root@k8s-master-01 ~]# kubectl -n kube-system get pod
NAME READY STATUS RESTARTS AGE
coredns-66f779496c-frrdh 1/1 Running 0 3h23m
coredns-66f779496c-vrxsh 1/1 Running 0 3h23m
etcd-k8s-master-01 1/1 Running 0 3h23m
etcd-k8s-master-02 1/1 Running 0 3h23m
etcd-k8s-master-03 1/1 Running 0 3h18m
kube-apiserver-k8s-master-01 1/1 Running 0 3h23m
kube-apiserver-k8s-master-02 1/1 Running 0 3h23m
kube-apiserver-k8s-master-03 1/1 Running 7 3h18m
kube-controller-manager-k8s-master-01 1/1 Running 1 (3h23m ago) 3h23m
kube-controller-manager-k8s-master-02 1/1 Running 0 3h23m
kube-controller-manager-k8s-master-03 1/1 Running 1 3h18m
kube-proxy-6wxlc 1/1 Running 0 3h16m
kube-proxy-mftd2 1/1 Running 0 3h23m
kube-proxy-qb6sv 1/1 Running 0 3h18m
kube-proxy-rrz7d 1/1 Running 0 3h23m
kube-scheduler-k8s-master-01 1/1 Running 1 (3h23m ago) 3h23m
kube-scheduler-k8s-master-02 1/1 Running 0 3h23m
kube-scheduler-k8s-master-03 1/1 Running 2 3h18m
Looking at the control-plane Pods above, you can see that etcd, kube-apiserver, kube-controller-manager and kube-scheduler are all static Pods.
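Another way to confirm that a Pod is a static Pod's mirror (besides the hostname suffix) is to look at its ownerReference: a mirror Pod is owned by its Node object, while a regular Pod is owned by a controller such as a ReplicaSet. A sketch only; the helper name is my own, and it assumes working kubectl access:

```shell
# pod_owner_kind: print the kind of a Pod's first ownerReference.
# A static Pod's mirror is owned by its Node; a Deployment Pod by a ReplicaSet.
pod_owner_kind() {
  kubectl -n kube-system get pod "$1" -o jsonpath='{.metadata.ownerReferences[0].kind}'
}

# pod_owner_kind etcd-k8s-master-01        # prints: Node
# pod_owner_kind coredns-66f779496c-frrdh  # prints: ReplicaSet
```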
Static Pods are managed by the kubelet daemon. Static Pods defined by declarative YAML files are read from /etc/kubernetes/manifests by default:
[root@k8s-master-01 ~]# ll /etc/kubernetes/manifests/
total 16
-rw-------. 1 root root 2443 Apr 18 10:53 etcd.yaml
-rw-------. 1 root root 3400 Apr 18 10:53 kube-apiserver.yaml
-rw-------. 1 root root 2901 Apr 18 10:53 kube-controller-manager.yaml
-rw-------. 1 root root 1487 Apr 18 10:53 kube-scheduler.yaml
The kubelet daemon checks /etc/kubernetes/manifests every 20 seconds and maintains the state of the static Pods defined there. So we only need to move the control-plane YAML files out of /etc/kubernetes/manifests and wait about 20 seconds: once the kubelet no longer finds the YAML files, it deletes the Pods. After the deletion completes, move the YAML files back into /etc/kubernetes/manifests; within the next 20-second check interval the kubelet finds them again and recreates the Pods.
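Steps a–c below perform this sequence by hand; the same sequence can also be sketched as a small helper. A sketch only (the function name is my own); the directory and the 20-second wait match the kubeadm defaults:

```shell
# restart_static_pods: move static Pod manifests out of the watched
# directory for one fileCheckFrequency period, then move them back.
restart_static_pods() {
  local manifest_dir=$1 backup_dir=$2 wait_secs=${3:-20}
  mkdir -p "$backup_dir"
  mv "$manifest_dir"/*.yaml "$backup_dir"/
  sleep "$wait_secs"   # one check period: kubelet notices the manifests are gone and stops the Pods
  mv "$backup_dir"/*.yaml "$manifest_dir"/
}

# On each control-plane node:
# restart_static_pods /etc/kubernetes/manifests /tmp/manifests-backup
```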
a. Move the YAML files out of /etc/kubernetes/manifests
# On an HA cluster, execute this on all control-plane nodes
[root@k8s-master-01 ~]# mv /etc/kubernetes/manifests/*.yaml /tmp/
b. Wait 20 seconds, then check whether the containers have been deleted
Since kube-apiserver itself has been deleted, kubectl no longer works and we can only inspect the containers directly:
# List the containers
[root@k8s-master-01 ~]# crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
eb4590388247f 1575deaad3b05 4 hours ago Running kube-flannel 0 56d10600846d8 kube-flannel-ds-2t5rt
a3aa7ba373b44 c120fed2beb84 4 hours ago Running kube-proxy 0 9edafbe02a310 kube-proxy-rrz7d
You can see that the control-plane containers have been deleted.
c. Move the YAML files back into /etc/kubernetes/manifests/, wait 20 seconds, check whether the containers have been created, and then check the cluster status
# Execute on all control-plane nodes
mv /tmp/*.yaml /etc/kubernetes/manifests/
# List the containers; the static Pods have been recreated
[root@k8s-master-01 ~]# crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
8b671549ac68f cdcab12b2dd16 26 seconds ago Running kube-apiserver 0 50ebb554a1058 kube-apiserver-k8s-master-01
8f950f68f3ee1 55f13c92defb1 26 seconds ago Running kube-controller-manager 0 5d00a3dbfe10e kube-controller-manager-k8s-master-01
4808581b3db60 7a5d9d67a13f6 26 seconds ago Running kube-scheduler 0 b259f15380790 kube-scheduler-k8s-master-01
e7de7ac6c72f7 73deb9a3f7025 27 seconds ago Running etcd 0 6c45b979035a5 etcd-k8s-master-01
eb4590388247f 1575deaad3b05 4 hours ago Running kube-flannel 0 56d10600846d8 kube-flannel-ds-2t5rt
a3aa7ba373b44 c120fed2beb84 4 hours ago Running kube-proxy 0 9edafbe02a310 kube-proxy-rrz7d
# All Pods are running normally
[root@k8s-master-01 ~]# kubectl -n kube-system get pod
NAME READY STATUS RESTARTS AGE
coredns-66f779496c-frrdh 1/1 Running 0 3h44m
coredns-66f779496c-vrxsh 1/1 Running 0 3h44m
etcd-k8s-master-01 1/1 Running 0 57s
etcd-k8s-master-02 1/1 Running 0 3h44m
etcd-k8s-master-03 1/1 Running 0 3h39m
kube-apiserver-k8s-master-01 1/1 Running 0 57s
kube-apiserver-k8s-master-02 1/1 Running 0 3h44m
kube-apiserver-k8s-master-03 1/1 Running 0 3h39m
kube-controller-manager-k8s-master-01 1/1 Running 0 57s
kube-controller-manager-k8s-master-02 1/1 Running 0 3h44m
kube-controller-manager-k8s-master-03 1/1 Running 0 3h39m
kube-proxy-6wxlc 1/1 Running 0 3h38m
kube-proxy-mftd2 1/1 Running 0 3h44m
kube-proxy-qb6sv 1/1 Running 0 3h39m
kube-proxy-rrz7d 1/1 Running 0 3h44m
kube-scheduler-k8s-master-01 1/1 Running 0 57s
kube-scheduler-k8s-master-02 1/1 Running 0 3h44m
kube-scheduler-k8s-master-03 1/1 Running 0 3h39m
Verify that the certificates have been renewed
[root@k8s-master-01 ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Apr 18, 2025 06:03 UTC   364d            ca                      no
apiserver                  Apr 18, 2025 06:03 UTC   364d            ca                      no
apiserver-etcd-client      Apr 18, 2025 06:03 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Apr 18, 2025 06:03 UTC   364d            ca                      no
controller-manager.conf    Apr 18, 2025 06:03 UTC   364d            ca                      no
etcd-healthcheck-client    Apr 18, 2025 06:03 UTC   364d            etcd-ca                 no
etcd-peer                  Apr 18, 2025 06:03 UTC   364d            etcd-ca                 no
etcd-server                Apr 18, 2025 06:03 UTC   364d            etcd-ca                 no
front-proxy-client         Apr 18, 2025 06:03 UTC   364d            front-proxy-ca          no
scheduler.conf             Apr 18, 2025 06:03 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Apr 16, 2034 02:53 UTC   9y              no
etcd-ca                 Apr 16, 2034 02:53 UTC   9y              no
front-proxy-ca          Apr 16, 2034 02:53 UTC   9y              no
Comparing the values of the EXPIRES field shows that the certificates have been renewed.
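kubeadm certs check-expiration already covers the kubeconfig files, but the embedded client certificate can also be decoded by hand for a spot check. A sketch only; the helper name is my own, and the path in the usage comment assumes the default admin.conf location:

```shell
# kubeconfig_cert_expiry: decode the client certificate embedded in a
# kubeconfig file and print its notAfter date.
kubeconfig_cert_expiry() {
  grep 'client-certificate-data' "$1" \
    | awk '{print $2}' \
    | base64 -d \
    | openssl x509 -noout -enddate
}

# kubeconfig_cert_expiry /etc/kubernetes/admin.conf
```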