Applicable environment:
A Kubernetes cluster deployed with kubeadm; the default certificate directory is /etc/kubernetes/pki.
If the certificate directory in your environment is not pki (this guide uses ssl as the example), create the corresponding symlink.
This guide uses a highly available cluster (3 masters) as the example.
Master nodes:
1. Check certificate validity (master1)
cd /etc/kubernetes
openssl x509 -in ssl/apiserver.crt -noout -enddate
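The single-file check above can be extended to every certificate in the directory. A small loop such as the following (assuming the ssl directory used throughout this guide) prints each expiry date:

```shell
# Print the expiry date of every certificate under /etc/kubernetes/ssl.
for crt in /etc/kubernetes/ssl/*.crt; do
  printf '%s: ' "$crt"
  openssl x509 -in "$crt" -noout -enddate
done
```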
2. Renew the expired certificates (/etc/kubernetes) (master1)
Create the symlink pki -> ssl: ln -s ssl/ pki (skip if pki already exists)
kubeadm alpha certs renew apiserver
kubeadm alpha certs renew apiserver-kubelet-client
kubeadm alpha certs renew front-proxy-client
3. Renew the kubeconfig files (/etc/kubernetes) (master1)
The following files need to be renewed: admin.conf / scheduler.conf / controller-manager.conf / kubelet.conf
kubeadm alpha certs renew admin.conf
kubeadm alpha certs renew controller-manager.conf
kubeadm alpha certs renew scheduler.conf
# The command below uses master1 as an example; replace it with the actual node name in your cluster.
kubeadm alpha kubeconfig user --client-name=system:node:master1 --org=system:nodes > kubelet.conf
4. If the apiserver address in the kubeconfig files above is not the LB address, change it to the LB address: (master1)
https://192.168.0.13:6443 -> https://{ lb domain or ip }:6443
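If the address does need changing, a sed loop along these lines can patch all four files at once. The old address and LB domain below are placeholders; substitute your cluster's actual values:

```shell
# Placeholders -- adjust for your cluster.
OLD=https://192.168.0.13:6443
NEW=https://lb.example.com:6443   # your LB domain or IP
CONF_DIR=/etc/kubernetes

# Rewrite the apiserver address in all four kubeconfig files.
for f in admin.conf controller-manager.conf scheduler.conf kubelet.conf; do
  sed -i "s#$OLD#$NEW#g" "$CONF_DIR/$f"
done
```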
5. Restart the Kubernetes master components: (master1)
docker ps -a -f 'name=k8s_kube-apiserver*' -q | xargs --no-run-if-empty docker rm -f
docker ps -a -f 'name=k8s_kube-scheduler*' -q | xargs --no-run-if-empty docker rm -f
docker ps -a -f 'name=k8s_kube-controller-manager*' -q | xargs --no-run-if-empty docker rm -f
systemctl restart kubelet
6. Verify the kubeconfig files and check node status (master1)
kubectl get node --kubeconfig admin.conf
kubectl get node --kubeconfig scheduler.conf
kubectl get node --kubeconfig controller-manager.conf
kubectl get node --kubeconfig kubelet.conf
7. Sync the master1 certificates in /etc/kubernetes/ssl to the same path (/etc/kubernetes/ssl) on master2 and master3 (back up the old certificates before syncing).
Certificate path: /etc/kubernetes/ssl
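One possible sketch of the backup-and-sync step, assuming master2 and master3 are reachable over SSH under those hostnames (adjust names and auth to your environment):

```shell
# Back up the old certificates on each peer master, then copy the renewed ones.
# "master2" and "master3" are placeholder hostnames.
for host in master2 master3; do
  ssh "$host" 'cp -a /etc/kubernetes/ssl /etc/kubernetes/ssl.bak.$(date +%F)'
  scp -r /etc/kubernetes/ssl/* "$host":/etc/kubernetes/ssl/
done
```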
8. Renew the kubeconfig files (/etc/kubernetes) (master2, master3)
kubeadm alpha certs renew admin.conf
kubeadm alpha certs renew controller-manager.conf
kubeadm alpha certs renew scheduler.conf
# The commands below use master2 and master3 as examples; replace them with the actual node names in your cluster.
kubeadm alpha kubeconfig user --client-name=system:node:master2 --org=system:nodes > kubelet.conf (master2)
kubeadm alpha kubeconfig user --client-name=system:node:master3 --org=system:nodes > kubelet.conf (master3)
9. If the apiserver address in the kubeconfig files above is not the LB address, change it to the LB address: (master2, master3)
https://192.168.0.13:6443 -> https://{ lb domain or ip }:6443
Note: the affected files are admin.conf, controller-manager.conf, scheduler.conf, and kubelet.conf.
10. Restart the corresponding master components on master2 and master3
docker ps -a -f 'name=k8s_kube-apiserver*' -q | xargs --no-run-if-empty docker rm -f
docker ps -a -f 'name=k8s_kube-scheduler*' -q | xargs --no-run-if-empty docker rm -f
docker ps -a -f 'name=k8s_kube-controller-manager*' -q | xargs --no-run-if-empty docker rm -f
systemctl restart kubelet
11. Verify the kubeconfig files (master2, master3)
kubectl get node --kubeconfig admin.conf
kubectl get node --kubeconfig scheduler.conf
kubectl get node --kubeconfig controller-manager.conf
kubectl get node --kubeconfig kubelet.conf
12. Update ~/.kube/config (master1, master2, master3)
cp admin.conf ~/.kube/config
Note: if worker nodes also need to use kubectl, copy ~/.kube/config from master1 to ~/.kube/config on the corresponding worker nodes.
13. Verify ~/.kube/config:
kubectl get node to check cluster status
Worker nodes: (perform on each worker node one at a time; if kubelet certificate auto-rotation is already configured, these steps can be skipped)
1. Run kubeadm token list; if the output is empty or the listed token has expired, a new token must be generated.
2. Run kubeadm token create to generate a new token
3. Record the token value
4. Replace the token in /etc/kubernetes/bootstrap-kubelet.conf (all worker nodes)
5. Delete /etc/kubernetes/kubelet.conf (all worker nodes)
rm -f /etc/kubernetes/kubelet.conf
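A sketch of the token replacement, assuming the token in bootstrap-kubelet.conf sits on a line of the form "token: <value>"; NEW_TOKEN below is a placeholder for the value recorded from kubeadm token create:

```shell
# Placeholder -- substitute the real token from `kubeadm token create`.
NEW_TOKEN=abcdef.0123456789abcdef

# Swap the old bootstrap token for the new one in place.
sed -i "s#token: .*#token: ${NEW_TOKEN}#" /etc/kubernetes/bootstrap-kubelet.conf
```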
6. Restart kubelet (all worker nodes)
systemctl restart kubelet
7. Check node status:
kubectl get node to verify cluster status