Changing a k8s master node's IP

Background

The colleague who set up the cluster did not plan the network, so one master ended up with IP 192.168.7.173, far away from the other cluster nodes on 192.168.0.x / 192.168.1.x. The network now needs to be reorganized so that management tasks such as address binding and rate limiting are easier to configure.
The control plane has 3 master nodes. This post is an after-the-fact write-up.

Note: it was not as simple as it looked.

Approach

  1. Remove the master1 member from etcd
  2. Remove the master1 node with kubectl
  3. Change master1's IP address
    Since there is no haproxy/nginx reverse proxy in front of the masters, several places in the cluster use master1's node IP (or master1-IP:6443) as the cluster entry point. Most of the pitfalls live here; problems are handled as they show up.
  4. Run kubeadm reset on master1 and clean up the CNI plugin state
  5. Update the hosts file on every node
  6. Run kubeadm join to add the node back to the cluster
  7. Verify every component & troubleshoot
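Steps 3 and 5 amount to swapping master1's old address for the new one on every node. A minimal sketch (the old/new IPs below are this cluster's values and would need adjusting):

```shell
# On every node: replace master1's old IP with the new one in /etc/hosts.
# Old IP 192.168.7.173 -> new IP 192.168.1.15; sed keeps a .bak copy.
sed -i.bak 's/192\.168\.7\.173/192.168.1.15/g' /etc/hosts
```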

Implementation

OS version

root@dev-k8s-master01:~# cat /etc/os-release 
NAME="Ubuntu"
VERSION="18.04.6 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.6 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

Remove master1 from etcd

Use etcdctl to remove the master whose IP is changing from the etcd cluster.

export ETCDCTL_API=3
# list the cluster members
etcdctl --endpoints=https://192.168.7.173:2379,https://192.168.1.17:2379,https://192.168.1.38:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key member list --write-out=table
# note: member list does not show which member is the current etcd leader
# find the etcd leader
etcdctl --endpoints=https://192.168.7.173:2379,https://192.168.1.17:2379,https://192.168.1.38:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key endpoint status --write-out=table

The output:

root@dev-k8s-master03:~# etcdctl  --endpoints=https://192.168.1.15:2379,https://192.168.1.17:2379,https://192.168.1.38:2379  --cacert=/etc/kubernetes/pki/etcd/ca.crt   --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key member list --write-out=table
+------------------+---------+------------------+---------------------------+---------------------------+------------+
|        ID        | STATUS  |       NAME       |        PEER ADDRS         |       CLIENT ADDRS        | IS LEARNER |
+------------------+---------+------------------+---------------------------+---------------------------+------------+
| 802c824d9f96584b | started | dev-k8s-master03 | https://192.168.1.38:2380 | https://192.168.1.38:2379 |      false |
| c3e9ff62e7bd6a70 | started | dev-k8s-master01 | https://192.168.1.15:2380 | https://192.168.1.15:2379 |      false |
| ef1d4aa461844a8a | started | dev-k8s-master02 | https://192.168.1.17:2380 | https://192.168.1.17:2379 |      false |
+------------------+---------+------------------+---------------------------+---------------------------+------------+
root@dev-k8s-master03:~# etcdctl  --endpoints=https://192.168.1.15:2379,https://192.168.1.17:2379,https://192.168.1.38:2379  --cacert=/etc/kubernetes/pki/etcd/ca.crt   --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key endpoint status --write-out=table
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.1.15:2379 | c3e9ff62e7bd6a70 |   3.5.0 |  718 MB |     false |      false |       235 |  548780443 |          548780443 |        |
| https://192.168.1.17:2379 | ef1d4aa461844a8a |   3.5.0 |  718 MB |     false |      false |       235 |  548780444 |          548780444 |        |
| https://192.168.1.38:2379 | 802c824d9f96584b |   3.5.0 |  718 MB |      true |      false |       235 |  548780444 |          548780444 |        |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
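The commands above only identify the member; the actual removal is done with member remove, using master1's ID from the member list table. A sketch (the ID and endpoints here are illustrative; run it against the surviving members):

```shell
# remove master1's etcd member (ID taken from the member list output)
etcdctl --endpoints=https://192.168.1.17:2379,https://192.168.1.38:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member remove c3e9ff62e7bd6a70
```

Afterwards, member list should show only the two remaining members.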

Remove master1 from the cluster with kubectl

kubectl get node -o wide 
kubectl cordon k8s-master01
kubectl delete node k8s-master01
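If the node is still running workloads, a drain between cordon and delete evicts the pods cleanly (sketch; flag names match kubectl 1.22):

```shell
kubectl drain k8s-master01 --ignore-daemonsets --delete-emptydir-data
```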

kubeadm reset

root@dev-k8s-master01:~# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0524 17:45:02.703143    6814 reset.go:101] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://192.168.7.173:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp 192.168.7.173:6443: connect: no route to host
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0524 17:45:22.610779    6814 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

Clean up the CNI config & iptables rules

root@dev-k8s-master01:~# mv /etc/cni/net.d /tmp/
root@dev-k8s-master01:~# iptables-save > /tmp/iptables.bak
root@dev-k8s-master01:~# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -t raw -F && iptables -t security -F && iptables -X
# verify
iptables -vnL
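The kubeadm reset output also warns that IPVS tables are left behind; if kube-proxy was running in IPVS mode, clearing them would look like:

```shell
# only needed when kube-proxy uses IPVS mode
ipvsadm --clear
```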

Generate the join credentials & re-join the cluster
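The token and --certificate-key used in the join command below were generated on a surviving master. A sketch of producing fresh ones (run on e.g. dev-k8s-master03):

```shell
# print a ready-made "kubeadm join ... --token ... --discovery-token-ca-cert-hash ..." line
kubeadm token create --print-join-command
# re-upload the control-plane certificates and print the matching --certificate-key
kubeadm init phase upload-certs --upload-certs
```

Append --control-plane --certificate-key <key> to the printed join line to join as a control-plane node.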

root@dev-k8s-master01:~# kubeadm join 192.168.1.38:6443 --token ngq4b9.vylcwrghfiayv8au --discovery-token-ca-cert-hash sha256:b4c2xxxxx2ab6cf397255ff13c179e --control-plane --certificate-key e2d52bxxxab84edd --v=5
I0524 17:57:55.749888   10646 join.go:405] [preflight] found NodeName empty; using OS hostname as NodeName
I0524 17:57:55.749934   10646 join.go:409] [preflight] found advertiseAddress empty; using default interface's IP address as advertiseAddress
I0524 17:57:55.749974   10646 initconfiguration.go:116] detected and using CRI socket: /var/run/dockershim.sock
I0524 17:57:55.750479   10646 interface.go:431] Looking for default routes with IPv4 addresses
I0524 17:57:55.750548   10646 interface.go:436] Default route transits interface "ens18"
I0524 17:57:55.750709   10646 interface.go:208] Interface ens18 is up
I0524 17:57:55.750784   10646 interface.go:256] Interface "ens18" has 2 addresses :[192.168.1.15/21 fe80::68bf:2bff:feee:6c6e/64].
I0524 17:57:55.750809   10646 interface.go:223] Checking addr  192.168.1.15/21.
I0524 17:57:55.750822   10646 interface.go:230] IP found 192.168.1.15
I0524 17:57:55.750851   10646 interface.go:262] Found valid IPv4 address 192.168.1.15 for interface "ens18".
I0524 17:57:55.750871   10646 interface.go:442] Found active IP 192.168.1.15 
[preflight] Running pre-flight checks
I0524 17:57:55.750986   10646 preflight.go:92] [preflight] Running general checks
I0524 17:57:55.751036   10646 checks.go:245] validating the existence and emptiness of directory /etc/kubernetes/manifests
I0524 17:57:55.751096   10646 checks.go:282] validating the existence of file /etc/kubernetes/kubelet.conf
I0524 17:57:55.751107   10646 checks.go:282] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I0524 17:57:55.751119   10646 checks.go:106] validating the container runtime
I0524 17:57:55.812507   10646 checks.go:132] validating if the "docker" service is enabled and active
I0524 17:57:55.831211   10646 checks.go:331] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0524 17:57:55.831268   10646 checks.go:331] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0524 17:57:55.831297   10646 checks.go:649] validating whether swap is enabled or not
I0524 17:57:55.831322   10646 checks.go:372] validating the presence of executable conntrack
I0524 17:57:55.831337   10646 checks.go:372] validating the presence of executable ip
I0524 17:57:55.831351   10646 checks.go:372] validating the presence of executable iptables
I0524 17:57:55.831368   10646 checks.go:372] validating the presence of executable mount
I0524 17:57:55.831382   10646 checks.go:372] validating the presence of executable nsenter
I0524 17:57:55.831392   10646 checks.go:372] validating the presence of executable ebtables
I0524 17:57:55.831403   10646 checks.go:372] validating the presence of executable ethtool
I0524 17:57:55.831419   10646 checks.go:372] validating the presence of executable socat
I0524 17:57:55.831430   10646 checks.go:372] validating the presence of executable tc
I0524 17:57:55.831442   10646 checks.go:372] validating the presence of executable touch
I0524 17:57:55.831455   10646 checks.go:520] running all checks
I0524 17:57:55.900002   10646 checks.go:403] checking whether the given node name is valid and reachable using net.LookupHost
I0524 17:57:55.900158   10646 checks.go:618] validating kubelet version
I0524 17:57:55.953966   10646 checks.go:132] validating if the "kubelet" service is enabled and active
I0524 17:57:55.970602   10646 checks.go:205] validating availability of port 10250
I0524 17:57:55.970805   10646 checks.go:432] validating if the connectivity type is via proxy or direct
I0524 17:57:55.970857   10646 join.go:475] [preflight] Discovering cluster-info
I0524 17:57:55.970891   10646 token.go:80] [discovery] Created cluster-info discovery client, requesting info from "192.168.1.38:6443"
I0524 17:57:55.978324   10646 token.go:118] [discovery] Requesting info from "192.168.1.38:6443" again to validate TLS against the pinned public key
I0524 17:57:55.983254   10646 token.go:135] [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.38:6443"
I0524 17:57:55.983268   10646 discovery.go:52] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I0524 17:57:55.983276   10646 join.go:489] [preflight] Fetching init configuration
I0524 17:57:55.983281   10646 join.go:534] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I0524 17:57:55.991118   10646 interface.go:431] Looking for default routes with IPv4 addresses
I0524 17:57:55.991128   10646 interface.go:436] Default route transits interface "ens18"
I0524 17:57:55.991236   10646 interface.go:208] Interface ens18 is up
I0524 17:57:55.991269   10646 interface.go:256] Interface "ens18" has 2 addresses :[192.168.1.15/21 fe80::68bf:2bff:feee:6c6e/64].
I0524 17:57:55.991284   10646 interface.go:223] Checking addr  192.168.1.15/21.
I0524 17:57:55.991288   10646 interface.go:230] IP found 192.168.1.15
I0524 17:57:55.991292   10646 interface.go:262] Found valid IPv4 address 192.168.1.15 for interface "ens18".
I0524 17:57:55.991295   10646 interface.go:442] Found active IP 192.168.1.15 
I0524 17:57:55.994214   10646 preflight.go:103] [preflight] Running configuration dependant checks
[preflight] Running pre-flight checks before initializing the new control plane instance
I0524 17:57:55.994260   10646 checks.go:577] validating Kubernetes and kubeadm version
I0524 17:57:55.994304   10646 checks.go:170] validating if the firewall is enabled and active
I0524 17:57:56.001072   10646 checks.go:205] validating availability of port 6443
I0524 17:57:56.001119   10646 checks.go:205] validating availability of port 10259
I0524 17:57:56.001134   10646 checks.go:205] validating availability of port 10257
I0524 17:57:56.001149   10646 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0524 17:57:56.001162   10646 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0524 17:57:56.001170   10646 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0524 17:57:56.001174   10646 checks.go:282] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0524 17:57:56.001178   10646 checks.go:432] validating if the connectivity type is via proxy or direct
I0524 17:57:56.001195   10646 checks.go:471] validating http connectivity to first IP address in the CIDR
I0524 17:57:56.001208   10646 checks.go:471] validating http connectivity to first IP address in the CIDR
I0524 17:57:56.001217   10646 checks.go:205] validating availability of port 2379
I0524 17:57:56.001235   10646 checks.go:205] validating availability of port 2380
I0524 17:57:56.001247   10646 checks.go:245] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0524 17:57:56.001349   10646 checks.go:838] using image pull policy: IfNotPresent
I0524 17:57:56.018435   10646 checks.go:847] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.4
I0524 17:57:56.033614   10646 checks.go:847] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.4
I0524 17:57:56.049090   10646 checks.go:847] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.4
I0524 17:57:56.064531   10646 checks.go:847] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.4
I0524 17:57:56.082128   10646 checks.go:847] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5
I0524 17:57:56.097267   10646 checks.go:847] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0
I0524 17:57:56.113455   10646 checks.go:847] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0524 17:57:56.118314   10646 certs.go:46] creating PKI assets
I0524 17:57:56.118376   10646 certs.go:487] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [dev-k8s-master01 localhost] and IPs [192.168.1.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [dev-k8s-master01 localhost] and IPs [192.168.1.15 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
I0524 17:57:57.010895   10646 certs.go:487] validating certificate period for ca certificate
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [dev-k8s-master01 dev-k8s-master02 dev-k8s-master03 k8s-dev-master.ex-ai.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.15 192.168.7.173 192.168.1.38 192.168.0.163 192.168.0.90]
I0524 17:57:57.339536   10646 certs.go:487] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
I0524 17:57:57.385838   10646 certs.go:77] creating new public/private key files for signing service account users
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0524 17:57:58.009494   10646 manifests.go:99] [control-plane] getting StaticPodSpecs
I0524 17:57:58.009751   10646 certs.go:487] validating certificate period for CA certificate
I0524 17:57:58.009827   10646 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0524 17:57:58.009841   10646 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0524 17:57:58.009847   10646 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I0524 17:57:58.009854   10646 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0524 17:57:58.009862   10646 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0524 17:57:58.009869   10646 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0524 17:57:58.016124   10646 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0524 17:57:58.016145   10646 manifests.go:99] [control-plane] getting StaticPodSpecs
I0524 17:57:58.016322   10646 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0524 17:57:58.016335   10646 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0524 17:57:58.016341   10646 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I0524 17:57:58.016348   10646 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0524 17:57:58.016356   10646 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0524 17:57:58.016363   10646 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0524 17:57:58.016370   10646 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0524 17:57:58.016377   10646 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0524 17:57:58.016980   10646 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0524 17:57:58.016998   10646 manifests.go:99] [control-plane] getting StaticPodSpecs
I0524 17:57:58.017171   10646 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0524 17:57:58.017507   10646 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[check-etcd] Checking that the etcd cluster is healthy
I0524 17:57:58.018268   10646 local.go:71] [etcd] Checking etcd cluster health
I0524 17:57:58.018282   10646 local.go:74] creating etcd client that connects to etcd pods
I0524 17:57:58.018291   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:02.138320   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:05.187304   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:08.258177   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:11.356253   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:14.366123   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:17.447385   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:20.513218   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:23.606232   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:26.699159   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:29.800693   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:32.813094   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:35.901599   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:38.967241   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:42.054078   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:45.107540   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:48.180650   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:51.291295   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:54.317130   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:57.388214   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:59:00.476226   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:59:03.569171   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:59:06.669514   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:59:09.685159   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:59:12.757184   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:59:15.875321   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:59:18.892172   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:59:22.030165   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:59:25.085195   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:59:28.139599   10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
Get "https://192.168.7.173:6443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Detcd%2Ctier%3Dcontrol-plane": dial tcp 192.168.7.173:6443: connect: no route to host
could not retrieve the list of etcd endpoints
k8s.io/kubernetes/cmd/kubeadm/app/util/etcd.getRawEtcdEndpointsFromPodAnnotation
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/util/etcd/etcd.go:155
k8s.io/kubernetes/cmd/kubeadm/app/util/etcd.getEtcdEndpointsWithBackoff
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/util/etcd/etcd.go:131
k8s.io/kubernetes/cmd/kubeadm/app/util/etcd.getEtcdEndpoints
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/util/etcd/etcd.go:127
k8s.io/kubernetes/cmd/kubeadm/app/util/etcd.NewFromCluster
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/util/etcd/etcd.go:98
k8s.io/kubernetes/cmd/kubeadm/app/phases/etcd.CheckLocalEtcdClusterStatus
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/etcd/local.go:75
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join.runCheckEtcdPhase
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join/checketcd.go:69
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdJoin.func1
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:174
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:852
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:960
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:897
k8s.io/kubernetes/cmd/kubeadm/app.Run
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:225
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1371
error execution phase check-etcd
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdJoin.func1
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:174
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:852
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:960
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:897
k8s.io/kubernetes/cmd/kubeadm/app.Run
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:225
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1371

Error 1
Get "https://192.168.7.173:6443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Detcd%2Ctier%3Dcontrol-plane": dial tcp 192.168.7.173:6443: connect: no route to host
Fix: on a surviving master, edit the kubeadm-config ConfigMap in the kube-system namespace and point controlPlaneEndpoint at a reachable master:

root@dev-k8s-master03:~# kubectl -n kube-system edit cm kubeadm-config
# change controlPlaneEndpoint to: 192.168.1.38:6443
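A quick check that the edit took effect (after which the failing kubeadm join can be retried):

```shell
kubectl -n kube-system get cm kubeadm-config -o yaml | grep controlPlaneEndpoint
```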

To be continued when time permits.

refer

https://cloud.tencent.com/developer/article/2008321
