[Kubernetes] Installing and deploying a Kubernetes cluster on CentOS (Linux)

1. Environment preparation

OS: CentOS 7
For configuring the yum repositories, see the article on switching CentOS yum sources.

yum -y update

Steps 1-3 must be performed on every host. Once the hostname and hosts file are configured, the remaining files can be synchronized to the other machines with a tool or script.

1.1 Hosts

One master, two workers.

Hostname     IP
k8smaster    192.168.59.148
k8snode1     192.168.59.149
k8snode2     192.168.59.150

Set the hostname on each machine and add the hosts mappings:

hostnamectl set-hostname k8smaster
vim /etc/hosts
192.168.59.148 k8smaster
192.168.59.149 k8snode1
192.168.59.150 k8snode2
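The mappings above have to exist on all three machines. A small idempotent sketch that appends any missing entries; the function takes the target file as an optional argument only so the snippet can be exercised on a copy of the file:

```shell
# Append the cluster host mappings to /etc/hosts, skipping entries
# that are already present (safe to run repeatedly on every machine).
add_cluster_hosts() {
    HOSTS_FILE="${1:-/etc/hosts}"
    for entry in \
        "192.168.59.148 k8smaster" \
        "192.168.59.149 k8snode1" \
        "192.168.59.150 k8snode2"
    do
        # only append when the exact line is not already present
        grep -qxF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
    done
}
add_cluster_hosts
```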

For reference: also append the current hostname to the 127.0.0.1 line, then verify that each hostname resolves.

1.2 Disable SELinux and firewalld

systemctl stop firewalld
systemctl disable firewalld
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

1.3 Disable the swap partition

swapoff -a
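swapoff -a only disables swap until the next reboot. To keep it off permanently, the swap entry in /etc/fstab should be commented out as well. A sketch; the file path is an argument only so the edit can be rehearsed on a copy first:

```shell
# Comment out any active swap entries in fstab so swap stays disabled
# after a reboot; already-commented lines are left untouched.
disable_swap_in_fstab() {
    FSTAB="${1:-/etc/fstab}"
    if [ -f "$FSTAB" ]; then
        sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$FSTAB"
    fi
}
disable_swap_in_fstab
```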

1.4 Pass bridged IPv4 traffic to the iptables chains

cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_recycle = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
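The two bridge-nf-call settings above only take effect when the br_netfilter kernel module is loaded. A config fragment (not in the original article, but a common companion step) to load the modules now and on every boot:

```shell
# Load the kernel modules needed by the bridge sysctls, now and at boot.
cat > /etc/modules-load.d/k8s.conf << EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
```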

2. Install and deploy Docker

Recommended reading: installing Docker on Linux.

A minimal Docker installation:

yum install ca-certificates curl -y
yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y

Reference configuration:
vim /etc/docker/daemon.json

{
    "registry-mirrors": ["https://hub-mirror.c.163.com", "https://registry.aliyuncs.com", "https://registry.docker-cn.com", "https://docker.mirrors.ustc.edu.cn"],
    "data-root": "/data/docker",
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": { "max-size": "300m", "max-file": "3" },
    "live-restore": true
}
#check whether Docker is running
service docker status
#start it
service docker start
#enable at boot
systemctl enable docker && systemctl restart docker && systemctl status docker
#basic information
docker info

Install docker-compose separately if needed; check GitHub for the current release version (the docker-compose-plugin package installed above already provides the `docker compose` subcommand).

Reference containerd configuration. The full file is simply the output of `containerd config default` (version 2 format); the settings changed from the defaults for this setup are shown below (the original config also references a private registry on k8smaster:5000):

vim /etc/containerd/config.toml

version = 2

[plugins."io.containerd.grpc.v1.cri"]
  # pull the pause image from a mirror reachable in China
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  # match the systemd cgroup driver used by Docker and the kubelet
  SystemdCgroup = true

[plugins."io.containerd.grpc.v1.cri".registry.configs."k8smaster:5000".tls]
  insecure_skip_verify = true

[plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8smaster:5000"]
  endpoint = ["http://k8smaster:5000"]

3. Install the Kubernetes base commands

3.1 Add the Aliyun Kubernetes yum repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Or with vim:

vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

3.2 List the latest installable packages

yum --disablerepo="*" --enablerepo="kubernetes" list available


3.3 Install kubeadm, kubectl, and kubelet

The version installed here is 1.28.2:

yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2
systemctl start kubelet
systemctl enable kubelet
#view the error log
journalctl -u kubelet

4. Deploy the cluster

Query the versions of the individual components:

kubeadm config images list


4.1 Initialize the master

Run this on the master node only:

kubeadm init --kubernetes-version=1.28.13 \
--apiserver-advertise-address=192.168.59.148 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.140.0.0/16 \
--pod-network-cidr=10.244.0.0/16

Parameter notes:

--apiserver-advertise-address
Specifies which of the master's interfaces is used to communicate with the other cluster nodes. If the master has several interfaces it is best to name one explicitly; otherwise kubeadm picks the interface that has the default gateway.

--pod-network-cidr
Pick a pod network add-on and check whether it needs parameters passed at init time. The value depends on which network plugin you choose in the next step: for Flannel it is 10.244.0.0/16, for Calico 192.168.0.0/16. See: Installing a pod network add-on.

--service-cidr
Selects the service network.

--image-repository
`kubeadm config images pull` pre-pulls the images needed for initialization and checks connectivity to the Kubernetes registries. The default registry is k8s.gcr.io, which is not reachable from mainland China, so installation used to be painful before kubeadm v1.13. Version 1.13 added the --image-repository flag (default k8s.gcr.io), which we point at a domestic mirror: registry.aliyuncs.com/google_containers.

--kubernetes-version
Defaults to stable-1, which downloads the latest version number from https://dl.k8s.io/release/stable-1.txt; pinning a fixed version skips that network request.

4.2 Errors and troubleshooting

Command to view the error log:
journalctl -xeu kubelet

Problem 1

(The worker nodes need the same line commented out as well.)

[init] Using Kubernetes version: v1.28.13
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2024-09-12T14:01:03+08:00" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher
The versions look fine; check whether containerd and Docker are running:
[root@localhost home]# containerd -v
containerd containerd.io 1.6.33 d2d58213f83a351ca8f528a95fbd145f5654e957
[root@localhost home]# docker -v
Docker version 26.1.4, build 5650f9b
Edit the following file and comment out the line shown:
vim /etc/containerd/config.toml
#disabled_plugins = ["cri"]

Cause: packaged containerd disables CRI by default (this is the key point).
When containerd is installed from a distribution package, its CRI plugin, the part that lets it act as a container runtime, is disabled out of the box.
Kubernetes then fails because there is no container runtime for it to use.
The fix is to make sure the disabled_plugins list in /etc/containerd/config.toml does not contain "cri".
Restart containerd afterwards for the change to take effect:
systemctl restart containerd
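The edit above can also be scripted. A sketch that empties the disabled_plugins list; the function takes the config path as an argument only so it can be tried on a copy, and containerd still needs a restart afterwards:

```shell
# Remove the "cri" entry from containerd's disabled_plugins line so the
# CRI plugin is active again.
enable_containerd_cri() {
    CONF="${1:-/etc/containerd/config.toml}"
    if [ -f "$CONF" ]; then
        sed -ri 's/^disabled_plugins[[:space:]]*=[[:space:]]*\[[[:space:]]*"cri"[[:space:]]*\]/disabled_plugins = []/' "$CONF"
    fi
}
enable_containerd_cri
```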

Problem 2

If kubeadm init fails, running the init command a second time will itself fail; you have to reset first.

[root@localhost home]# kubeadm init --kubernetes-version=1.28.13 --apiserver-advertise-address=192.168.59.148 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.140.0.0/16 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.28.13
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[ERROR Port-10259]: Port 10259 is in use
[ERROR Port-10257]: Port 10257 is in use
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher

Solution:

kubeadm reset

Problem 3

Kernel module loading (I did not run into this problem myself).
Run these two commands:

modprobe br_netfilter 
bridge

Problem 4

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

Inspect the error with journalctl -xeu kubelet:
failed to resolve reference "registry.k8s.io/pause:3.6"

Solution:
#generate containerd's default configuration file
containerd config default > /etc/containerd/config.toml
#find which line the default sandbox image is on
cat /etc/containerd/config.toml | grep -n "sandbox_image"
#edit the file, locate sandbox_image, and change the repository to registry.aliyuncs.com/google_containers/pause:3.6
vim /etc/containerd/config.toml
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
#restart the containerd service
systemctl daemon-reload
systemctl restart containerd.service

Remember to run:
kubeadm reset

4.3 Successful initialization

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.59.148:6443 --token 3otopj.v2r7x7gcpa4j1tv3 \
	--discovery-token-ca-cert-hash sha256:b881ce5117a2ed28cb4f86963b462cc77976194c33c9314dbf4647f011354dc1

After initialization completes, a join command is printed for adding worker nodes.

4.4 About the token

Tokens normally expire after 24 hours.

List the current tokens:

[root@localhost home]# kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
3otopj.v2r7x7gcpa4j1tv3   23h         2024-09-13T06:41:42Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

View this machine's CA certificate hash (sha256):

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'

Generate a new token:

kubeadm token create

Generate a new token and print the full join command:

kubeadm token create --print-join-command

To join another master node, first generate a certificate key (before v1.16 the flag was --experimental-upload-certs; from v1.16 on it is --upload-certs):

kubeadm init phase upload-certs --upload-certs

Combine the join command with the certificate key (likewise, before v1.16 the flags were --experimental-control-plane --certificate-key; from v1.16 on they are --control-plane --certificate-key):

kubeadm join 192.168.59.148:6443 --token fpjwdf.p9bnbqf7cpvf1amc --discovery-token-ca-cert-hash sha256:dd3cb5208a4ca032e85a5a30b9b02f963aff2fece13045cf8c74d7b9ed7f6098 --control-plane --certificate-key 820908fa5d83b9a7314a58147b80d0dc81b4f7469c9c8f72fb49b4fba2652c29

4.5 Configure kubectl

Run the commands returned above:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

As root, make it permanent:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" > /etc/profile.d/kubeconfig.sh
source /etc/profile.d/kubeconfig.sh

Otherwise it only applies to the current session:

export KUBECONFIG=/etc/kubernetes/admin.conf

Copy admin.conf to any other nodes that need to run kubectl commands:

scp /etc/kubernetes/admin.conf root@192.168.59.149:/etc/kubernetes/
scp /etc/kubernetes/admin.conf root@192.168.59.150:/etc/kubernetes/

Then activate it the same way:
export KUBECONFIG=/etc/kubernetes/admin.conf
or:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" > /etc/profile.d/kubeconfig.sh
source /etc/profile.d/kubeconfig.sh

4.6 Join the nodes

On each node other than the master, run the join command from above to join the Kubernetes cluster:

kubeadm join 192.168.59.148:6443 --token 3otopj.v2r7x7gcpa4j1tv3 --discovery-token-ca-cert-hash sha256:b881ce5117a2ed28cb4f86963b462cc77976194c33c9314dbf4647f011354dc1

A successful join looks like this:

[root@localhost home]# kubeadm join 192.168.59.148:6443 --token 3otopj.v2r7x7gcpa4j1tv3 --discovery-token-ca-cert-hash sha256:b881ce5117a2ed28cb4f86963b462cc77976194c33c9314dbf4647f011354dc1
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the nodes:

[root@localhost home]# kubectl get nodes
NAME        STATUS     ROLES           AGE   VERSION
k8smaster   NotReady   control-plane   32m   v1.28.2
k8snode1    NotReady   <none>          13s   v1.28.2
k8snode2    NotReady   <none>          5s    v1.28.2

4.7 Removing a node

If you have nothing to remove, skip ahead to the next step.

[root@localhost flanneld]# kubectl drain k8snode2 --delete-local-data --force --ignore-daemonsets
Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
node/k8snode2 cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-p8cxh
evicting pod tigera-operator/tigera-operator-748c69cf45-9clh2
pod/tigera-operator-748c69cf45-9clh2 evicted
node/k8snode2 drained
[root@localhost flanneld]# kubectl get nodes
NAME        STATUS                        ROLES           AGE     VERSION
k8smaster   Ready                         control-plane   3h13m   v1.28.2
k8snode1    NotReady                      <none>          161m    v1.28.2
k8snode2    NotReady,SchedulingDisabled   <none>          161m    v1.28.2
[root@localhost flanneld]# kubectl delete node k8snode2
node "k8snode2" deleted
[root@localhost flanneld]# pwd
/data/flanneld
[root@localhost flanneld]# cd /etc/kubernetes/
[root@localhost kubernetes]# ll
total 32
-rw-------. 1 root root 5650 Sep 12 14:41 admin.conf
-rw-------. 1 root root 5682 Sep 12 14:41 controller-manager.conf
-rw-------. 1 root root 1982 Sep 12 14:41 kubelet.conf
drwxr-xr-x. 2 root root  113 Sep 12 14:41 manifests
drwxr-xr-x. 3 root root 4096 Sep 12 14:41 pki
-rw-------. 1 root root 5626 Sep 12 14:41 scheduler.conf
[root@localhost kubernetes]# kubeadm reset -f
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@localhost kubernetes]# ls
manifests  pki

To re-join the node, run the kubeadm join command from above again.

5. Install a CNI network plugin

Run on the master: install the Flannel network plugin.

Download the YAML file; the network can be flaky, so you may need to retry wget a few times.

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Note: the Network value in net-conf.json must be the same subnet as the --pod-network-cidr passed to kubeadm init above.

vim kube-flannel.yml

Install the plugin:

kubectl apply -f kube-flannel.yml
kubectl get pods -n kube-flannel
kubectl get nodes


A problem appears

The network never actually came up.

The Flannel images cannot be pulled from inside China: docker.io/flannel/flannel:v0.25.6 is unreachable.
Workaround: download the release artifacts from GitHub manually and import them into Docker.
Download these two files, matching the versions referenced in kube-flannel.yml:
flannel:v0.25.6
flannel-cni-plugin:v1.5.1-flannel2

[root@localhost flanneld]# docker import flannel-v0.25.6-linux-amd64.tar.gz flannel/flannel:v0.25.6
sha256:5c76b00ff15dfc6d452f1dcce31d7508e13363c9ab9beeddd90dd1a6204fcab8
[root@localhost flanneld]# docker import cni-plugin-flannel-linux-amd64-v1.5.1-flannel2.tgz flannel/flannel-cni-plugin:v1.5.1-flannel2
sha256:fd42d9ebb5885a5889bb0211e560b04b18dab401e3b63e777d4d1f358a847df6

A successful import leaves two images. Save both of them as tar archives:

[root@localhost flanneld]# docker images
REPOSITORY                   TAG               IMAGE ID       CREATED          SIZE
flannel/flannel-cni-plugin   v1.5.1-flannel2   fd42d9ebb588   12 minutes ago   2.54MB
flannel/flannel              v0.25.6           5c76b00ff15d   12 minutes ago   42.8MB
[root@localhost flanneld]# docker save flannel/flannel:v0.25.6 
cowardly refusing to save to a terminal. Use the -o flag or redirect
[root@localhost flanneld]# docker save flannel/flannel:v0.25.6 -o flannel-v0.25.6.tar
[root@localhost flanneld]# ll
total 55832
-rw-r--r--. 1 root root  1080975 Sep 12 16:30 cni-plugin-flannel-linux-amd64-v1.5.1-flannel2.tgz
-rw-r--r--. 1 root root 13305488 Sep 12 16:15 flannel-v0.25.6-linux-amd64.tar.gz
-rw-------. 1 root root 42772992 Sep 12 16:55 flannel-v0.25.6.tar
-rw-r--r--. 1 root root     4345 Sep 12 15:41 kube-flannel.yml
[root@localhost flanneld]# docker save flannel/flannel-cni-plugin:v1.5.1-flannel2 -o cni-plugin-flannel-linux-amd64-v1.5.1-flannel2.tar
[root@localhost flanneld]# docker images
REPOSITORY                   TAG               IMAGE ID       CREATED          SIZE
flannel/flannel-cni-plugin   v1.5.1-flannel2   fd42d9ebb588   14 minutes ago   2.54MB
flannel/flannel              v0.25.6           5c76b00ff15d   15 minutes ago   42.8MB
[root@localhost flanneld]# ll
total 58336
-rw-------. 1 root root  2560512 Sep 12 16:56 cni-plugin-flannel-linux-amd64-v1.5.1-flannel2.tar
-rw-r--r--. 1 root root  1080975 Sep 12 16:30 cni-plugin-flannel-linux-amd64-v1.5.1-flannel2.tgz
-rw-r--r--. 1 root root 13305488 Sep 12 16:15 flannel-v0.25.6-linux-amd64.tar.gz
-rw-------. 1 root root 42772992 Sep 12 16:55 flannel-v0.25.6.tar
-rw-r--r--. 1 root root     4345 Sep 12 15:41 kube-flannel.yml
[root@localhost flanneld]# 

Import the tar image archives into containerd's k8s.io namespace:

[root@localhost flanneld]# ll
total 58336
-rw-------. 1 root root  2560512 Sep 12 16:56 cni-plugin-flannel-linux-amd64-v1.5.1-flannel2.tar
-rw-r--r--. 1 root root  1080975 Sep 12 16:30 cni-plugin-flannel-linux-amd64-v1.5.1-flannel2.tgz
-rw-r--r--. 1 root root 13305488 Sep 12 16:15 flannel-v0.25.6-linux-amd64.tar.gz
-rw-------. 1 root root 42772992 Sep 12 16:55 flannel-v0.25.6.tar
-rw-r--r--. 1 root root     4345 Sep 12 15:41 kube-flannel.yml
[root@localhost flanneld]# sudo ctr -n k8s.io images import cni-plugin-flannel-linux-amd64-v1.5.1-flannel2.tar 
unpacking docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel2 (sha256:2e67e1ceda143a11deca57c0bd3145c9a1998d78d1084e3028c26ae6ceea233f)...done
[root@localhost flanneld]# sudo ctr -n k8s.io images import flannel-v0.25.6.tar 
unpacking docker.io/flannel/flannel:v0.25.6 (sha256:7dcf8fbbc9e9acbe2e5e3e7321b74aa357a5f4246152f6539da903370fc3f999)...done
[root@localhost flanneld]# 

Check that the import succeeded:

sudo ctr -n k8s.io i check | grep flannel

Then edit the kube-flannel.yml file, setting imagePullPolicy to Never so the locally imported images are used:

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "EnableNFTables": false,
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel2
        imagePullPolicy: Never
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.25.6
        imagePullPolicy: Never
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.25.6
        imagePullPolicy: Never
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

Delete the previous deployment first:

kubectl delete -f kube-flannel.yml

Then apply it again:

kubectl apply -f kube-flannel.yml

It still failed. In the end I found a mirror via someone's GitHub: edit kube-flannel.yml and add the m.daocloud.io/ prefix to the image references.
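The prefix edit can be done with sed; this assumes the m.daocloud.io mirror proxies docker.io, as in the article. The function takes the file as an argument only so it can be tried on a sample:

```shell
# Prefix the flannel image references in kube-flannel.yml with the
# m.daocloud.io mirror; a .bak copy of the file is kept.
add_mirror_prefix() {
    YML="${1:-kube-flannel.yml}"
    if [ -f "$YML" ]; then
        sed -i.bak 's#image: docker.io/flannel/#image: m.daocloud.io/docker.io/flannel/#g' "$YML"
    fi
}
add_mirror_prefix
```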

[root@k8smaster flanneld]# kubectl get nodes
NAME        STATUS   ROLES           AGE   VERSION
k8smaster   Ready    control-plane   19h   v1.28.2
[root@k8smaster flanneld]# kubectl get pods -n kube-flannel
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-g8mng   1/1     Running   0          8m52s

Uninstall command: kubectl delete -f kube-flannel.yml

Installing Calico

Here I simply ran kubeadm reset on both the master and the nodes and started over; this time kubeadm init was given --pod-network-cidr=192.168.0.0/16, since Calico's default network is 192.168.0.0/16.

Calico official manifests:

kubectl create -f https://raw.gitmirror.com/projectcalico/calico/v3.27.2/manifests/tigera-operator.yaml
wget https://raw.gitmirror.com/projectcalico/calico/v3.27.2/manifests/custom-resources.yaml
vim custom-resources.yaml
#change the cidr value to match your --pod-network-cidr parameter
cidr: 10.244.0.0/16
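The cidr edit can also be scripted; 192.168.0.0/16 is Calico's default in custom-resources.yaml, and 10.244.0.0/16 here stands in for whatever --pod-network-cidr you used at init time:

```shell
# Replace Calico's default pod CIDR in custom-resources.yaml with the
# cluster's --pod-network-cidr value.
set_calico_cidr() {
    YML="${1:-custom-resources.yaml}"
    if [ -f "$YML" ]; then
        sed -i 's#cidr: 192.168.0.0/16#cidr: 10.244.0.0/16#' "$YML"
    fi
}
set_calico_cidr
```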


Apply:
kubectl create -f custom-resources.yaml
Check:
kubectl get pod -A

In the end this still did not work; the pods would not start, again because of network (image-pull) problems.
