Setting up a Kubernetes cluster on Ubuntu with kubeadm

I. Uninstalling Kubernetes (cleaning up a previous install)

kubeadm reset -f
modprobe -r ipip
lsmod
rm -rf ~/.kube/          # optional — only delete this if you really want to drop your kubeconfig (easy to regret)
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd
apt clean
apt remove 'kube*'

Downloading Ubuntu

Ubuntu release images can be downloaded from the Alibaba Cloud open-source mirror site (ubuntu-releases / oldubuntu-releases).

Skip the desktop image and use the server image instead; the version used here is 22.04: ubuntu-22.04.4-live-server-amd64.iso.

After the installation, log in to the server and replace the apt sources.

Follow the "ubuntu" usage help at the Tsinghua Open Source Mirror (mirrors.tuna.tsinghua.edu.cn) and substitute the sources for your release as described there.

II. Installing a Kubernetes cluster (one master, two workers)

k8s-master01  192.168.124.132    OS: Ubuntu 22.04

k8s-node01    192.168.124.133    OS: Ubuntu 22.04

k8s-node02    192.168.124.134    OS: Ubuntu 22.04

Minimum spec per node: 2 CPU cores, 2 GB RAM, 20 GB disk

1. Environment preparation (run on every node)

1) Time synchronization

timedatectl set-timezone Asia/Shanghai

sudo apt install ntpdate

sudo ntpdate ntp.ubuntu.com
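ntpdate only syncs the clock once. For continuous synchronization you can rely on systemd-timesyncd, which ships with Ubuntu Server — a minimal sketch (the NTP server below is only an example, use whichever one you prefer):

sudo mkdir -p /etc/systemd/timesyncd.conf.d
# point timesyncd at a nearby NTP server (ntp.aliyun.com is only an example)
printf '[Time]\nNTP=ntp.aliyun.com\n' | sudo tee /etc/systemd/timesyncd.conf.d/k8s.conf
sudo timedatectl set-ntp true
sudo systemctl restart systemd-timesyncd
timedatectl status        # should eventually report "System clock synchronized: yes"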

2) Assign static IP addresses

Reference: "Ubuntu固定虚拟机的ip地址" (CSDN blog)

3) Set the hostnames

[root@k8s-master1 ~]# hostnamectl set-hostname k8s-master
[root@k8s-node1 ~]# hostnamectl set-hostname k8s-node1
[root@k8s-node2 ~]# hostnamectl set-hostname k8s-node2

4) Disable swap

# Temporarily disable all swap
swapoff -a
# Permanently disable swap by commenting out the swap entries in /etc/fstab
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
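A quick check that swap is really off, both now and after the fstab edit:

swapon --show            # no output means no active swap
free -h | grep -i swap   # the Swap line should show 0B
grep swap /etc/fstab     # any remaining swap entries should now be commented out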

5) Add the cluster IPs and hostnames to /etc/hosts on every node (the addresses below come from a different environment; replace them with your actual node IPs, e.g. 192.168.124.132-134 from the layout above):

cat >> /etc/hosts << EOF 
172.16.11.221 k8s-master
172.16.11.222 k8s-node1
172.16.11.223 k8s-node2
EOF
Configure kernel IP forwarding and bridge filtering:
cat > /etc/modules-load.d/k8s.conf << EOF
overlay
br_netfilter
EOF
Load the modules:
modprobe overlay
modprobe br_netfilter
Check that they are loaded:
lsmod | grep overlay
lsmod | grep br_netfilter
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Apply the settings:

sysctl -p /etc/sysctl.d/k8s.conf

Confirm the module is still loaded:

lsmod |grep br_netfilter
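Besides checking the module, it is worth confirming that the sysctl values actually took effect:

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
# all three should print "= 1"; if not, run sysctl --system to re-apply every file under /etc/sysctl.d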

6) Disable the firewall

sudo systemctl disable --now ufw

7) Replace Ubuntu's sources.list

Follow the "ubuntu" usage help at the Tsinghua Open Source Mirror (mirrors.tuna.tsinghua.edu.cn). The template below is adjusted for Ubuntu 22.04 (jammy):

# Source mirrors are commented out by default to speed up apt update; uncomment them if needed
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ jammy main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ jammy main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ jammy-updates main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ jammy-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ jammy-backports main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ jammy-backports main restricted universe multiverse

# The security updates below use the official source; switch to the mirror by editing the URLs if preferred
deb http://security.ubuntu.com/ubuntu/ jammy-security main restricted universe multiverse
# deb-src http://security.ubuntu.com/ubuntu/ jammy-security main restricted universe multiverse

# Pre-release sources; enabling them is not recommended
# deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ jammy-proposed main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ jammy-proposed main restricted universe multiverse

8) Install ipset and ipvsadm

apt install -y ipset ipvsadm

Configure the ipvs kernel modules. The file below is written as a script and executed with bash in the verification step further down:

cat > /etc/modules-load.d/ipvs.conf << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

Alternatively, put the same commands into a standalone script:

cat << EOF | tee ipvs.sh
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

Make the file executable, run it, and check that the modules are loaded:

chmod 755 /etc/modules-load.d/ipvs.conf && bash /etc/modules-load.d/ipvs.conf && lsmod | grep -e ip_vs -e nf_conntrack
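Note that files under /etc/modules-load.d/ are normally read by systemd-modules-load, which expects one bare module name per line rather than modprobe commands; the file above only works because it is executed with bash. If you also want the modules loaded automatically at every boot, the systemd-native format is the safer choice — a minimal sketch:

cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
systemctl restart systemd-modules-load.service
lsmod | grep -e ip_vs -e nf_conntrack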

9) Container runtime: containerd

wget https://github.com/containerd/containerd/releases/download/v1.7.15/cri-containerd-1.7.15-linux-amd64.tar.gz

If the GitHub download fails due to network issues, a mirror copy is shared via Baidu Netdisk: https://pan.baidu.com/s/1VoYMDB6ikOTSn4W-tziY9g (extraction code: 3cvd).

Extract it and check the binary:

tar xf cri-containerd-1.7.15-linux-amd64.tar.gz -C /

which containerd

10) Generate and edit the containerd config file

Create the config directory:
mkdir /etc/containerd
Generate the default config:
containerd config default > /etc/containerd/config.toml

Edit the file and change the pause (sandbox) image tag from 3.8 to 3.9; using the Aliyun mirror registry.aliyuncs.com/google_containers/pause:3.9 is recommended:

vim /etc/containerd/config.toml

The generated file is long — it is simply the default output of containerd config default. Only the following settings need to differ from the defaults; everything else can be left untouched:

version = 2

[plugins."io.containerd.grpc.v1.cri"]
  # sandbox (pause) image from the Aliyun mirror instead of registry.k8s.io/pause:3.8
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  # kubelet uses the systemd cgroup driver, so runc must match
  SystemdCgroup = true

[plugins."io.containerd.grpc.v1.cri".registry]
  # per-registry mirror configuration (hosts.toml files) lives under this directory
  config_path = "/etc/containerd/certs.d"
Create the corresponding directory for docker.io and configure the pull mirror:

mkdir -p /etc/containerd/certs.d/docker.io

cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://x46sxvnb.mirror.aliyuncs.com"]
  capabilities = ["pull", "resolve"]
EOF

Start containerd and enable it at boot:

systemctl enable --now containerd

Check the version:

containerd --version

Check the status:

systemctl status containerd.service
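Once containerd is running you can verify that the docker.io mirror configuration is actually being used. crictl pulls go through the CRI plugin and therefore through certs.d; the image below is just an example (if crictl complains about its endpoints, see the /etc/crictl.yaml note later in this post):

crictl pull docker.io/library/nginx:alpine
# or test the same hosts.toml directly with ctr:
ctr images pull --hosts-dir /etc/containerd/certs.d docker.io/library/nginx:alpine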

2. Installing the Kubernetes packages (run on every node)

1) Download the public signing key for the Kubernetes package repository

Pick either the upstream community repo or the Aliyun mirror below (the Aliyun mirror is usually more reliable from mainland China).

Community repo — create the keyring directory and download the key:
sudo mkdir -p /etc/apt/keyrings/
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

Aliyun mirror — create the keyring directory and download the key:
sudo mkdir -p /etc/apt/keyrings/
curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

Add the Kubernetes apt repository (community):
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Or add the Aliyun repository:
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Update the package index:

apt-get update

Check the available kubeadm versions:

apt-cache policy kubeadm

Install the specific version:
sudo apt-get install -y kubelet=1.30.0-1.1 kubeadm=1.30.0-1.1 kubectl=1.30.0-1.1

Configure the kubelet cgroup driver (on Ubuntu the kubelet drop-in file is /etc/default/kubelet rather than RHEL's /etc/sysconfig/kubelet):
vim /etc/default/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"

Enable kubelet at boot:
systemctl enable kubelet

Pin the versions so they are not upgraded automatically:
sudo apt-mark hold kubelet kubeadm kubectl

Unpin later when you actually want to upgrade:
sudo apt-mark unhold kubelet kubeadm kubectl

3. Cluster initialization (run on k8s-master01 only)

Check the kubeadm version:

kubeadm version

Generate a default init configuration:

kubeadm config print init-defaults > kubeadm-config.yaml

The complete kubeadm-config.yaml is below; change the advertiseAddress and node name to match your environment.

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
localAPIEndpoint:
  advertiseAddress: 192.168.124.150
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master
  taints: null
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: 1.30.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
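Before running the real init it can be worth doing a dry run, which renders everything kubeadm would do without changing the node:

kubeadm init --config kubeadm-config.yaml --dry-run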

List the images the cluster needs:

kubeadm config images list --config kubeadm-config.yaml

Pull the images:

kubeadm config images pull --config kubeadm-config.yaml

Check the pulled images:

crictl images


If the image pull fails with a registry/network error, pull directly from the Aliyun repository instead:

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers

Initialize the cluster:

kubeadm init --config kubeadm-config.yaml

If this is not a fresh install, kubeadm will complain about existing files; run kubeadm reset -f and try again.

If it still fails, inspect the kubelet logs with journalctl -xeu kubelet. The following two posts cover the errors I ran into:

"validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/containerd/..." (CSDN blog)

"K8S初始化遇到问题: rpc error: code = Unknown desc = failed to get sandbox image \"registry.aliyuncs.com/goog..." (CSDN blog)

After fixing those, the init succeeded:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.124.150:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:aeceb14ab601148161cd69b9e041f7ac09e79f5e45135fdac9c4ee3e42e1acb8 

Run the kubeadm join command printed above on each worker node (as root).
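The bootstrap token in that join command expires after 24 hours (the ttl in kubeadm-config.yaml). If a node joins later, generate a fresh join command on the master:

kubeadm token create --print-join-command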


After the workers have joined, check the nodes from the master: kubectl get node

All three nodes are listed — the join succeeded (they will stay NotReady until a CNI network plugin is installed, which is the next step).

4. Installing the network (CNI) plugin (run on k8s-master01)

Start from the Calico quickstart:

Quickstart for Calico on Kubernetes | Calico Documentation (tigera.io)

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml

Check that the operator is running:

kubectl get pods -n tigera-operator

kubectl get ns

Download custom-resources.yaml with wget (copy the URL from the quickstart) instead of piping it straight into kubectl as the docs do, because it needs to be edited before it is applied:

wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/custom-resources.yaml

If raw.githubusercontent.com is unreachable you may need a proxy to fetch it. The file, already adjusted for this cluster, looks like this:

# This section includes base Calico installation configuration.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    ipPools:
    - name: default-ipv4-ippool
      blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
---
# This section configures the Calico API server.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}

Edit the file

The pod network defined in kubeadm-config.yaml is 10.244.0.0/16, so the cidr field here must be changed to match (the upstream default is 192.168.0.0/16):

vim custom-resources.yaml
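If you downloaded the unmodified upstream file, the same edit can be done non-interactively — this one-liner is simply the equivalent of the change described above:

sed -i 's#cidr: 192.168.0.0/16#cidr: 10.244.0.0/16#' custom-resources.yaml
grep cidr custom-resources.yaml    # confirm it now matches the pod CIDR in kubeadm-config.yaml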

 

Apply it:

kubectl create -f custom-resources.yaml

Check the namespaces:

kubectl get ns

Check the pods in the calico-system namespace. If they are not all Running yet, give it time — they are still being created, and on a slow network this can take up to half an hour:

kubectl get pods -n calico-system

Check the pods in kube-system as well:

kubectl get pods -n kube-system

Then check the cluster state again:

kubectl get nodes

 

Because the calico-system pods were stuck pulling images for far too long, I deleted this deployment first:

kubectl delete -f custom-resources.yaml

Choosing a network add-on: https://kubernetes.io/docs/concepts/cluster-administration/addons/
calicoctl quickstart: https://projectcalico.docs.tigera.io/archive/v3.20/getting-started/clis/calicoctl/install
Calico manifest (v3.9): https://docs.projectcalico.org/v3.9/getting-started/kubernetes/

kubectl apply -f https://docs.projectcalico.org/v3.9/manifests/calico.yaml

https://docs.tigera.io/archive/v3.9/getting-started/kubernetes/

Download calico.yaml locally:

The full calico.yaml for Calico v3.9 runs to several hundred lines: the calico-config ConfigMap (CNI network config, backend and MTU settings), the crd.projectcalico.org CustomResourceDefinitions, RBAC for calico-kube-controllers and calico-node, the calico-node DaemonSet, the calico-kube-controllers Deployment, and their ServiceAccounts. It is applied essentially unmodified, with one field worth checking: in the calico-node DaemonSet the environment variable CALICO_IPV4POOL_CIDR defaults to "192.168.0.0/16" and should be set to the cluster's pod CIDR (10.244.0.0/16 here) before the first apply, because changing it after installation has no effect. All images in this manifest (calico/cni, calico/node, calico/pod2daemon-flexvol, calico/kube-controllers) are pulled from Docker Hub at tag v3.9.6.

Watch the whole cluster come up:

kubectl get pods --all-namespaces -w

5. Working around image pull failures: switching to flannel

Because of network restrictions in mainland China, the Calico images above could not be pulled, so I removed that deployment and installed the flannel CNI plugin instead.

Method one from the following post solved it for me:

"kubernetes(1.28)配置flannel: kubelet无法拉取镜像(NotReady ImagePullBackOff)同时解决k8s配置harbor私人镜像仓库问题_flannel镜像拉取失败" (CSDN blog)

The image archives used below are shared via Baidu Netdisk: https://pan.baidu.com/s/1bkDHddFxhRfFS3iiJv8ScA (extraction code: d8bg).

kube-flannel.yml comes from the upstream project: GitHub - flannel-io/flannel (flannel is a network fabric for containers, designed for Kubernetes).

Step 1: install Docker

Reference: "Ubuntu安装docker,指定版本" (CSDN blog)

Step 2: use Docker to load the downloaded image archives as local images.

However, since Kubernetes 1.24 the kubelet no longer ships dockershim and talks to containerd by default, so images sitting in the local Docker store cannot be used even if the yml points at them — they still have to be handed over to containerd, as follows.

Step 3: save the Docker images back out as tar archives (ctr could not import the downloaded archives directly, so Docker is used as a go-between)
# docker save -o <output tar> <local image>
docker save -o flannel-flannel-cni-plugin-v1.4.1-flannel1-amd64.tar flannel/flannel-cni-plugin:v1.4.1-flannel1
docker save -o flannel-flannel-v0.25.1-amd64.tar flannel/flannel:v0.25.1

The two tar files appear in the current directory.

Step 4: import the tar archives into containerd's k8s.io namespace
# -n selects the containerd namespace
ctr -n k8s.io images import flannel-flannel-v0.25.1-amd64.tar
ctr -n k8s.io images import flannel-flannel-cni-plugin-v1.4.1-flannel1-amd64.tar


If the import succeeded, the flannel images can be found in the k8s.io namespace:

ctr -n k8s.io i check | grep flannel

Note: Kubernetes uses containerd's k8s.io namespace, and the kubelet can only pick up locally imported images from that namespace. The referenced post ran Kubernetes 1.28 and this guide runs 1.30 — on anything newer than 1.24 the kubelet takes images from containerd's local store rather than Docker's, so steps 3 and 4 above are required; on pre-1.24 clusters that still use Docker they are not, because the kubelet can use local Docker images directly.

Step 5: edit kube-flannel.yml

After downloading the kube-flannel.yml mentioned at the start of this section, adjust the image: values if necessary and add an imagePullPolicy to every container so the kubelet uses the imported local images instead of trying to pull them. The changes are visible in the file below.

The complete, modified kube-flannel.yml:

apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
  name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-flannel
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: docker.io/flannel/flannel:v0.25.1
        imagePullPolicy: Never
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: docker.io/flannel/flannel-cni-plugin:v1.4.1-flannel1
        imagePullPolicy: Never
        name: install-cni-plugin
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: docker.io/flannel/flannel:v0.25.1
        imagePullPolicy: Never
        name: install-cni
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
        name: run
      - hostPath:
          path: /opt/cni/bin
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
        name: cni
      - configMap:
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
# Delete the old resources
kubectl delete -f kube-flannel.yml
# Run it again
kubectl apply -f kube-flannel.yml

After re-applying, the flannel pods start and all nodes report Ready — the cluster is finally up.

I used method one to solve the CNI image problem; using a private registry (method two below) is arguably the cleaner approach.

Method two: configure a Harbor registry for the cluster (Kubernetes 1.28+), push the images to it, and pull them from there.
For versions before 1.24 there are plenty of guides that configure Harbor through /etc/docker/daemon.json; that approach is not repeated here.

After 1.24 the kubelet dropped dockershim and uses containerd by default. containerd has no equivalent of "docker login <registry>", and logging in to Harbor with the local Docker daemon does not let the cluster pull from it either — containerd itself has to be configured.

The containerd here is 1.6+, and the old approach of putting registry mirrors directly into /etc/containerd/config.toml is no longer recommended.

See the official documentation: containerd/docs/cri/config.md at main · containerd/containerd · GitHub

Configure a per-registry hosts.toml to point the cluster at the Harbor registry.
Steps 2.1-2.4 below must be carried out on every node!

2.1 Edit /etc/containerd/config.toml so that its registry section points at the certs.d directory (config_path = "/etc/containerd/certs.d", as set earlier in this guide).

2.2 Custom certificates / skipping TLS verification (see containerd/docs/hosts.md at main · containerd/containerd · GitHub):

# Create the config directory; replace the IP and port with your own registry's address
mkdir -p /etc/containerd/certs.d/192.168.12.34:5000
# Create/open hosts.toml
vi /etc/containerd/certs.d/192.168.12.34:5000/hosts.toml

# Put the following into hosts.toml, substituting the same IP and port
server = "http://192.168.12.34:5000"

[host."192.168.12.34:5000"]
  capabilities = ["pull", "resolve", "push"]
  skip_verify = true
2.3 Restart containerd

systemctl daemon-reload
systemctl restart containerd
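At this point a test pull from any node shows whether containerd can reach Harbor (the address and image name are placeholders for whatever you pushed; if crictl complains about its endpoints, see 2.4 below):

crictl pull 192.168.12.34:5000/flannel/flannel:v0.25.1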
2.4 Even after the steps above, crictl and the cluster manifests may still fail to pull from the private registry.

A typical symptom (screenshot omitted here) is crictl failing to connect to its runtime/image endpoints.

Fix: edit the crictl configuration

vim /etc/crictl.yaml
Set both the runtime and the image endpoint to "unix:///run/containerd/containerd.sock",

then save and quit with :wq.
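For reference, the resulting /etc/crictl.yaml should look roughly like this (assuming the default containerd socket path):

runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false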

2.5 Test pushing the images to Harbor

Upload with Docker (the traditional way):

docker tag flannel/flannel-cni-plugin:v1.4.1-flannel1 IP:PORT/flannel/flannel-cni-plugin:v1.4.1-flannel1
docker push IP:PORT/flannel/flannel-cni-plugin:v1.4.1-flannel1

Do the same for the other image (if the push is rejected, you probably need to create a "flannel" project in Harbor first).

2.6 Edit kube-flannel.yml again, pointing every image: field at the Harbor address (IP:PORT/flannel/...), and save with :wq.

Delete and re-apply kube-flannel.yml; the images are now pulled from Harbor successfully (verify with kubectl describe on the pods).

Summary

Method two is the more formal solution — a cluster should have a private registry anyway, and it saves importing images onto every node by hand. If you have no private registry yet, method one works fine.

One more detail: /etc/containerd/config.toml does not affect ctr; that configuration is used by the CRI plugin, i.e. it applies to crictl (and the kubelet).

If ctr images pull keeps failing with "http: server gave HTTP response to HTTPS client", your configuration is probably not wrong — ctr simply needs the --plain-http flag to talk to the registry over plain HTTP, or the --hosts-dir option to make it use the certs.d configuration:

ctr images pull IP:PORT/library/service:v1.0.0 --plain-http
ctr images pull IP:PORT/library/service:v1.0.0 --hosts-dir "/etc/containerd/certs.d"

Thanks to this reference: "ubuntu系统 kubeadm方式搭建k8s集群" (CSDN blog).
