Installing Kubernetes v1.22.1 with Kubeeasy (install errors resolved)

Preparing the base environment

Upload the provided installation package chinaskills_cloud_paas_v2.0.2.iso to the /root directory on the master node, then extract it to the /opt directory:

[root@localhost ~]# ll
total 7446736
-rw-------. 1 root root       1579 Mar  7 22:46 anaconda-ks.cfg
-rw-r--r--. 1 root root 4712300544 Jun  7  2022 CentOS-7-x86_64-DVD-2009.iso
-rw-r--r--. 1 root root 2913150976 Jun 20  2022 chinaskills_cloud_paas_v2.0.2.iso

Create a directory to hold the yum repository

[root@localhost ~]# mkdir /opt/centos

Mount the image

[root@localhost ~]# mount chinaskills_cloud_paas_v2.0.2.iso /mnt/
mount: /dev/loop0 is write-protected, mounting read-only
[root@localhost ~]# cp -rf /mnt/* /opt/
[root@localhost ~]# ll /opt/
total 1964852
drwxr-xr-x. 2 root root          6 May 27 19:33 centos
dr-xr-xr-x. 2 root root         55 May 27 19:35 dependencies
dr-xr-xr-x. 2 root root        181 May 27 19:35 extended-images
-r-xr-xr-x. 1 root root  615853450 May 27 19:35 harbor-offline.tar.gz
-r-xr-xr-x. 1 root root   13862382 May 27 19:35 helm-v3.7.1-linux-amd64.tar.gz
-r-xr-xr-x. 1 root root   21963365 May 27 19:35 istio.tar.gz
-r-xr-xr-x. 1 root root     143832 May 27 19:35 kubeeasy
-r-xr-xr-x. 1 root root 1339977057 May 27 19:35 kubernetes.tar.gz
-r-xr-xr-x. 1 root root   20196005 May 27 19:35 kubevirt.tar.gz

Installing kubeeasy

kubeeasy is a professional deployment tool for Kubernetes clusters that greatly simplifies the deployment process. Its features include:

Fully automated installation flow;

DNS-based cluster discovery;

Self-healing: everything runs in auto-scaling groups;

Support for multiple operating systems (e.g. Debian, Ubuntu 16.04, CentOS 7, RHEL);

High availability support.

Install the kubeeasy tool on the master node:

[root@localhost ~]# mv /opt/kubeeasy /usr/bin/
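
Moving the binary into /usr/bin puts it on every user's PATH, and `command -v` confirms the shell can resolve it. The sketch below simulates this with a throwaway directory and a placeholder script, since the real kubeeasy binary exists only on the master node:

```shell
# Simulate installing a tool onto PATH and verifying the shell resolves it.
# On the master the real step is simply: mv /opt/kubeeasy /usr/bin/
bindir=$(mktemp -d)
printf '#!/bin/sh\necho kubeeasy placeholder\n' > "$bindir/kubeeasy"
chmod +x "$bindir/kubeeasy"
PATH="$bindir:$PATH"
command -v kubeeasy   # prints the resolved path inside $bindir
```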

Installing dependency packages

This step installs tools such as docker-ce, git, unzip, vim, and wget.

Run the following command on the master node to install the dependency packages:

[root@localhost ~]# kubeeasy install depend \
--host 192.168.169.10,192.168.169.20 \
--user root \
--password 000000 \
--offline-file /opt/dependencies/base-rpms.tar.gz 

Output:

[2024-05-27 21:26:07] INFO:    [start] bash kubeeasy install depend --host 192.168.169.10,192.168.169.20 --user root --password ****** --offline-file /opt/dependencies/base-rpms.tar.gz
[2024-05-27 21:26:07] INFO:    [offline] unzip offline dependencies package on local.
[2024-05-27 21:26:09] INFO:    [offline] unzip offline dependencies package succeeded.
[2024-05-27 21:26:09] INFO:    [install] install dependencies packages on local.
[2024-05-27 21:27:11] INFO:    [install] install dependencies packages succeeded.
[2024-05-27 21:27:16] INFO:    [offline] 192.168.169.10: load offline dependencies file
[2024-05-27 21:27:20] INFO:    [offline] load offline dependencies file to 192.168.169.10 succeeded.
[2024-05-27 21:27:20] INFO:    [install] 192.168.169.10: install dependencies packages
[2024-05-27 21:27:21] INFO:    [install] 192.168.169.10: install dependencies packages succeeded.
[2024-05-27 21:27:26] INFO:    [offline] 192.168.169.20: load offline dependencies file
[2024-05-27 21:27:35] INFO:    [offline] load offline dependencies file to 192.168.169.20 succeeded.
[2024-05-27 21:27:35] INFO:    [install] 192.168.169.20: install dependencies packages
[2024-05-27 21:29:03] INFO:    [install] 192.168.169.20: install dependencies packages succeeded.
See detailed log >> /var/log/kubeinstall.log

Configuring passwordless SSH

When installing a Kubernetes cluster, passwordless login must be configured between the cluster nodes for file transfer and communication.
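
A manual sketch of what `kubeeasy create ssh-keygen` automates: generate a key pair and authorize its public half. This runs against a throwaway directory so it is self-contained; on the real cluster you would push the key to each node with ssh-copy-id instead of appending locally:

```shell
# Sketch of key-based auth setup in a scratch directory (stands in for /root).
home=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N '' -f "$home/id_rsa" -q
mkdir -p "$home/.ssh"
# Authorizing the key on a node boils down to appending the public key:
cat "$home/id_rsa.pub" >> "$home/.ssh/authorized_keys"
chmod 700 "$home/.ssh"
chmod 600 "$home/.ssh/authorized_keys"
# Real-node form: sshpass -p 000000 ssh-copy-id -i "$home/id_rsa.pub" root@192.168.169.20
```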

Run the following command on the master node to check connectivity between the cluster nodes:

[root@localhost ~]# kubeeasy check ssh \
--host 192.168.169.10,192.168.169.20 \
--user root \
--password 000000

Output:

[2024-05-27 21:29:49] INFO:    [start] bash kubeeasy check ssh --host 192.168.169.10,192.168.169.20 --user root --password ******
[2024-05-27 21:29:49] INFO:    [check] sshpass command exists.
[2024-05-27 21:29:49] INFO:    [check] ssh 192.168.169.10 connection succeeded.
[2024-05-27 21:29:49] INFO:    [check] ssh 192.168.169.20 connection succeeded.
See detailed log >> /var/log/kubeinstall.log

Generate the SSH keys

[root@localhost ~]# kubeeasy create ssh-keygen \
--master 192.168.169.10 \
--worker 192.168.169.20 \
--user root \
--password 000000

Output:

[2024-05-27 21:31:56] INFO:    [start] bash kubeeasy create ssh-keygen --master 192.168.169.10 --worker 192.168.169.20 --user root --password ******
[2024-05-27 21:31:56] INFO:    [check] sshpass command exists.
[2024-05-27 21:31:57] INFO:    [check] ssh 192.168.169.10 connection succeeded.
[2024-05-27 21:31:57] INFO:    [check] ssh 192.168.169.20 connection succeeded.
[2024-05-27 21:31:58] INFO:    [create] create ssh keygen 192.168.169.10
[2024-05-27 21:31:58] INFO:    [create] create ssh keygen 192.168.169.10 succeeded.
[2024-05-27 21:31:59] INFO:    [create] create ssh keygen 192.168.169.20
[2024-05-27 21:31:59] INFO:    [create] create ssh keygen 192.168.169.20 succeeded.
See detailed log >> /var/log/kubeinstall.log

Deploying the Kubernetes cluster

[root@localhost ~]# kubeeasy install k8s \
--master 192.168.169.10 \
--worker 192.168.169.20 \
--user root \
--password 000000 \
--version 1.22.1 \
--offline-file /opt/kubernetes.tar.gz 

The installation fails with the following errors:

[2024-05-27 21:34:16] INFO:    [start] bash kubeeasy install k8s --master 192.168.169.10 --worker 192.168.169.20 --user root --password ****** --version 1.22.1 --offline-file /opt/kubernetes.tar.gz
[2024-05-27 21:34:16] INFO:    [check] sshpass command exists.
[2024-05-27 21:34:16] INFO:    [check] rsync command exists.
[2024-05-27 21:34:17] INFO:    [check] ssh 192.168.169.10 connection succeeded.
[2024-05-27 21:34:17] INFO:    [check] ssh 192.168.169.20 connection succeeded.
[2024-05-27 21:34:17] INFO:    [offline] unzip offline package on local.
[2024-05-27 21:34:30] INFO:    [offline] unzip offline package succeeded.
[2024-05-27 21:34:30] INFO:    [offline] master 192.168.169.10: load offline file
[2024-05-27 21:34:31] INFO:    [offline] load offline file to 192.168.169.10 succeeded.
[2024-05-27 21:34:31] INFO:    [offline] master 192.168.169.10: disable the firewall
[2024-05-27 21:34:33] INFO:    [offline] 192.168.169.10: disable the firewall succeeded.
[2024-05-27 21:34:33] INFO:    [offline] worker 192.168.169.20: load offline file
[2024-05-27 21:35:32] INFO:    [offline] load offline file to 192.168.169.20 succeeded.
[2024-05-27 21:35:32] INFO:    [offline] worker 192.168.169.20: disable the firewall
[2024-05-27 21:35:34] INFO:    [offline] 192.168.169.20: disable the firewall succeeded.
[2024-05-27 21:35:34] INFO:    [get] Get 192.168.169.10 InternalIP.
[2024-05-27 21:35:35] INFO:    [result] get MGMT_NODE_IP value succeeded.
[2024-05-27 21:35:35] INFO:    [result] MGMT_NODE_IP is 192.168.169.10
[2024-05-27 21:35:35] INFO:    [init] master: 192.168.169.10
[2024-05-27 21:35:38] INFO:    [init] init master 192.168.169.10 succeeded.
[2024-05-27 21:35:38] INFO:    [init] master: 192.168.169.10 set hostname and hosts
[2024-05-27 21:35:38] INFO:    [init] 192.168.169.10 set hostname and hosts succeeded.
[2024-05-27 21:35:38] INFO:    [init] worker: 192.168.169.20
[2024-05-27 21:35:41] INFO:    [init] init worker 192.168.169.20 succeeded.
[2024-05-27 21:35:41] INFO:    [init] master: 192.168.169.20 set hostname and hosts
[2024-05-27 21:35:41] INFO:    [init] 192.168.169.20 set hostname and hosts succeeded.
[2024-05-27 21:35:41] INFO:    [install] install docker on 192.168.169.10.
[2024-05-27 21:35:42] ERROR:   [install] install docker on 192.168.169.10 failed.
[2024-05-27 21:35:42] INFO:    [install] install kube on 192.168.169.10
[2024-05-27 21:35:43] INFO:    [install] install kube on 192.168.169.10 succeeded.
[2024-05-27 21:35:43] INFO:    [install] install docker on 192.168.169.20.
[2024-05-27 21:35:44] ERROR:   [install] install docker on 192.168.169.20 failed.
[2024-05-27 21:35:44] INFO:    [install] install kube on 192.168.169.20
[2024-05-27 21:35:45] INFO:    [install] install kube on 192.168.169.20 succeeded.
[2024-05-27 21:35:45] INFO:    [kubeadm init] kubeadm init on 192.168.169.10
[2024-05-27 21:35:45] INFO:    [kubeadm init] 192.168.169.10: set kubeadm-config.yaml
[2024-05-27 21:35:45] INFO:    [kubeadm init] 192.168.169.10: set kubeadm-config.yaml succeeded.
[2024-05-27 21:35:45] INFO:    [kubeadm init] 192.168.169.10: kubeadm init start.
[2024-05-27 21:35:48] ERROR:   [kubeadm init] 192.168.169.10: kubeadm init failed.
ERROR Summary:
[2024-05-27 21:35:42] ERROR:   [install] install docker on 192.168.169.10 failed.
[2024-05-27 21:35:44] ERROR:   [install] install docker on 192.168.169.20 failed.
[2024-05-27 21:35:48] ERROR:   [kubeadm init] 192.168.169.10: kubeadm init failed.
See detailed log >> /var/log/kubeinstall.log

Checking the log shows that the Docker and containerd services were never started:

Server:
ERROR: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
errors pretty printing info, error: exit status 1
[ERROR Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
[ERROR SystemVerification]: error verifying Docker info: "Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[2024-05-27 21:35:48] ERROR:   [kubeadm init] 192.168.169.10: kubeadm init failed.

They were indeed never installed:

[root@localhost ~]# systemctl status docker
Unit docker.service could not be found.
[root@localhost ~]# systemctl status containerd
Unit containerd.service could not be found.
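
When a run fails like this, pulling just the ERROR lines out of kubeeasy's log quickly shows what actually broke. A self-contained sketch (the sample log lines are copied from the failed run above; on the master the real file is /var/log/kubeinstall.log):

```shell
# Extract only the ERROR lines from a kubeeasy-style log.
log=$(mktemp)
cat > "$log" << 'EOF'
[2024-05-27 21:35:42] ERROR:   [install] install docker on 192.168.169.10 failed.
[2024-05-27 21:35:43] INFO:    [install] install kube on 192.168.169.10 succeeded.
[2024-05-27 21:35:44] ERROR:   [install] install docker on 192.168.169.20 failed.
EOF
grep 'ERROR' "$log"   # only the two docker install failures remain
```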

Fixing the errors

# Configure the yum repository
[root@localhost ~]# mount CentOS-7-x86_64-DVD-2009.iso /mnt/
mount: /dev/loop1 is write-protected, mounting read-only
[root@localhost ~]# cp -rf /mnt/* /opt/centos/
[root@localhost ~]# ll /opt/centos/
total 328
-rw-r--r--. 1 root root     14 May 27 21:41 CentOS_BuildTag
drwxr-xr-x. 3 root root     35 May 27 21:41 EFI
-rw-r--r--. 1 root root    227 May 27 21:41 EULA
-rw-r--r--. 1 root root  18009 May 27 21:41 GPL
drwxr-xr-x. 3 root root     57 May 27 21:41 images
drwxr-xr-x. 2 root root    198 May 27 21:41 isolinux
drwxr-xr-x. 2 root root     43 May 27 21:41 LiveOS
drwxr-xr-x. 2 root root 225280 May 27 21:41 Packages
drwxr-xr-x. 2 root root   4096 May 27 21:41 repodata
-rw-r--r--. 1 root root   1690 May 27 21:41 RPM-GPG-KEY-CentOS-7
-rw-r--r--. 1 root root   1690 May 27 21:41 RPM-GPG-KEY-CentOS-Testing-7
-r--r--r--. 1 root root   2883 May 27 21:41 TRANS.TBL

Create the local yum repository file

[root@localhost ~]# cat > /etc/yum.repos.d/centos.repo << EOF
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
EOF
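
After writing the repo file it is worth sanity-checking its fields before pointing yum at it. A self-contained sketch that writes the same definition to a scratch path (on the real master the target is /etc/yum.repos.d/centos.repo):

```shell
# Write the repo definition to a scratch file and verify its key fields.
repo=$(mktemp)
cat > "$repo" << 'EOF'
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
EOF
grep -q '^baseurl=file:///opt/centos$' "$repo" && echo "repo file OK"
```

On the master, `yum clean all && yum repolist` should then list the `centos` repo.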

kubeeasy wipes the yum repository configuration when it runs, so back the file up first:

[root@localhost ~]# cp /etc/yum.repos.d/centos.repo .

About libseccomp

The Docker installation failed for lack of the libseccomp dependency, which could not be resolved because the hosts had no usable yum repository; with the local repository in place, it can now be installed. libseccomp is the userspace interface to the Linux kernel's Seccomp (Secure Computing) facility: a program installs a BPF (Berkeley Packet Filter) based rule set describing which system calls may be executed and which are blocked, shrinking the attack surface against malicious code. The library exposes a C API (with Go and Python bindings), allows filter rules to be adjusted at runtime, and supports a wide range of kernel versions, with fallbacks for older kernels.

Install it from the local repository configured above:

[root@localhost ~]# yum install -y libseccomp

Configuring the vsftpd service

# vsftpd was already installed during the kubeeasy run, and the firewall is already disabled
[root@localhost ~]# cat >> /etc/vsftpd/vsftpd.conf << EOF
anon_root=/opt/
EOF
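
Note that `cat >>` appends unconditionally, so re-running the step above would leave a duplicate anon_root line in the config. A guard sketch, run here against a scratch file standing in for /etc/vsftpd/vsftpd.conf:

```shell
conf=$(mktemp)   # stand-in for /etc/vsftpd/vsftpd.conf
append_once() {
  # Append the line only if it is not already present verbatim.
  grep -qxF "$1" "$2" || echo "$1" >> "$2"
}
append_once 'anon_root=/opt/' "$conf"
append_once 'anon_root=/opt/' "$conf"   # second call is a no-op
grep -c 'anon_root' "$conf"             # prints 1, not 2
```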

Restart the vsftpd service

[root@localhost ~]# systemctl restart vsftpd

Configure the yum repository on the worker node

[root@localhost ~]# cat > /etc/yum.repos.d/centos.repo << EOF
[centos]
name=centos
baseurl=ftp://192.168.169.10/centos
gpgcheck=0
EOF

Install the dependency on the worker node as well

[root@localhost ~]# yum install -y libseccomp

Install the dependency packages again

[root@localhost ~]# kubeeasy install depend \
--host 192.168.169.10,192.168.169.20 \
--user root \
--password 000000 \
--offline-file /opt/dependencies/base-rpms.tar.gz

Output:

[2024-05-27 21:51:36] INFO:    [start] bash kubeeasy install depend --host 192.168.169.10,192.168.169.20 --user root --password ****** --offline-file /opt/dependencies/base-rpms.tar.gz
[2024-05-27 21:51:36] INFO:    [offline] unzip offline dependencies package on local.
[2024-05-27 21:51:38] INFO:    [offline] unzip offline dependencies package succeeded.
[2024-05-27 21:51:38] INFO:    [install] install dependencies packages on local.
[2024-05-27 21:52:07] INFO:    [install] install dependencies packages succeeded.
[2024-05-27 21:52:08] INFO:    [offline] 192.168.169.10: load offline dependencies file
[2024-05-27 21:52:12] INFO:    [offline] load offline dependencies file to 192.168.169.10 succeeded.
[2024-05-27 21:52:12] INFO:    [install] 192.168.169.10: install dependencies packages
[2024-05-27 21:52:12] INFO:    [install] 192.168.169.10: install dependencies packages succeeded.
[2024-05-27 21:52:13] INFO:    [offline] 192.168.169.20: load offline dependencies file
[2024-05-27 21:52:20] INFO:    [offline] load offline dependencies file to 192.168.169.20 succeeded.
[2024-05-27 21:52:20] INFO:    [install] 192.168.169.20: install dependencies packages
[2024-05-27 21:52:52] INFO:    [install] 192.168.169.20: install dependencies packages succeeded.

Check the service status. The docker and containerd units now exist, although they stay inactive until the cluster installation starts them:

[root@localhost ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: https://docs.docker.com
[root@localhost ~]# systemctl status containerd
● containerd.service - containerd container runtime
   Loaded: loaded (/usr/lib/systemd/system/containerd.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: https://containerd.io

Installing Kubernetes again

[root@localhost ~]# kubeeasy install k8s \
--master 192.168.169.10 \
--worker 192.168.169.20 \
--user root \
--password 000000 \
--version 1.22.1 \
--offline-file /opt/kubernetes.tar.gz 

Output (the earlier errors are resolved):

[2024-05-27 21:53:36] INFO:    [start] bash kubeeasy install k8s --master 192.168.169.10 --worker 192.168.169.20 --user root --password ****** --version 1.22.1 --offline-file /opt/kubernetes.tar.gz
[2024-05-27 21:53:36] INFO:    [check] sshpass command exists.
[2024-05-27 21:53:36] INFO:    [check] rsync command exists.
[2024-05-27 21:53:37] INFO:    [check] ssh 192.168.169.10 connection succeeded.
[2024-05-27 21:53:37] INFO:    [check] ssh 192.168.169.20 connection succeeded.
[2024-05-27 21:53:37] INFO:    [offline] unzip offline package on local.
[2024-05-27 21:53:48] INFO:    [offline] unzip offline package succeeded.
[2024-05-27 21:53:48] INFO:    [offline] master 192.168.169.10: load offline file
[2024-05-27 21:53:49] INFO:    [offline] load offline file to 192.168.169.10 succeeded.
[2024-05-27 21:53:49] INFO:    [offline] master 192.168.169.10: disable the firewall
[2024-05-27 21:53:49] INFO:    [offline] 192.168.169.10: disable the firewall succeeded.
[2024-05-27 21:53:49] INFO:    [offline] worker 192.168.169.20: load offline file
[2024-05-27 21:53:50] INFO:    [offline] load offline file to 192.168.169.20 succeeded.
[2024-05-27 21:53:50] INFO:    [offline] worker 192.168.169.20: disable the firewall
[2024-05-27 21:53:51] INFO:    [offline] 192.168.169.20: disable the firewall succeeded.
[2024-05-27 21:53:51] INFO:    [get] Get 192.168.169.10 InternalIP.
[2024-05-27 21:53:51] INFO:    [result] get MGMT_NODE_IP value succeeded.
[2024-05-27 21:53:51] INFO:    [result] MGMT_NODE_IP is 192.168.169.10
[2024-05-27 21:53:51] INFO:    [init] master: 192.168.169.10
[2024-05-27 21:53:53] INFO:    [init] init master 192.168.169.10 succeeded.
[2024-05-27 21:53:54] INFO:    [init] master: 192.168.169.10 set hostname and hosts
[2024-05-27 21:53:55] INFO:    [init] 192.168.169.10 set hostname and hosts succeeded.
[2024-05-27 21:53:55] INFO:    [init] worker: 192.168.169.20
[2024-05-27 21:53:57] INFO:    [init] init worker 192.168.169.20 succeeded.
[2024-05-27 21:53:57] INFO:    [init] master: 192.168.169.20 set hostname and hosts
[2024-05-27 21:53:59] INFO:    [init] 192.168.169.20 set hostname and hosts succeeded.
[2024-05-27 21:53:59] INFO:    [install] install docker on 192.168.169.10.
[2024-05-27 21:56:17] INFO:    [install] install docker on 192.168.169.10 succeeded.
[2024-05-27 21:56:17] INFO:    [install] install kube on 192.168.169.10
[2024-05-27 21:56:18] INFO:    [install] install kube on 192.168.169.10 succeeded.
[2024-05-27 21:56:18] INFO:    [install] install docker on 192.168.169.20.
[2024-05-27 22:00:47] INFO:    [install] install docker on 192.168.169.20 succeeded.
[2024-05-27 22:00:47] INFO:    [install] install kube on 192.168.169.20
[2024-05-27 22:00:49] INFO:    [install] install kube on 192.168.169.20 succeeded.
[2024-05-27 22:00:49] INFO:    [kubeadm init] kubeadm init on 192.168.169.10
[2024-05-27 22:00:49] INFO:    [kubeadm init] 192.168.169.10: set kubeadm-config.yaml
[2024-05-27 22:00:49] INFO:    [kubeadm init] 192.168.169.10: set kubeadm-config.yaml succeeded.
[2024-05-27 22:00:49] INFO:    [kubeadm init] 192.168.169.10: kubeadm init start.
[2024-05-27 22:01:20] INFO:    [kubeadm init] 192.168.169.10: kubeadm init succeeded.
[2024-05-27 22:01:23] INFO:    [kubeadm init] 192.168.169.10: set kube config.
[2024-05-27 22:01:24] INFO:    [kubeadm init] 192.168.169.10: set kube config succeeded.
[2024-05-27 22:01:24] INFO:    [kubeadm init] 192.168.169.10: delete master taint
[2024-05-27 22:01:24] INFO:    [kubeadm init] 192.168.169.10: delete master taint succeeded.
[2024-05-27 22:01:25] INFO:    [kubeadm init] Auto-Approve kubelet cert csr succeeded.
[2024-05-27 22:01:25] INFO:    [kubeadm join] master: get join token and cert info
[2024-05-27 22:01:25] INFO:    [result] get CACRT_HASH value succeeded.
[2024-05-27 22:01:26] INFO:    [result] get INTI_CERTKEY value succeeded.
[2024-05-27 22:01:26] INFO:    [result] get INIT_TOKEN value succeeded.
[2024-05-27 22:01:26] INFO:    [kubeadm join] worker 192.168.169.20 join cluster.
[2024-05-27 22:01:40] INFO:    [kubeadm join] worker 192.168.169.20 join cluster succeeded.
[2024-05-27 22:01:40] INFO:    [kubeadm join] set 192.168.169.20 worker node role.
[2024-05-27 22:01:40] INFO:    [kubeadm join] set 192.168.169.20 worker node role succeeded.
[2024-05-27 22:01:40] INFO:    [network] add flannel network
[2024-05-27 22:01:41] INFO:    [calico] change flannel pod subnet succeeded.
[2024-05-27 22:01:41] INFO:    [apply] apply kube-flannel.yaml file
[2024-05-27 22:01:42] INFO:    [apply] apply kube-flannel.yaml file succeeded.
[2024-05-27 22:01:45] INFO:    [waiting] waiting kube-flannel-ds
[2024-05-27 22:01:46] INFO:    [waiting] kube-flannel-ds pods ready succeeded.
[2024-05-27 22:01:46] INFO:    [apply] apply coredns-cm.yaml file
[2024-05-27 22:01:47] INFO:    [apply] apply coredns-cm.yaml file succeeded.
[2024-05-27 22:01:47] INFO:    [apply] apply metrics-server.yaml file
[2024-05-27 22:01:48] INFO:    [apply] apply metrics-server.yaml file succeeded.
[2024-05-27 22:01:51] INFO:    [waiting] waiting metrics-server
[2024-05-27 22:02:01] INFO:    [waiting] metrics-server pods ready succeeded.
[2024-05-27 22:02:01] INFO:    [apply] apply dashboard.yaml file
[2024-05-27 22:02:02] INFO:    [apply] apply dashboard.yaml file succeeded.
[2024-05-27 22:02:05] INFO:    [waiting] waiting dashboard-agent
[2024-05-27 22:02:06] INFO:    [waiting] dashboard-agent pods ready succeeded.
[2024-05-27 22:02:09] INFO:    [waiting] waiting dashboard-en
[2024-05-27 22:02:09] INFO:    [waiting] dashboard-en pods ready succeeded.
[2024-05-27 22:02:24] INFO:    [cluster] kubernetes cluster status
+ kubectl get node -o wide
NAME               STATUS   ROLES                         AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
k8s-master-node1   Ready    control-plane,master,worker   70s   v1.22.1   192.168.169.10   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://20.10.8
k8s-worker-node1   Ready    worker                        49s   v1.22.1   192.168.169.20   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://20.10.8
+ kubectl get pods -A -o wide
NAMESPACE      NAME                                       READY   STATUS    RESTARTS   AGE   IP               NODE               NOMINATED NODE   READINESS GATES
dashboard-cn   dashboard-agent-cd88cf454-4m5ct            1/1     Running   0          23s   10.244.1.3       k8s-worker-node1   <none>           <none>
dashboard-cn   dashboard-cn-64bd46887f-dtkxv              1/1     Running   0          23s   10.244.1.2       k8s-worker-node1   <none>           <none>
dashboard-en   dashboard-en-55596d469-84ggw               1/1     Running   0          23s   10.244.1.4       k8s-worker-node1   <none>           <none>
kube-system    coredns-78fcd69978-b77bx                   1/1     Running   0          52s   10.244.0.2       k8s-master-node1   <none>           <none>
kube-system    coredns-78fcd69978-lnwxl                   1/1     Running   0          52s   10.244.0.3       k8s-master-node1   <none>           <none>
kube-system    etcd-k8s-master-node1                      1/1     Running   0          65s   192.168.169.10   k8s-master-node1   <none>           <none>
kube-system    kube-apiserver-k8s-master-node1            1/1     Running   0          65s   192.168.169.10   k8s-master-node1   <none>           <none>
kube-system    kube-controller-manager-k8s-master-node1   1/1     Running   0          65s   192.168.169.10   k8s-master-node1   <none>           <none>
kube-system    kube-flannel-ds-sfczm                      1/1     Running   0          43s   192.168.169.10   k8s-master-node1   <none>           <none>
kube-system    kube-flannel-ds-tzz9l                      1/1     Running   0          43s   192.168.169.20   k8s-worker-node1   <none>           <none>
kube-system    kube-proxy-gg64q                           1/1     Running   0          49s   192.168.169.20   k8s-worker-node1   <none>           <none>
kube-system    kube-proxy-p5thp                           1/1     Running   0          52s   192.168.169.10   k8s-master-node1   <none>           <none>
kube-system    kube-scheduler-k8s-master-node1            1/1     Running   0          68s   192.168.169.10   k8s-master-node1   <none>           <none>
kube-system    metrics-server-77564bc84d-5687x            1/1     Running   0          37s   192.168.169.10   k8s-master-node1   <none>           <none>
See detailed log >> /var/log/kubeinstall.log

After the deployment completes, check the cluster status

[root@localhost ~]# kubectl cluster-info
Kubernetes control plane is running at https://apiserver.cluster.local:6443
CoreDNS is running at https://apiserver.cluster.local:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@localhost ~]# kubectl get nodes
NAME               STATUS   ROLES                         AGE     VERSION
k8s-master-node1   Ready    control-plane,master,worker   2m35s   v1.22.1
k8s-worker-node1   Ready    worker                        2m14s   v1.22.1
[root@localhost ~]# kubectl get pods --all-namespaces
NAMESPACE      NAME                                       READY   STATUS    RESTARTS   AGE
dashboard-cn   dashboard-agent-cd88cf454-4m5ct            1/1     Running   0          117s
dashboard-cn   dashboard-cn-64bd46887f-dtkxv              1/1     Running   0          117s
dashboard-en   dashboard-en-55596d469-84ggw               1/1     Running   0          117s
kube-system    coredns-78fcd69978-b77bx                   1/1     Running   0          2m26s
kube-system    coredns-78fcd69978-lnwxl                   1/1     Running   0          2m26s
kube-system    etcd-k8s-master-node1                      1/1     Running   0          2m39s
kube-system    kube-apiserver-k8s-master-node1            1/1     Running   0          2m39s
kube-system    kube-controller-manager-k8s-master-node1   1/1     Running   0          2m39s
kube-system    kube-flannel-ds-sfczm                      1/1     Running   0          2m17s
kube-system    kube-flannel-ds-tzz9l                      1/1     Running   0          2m17s
kube-system    kube-proxy-gg64q                           1/1     Running   0          2m23s
kube-system    kube-proxy-p5thp                           1/1     Running   0          2m26s
kube-system    kube-scheduler-k8s-master-node1            1/1     Running   0          2m42s
kube-system    metrics-server-77564bc84d-5687x            1/1     Running   0          2m11s

Check node resource usage

[root@localhost ~]# kubectl top nodes --use-protocol-buffers
NAME               CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master-node1   358m         17%    1171Mi          30%       
k8s-worker-node1   141m         7%     699Mi           18%
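
The `kubectl top` table is easy to post-process if you want to flag nodes running hot. A sketch using the sample output above (the 25% memory threshold is an arbitrary choice for illustration):

```shell
# Flag nodes whose MEMORY% exceeds a threshold by parsing `kubectl top nodes`
# style output. The table here is the sample captured above; on a live cluster,
# pipe `kubectl top nodes --use-protocol-buffers` in instead.
threshold=25
hot=$(printf '%s\n' \
  'NAME               CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%' \
  'k8s-master-node1   358m         17%    1171Mi          30%' \
  'k8s-worker-node1   141m         7%     699Mi           18%' |
  awk -v t="$threshold" 'NR>1 { gsub(/%/,"",$5); if ($5+0 > t) print $1, "memory at " $5 "%" }')
echo "$hot"   # -> k8s-master-node1 memory at 30%
```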

Deploying the KubeVirt cluster

[root@localhost ~]# kubeeasy add --virt kubevirt

Output:

[2024-05-27 22:07:04] INFO:    [start] bash kubeeasy add --virt kubevirt
[2024-05-27 22:07:04] INFO:    [check] sshpass command exists.
[2024-05-27 22:07:04] INFO:    [check] wget command exists.
[2024-05-27 22:07:04] INFO:    [check] conn apiserver succeeded.
[2024-05-27 22:07:05] INFO:    [virt] add kubevirt
[2024-05-27 22:07:05] INFO:    [apply] apply kubevirt-operator.yaml file
[2024-05-27 22:07:06] INFO:    [apply] apply kubevirt-operator.yaml file succeeded.
[2024-05-27 22:07:09] INFO:    [waiting] waiting kubevirt
[2024-05-27 22:07:16] INFO:    [waiting] kubevirt pods ready succeeded.
[2024-05-27 22:07:16] INFO:    [apply] apply kubevirt-cr.yaml file
[2024-05-27 22:07:17] INFO:    [apply] apply kubevirt-cr.yaml file succeeded.
[2024-05-27 22:07:50] INFO:    [waiting] waiting kubevirt
[2024-05-27 22:08:00] INFO:    [waiting] kubevirt pods ready succeeded.
[2024-05-27 22:08:04] INFO:    [waiting] waiting kubevirt
[2024-05-27 22:08:49] INFO:    [waiting] kubevirt pods ready succeeded.
[2024-05-27 22:08:52] INFO:    [waiting] waiting kubevirt
[2024-05-27 22:08:52] INFO:    [waiting] kubevirt pods ready succeeded.
[2024-05-27 22:08:52] INFO:    [apply] apply multus-daemonset.yaml file
[2024-05-27 22:08:54] INFO:    [apply] apply multus-daemonset.yaml file succeeded.
[2024-05-27 22:08:57] INFO:    [waiting] waiting kube-multus
[2024-05-27 22:08:58] INFO:    [waiting] kube-multus pods ready succeeded.
[2024-05-27 22:08:58] INFO:    [apply] apply multus-cni-macvlan.yaml file
[2024-05-27 22:09:01] INFO:    [apply] apply multus-cni-macvlan.yaml file succeeded.
[2024-05-27 22:09:01] INFO:    [cluster] kubernetes kubevirt status
+ kubectl get pod -n kubevirt -o wide
NAME                              READY   STATUS    RESTARTS   AGE    IP           NODE               NOMINATED NODE   READINESS GATES
virt-api-86f9d6d4f-4vdrq          1/1     Running   0          81s    10.244.0.5   k8s-master-node1   <none>           <none>
virt-api-86f9d6d4f-56t5x          1/1     Running   0          81s    10.244.1.7   k8s-worker-node1   <none>           <none>
virt-controller-54b79f5db-4dfhq   1/1     Running   0          53s    10.244.0.7   k8s-master-node1   <none>           <none>
virt-controller-54b79f5db-vflkh   1/1     Running   0          53s    10.244.1.8   k8s-worker-node1   <none>           <none>
virt-handler-4kzm9                1/1     Running   0          53s    10.244.1.9   k8s-worker-node1   <none>           <none>
virt-handler-rrsdz                1/1     Running   0          53s    10.244.0.6   k8s-master-node1   <none>           <none>
virt-operator-6fbd74566c-9nrj6    1/1     Running   0          115s   10.244.0.4   k8s-master-node1   <none>           <none>
virt-operator-6fbd74566c-vmwz7    1/1     Running   0          115s   10.244.1.5   k8s-worker-node1   <none>           <none>
See detailed log >> /var/log/kubeinstall.log

Deploying the Istio service mesh

[root@localhost ~]# kubeeasy add --istio istio

Output:

[2024-05-27 22:09:53] INFO:    [start] bash kubeeasy add --istio istio
[2024-05-27 22:09:53] INFO:    [check] sshpass command exists.
[2024-05-27 22:09:53] INFO:    [check] wget command exists.
[2024-05-27 22:09:53] INFO:    [check] conn apiserver succeeded.
[2024-05-27 22:09:55] INFO:    [istio] add istio
✔ Istio core installed                                                                                                                                                     
✔ Istiod installed                                                                                                                                                         
✔ Egress gateways installed                                                                                                                                                
✔ Ingress gateways installed                                                                                                                                               
✔ Installation complete
Making this installation the default for injection and validation.
Thank you for installing Istio 1.12.  Please take a few minutes to tell us about your install/upgrade experience!  https://forms.gle/FegQbc9UvePd4Z9z7
[2024-05-27 22:10:23] INFO:    [waiting] waiting istio-egressgateway
[2024-05-27 22:10:23] INFO:    [waiting] istio-egressgateway pods ready succeeded.
[2024-05-27 22:10:26] INFO:    [waiting] waiting istio-ingressgateway
[2024-05-27 22:10:26] INFO:    [waiting] istio-ingressgateway pods ready succeeded.
[2024-05-27 22:10:29] INFO:    [waiting] waiting istiod
[2024-05-27 22:10:29] INFO:    [waiting] istiod pods ready succeeded.
[2024-05-27 22:10:33] INFO:    [waiting] waiting grafana
[2024-05-27 22:10:35] INFO:    [waiting] grafana pods ready succeeded.
[2024-05-27 22:10:38] INFO:    [waiting] waiting jaeger
[2024-05-27 22:10:38] INFO:    [waiting] jaeger pods ready succeeded.
[2024-05-27 22:10:41] INFO:    [waiting] waiting kiali
[2024-05-27 22:11:00] INFO:    [waiting] kiali pods ready succeeded.
[2024-05-27 22:11:03] INFO:    [waiting] waiting prometheus
[2024-05-27 22:11:03] INFO:    [waiting] prometheus pods ready succeeded.
[2024-05-27 22:11:03] INFO:    [cluster] kubernetes istio status
+ kubectl get pod -n istio-system -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP            NODE               NOMINATED NODE   READINESS GATES
grafana-6ccd56f4b6-kg7zv               1/1     Running   0          34s   10.244.1.13   k8s-worker-node1   <none>           <none>
istio-egressgateway-7f4864f59c-kxfl4   1/1     Running   0          54s   10.244.1.12   k8s-worker-node1   <none>           <none>
istio-ingressgateway-55d9fb9f-m4dg8    1/1     Running   0          54s   10.244.1.11   k8s-worker-node1   <none>           <none>
istiod-555d47cb65-k6xkz                1/1     Running   0          61s   10.244.1.10   k8s-worker-node1   <none>           <none>
jaeger-5d44bc5c5d-gbgrk                1/1     Running   0          34s   10.244.1.14   k8s-worker-node1   <none>           <none>
kiali-9f9596d69-clcl8                  1/1     Running   0          34s   10.244.1.15   k8s-worker-node1   <none>           <none>
prometheus-64fd8ccd65-d9zqq            2/2     Running   0          34s   10.244.1.16   k8s-worker-node1   <none>           <none>
See detailed log >> /var/log/kubeinstall.log

Deploying the Harbor registry

[root@localhost ~]# kubeeasy add --registry harbor

Output:

[2024-05-27 22:12:29] INFO:    [start] bash kubeeasy add --registry harbor
[2024-05-27 22:12:29] INFO:    [check] sshpass command exists.
[2024-05-27 22:12:29] INFO:    [check] wget command exists.
[2024-05-27 22:12:29] INFO:    [check] conn apiserver succeeded.
[2024-05-27 22:12:29] INFO:    [offline] unzip offline harbor package on local.
[2024-05-27 22:12:58] INFO:    [offline] installing docker-compose on local.
[2024-05-27 22:12:59] INFO:    [offline] Installing harbor on local.
[Step 0]: checking if docker is installed ...
Note: docker version: 20.10.14
[Step 1]: checking docker-compose is installed ...
Note: docker-compose version: 2.2.1
[Step 2]: loading Harbor images ...
[Step 3]: preparing environment ...
[Step 4]: preparing harbor configs ...
prepare base dir is set to /opt/harbor
WARNING:root:WARNING: HTTP protocol is insecure. Harbor will deprecate http protocol in the future. Please make sure to upgrade to https
Generated configuration file: /config/portal/nginx.conf
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
Generated and saved secret to file: /data/secret/keys/secretkey
Successfully called func: create_root_cert
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir
[Step 5]: starting Harbor ...
[+] Running 10/10
⠿ Network harbor_harbor        Created      0.3s
⠿ Container harbor-log         Started      2.3s
⠿ Container harbor-portal      Started      9.5s
⠿ Container registryctl        Started      9.3s
⠿ Container redis              Started      8.4s
⠿ Container registry           Started      9.3s
⠿ Container harbor-db          Started      9.4s
⠿ Container harbor-core        Started     10.2s
⠿ Container nginx              Started     13.6s
⠿ Container harbor-jobservice  Started     13.3s
✔ ----Harbor has been installed and started successfully.----
[2024-05-27 22:16:30] INFO:    [cluster] kubernetes Harbor status
+ docker-compose -f /opt/harbor/docker-compose.yml ps
NAME                COMMAND                  SERVICE             STATUS              PORTS
harbor-core         "/harbor/entrypoint.…"   core                running (healthy)   
harbor-db           "/docker-entrypoint.…"   postgresql          running (healthy)   
harbor-jobservice   "/harbor/entrypoint.…"   jobservice          running (healthy)   
harbor-log          "/bin/sh -c /usr/loc…"   log                 running (healthy)   127.0.0.1:1514->10514/tcp
harbor-portal       "nginx -g 'daemon of…"   portal              running (healthy)   
nginx               "nginx -g 'daemon of…"   proxy               running (healthy)   0.0.0.0:80->8080/tcp, :::80->8080/tcp
redis               "redis-server /etc/r…"   redis               running (healthy)   
registry            "/home/harbor/entryp…"   registry            running (healthy)   
registryctl         "/home/harbor/start.…"   registryctl         running (healthy)
See detailed log >> /var/log/kubeinstall.log
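Before pushing images to the new registry, it is worth confirming that every Harbor service reports `running (healthy)`. The sketch below checks the status column programmatically; the sample rows are copied from the `docker-compose ps` output above so the logic can be verified offline. On the live node you would feed in the real command instead: `docker-compose -f /opt/harbor/docker-compose.yml ps | tail -n +2`.

```shell
# Count Harbor services whose STATUS column is NOT "running (healthy)".
# Sample rows mirror the `docker-compose ps` output shown above.
sample='harbor-core         core                running (healthy)
harbor-db           postgresql          running (healthy)
nginx               proxy               running (healthy)
registry            registry            running (healthy)'

# grep -v inverts the match, -c counts the remaining (unhealthy) lines.
unhealthy=$(printf '%s\n' "$sample" | grep -cv 'running (healthy)' || true)
echo "unhealthy services: $unhealthy"
```

If the count is non-zero, `docker-compose -f /opt/harbor/docker-compose.yml logs <service>` is the usual next step to find out why a service failed its health check.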

Check cluster resources

[root@k8s-master-node1 ~]# kubectl get pods -A
NAMESPACE      NAME                                       READY   STATUS    RESTARTS   AGE
dashboard-cn   dashboard-agent-cd88cf454-4m5ct            1/1     Running   0          17m
dashboard-cn   dashboard-cn-64bd46887f-7z59r              1/1     Running   0          55s
dashboard-en   dashboard-en-55596d469-84ggw               1/1     Running   0          17m
istio-system   grafana-6ccd56f4b6-kg7zv                   1/1     Running   0          9m17s
istio-system   istio-egressgateway-7f4864f59c-kxfl4       1/1     Running   0          9m37s
istio-system   istio-ingressgateway-55d9fb9f-m4dg8        1/1     Running   0          9m37s
istio-system   istiod-555d47cb65-k6xkz                    1/1     Running   0          9m44s
istio-system   jaeger-5d44bc5c5d-gbgrk                    1/1     Running   0          9m17s
istio-system   kiali-9f9596d69-clcl8                      1/1     Running   0          9m17s
istio-system   prometheus-64fd8ccd65-d9zqq                2/2     Running   0          9m17s
kube-system    coredns-78fcd69978-b77bx                   1/1     Running   0          18m
kube-system    coredns-78fcd69978-lnwxl                   1/1     Running   0          18m
kube-system    etcd-k8s-master-node1                      1/1     Running   0          18m
kube-system    kube-apiserver-k8s-master-node1            1/1     Running   0          18m
kube-system    kube-controller-manager-k8s-master-node1   1/1     Running   0          18m
kube-system    kube-flannel-ds-sfczm                      1/1     Running   0          18m
kube-system    kube-flannel-ds-tzz9l                      1/1     Running   0          18m
kube-system    kube-multus-ds-9h5bb                       1/1     Running   0          10m
kube-system    kube-multus-ds-rbnsp                       1/1     Running   0          10m
kube-system    kube-proxy-gg64q                           1/1     Running   0          18m
kube-system    kube-proxy-p5thp                           1/1     Running   0          18m
kube-system    kube-scheduler-k8s-master-node1            1/1     Running   0          18m
kube-system    metrics-server-77564bc84d-5687x            1/1     Running   0          17m
kubevirt       virt-api-86f9d6d4f-4vdrq                   1/1     Running   0          12m
kubevirt       virt-api-86f9d6d4f-56t5x                   1/1     Running   0          12m
kubevirt       virt-controller-54b79f5db-4dfhq            1/1     Running   0          11m
kubevirt       virt-controller-54b79f5db-vflkh            1/1     Running   0          11m
kubevirt       virt-handler-4kzm9                         1/1     Running   0          11m
kubevirt       virt-handler-rrsdz                         1/1     Running   0          11m
kubevirt       virt-operator-6fbd74566c-9nrj6             1/1     Running   0          12m
kubevirt       virt-operator-6fbd74566c-vmwz7             1/1     Running   0          12m
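With this many pods across namespaces, scanning the table by eye is error-prone. A small filter that flags any pod that is not `Running` or not fully ready makes the check repeatable; the sample below parses a few rows in the same format as the `kubectl get pods -A` output above (the CrashLoopBackOff row is hypothetical, added only to exercise the filter). On the cluster you would pipe in `kubectl get pods -A --no-headers` instead.

```shell
# Print the name of any pod that is not Running, or whose READY
# column (e.g. "1/2") shows fewer ready containers than total.
pods='kube-system    coredns-78fcd69978-b77bx   1/1   Running            0   18m
kube-system    kube-proxy-gg64q           1/1   Running            0   18m
kubevirt       virt-handler-4kzm9         0/1   CrashLoopBackOff   3   11m'

# split($3,a,"/") puts ready/total counts into a[1]/a[2].
bad=$(printf '%s\n' "$pods" | awk '{split($3,a,"/")} $4 != "Running" || a[1] != a[2] {print $2}')
echo "problem pods: $bad"
```

An empty result means every pod in the input is healthy, which matches the state shown in the output above.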

Troubleshooting other errors

Check whether disk space is sufficient

[root@k8s-master-node1 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 2.0G     0  2.0G   0% /dev
tmpfs                    2.0G     0  2.0G   0% /dev/shm
tmpfs                    2.0G   17M  2.0G   1% /run
tmpfs                    2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/mapper/centos-root   94G   25G   69G  27% /
/dev/sda1               1014M  138M  877M  14% /boot
/dev/mapper/centos-home  2.0G   33M  2.0G   2% /home
tmpfs                    394M     0  394M   0% /run/user/0
/dev/loop1               4.4G  4.4G     0 100% /mnt
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/b039687a23f8fae6c5187496752b40f9680b1176a6b450012da37205da799228/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/3aa2dd9ef3f5215fef69348e2915ae3fdede0172d10c87233ba5d495c7f3481c/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/b0d856449d7fd4be80f1dbab0d8eb7b6a12435ebc645d2bd67442ece1c957bfa/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/83779008afa2470bee75fcf40986a2b88677c070a142434d2afd24ccc6595c8c/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/a25f32a6997c8e8837e3fd53f8f49317ba00d3792fd2400eaeb37651ffc6adf1/mounts/shm
shm                       64M     0   64M   0% /var/lib/docker/containers/1bc9a2dcf244a4bab3f6b7b452220d57f84f751fde6d2343041e13752d43cb5c/mounts/shm
shm                       64M     0   64M   0% /var/lib/docker/containers/ad389f9f68ba4d07e1e230f0263e62c59b93de201ddf66969037cb5e3305d9b0/mounts/shm
shm                       64M     0   64M   0% /var/lib/docker/containers/3d907398b2cf132c4c712543c9b86fae338a984e52ddc1141bb60a9309c965b3/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/e19cf1eb10fcd855424dfd274e4da3302743ae22f5b30571203c5235e6c84f29/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/e4ca942935911960946ffb8f3b32f064c8e3696117f61c9ab4e079dfd7711d53/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/fe04aff4c51e8579db0c52d8313ea09aa19cefd9e4183d9fbfc7fe008fd85f0e/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/0ba8948cc724f519001633dbaee7bb4faa5b96e83bc47545c190d2040d7dc975/merged
tmpfs                    3.8G   12K  3.8G   1% /var/lib/kubelet/pods/ef02c2ea-55c0-470f-9d5f-fded20a23ca6/volumes/kubernetes.io~projected/kube-api-access-jb4jm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/971ecbd26e0fe7add7b25144508b1812284143fb62b0e8e8d059a2d25b1205b6/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/0e032d5f3aa5b4cb696eeb0cf160ea7180c223eec7adf485f6eb69cf2f60de0e/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/2874cd9c985d553a8617e52c4e37ce07185e2522df5366354da35eb36827dc8f/merged
tmpfs                     50M   12K   50M   1% /var/lib/kubelet/pods/92783df2-056d-403a-aa81-15c585d725bf/volumes/kubernetes.io~projected/kube-api-access-sbfd5
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/ba0551bd461d5077cfdaa4ecf2269558b1ecb116c21dc213d28aad352c36066c/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/ec37c7d0bb4a287200b840af79317ddf66e29efdee0e501fe69462faabe1e68d/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/2f2caf724cfce283b0770371e2d367b52adb6ebc0e81ca81a9741dbe9dece228/merged
tmpfs                    170M   12K  170M   1% /var/lib/kubelet/pods/a81ce6dc-f02a-4053-bf5d-92353bf0b260/volumes/kubernetes.io~projected/kube-api-access-q56s8
tmpfs                    170M   12K  170M   1% /var/lib/kubelet/pods/7666fe20-f005-408d-898c-6515f9c9e82b/volumes/kubernetes.io~projected/kube-api-access-dfl4j
tmpfs                    3.8G   12K  3.8G   1% /var/lib/kubelet/pods/8b80eafa-2b69-4727-a4b4-60b1601af7b2/volumes/kubernetes.io~projected/kube-api-access-fsvlh
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/16ef5f9dc887f97e540a1a71fbe8b11c06911b5d679a61d8bfe75e583510545c/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/47a4cdd51151d957850c7064def32e664d0d9f6432955db7a7e9eb5c5e53eda5/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/05d335b1ad2fa4bfc722351d75ccc90c459c996d9e991fd48a4b0b5135cb51ba/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/a2447dc63f5b6a7a6910b5d7cca2fdcd4ea29141659b130c8d02b5a83dc26197/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/ef6d7fea914b853924850dc9aa1ca536ab6f16d702d0031e7edd621fc5ffa460/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/90753d4fe75abf6a3e9296947175f9aaeccbc64ff5b6cc19d68a03e75f546767/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/61be0dd3df2fff2c28b7ca31510123e8bd9665068d8c6ad9a106bce71467566c/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/c1bc3abea4903c4b70db30436b2163fb9d8360b38fa7f7fe68c7b133f29a7c52/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/9b833c94fa3d68af7c543a776805fd2640c1572a45ba343b58c8214442968613/merged
tmpfs                    3.8G  8.0K  3.8G   1% /var/lib/kubelet/pods/b965e58c-5af4-4642-967d-c2478bd13933/volumes/kubernetes.io~secret/kubevirt-operator-certs
tmpfs                    3.8G   12K  3.8G   1% /var/lib/kubelet/pods/b965e58c-5af4-4642-967d-c2478bd13933/volumes/kubernetes.io~projected/kube-api-access-kxxt5
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/a3d7abde028b02ce103634320b26d6375c5fb2e3dd66a0d416276c7166941410/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/0178ab4dcec3e23eda1f48bff7764cd5df4ed8266eb88f745ad4a58343f016d9/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/2335318d42637beb79eb21832921ff085ae753aa1029ef17b570f2e962e8cfdd/merged
tmpfs                    3.8G  8.0K  3.8G   1% /var/lib/kubelet/pods/fa2af6c2-9912-464b-8d43-5fcd6473fc13/volumes/kubernetes.io~secret/kubevirt-virt-handler-certs
tmpfs                    3.8G   12K  3.8G   1% /var/lib/kubelet/pods/fa2af6c2-9912-464b-8d43-5fcd6473fc13/volumes/kubernetes.io~projected/kube-api-access-rspl9
tmpfs                    3.8G  8.0K  3.8G   1% /var/lib/kubelet/pods/fa2af6c2-9912-464b-8d43-5fcd6473fc13/volumes/kubernetes.io~secret/kubevirt-virt-api-certs
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/ef243c1f62650ac77de2525f199b52f3e63e82badb2465a2b47bddadbda95923/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/7acb351ec8eb07876ffe470d2a5d83ea260ae70e7aee4558db0ca755887f3e50/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/874c2b916a2b5f86a57a1c4fc68947119e0854e997996bf5345e39284b7cbaa1/merged
tmpfs                    3.8G  8.0K  3.8G   1% /var/lib/kubelet/pods/c76d1164-ad80-48fb-8f53-34e873d5e446/volumes/kubernetes.io~secret/kubevirt-virt-handler-certs
tmpfs                    3.8G  8.0K  3.8G   1% /var/lib/kubelet/pods/c76d1164-ad80-48fb-8f53-34e873d5e446/volumes/kubernetes.io~secret/kubevirt-virt-handler-server-certs
tmpfs                    3.8G  8.0K  3.8G   1% /var/lib/kubelet/pods/35153ae2-e931-45f8-a374-2d4da17fd354/volumes/kubernetes.io~secret/kubevirt-controller-certs
tmpfs                    3.8G   12K  3.8G   1% /var/lib/kubelet/pods/35153ae2-e931-45f8-a374-2d4da17fd354/volumes/kubernetes.io~projected/kube-api-access-28wt5
tmpfs                    3.8G   12K  3.8G   1% /var/lib/kubelet/pods/c76d1164-ad80-48fb-8f53-34e873d5e446/volumes/kubernetes.io~projected/kube-api-access-pckgc
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/0f5b9d0761baaf31c5ce519f507295c2cb8c14e5791403f5a65de5c69d2e5ae1/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/f0a64a8405ebfffb980d40a4e0d9c829000d7a4bbe1f1712705a6a84b357e5e1/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/f116e463a9f43d90a24f6c719fcbef9bc18494c1680065cde6b83aa89a73533d/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/5383598d8af1938aef830ba4c68ce26116f0b1e2ec2b40abf53c0268fa0f9fcb/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/f6f05621a5cb6f361ccd023773ee27a680da6d3a7f44627adfa3ba13c9969ba2/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/ed10477d475aa46d65ba380fcef6f6d98d82d7686263e6bd5e95ebdf9f48e3da/merged
tmpfs                     50M   12K   50M   1% /var/lib/kubelet/pods/a327cf65-4ace-4281-8b75-e3badd0b912a/volumes/kubernetes.io~projected/kube-api-access-dkrbx
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/9b33e3c7aed5bdce53b286b1e342078ca9bd1602bfcf561e9d2ffa4c8cd655cd/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/091e2157f089eb0747e63f4e4b4282f9d40fc45c90d3a73ec4e187aa38f419cd/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/5a471abc372438b63f23ef5ea8fd82a1f01db73afb3589c4b01a76e04b1857ad/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/af21221686a4ae59ff55c16945cf649004eeecf9e1333a56a1a99a91e91ebd65/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/2f60c35ae8573c9cbad08fb26d1ec676fb156d869af2c7b533e84c7503c30c34/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/59b1196c741f5aa612fdbfad9a4c04b937f36648f24ac3d4ee6acab567152d67/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/1604976bc8461cfb835ab152ccf0219a108cd2032d2e5c3a777b189a3953acca/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/96343f1ed6280bfbc9e465165762ba8394a5abf223d20ba4c74c6cf99727eb44/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/92e58a653625ee2e93c6ac90d69e996af6d306e16637825203cd4ebe7511a7d4/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/a6f9001a599dc66c3d9de23e6708bd6b3ff685a69b44b1c666d8ff9f0b28050b/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/ad9e487aaf165f4950d430b0fd95db9d45d515dbbfae24d7338b0922b8cf56bb/merged
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/f689a5ad2eb8cba138142620f27a0c5968ca36886f37698945018e15e0b96965/merged
tmpfs                    3.8G   12K  3.8G   1% /var/lib/kubelet/pods/7b683571-c0de-45bd-9ec4-c2eac9755c69/volumes/kubernetes.io~projected/kube-api-access-gztdj
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/dd459ae9c7ce30526e1a0d68e2ab5cce5b6ad1e353d91a7a7ab5f9dc6c40b2d8/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/d94be22f84c576ee1daa39a03f60efddd60e46543a2b4eb5d536621855c8e19b/mounts/shm
overlay                   94G   25G   69G  27% /var/lib/docker/overlay2/0b431cf493569d862eee88343dc82bf8fc8541aaa43e8f182b12205e45026e66/merged
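In output this long, a nearly full filesystem is easy to miss. (Note that `/dev/loop1` at 100% on `/mnt` is expected here: it is the read-only ISO mount, not a real capacity problem.) A quick filter over the `Use%` column surfaces anything above a threshold; the sample rows below mirror the `df -h` output above, and on a live node you would pipe in `df -h | tail -n +2` instead.

```shell
# Print the mount point of any filesystem at or above 90% usage.
df_sample='/dev/mapper/centos-root   94G   25G   69G   27% /
/dev/sda1               1014M  138M  877M   14% /boot
/dev/loop1               4.4G  4.4G     0  100% /mnt'

# Strip the trailing "%" from Use%, then compare numerically ($5+0
# forces awk to treat the stripped value as a number, not a string).
full=$(printf '%s\n' "$df_sample" | awk '{sub(/%/,"",$5)} $5+0 >= 90 {print $6}')
echo "filesystems over 90%: $full"
```

If the root filesystem itself shows up here, freeing space under `/var/lib/docker` (e.g. `docker system prune`) is usually the first thing to try before re-running the installer.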

