Kubernetes Enterprise High-Availability Deployment

Contents


1. Introduction to the Kubernetes High-Availability Project

2. Project Architecture Design

2.1 Host Information

2.2 Architecture Diagram

2.3 Implementation Approach

3. Implementation Process

3.1 System Initialization

3.2 Configuring and Deploying the keepalived Service

3.3 Configuring and Deploying the haproxy Service

3.4 Configuring and Deploying the Docker Service

3.5 Installing the kubelet, kubeadm, and kubectl Tools

3.6 Deploying the Kubernetes Master

3.7 Installing the Cluster Network

3.8 Adding master Nodes

3.9 Joining Kubernetes Nodes

3.10 Testing the Kubernetes Cluster

4. Project Summary



1. Introduction to the Kubernetes High-Availability Project

A single-master cluster offers limited reliability and is not suitable for a real production environment. A highly available Kubernetes cluster is, at its core, about keeping the API Server on the master nodes available. The API Server is the single entry point for creating, reading, updating, and deleting every kind of Kubernetes resource object; it is the data bus and data hub of the entire Kubernetes system. Placing a load balancer in front of multiple master nodes provides a stable container-cloud service.

2. Project Architecture Design

2.1 Host Information

Prepare six virtual machines: three master nodes and three node (worker) machines. Keep the number of master nodes an odd number greater than or equal to 3.

Hardware: 2+ CPU cores, 2 GB+ RAM, 20 GB+ disk, virtualization enabled

Network: all machines can reach each other and the Internet

Operating System    IP Address         Role     Hostname
CentOS7-x86-64      192.168.147.137    master   k8s-master1
CentOS7-x86-64      192.168.147.139    master   k8s-master2
CentOS7-x86-64      192.168.147.140    master   k8s-master3
CentOS7-x86-64      192.168.147.141    node     k8s-node1
CentOS7-x86-64      192.168.147.142    node     k8s-node2
CentOS7-x86-64      192.168.147.143    node     k8s-node3
-                   192.168.147.154    VIP      master.k8s.io

2.2 Architecture Diagram

The goal is a Kubernetes cluster with multiple, load-balanced master nodes. The official documentation offers two topologies: stacked control plane nodes and external etcd nodes. This article builds on the first topology.

 

(Figure: stacked control plane nodes)

(Figure: external etcd nodes)

2.3 Implementation Approach

A master node runs four services: etcd, apiserver, controller-manager, and scheduler. Of these, etcd, controller-manager, and scheduler already have built-in high availability in Kubernetes: with multiple master nodes, every master starts all three, but only one instance is active at any moment. Therefore, making Kubernetes highly available reduces to making the apiserver service highly available.
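Once the cluster is up (section 3.6), you can observe this single-active behavior directly. A quick check, assuming the default lease-based leader election of v1.20 (holderIdentity names the master currently holding each lock):

[root@k8s-master1 ~]# kubectl -n kube-system get lease kube-controller-manager -o jsonpath='{.spec.holderIdentity}'
[root@k8s-master1 ~]# kubectl -n kube-system get lease kube-scheduler -o jsonpath='{.spec.holderIdentity}'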

keepalived is a high-performance high-availability / hot-standby solution for servers that guards against outages caused by a single point of failure. It uses a master/backup model and needs at least two servers to work. For example, keepalived can combine three servers into one cluster that exposes a single IP address; under normal conditions only one server shows this IP on its virtual interface. If that server fails, keepalived immediately moves the IP to one of the remaining two servers so the address keeps working.
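A quick way to see whether the current host holds the VIP is to check the interface; a one-off sketch using this article's interface name (ens33) and VIP:

[root@k8s-master1 ~]# ip a s ens33 | grep -q "192.168.147.154" && echo "this node holds the VIP" || echo "VIP is on another node"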

haproxy is a free, fast, and reliable proxy that provides high availability and load balancing for TCP (layer 4) and HTTP (layer 7) applications, with support for virtual hosts. Using haproxy to balance the backend apiserver instances achieves high availability for the apiserver service.
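After haproxy is deployed (section 3.3), its stats socket gives a quick health view of the apiserver backends; a sketch, assuming the socat package is installed (fields 1, 2, and 18 of "show stat" are the proxy name, server name, and status):

[root@k8s-master1 ~]# yum install -y socat
[root@k8s-master1 ~]# echo "show stat" | socat stdio /var/lib/haproxy/stats | cut -d, -f1,2,18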

This article uses the keepalived + haproxy combination: keepalived provides the stable external entry point (the VIP), while haproxy balances the load internally. Because haproxy runs on the master nodes, a failed master would also take its haproxy instance down; to avoid that, we deploy haproxy on every master node so the haproxy service itself is highly available. And since multiple masters hold leader elections, the number of master nodes should be odd to avoid tied votes.

3. Implementation Process

3.1 System Initialization

On all hosts:

Disable the firewall, SELinux, and swap

[root@client2 ~]#  systemctl stop firewalld
[root@client2 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@client2 ~]#  sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@client2 ~]#  setenforce 0
[root@client2 ~]#  swapoff -a
[root@client2 ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab

Set the hostname (adjust according to each host's role)

[root@client2 ~]# hostnamectl set-hostname k8s-master1
[root@client2 ~]# hostnamectl set-hostname k8s-master2
[root@client2 ~]# hostnamectl set-hostname k8s-master3
[root@k8s-node1 ~]# hostnamectl set-hostname k8s-node1
[root@k8s-node1 ~]# hostnamectl set-hostname k8s-node2
[root@k8s-node1 ~]# hostnamectl set-hostname k8s-node3

Add entries to /etc/hosts (edit on one host, then copy to the others):

[root@k8s-node1 ~]# vim /etc/hosts
192.168.147.137 master1.k8s.io k8s-master1
192.168.147.139 master2.k8s.io k8s-master2
192.168.147.140 master3.k8s.io k8s-master3
192.168.147.141 node1.k8s.io k8s-node1
192.168.147.142 node2.k8s.io k8s-node2
192.168.147.143 node3.k8s.io k8s-node3
192.168.147.154 master.k8s.io k8s-vip
[root@k8s-node1 ~]# scp /etc/hosts 192.168.147.137:/etc/hosts
[root@k8s-node1 ~]# scp /etc/hosts 192.168.147.139:/etc/hosts
[root@k8s-node1 ~]# scp /etc/hosts 192.168.147.140:/etc/hosts
[root@k8s-node1 ~]# scp /etc/hosts 192.168.147.142:/etc/hosts
[root@k8s-node1 ~]# scp /etc/hosts 192.168.147.143:/etc/hosts

Pass bridged IPv4 traffic to the iptables chains

[root@k8s-master1 ~]# cat << EOF >> /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@k8s-master1 ~]# modprobe br_netfilter
[root@k8s-master1 ~]# sysctl -p
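Verify that both keys took effect (each should print as 1):

[root@k8s-master1 ~]# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1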

Time synchronization

[root@k8s-master1 ~]# yum install ntpdate -y
[root@k8s-master1 ~]# ntpdate time.windows.com

3.2 Configuring and Deploying the keepalived Service

Install keepalived (on all master hosts)

[root@k8s-master1 ~]# yum install -y keepalived

Configuration on the k8s-master1 node


[root@k8s-master1 ~]# cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id k8s
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.147.154
    }
    track_script {
        check_haproxy
    }
}
EOF

Configuration on the k8s-master2 node

[root@k8s-master2 ~]# cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id k8s
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.147.154
    }
    track_script {
        check_haproxy
    }
}
EOF

Configuration on the k8s-master3 node

[root@k8s-master3 ~]# cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id k8s
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.147.154
    }
    track_script {
        check_haproxy
    }
}
EOF
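The check_haproxy script above relies on killall -0, which sends signal 0 to test whether a haproxy process exists without disturbing it; when the check fails, keepalived lowers this node's priority by 2, letting a healthy backup take over the VIP. You can exercise the same check by hand:

[root@k8s-master1 ~]# killall -0 haproxy && echo "haproxy is running" || echo "haproxy is down"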

Start and verify

Run on every master node

[root@k8s-master1 ~]#  systemctl start keepalived
[root@k8s-master1 ~]#  systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.

Check the startup status

[root@k8s-master1 ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2023-08-15 13:38:02 CST; 10s ago
 Main PID: 18740 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─18740 /usr/sbin/keepalived -D
           ├─18741 /usr/sbin/keepalived -D
           └─18742 /usr/sbin/keepalived -D

Aug 15 13:38:04 k8s-master1 Keepalived_vrrp[18742]: Sending gratuitous ARP on ens33 for 192.168.147.154
Aug 15 13:38:04 k8s-master1 Keepalived_vrrp[18742]: Sending gratuitous ARP on ens33 for 192.168.147.154
Aug 15 13:38:04 k8s-master1 Keepalived_vrrp[18742]: Sending gratuitous ARP on ens33 for 192.168.147.154
Aug 15 13:38:04 k8s-master1 Keepalived_vrrp[18742]: Sending gratuitous ARP on ens33 for 192.168.147.154
Aug 15 13:38:09 k8s-master1 Keepalived_vrrp[18742]: Sending gratuitous ARP on ens33 for 192.168.147.154
Aug 15 13:38:09 k8s-master1 Keepalived_vrrp[18742]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens33 f....154
Aug 15 13:38:09 k8s-master1 Keepalived_vrrp[18742]: Sending gratuitous ARP on ens33 for 192.168.147.154
Aug 15 13:38:09 k8s-master1 Keepalived_vrrp[18742]: Sending gratuitous ARP on ens33 for 192.168.147.154
Aug 15 13:38:09 k8s-master1 Keepalived_vrrp[18742]: Sending gratuitous ARP on ens33 for 192.168.147.154
Aug 15 13:38:09 k8s-master1 Keepalived_vrrp[18742]: Sending gratuitous ARP on ens33 for 192.168.147.154
Hint: Some lines were ellipsized, use -l to show in full.

After startup, check the network information on master1

[root@k8s-master1 ~]# ip a s ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:c7:3f:d6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.147.137/24 brd 192.168.147.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.147.154/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::bd67:1ba:506d:b021/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::146a:2496:1fdc:4014/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::5d98:c5e3:98f8:181/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever

3.3 Configuring and Deploying the haproxy Service

Install haproxy on all master hosts

[root@k8s-master1 ~]# yum install -y haproxy

The configuration is identical on every master node. It declares each master's apiserver as a backend server and binds haproxy to port 16443, making port 16443 the entry point of the cluster.

[root@k8s-master1 ~]# cat > /etc/haproxy/haproxy.cfg << EOF
> #-------------------------------
> # Global settings
> #-------------------------------
> global
>   log       127.0.0.1 local2
>   chroot    /var/lib/haproxy
>   pidfile   /var/run/haproxy.pid
>   maxconn   4000
>   user      haproxy
>   group     haproxy
>   daemon
>   stats socket /var/lib/haproxy/stats
> #--------------------------------
> # common defaults that all the 'listen' and 'backend' sections will
> # use if not designated in their block
> #--------------------------------
> defaults
>   mode                http
>   log                 global
>   option              httplog
>   option              dontlognull
>   option http-server-close
>   option forwardfor   except 127.0.0.0/8
>   option              redispatch
>   retries             3
>   timeout http-request  10s
>   timeout queue         1m 
>   timeout connect       10s
>   timeout client        1m
>   timeout server        1m
>   timeout http-keep-alive 10s
>   timeout check           10s
>   maxconn                 3000
> #--------------------------------
> # kubernetes apiserver frontend which proxies to the backends
> #--------------------------------
> frontend kubernetes-apiserver
>   mode              tcp
>   bind              *:16443
>   option            tcplog
>   default_backend   kubernetes-apiserver
> #---------------------------------
> #round robin balancing between the various backends
> #---------------------------------
> backend kubernetes-apiserver
>   mode              tcp
>   balance           roundrobin
>   server            master1.k8s.io    192.168.147.137:6443 check
>   server            master2.k8s.io    192.168.147.139:6443 check
>   server            master3.k8s.io    192.168.147.140:6443 check
> #---------------------------------
> # collection haproxy statistics message
> #---------------------------------
> listen stats
>   bind              *:1080
>   stats auth        admin:awesomePassword
>   stats refresh     5s
>   stats realm       HAProxy\ Statistics
>   stats uri         /admin?stats
> EOF
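Before starting the service, it is worth syntax-checking the file; haproxy's -c flag runs it in check mode against the given configuration:

[root@k8s-master1 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg
Configuration file is valid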

Start and verify

Run on every master node

[root@k8s-master1 ~]# systemctl start haproxy
[root@k8s-master1 ~]# systemctl enable haproxy

Check the startup status

[root@k8s-master1 ~]#  systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2023-08-15 13:43:11 CST; 15s ago
 Main PID: 18812 (haproxy-systemd)
   CGroup: /system.slice/haproxy.service
           ├─18812 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
           ├─18814 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
           └─18818 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

Aug 15 13:43:11 k8s-master1 systemd[1]: Started HAProxy Load Balancer.
Aug 15 13:43:11 k8s-master1 haproxy-systemd-wrapper[18812]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f... -Ds
Aug 15 13:43:11 k8s-master1 haproxy-systemd-wrapper[18812]: [WARNING] 226/134311 (18814) : config : 'option forward...ode.
Aug 15 13:43:11 k8s-master1 haproxy-systemd-wrapper[18812]: [WARNING] 226/134311 (18814) : config : 'option forward...ode.
Hint: Some lines were ellipsized, use -l to show in full.

Check the listening ports

[root@k8s-master1 ~]#  netstat -lntup|grep haproxy
tcp        0      0 0.0.0.0:1080            0.0.0.0:*               LISTEN      18818/haproxy       
tcp        0      0 0.0.0.0:16443           0.0.0.0:*               LISTEN      18818/haproxy       
udp        0      0 0.0.0.0:40763           0.0.0.0:*                           18814/haproxy 
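Once the control plane has been initialized (section 3.6), the whole chain (VIP -> haproxy:16443 -> apiserver:6443) can be verified end to end; a sketch, where -k skips TLS certificate verification and anonymous access to /healthz is allowed by default:

[root@k8s-master1 ~]# curl -k https://master.k8s.io:16443/healthz
ok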

3.4 Configuring and Deploying the Docker Service

Deploy the Docker environment on every host; Kubernetes relies on Docker for container orchestration.

[root@k8s-master ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

When installing Docker via YUM, the Aliyun YUM repository is recommended

[root@k8s-master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master ~]# yum clean all && yum makecache fast
[root@k8s-master ~]# yum -y install docker-ce
[root@k8s-master ~]# systemctl start docker
[root@k8s-master ~]# systemctl enable docker

Configure a registry mirror (on all hosts)

[root@k8s-master ~]# cat << END > /etc/docker/daemon.json
{
    "registry-mirrors": [ "https://nyakyfun.mirror.aliyuncs.com" ]
}
END
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker
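Confirm that the mirror is active; docker info lists the configured registry mirrors:

[root@k8s-master ~]# docker info | grep -A1 "Registry Mirrors"
 Registry Mirrors:
  https://nyakyfun.mirror.aliyuncs.com/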

3.5 Installing the kubelet, kubeadm, and kubectl Tools

When installing Kubernetes via YUM, the Aliyun repository is recommended.

Configure on all hosts

[root@k8s-master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo 
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-master ~]# ls /etc/yum.repos.d/
backup  CentOS-Base.repo  CentOS-Media.repo  docker-ce.repo  kubernetes.repo

Install kubelet, kubeadm, and kubectl

On all hosts

[root@k8s-master ~]# yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0
[root@k8s-master ~]# systemctl enable kubelet
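Optionally confirm the installed versions before proceeding (these flags exist in v1.20):

[root@k8s-master ~]# kubeadm version -o short
v1.20.0
[root@k8s-master ~]# kubectl version --client --short
Client Version: v1.20.0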

3.6 Deploying the Kubernetes Master

Operate on the master that currently holds the VIP; here that is k8s-master1.

Create the kubeadm-config.yaml file

[root@k8s-master1 ~]# cat > kubeadm-config.yaml << EOF
apiServer:
  certSANs:
    - k8s-master1
    - k8s-master2
    - k8s-master3
    - master.k8s.io
    - 192.168.147.137
    - 192.168.147.139
    - 192.168.147.140
    - 192.168.147.154
    - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "master.k8s.io:6443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}
EOF


List the required images

[root@k8s-master1 ~]# kubeadm config images list --config kubeadm-config.yaml
W0815 13:55:35.933304   19444 common.go:77] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta1". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.20.0
registry.aliyuncs.com/google_containers/pause:3.2
registry.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.aliyuncs.com/google_containers/coredns:1.7.0

Upload the required k8s images and import them (on all master hosts)

[root@k8s-master1 ~]# mkdir master
[root@k8s-master1 ~]# cd master/
[root@k8s-master1 master]# rz -E
rz waiting to receive.
[root@k8s-master1 master]# ls
coredns_1.7.0.tar  kube-apiserver_v1.20.0.tar           kube-proxy_v1.20.0.tar      pause_3.2.tar
etcd_3.4.13-0.tar  kube-controller-manager_v1.20.0.tar  kube-scheduler_v1.20.0.tar
[root@k8s-master1 master]# ls | while read line
> do
> docker load < $line
> done
225df95e717c: Loading layer  336.4kB/336.4kB
96d17b0b58a7: Loading layer  45.02MB/45.02MB
[root@k8s-master1 ~]# scp master/* 192.168.147.139:/root/master
[root@k8s-master1 ~]# scp master/* 192.168.147.140:/root/master
[root@k8s-master2/3 master]# ls | while read line
> do
> docker load < $line
> done

Initialize k8s with the kubeadm command

[root@k8s-master1 ~]# kubeadm init --config kubeadm-config.yaml
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join master.k8s.io:6443 --token zus2jc.brtsxszpyv03a57j \
    --discovery-token-ca-cert-hash sha256:20a551796d33309f20ad7579c710ea766ef39b64b98c37a4a4029a903f23300a \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join master.k8s.io:6443 --token zus2jc.brtsxszpyv03a57j \
    --discovery-token-ca-cert-hash sha256:20a551796d33309f20ad7579c710ea766ef39b64b98c37a4a4029a903f23300a

If initialization fails with the following error:

[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1

Run the following command and then rerun the initialization command:

echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables

Follow the instructions printed by the initialization output

[root@k8s-master1 ~]#   mkdir -p $HOME/.kube
[root@k8s-master1 ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master1 ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the cluster status

[root@k8s-master1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   
etcd-0               Healthy     {"health":"true"}  

Note: the errors above occur because kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests/ set --port=0, which disables the insecure status ports (10252/10251) that kubectl get cs probes. The fix is to comment out the corresponding port line.

Edit the kube-controller-manager.yaml and kube-scheduler.yaml files

[root@k8s-master1 ~]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
    - --leader-elect=true
#    - --port=0
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt

[root@k8s-master1 ~]# vim /etc/kubernetes/manifests/kube-scheduler.yaml
    - --leader-elect=true
#    - --port=0
    image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.0
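If you prefer not to edit the files by hand, the same change can be scripted; a hypothetical one-liner that comments out the --port=0 flag in both manifests (the kubelet notices static-pod changes and restarts the pods automatically):

[root@k8s-master1 ~]# sed -i 's/- --port=0/# &/' \
    /etc/kubernetes/manifests/kube-controller-manager.yaml \
    /etc/kubernetes/manifests/kube-scheduler.yaml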

Check the cluster status again

[root@k8s-master1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}

Check pod information

[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-hdvw8              0/1     Pending   0          5m51s
coredns-7f89b7bc75-jbn4h              0/1     Pending   0          5m51s
etcd-k8s-master1                      1/1     Running   0          6m
kube-apiserver-k8s-master1            1/1     Running   0          6m
kube-controller-manager-k8s-master1   1/1     Running   0          2m56s
kube-proxy-x25rz                      1/1     Running   0          5m51s
kube-scheduler-k8s-master1            1/1     Running   0          2m17s

Check node information

[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE     VERSION
k8s-master1   NotReady   control-plane,master   6m34s   v1.20.0

3.7 Installing the Cluster Network

Run on the k8s-master1 node

[root@k8s-master1 ~]# rz -E
rz waiting to receive.
[root@k8s-master1 ~]# ll
total 52512
-rw-r--r--. 1 root root 53746688 Dec 16  2020 flannel_v0.12.0-amd64.tar
-rw-r--r--. 1 root root    14366 Nov 13  2020 kube-flannel.yml
[root@k8s-master1 ~]# docker load < flannel_v0.12.0-amd64.tar
256a7af3acb1: Loading layer  5.844MB/5.844MB
d572e5d9d39b: Loading layer  10.37MB/10.37MB
57c10be5852f: Loading layer  2.249MB/2.249MB
7412f8eefb77: Loading layer  35.26MB/35.26MB
05116c9ff7bf: Loading layer   5.12kB/5.12kB
Loaded image: quay.io/coreos/flannel:v0.12.0-amd64
[root@k8s-master1 ~]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/flannel created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

Check the node information again:

[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE     VERSION
k8s-master1   NotReady   control-plane,master   8m27s   v1.20.0

The node still is not Ready: download and install the CNI network plugins

[root@k8s-master1 ~]# tar xf cni-plugins-linux-amd64-v0.8.6.tgz 
[root@k8s-master1 ~]# cp flannel /opt/cni/bin/
[root@k8s-master1 ~]# kubectl apply -f kube-flannel.yml
configmap/kube-flannel-cfg unchanged
daemonset.apps/kube-flannel-ds-amd64 unchanged
daemonset.apps/kube-flannel-ds-arm64 unchanged
daemonset.apps/kube-flannel-ds-arm unchanged
daemonset.apps/kube-flannel-ds-ppc64le unchanged
daemonset.apps/kube-flannel-ds-s390x unchanged
[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
k8s-master1   Ready    control-plane,master   11m   v1.20.0

3.8 Adding master Nodes

Create the directories on the k8s-master2 and k8s-master3 nodes

[root@k8s-master2 ~]# mkdir -p /etc/kubernetes/pki/etcd
[root@k8s-master3 ~]# mkdir -p /etc/kubernetes/pki/etcd

Run on the k8s-master1 node

Copy the certificates and related files from k8s-master1 to k8s-master2 and k8s-master3

[root@k8s-master1 ~]# scp /etc/kubernetes/admin.conf root@192.168.147.139:/etc/kubernetes
[root@k8s-master1 ~]# scp /etc/kubernetes/admin.conf root@192.168.147.140:/etc/kubernetes
[root@k8s-master1 ~]# scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@192.168.147.139:/etc/kubernetes/pki
[root@k8s-master1 ~]# scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@192.168.147.140:/etc/kubernetes/pki
[root@k8s-master1 ~]# scp /etc/kubernetes/pki/etcd/ca.* root@192.168.147.139:/etc/kubernetes/pki/etcd
[root@k8s-master1 ~]# scp /etc/kubernetes/pki/etcd/ca.* root@192.168.147.140:/etc/kubernetes/pki/etcd

Join the other master nodes to the cluster

Note: the token generated by kubeadm init is valid for only one day; generate a token that never expires

[root@k8s-master1 ~]# kubeadm token create --ttl 0 --print-join-command
kubeadm join master.k8s.io:6443 --token 4vd7c0.x8z96hhh4808n4fv     --discovery-token-ca-cert-hash sha256:20a551796d33309f20ad7579c710ea766ef39b64b98c37a4a4029a903f23300a
[root@k8s-master1 ~]# kubeadm token list
TOKEN                     TTL         EXPIRES   USAGES                   DESCRIPTION                                                EXTRA GROUPS
p9u7gb.o9naimgqjauiuzr6   <forever>   <never>   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
xhfagw.6wkdnkdrd2rhkbe9   23h         2023-08-16T14:03:32+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

Both k8s-master2 and k8s-master3 need to join

[root@k8s-master3 master]# kubeadm join master.k8s.io:6443 --token zus2jc.brtsxszpyv03a57j \
    --discovery-token-ca-cert-hash sha256:20a551796d33309f20ad7579c710ea766ef39b64b98c37a4a4029a903f23300a \
    --control-plane
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.5. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-master3 master]# mkdir -p $HOME/.kube
[root@k8s-master3 master]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master3 master]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

If master2/3 reports an error while joining

1. [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists, which means the /etc/kubernetes/pki/ca.crt file already exists on the joining node

Simply delete that file, then run the join command again

[root@k8s-master2 master]# rm -rf /etc/kubernetes/pki/ca.crt

Add the CNI plugins on master2/3

[root@k8s-master3 master]# tar -xf cni-plugins-linux-amd64-v0.8.6.tgz 
[root@k8s-master3 master]# rz -E
rz waiting to receive.
[root@k8s-master3 master]# cp flannel /opt/cni/bin/
[root@k8s-master3 master]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged configured
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/flannel unchanged
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.apps/kube-flannel-ds-amd64 unchanged
daemonset.apps/kube-flannel-ds-arm64 unchanged
daemonset.apps/kube-flannel-ds-arm unchanged
daemonset.apps/kube-flannel-ds-ppc64le unchanged
daemonset.apps/kube-flannel-ds-s390x unchanged

Check the nodes from master1

[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES                  AGE     VERSION
k8s-master1   Ready    control-plane,master   8m14s   v1.20.0
k8s-master2   Ready    control-plane,master   48s     v1.20.0
k8s-master3   Ready    control-plane,master   13s     v1.20.0

3.9 Joining Kubernetes Nodes

On each node server, simply run the join command printed when k8s-master1 initialized successfully:

[root@k8s-node3 ~]# kubeadm join master.k8s.io:6443 --token zus2jc.brtsxszpyv03a57j \
>     --discovery-token-ca-cert-hash sha256:20a551796d33309f20ad7579c710ea766ef39b64b98c37a4a4029a903f23300a

Import the flannel image and add the CNI plugins on every node, the same as on the masters:

[root@k8s-node1 ~]# docker load < flannel_v0.12.0-amd64.tar
Loaded image: quay.io/coreos/flannel:v0.12.0-amd64

Check node information

[root@k8s-master1 demo]# kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
k8s-master1   Ready    control-plane,master   45m   v1.20.0
k8s-master2   Ready    control-plane,master   37m   v1.20.0
k8s-master3   Ready    control-plane,master   37m   v1.20.0
k8s-node1     Ready    <none>                 32m   v1.20.0
k8s-node2     Ready    <none>                 31m   v1.20.0
k8s-node3     Ready    <none>                 31m   v1.20.0

3.10 Testing the Kubernetes Cluster

Import the test image on all node hosts

[root@k8s-node1 ~]# docker load < nginx-1.19.tar 
[root@k8s-node1 ~]# docker tag nginx nginx:1.19.6

Create a pod in the Kubernetes cluster and verify that it runs normally.

[root@k8s-master1 ~]# mkdir demo
[root@k8s-master1 ~]# cd demo
[root@k8s-master1 demo]# vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.19.6
          ports:
            - containerPort: 80

After writing the Deployment manifest, run kubectl create against it to create the containers. kubectl get pods shows that the Pod resources have been created automatically.

[root@k8s-master1 demo]# kubectl create -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
[root@k8s-master1 demo]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
nginx-deployment-76ccf9dd9d-dhcl8   1/1     Running             0          11m
nginx-deployment-76ccf9dd9d-psn8p   1/1     Running             0          11m
nginx-deployment-76ccf9dd9d-xllhp   1/1     Running             0          11m
[root@k8s-master1 demo]# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
nginx-deployme-596f5df7f-8mhzz      1/1     Running   0          5m10s   10.244.4.4   k8s-node3   <none>           <none>
nginx-deployme-596f5df7f-ql7l7      1/1     Running   0          5m10s   10.244.4.3   k8s-node3   <none>           <none>
nginx-deployme-596f5df7f-x6pgv      1/1     Running   0          5m10s   10.244.4.2   k8s-node3   <none>           <none>
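Pods on the flannel overlay are reachable from any cluster node, so a quick connectivity check against one of the pod IPs above (10.244.4.4 is taken from this run's output) is:

[root@k8s-master1 demo]# curl -I -s 10.244.4.4 | head -1
HTTP/1.1 200 OK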

Create the Service resource manifest

The nginx-service manifest defines a Service named nginx-service, with the label selector app: nginx and type NodePort, which lets external traffic reach the containers inside the cluster. The ports list declares the exposed ports: the externally visible port is 80, and the container port is also 80.

[root@k8s-master1 demo]# vim nginx-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

[root@k8s-master1 demo]# kubectl create -f nginx-service.yaml
service/nginx-service created
[root@k8s-master1 demo]# kubectl get svc
NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.1.0.1      <none>        443/TCP        52m
nginx-service   NodePort    10.1.39.231   <none>        80:31418/TCP   14s

Access nginx through a browser at http://master.k8s.io:31418, using either the domain name or the VIP address

[root@k8s-master1 demo]# elinks --dump http://master.k8s.io:31418
                               Welcome to nginx!

   If you see this page, the nginx web server is successfully installed and
   working. Further configuration is required.

   For online documentation and support please refer to [1]nginx.org.
   Commercial support is available at [2]nginx.com.

   Thank you for using nginx.

References

   Visible links
   1. http://nginx.org/
   2. http://nginx.com/

 

Suspend the k8s-master1 node and refresh the page: nginx is still served, which shows the high-availability cluster deployment works.
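To watch the failover window in real time, you can poll the NodePort from a surviving node while master1 is suspended; a simple sketch:

[root@k8s-master2 ~]# while true; do curl -s -o /dev/null -w "%{http_code}\n" --connect-timeout 2 http://master.k8s.io:31418; sleep 1; done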

 

 

Checking the interfaces shows that the VIP has moved to the k8s-master2 node

[root@k8s-master2 master]# ip a s ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ae:1d:c6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.147.139/24 brd 192.168.147.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.147.154/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::bd67:1ba:506d:b021/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::146a:2496:1fdc:4014/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::5d98:c5e3:98f8:181/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

At this point, the enterprise-grade Kubernetes high-availability environment is complete.

4. Project Summary

1. As long as at least one master node is running, the cluster can keep serving business traffic.

2. To run kubectl commands against the cluster, at least two master nodes must be up; otherwise you will hit errors such as Unable to connect to the server: net/http: TLS handshake timeout (with three stacked etcd members, two must survive to keep quorum, and without quorum the apiserver cannot respond).

3. Automatic pod failover on node failure: when the node hosting a pod goes down, the controller-manager's --pod-eviction-timeout setting (default 5 minutes) applies. After those 5 minutes, Kubernetes marks the pods Unknown and starts replacements on other nodes; once the failed node recovers, Kubernetes removes the Unknown pods from it. To force an immediate migration, use kubectl drain nodename (see the sketch after this list).

4. For high availability, deploy at least three master nodes and three worker nodes each, and keep the number of master nodes odd (3, 5, 7, 9).
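A hedged example of point 3's forced migration, using k8s-node1 for illustration (flag names as in kubectl v1.20):

[root@k8s-master1 ~]# kubectl drain k8s-node1 --ignore-daemonsets --delete-emptydir-data
[root@k8s-master1 ~]# kubectl get nodes    # k8s-node1 now shows Ready,SchedulingDisabled
[root@k8s-master1 ~]# kubectl uncordon k8s-node1    # re-enable scheduling after maintenance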
