Kubernetes Cluster Architecture and Components
Master node components
- kube-apiserver: the Kubernetes API server, the unified entry point of the cluster and the coordinator of all other components. It exposes cluster functionality as a RESTful API; every create, read, update, delete and watch operation on resource objects goes through the API server, which then persists the data in etcd.
- kube-controller-manager: handles the routine background tasks of the cluster. Each resource type has a corresponding controller, and the Controller Manager is responsible for running and managing these controllers.
- kube-scheduler: selects a Node for each newly created Pod according to its scheduling algorithm.
Node components
- kubelet: the Master's agent on each Node. It manages the lifecycle of containers running on that machine: creating containers, mounting volumes into Pods, downloading Secrets, reporting container and node status, and so on. The kubelet turns each Pod into a set of containers.
- kube-proxy: implements the Pod network proxy on each Node, maintaining network rules and providing layer-4 load balancing.
- Container runtime: the container engine that actually runs the containers, e.g. Docker, containerd, or Podman. (Once the cluster built later in this article is up, most of the components above can be observed directly, as shown in the short check below.)
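A quick, purely illustrative way to relate these components to a running cluster: after the cluster is built (see the rest of this article), the control-plane components appear as Pods, while the kubelet runs as a plain system service. Exact Pod names will differ in your environment.
# Control-plane components (apiserver, controller-manager, scheduler, etcd) run as static Pods on the master
kubectl get pods -n kube-system -o wide
# The kubelet itself is a systemd service on every node, not a Pod
systemctl status kubelet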
Building a simple cluster with kubeadm
Hardware requirements
Learning environment:
- master: 2 CPU / 2 GB RAM / 20 GB disk
- node: 2 CPU / 2 GB RAM / 20 GB disk
Test environment:
- master: 4 CPU / 8 GB RAM / 50 GB disk
- node: 8 CPU / 16 GB RAM / 100 GB disk
Production environment:
- master: 8 CPU / 16 GB RAM / 500 GB disk
- node: 16 CPU / 32 GB RAM / 1 TB disk
Environment preparation
- OS: CentOS 7.9 x86_64
- Docker: 26.1.4 (CE)
- Kubernetes: 1.28
Server plan
Hostname | IP |
---|---|
k8s-master | 192.168.3.10 |
k8s-node1 | 192.168.3.11 |
k8s-node2 | 192.168.3.12 |
Operating system initialization (run on every node)
# Disable the firewall
systemctl stop firewalld && systemctl disable firewalld

# Disable SELinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# Disable swap (the kubelet refuses to start with swap enabled by default)
swapoff -a
sed -ri 's@(.*swap.*)@#\1@g' /etc/fstab

# Set the hostname (k8s-master / k8s-node1 / k8s-node2 accordingly)
hostnamectl set-hostname <hostname>

# Add hosts entries (optional)
cat >> /etc/hosts << EOF
192.168.3.10 k8s-master
192.168.3.11 k8s-node1
192.168.3.12 k8s-node2
EOF

# Load the overlay and br_netfilter kernel modules required for IPv4 forwarding
cat > /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter

# Kernel parameters: pass bridged IPv4 traffic to the iptables chains and enable IP forwarding
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf

# Time synchronization: point every node at the Aliyun NTP server (e.g. in /etc/chrony.conf, then restart chronyd)
server ntp.aliyun.com iburst
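Before moving on, it is worth confirming that the settings above actually took effect. A minimal check, nothing here is specific to this setup:
# The modules should be loaded
lsmod | grep -e overlay -e br_netfilter
# The sysctl values should all be 1
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
# Swap should show 0 used / 0 total
free -m
# SELinux should be Permissive (Disabled after a reboot)
getenforce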
Install Docker
Configure the Docker yum repository
# Step 1: install the required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the Docker CE repository
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: point the repository at the Aliyun mirror
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# Step 4: refresh the yum cache and install Docker CE
sudo yum makecache fast
sudo yum -y install docker-ce
Configure a registry mirror
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": [
    "https://5fid4glg.mirror.aliyuncs.com",
    "https://docker.m.daocloud.io"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
Note: most of the images behind these mirrors are a couple of years old, and as is well known Docker Hub is no longer directly reachable from mainland China, but they are good enough for learning purposes.
"exec-opts": ["native.cgroupdriver=systemd"] is the officially recommended setting (it matches the kubelet's default cgroup driver); you can also leave it out.
"insecure-registries" can be added to skip TLS certificate verification for private registries; it is not needed here.
After editing the file, reload the configuration, start Docker, and enable it at boot:
systemctl daemon-reload
systemctl enable docker --now
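A quick sanity check that Docker picked up the configuration above (purely illustrative):
docker --version
# Should report "Cgroup Driver: systemd" and list the registry mirrors
docker info | grep -A3 -i -e 'cgroup driver' -e 'registry mirrors'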
Install cri-dockerd
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.14/cri-dockerd-0.3.14-3.el7.x86_64.rpm
If the download fails for network reasons, fetch the rpm elsewhere and upload it to the machine, then install it:
rpm -ivh cri-dockerd-0.3.14-3.el7.x86_64.rpm
Specify the pause (pod infra) image:
Add --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 to the ExecStart line in the cri-docker.service unit file.
# vi /usr/lib/systemd/system/cri-docker.service
...
[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
...

systemctl daemon-reload
systemctl enable cri-docker --now
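Optionally confirm that cri-dockerd is running and exposing the CRI socket that kubeadm will use later:
systemctl status cri-docker
# The CRI socket passed to kubeadm below
ls -l /var/run/cri-dockerd.sock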
Install the Kubernetes components
Configure the yum repository
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/rpm/repodata/repomd.xml.key
EOF
Install the specified versions of the kubeadm, kubelet and kubectl components:
kubeadm: the tool that bootstraps the cluster
kubelet: runs on every node in the cluster to start Pods and containers
kubectl: the command-line tool for talking to the cluster (only needs to be installed on the management node)
yum install kubelet-1.28.0 kubeadm-1.28.0 kubectl-1.28.0 -y
Enable the kubelet service at boot
systemctl enable kubelet --now
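Optionally confirm that the expected versions were installed. (At this point the kubelet will keep restarting until kubeadm init or kubeadm join is run; that is normal.)
kubeadm version
kubelet --version
kubectl version --client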
Initialize the Master node
Run on the master node:
kubeadm init \
--apiserver-advertise-address="192.168.3.10" \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.28.0 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16 \
--cri-socket=unix:///var/run/cri-dockerd.sock
- --apiserver-advertise-address: the address the cluster is advertised on, i.e. the API server's listen address
- --image-repository registry.aliyuncs.com/google_containers: the default registry (registry.k8s.io, formerly k8s.gcr.io) is unreachable from mainland China, so the Aliyun mirror is used instead
- --kubernetes-version v1.28.0: the Kubernetes version, matching the packages installed above
- --service-cidr=10.96.0.0/16: the cluster-internal virtual network (Service IPs), the unified access entry point for Pods
- --pod-network-cidr=10.244.0.0/16: the Pod network; must match the CNI plugin's yaml configuration
- --cri-socket=unix:///var/run/cri-dockerd.sock: the CRI socket to use (cri-dockerd here)
- --ignore-preflight-errors=all: ignore pre-flight warnings (optional)
The same settings can also be written into a kubeadm configuration file, as sketched right after this list.
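A minimal sketch of the equivalent config-file form, assuming the same addresses and versions as above; the file name kubeadm-config.yaml is arbitrary:
cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.3.10
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
imageRepository: registry.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.96.0.0/16
  podSubnet: 10.244.0.0/16
EOF
kubeadm init --config kubeadm-config.yaml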
This step can take a while because the images have to be pulled from the mirror. If it feels slow, you can pre-pull the images before initializing:
kubeadm config images pull \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.28.0 \
--cri-socket=unix:///var/run/cri-dockerd.sock
The full output looks like this:
[root@k8s-master ~]# kubeadm init \
> --apiserver-advertise-address="192.168.3.10" \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.28.0 \
> --service-cidr=10.96.0.0/16 \
> --pod-network-cidr=10.244.0.0/16 \
> --cri-socket=unix:///var/run/cri-dockerd.sock
[init] Using Kubernetes version: v1.28.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.3.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.3.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.3.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 10.504564 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: twzwgv.xqhb98gfu1edpm62
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.3.10:6443 --token twzwgv.xqhb98gfu1edpm62 \
        --discovery-token-ca-cert-hash sha256:43eff3fcb345a6138ae9254d60b219cd04dd5e18cc2910d0eb52db209bb93b26
[root@k8s-master ~]#
Copy the kubeconfig file
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
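At this point kubectl on the master can already reach the API server. The node will usually show NotReady until the CNI plugin is deployed later, which is expected:
kubectl get nodes
kubectl get pods -n kube-system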
Join the Node machines to the cluster
Run on each node:
kubeadm join 192.168.3.10:6443 --token twzwgv.xqhb98gfu1edpm62 \
--discovery-token-ca-cert-hash sha256:43eff3fcb345a6138ae9254d60b219cd04dd5e18cc2910d0eb52db209bb93b26 \
--cri-socket=unix:///var/run/cri-dockerd.sock
For security, the token generated by kubeadm is only valid for 24 hours by default. Once it has expired, regenerate the join command:
kubeadm token create --print-join-command
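If you prefer to assemble the join command by hand, the token and the CA certificate hash can also be obtained separately; the hash pipeline below follows the kubeadm documentation:
# List existing tokens or create a new one
kubeadm token list
kubeadm token create
# Compute the --discovery-token-ca-cert-hash value from the cluster CA
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'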
Deploy the Pod network (Calico)
Download the Calico image files in advance and load them on every node:
ls *.tar | xargs -i docker load -i {}
On the master node, create the resources from the yaml manifests:
kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml
Note: the official instructions apply the yaml files directly and pull the images online, which only works well if the registry can be reached quickly; otherwise it takes a very long time.
At this point the cluster setup is complete:
[root@localhost ~]# kubectl get pods -n calico-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-85955d4f5b-rlrrg 1/1 Running 0 21s
calico-node-gvv4h 1/1 Running 0 21s
calico-node-mhkxp 0/1 Running 0 21s
calico-node-z9czg 1/1 Running 0 21s
calico-typha-6dfcdf98b5-984zj 1/1 Running 0 22s
calico-typha-6dfcdf98b5-pvg5j 1/1 Running 0 18s
csi-node-driver-b5h5x 2/2 Running 0 21s
csi-node-driver-htgqx 2/2 Running 0 21s
csi-node-driver-js88m 2/2 Running 0 21s
[root@localhost ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane 4h4m v1.28.0
k8s-node1 Ready <none> 4h2m v1.28.0
k8s-node2 Ready <none> 4h2m v1.28.0
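As a final smoke test, you can deploy a small workload and expose it through a NodePort Service. The nginx image and the names below are just examples:
# Create a test deployment and expose it
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=NodePort
# Find the assigned NodePort, then hit it from any node's IP
kubectl get pods,svc -o wide
curl http://192.168.3.11:<nodeport>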
kubectl command completion on the Master node
yum install bash-completion -y
echo 'source <(kubectl completion bash)' >>~/.bashrc
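Completion only takes effect in new shells; to use it immediately, and optionally with a shorter alias (the k alias is a personal preference, not required):
source ~/.bashrc
# Optional: short alias with the same completion
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -o default -F __start_kubectl k' >>~/.bashrc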