Kubernetes 1.28 Binary Installation and Deployment

Step 1: Configure the Linux Servers

# Some of the downloads below may require a proxy from mainland China.

192.168.196.100  1C/8G  kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubectl, haproxy, keepalived
192.168.196.101  1C/8G  kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubectl, haproxy, keepalived
192.168.196.102  1C/8G  kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubectl

# Notes
haproxy and keepalived provide high availability for the kube-apiserver component on the k8s masters.
kube-controller-manager runs Kubernetes' built-in controllers, for example the replication, node, namespace, and service-account controllers.
It is a never-ending control-loop component responsible for the state of cluster resources:
it watches resource state through kube-apiserver, compares the current state with the desired state, and if they differ, updates resources through kube-apiserver until the two match.
kube-scheduler is responsible for placing Pods onto nodes.
kube-apiserver is the central hub: it receives and processes requests from every component, including clients.
kubectl is the Kubernetes command-line tool.
1.1 Configure server hostnames

# Run each command on its corresponding node
hostnamectl set-hostname ma01
hostnamectl set-hostname ma02
hostnamectl set-hostname ma03

1.2 Configure /etc/hosts name resolution

cat >> /etc/hosts << EOF
192.168.196.100 ma01
192.168.196.101 ma02
192.168.196.102 ma03
EOF

1.3 Security settings

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

1.4 Disable the swap partition

swapoff -a
# Comment out or delete the swap entry in /etc/fstab
echo "vm.swappiness=0" >> /etc/sysctl.conf

1.5 Configure host time synchronization

ntpdate time.windows.com
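ntpdate only syncs the clock once; to keep clocks aligned, a cron entry can repeat it hourly (a sketch, assuming ntpdate stays installed):

echo "0 */1 * * * /usr/sbin/ntpdate time.windows.com" >> /var/spool/cron/root
crontab -l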

1.6 Install the ipvs management tools and load the kernel modules

yum -y install ipvsadm ipset sysstat conntrack libseccomp
cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs 
modprobe -- ip_vs_rr 
modprobe -- ip_vs_wrr 
modprobe -- ip_vs_sh 
modprobe -- nf_conntrack
EOF

# Make the script executable, run it, and check that the modules are loaded

chmod 755 /etc/sysconfig/modules/ipvs.modules 
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack

1.7 Upgrade the Linux kernel

yum -y install perl
# Import the elrepo GPG key
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# Install the elrepo yum repository
yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
# Install kernel-lt (ml = latest mainline release, lt = long-term support release)
yum --enablerepo="elrepo-kernel" -y install kernel-lt.x86_64
# Make grub2 entry 0 (the newly installed kernel) the default boot entry
grub2-set-default 0
# Regenerate the grub2 boot config
grub2-mkconfig -o /boot/grub2/grub.cfg
# After all nodes are configured, reboot so the upgraded kernel takes effect
reboot
# Verify the upgrade
uname -r

1.8 Linux kernel tuning

# Add bridge-filtering and packet-forwarding settings
cat > /etc/sysctl.d/k8s.conf <<EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Load the br_netfilter module first; the net.bridge.* keys only apply once it is loaded
# Temporary load

modprobe br_netfilter

# Apply and verify

sysctl -p
sysctl -p /etc/sysctl.d/k8s.conf

# Load the modules permanently

cat > /etc/modules-load.d/containerd.conf << EOF
overlay
br_netfilter
EOF

# Enable loading at boot

systemctl enable --now systemd-modules-load.service

# Verify the module is loaded

lsmod | grep br_netfilter

1.9 Passwordless SSH between servers

ssh-keygen
ssh-copy-id root@<ip>

# Verify between the servers, and from each server to itself.
ssh ma01
ssh ma02
ssh ma03

Step 2: Install and Deploy HAProxy and Keepalived

[haproxy and keepalived]
IP:  192.168.196.100
IP:  192.168.196.101
VIP: 192.168.196.200

2.1 Install haproxy and keepalived with yum

yum -y install haproxy keepalived
# HAProxy is an open-source TCP/HTTP load balancer and reverse proxy with built-in monitoring and statistics, so server status and metrics can be read in real time.
# HAProxy is well suited to very heavily loaded web sites.
Note: during this k8s deployment, resource pressure caused the HAProxy VIP to fail over several times; observation showed this did not affect the other k8s components at all.

2.2 HAProxy configuration

cat >/etc/haproxy/haproxy.cfg <<"EOF"
# Global settings
global
  maxconn 2000
  ulimit-n 16384
  log 127.0.0.1 local0 err
  stats timeout 30s

# Defaults
defaults
  log global
  mode http
  option httplog
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  timeout http-request 15s
  timeout http-keep-alive 15s

# Monitoring URI, reachable at IP:33305/monitor
frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

# TCP frontend; default_backend names the backend configured below
frontend k8s-master
  bind 0.0.0.0:6443
  bind 127.0.0.1:6443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

# The backend referenced by default_backend k8s-master above
backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server ma01 192.168.196.100:9443 check
  server ma02 192.168.196.101:9443 check
  server ma03 192.168.196.102:9443 check
EOF

Parameter details:
log global: enables global logging; HAProxy records its own running state and request information.

mode http: puts HAProxy in HTTP mode, meaning it processes HTTP requests and responses.
timeout connect 5000: timeout for connecting to a backend server, 5000 ms; if no connection is established within this time, HAProxy gives up on it.
timeout client 50000: client-side timeout, 50000 ms; if no data is exchanged within this window, the connection is closed.
timeout server 50000: backend-side timeout, 50000 ms; likewise, idle connections are closed after this window.
timeout http-request 15s: HTTP request processing timeout; requests not completed within 15 s are terminated.
timeout http-keep-alive 15s: timeout for HTTP keep-alive connections; keep-alive lets multiple HTTP requests and responses share one TCP connection, and this parameter controls how long that connection is kept open.

option tcplog: enables TCP logging, recording connection information from TCP clients and backend servers for later monitoring and troubleshooting.
tcp-request inspect-delay 5s: how long HAProxy waits while inspecting a TCP request, to ensure the full request header or payload has been received.

inter 10s: health-check interval of 10 seconds; the load balancer waits 10 s between checks.
downinter 5s: once a server is marked DOWN, the load balancer re-checks it every 5 s to see whether it has recovered.
rise 2: the number of consecutive successful health checks (2) a DOWN server needs before its state returns to UP.
fall 2: the number of consecutive failed health checks (2) an UP server needs before it is marked DOWN.
slowstart 60s: slow-start period of 60 seconds, during which the server's effective maximum connections (maxconn) ramps up gradually so a freshly returned server is not hit with a flood of traffic.
maxconn 250: maximum of 250 concurrent connections per backend server.
maxqueue 256: maximum queue length of 256; when all servers are at maxconn, new connections wait in the queue until a slot frees up.
weight 100: server weight used by the load-balancing algorithm; higher weights receive more requests.

2.3 Keepalived configuration

#ma01
cat > /etc/keepalived/keepalived.conf <<"EOF"
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.196.100
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.196.200
    }
    track_script {
        chk_apiserver
    }
}
EOF

#ma02 (BACKUP with a lower priority, so ma01 holds the VIP by default)
cat > /etc/keepalived/keepalived.conf <<"EOF"
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.196.101
    virtual_router_id 51
    priority 99
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.196.200
    }
    track_script {
        chk_apiserver
    }
}
EOF

2.4 HAProxy health-check script

#ma01 and ma02
cat > /etc/keepalived/check_apiserver.sh <<"EOF"
#!/bin/bash
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF

chmod +x /etc/keepalived/check_apiserver.sh

# Start the services and verify
systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived

# ip a s
The ens33 NIC on ma01 should now carry both 192.168.196.100 and the VIP 192.168.196.200.
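An optional failover check: stopping haproxy on ma01 makes the check script above stop keepalived there, so the VIP should move to ma02.

systemctl stop haproxy        # on ma01
ip a s ens33                  # on ma02: 192.168.196.200 should now appear here
systemctl start haproxy       # restore ma01
systemctl start keepalived    # the check script stopped keepalived, so restart it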

Step 3: Deploy the etcd Database

IP: 192.168.196.100
IP: 192.168.196.101
IP: 192.168.196.102

3.1 Install the cfssl certificate tools

# Create a working directory

mkdir -p /data/k8s-work

Install the cfssl tools and generate the CA certificate

https://github.com/cloudflare/cfssl/releases
# 1. Download cfssl, cfssljson, and cfssl-certinfo
# cfssl: signs certificates
# cfssljson: turns the JSON output of cfssl into certificate files
# cfssl-certinfo: inspects and verifies certificate contents
# Kubernetes docs: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/certificates/
# Version
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl
chmod +x cfssl
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson
chmod +x cfssljson
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo
chmod +x cfssl-certinfo

cp cfssl /usr/local/bin/
cp cfssljson /usr/local/bin/
cp cfssl-certinfo /usr/local/bin/
# 2. Make sure cfssl, cfssljson, and cfssl-certinfo are executable
chmod +x /usr/local/bin/cfssl*
cfssl version

3.2 Configure the CA certificate request json file

cat > ca-csr.json <<"EOF"
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "kubemsb",
      "OU": "CN"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
EOF

# Create the CA certificate

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

# Configure the CA signing policy

cat > ca-config.json <<"EOF"
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

# Configure the etcd certificate request file

cat > etcd-csr.json <<"EOF"
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.196.100",
    "192.168.196.101",
    "192.168.196.102",
    "192.168.196.200"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "kubemsb",
      "OU": "CN"
    }
  ]
}
EOF

# Generate the etcd certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson  -bare etcd
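The generated certificate can be inspected with the cfssl-certinfo tool downloaded earlier, for example to confirm the SAN list and expiry:

cfssl-certinfo -cert etcd.pem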

Download the etcd release

cd /soft
wget https://github.com/etcd-io/etcd/releases/download/v3.5.11/etcd-v3.5.11-linux-amd64.tar.gz
tar -xvf etcd-v3.5.11-linux-amd64.tar.gz
cd etcd-v3.5.11-linux-amd64
cp etcd* /usr/local/bin
scp etcd* ma02:/usr/local/bin
scp etcd* ma03:/usr/local/bin

# Create the etcd configuration directory (run on ma01, ma02, and ma03)
mkdir /etc/etcd

# etcd configuration files, one per server

#ma01
cat > /etc/etcd/etcd.conf <<EOF
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.196.100:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.196.100:2379,http://127.0.0.1:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.196.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.196.100:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.196.100:2380,etcd2=https://192.168.196.101:2380,etcd3=https://192.168.196.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

#ma02
cat > /etc/etcd/etcd.conf <<EOF
#[Member]
ETCD_NAME="etcd2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.196.101:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.196.101:2379,http://127.0.0.1:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.196.101:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.196.101:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.196.100:2380,etcd2=https://192.168.196.101:2380,etcd3=https://192.168.196.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

#ma03
cat > /etc/etcd/etcd.conf <<EOF
#[Member]
ETCD_NAME="etcd3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.196.102:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.196.102:2379,http://127.0.0.1:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.196.102:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.196.102:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.196.100:2380,etcd2=https://192.168.196.101:2380,etcd3=https://192.168.196.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

# Create the certificate and data directories [ma01, ma02, ma03]

mkdir -p /etc/etcd/ssl
mkdir -p /var/lib/etcd/default.etcd

# Local copy on ma01

cp /data/k8s-work/ca_cert/ca*.pem /etc/etcd/ssl
cp /data/k8s-work/etcd_cert/etcd*.pem /etc/etcd/ssl

# Remote copy to ma02

scp /data/k8s-work/ca_cert/ca*.pem ma02:/etc/etcd/ssl
scp /data/k8s-work/etcd_cert/etcd*.pem ma02:/etc/etcd/ssl

# Remote copy to ma03

scp /data/k8s-work/ca_cert/ca*.pem ma03:/etc/etcd/ssl
scp /data/k8s-work/etcd_cert/etcd*.pem ma03:/etc/etcd/ssl

Configure the etcd systemd unit [all hosts]

cat > /etc/systemd/system/etcd.service <<"EOF"
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

# Start the etcd cluster

systemctl daemon-reload
systemctl enable --now etcd.service
systemctl status etcd
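Once all three members are up, cluster health can be confirmed with etcdctl (shipped in the etcd tarball that was copied to /usr/local/bin):

ETCDCTL_API=3 etcdctl \
  --cacert=/etc/etcd/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  --endpoints="https://192.168.196.100:2379,https://192.168.196.101:2379,https://192.168.196.102:2379" \
  endpoint health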

Step 4: Deploy the Kubernetes Cluster

cd /soft
wget https://dl.k8s.io/v1.28.0/kubernetes-server-linux-amd64.tar.gz
tar -xvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
scp kube-apiserver kube-controller-manager kube-scheduler kubectl ma02:/usr/local/bin/
scp kube-apiserver kube-controller-manager kube-scheduler kubectl ma03:/usr/local/bin/

# Create directories on every cluster node [ma01, ma02, ma03]

mkdir -p /etc/kubernetes/ssl
mkdir -p /var/log/kubernetes

4.1 Deploy kube-apiserver

# Configure the apiserver certificate request file
cd /data/k8s-work/kube-apiserver

cat > kube-apiserver-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.196.100",
    "192.168.196.101",
    "192.168.196.102",
    "192.168.196.200",
    "10.96.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "kubemsb",
      "OU": "CN"
    }
  ]
}
EOF

# Generate the apiserver certificate and token file

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver

# Generate token.csv

cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

# Distribute the files

cp /data/k8s-work/ca*.pem /etc/kubernetes/ssl/
scp /data/k8s-work/ca*.pem ma02:/etc/kubernetes/ssl/
scp /data/k8s-work/ca*.pem ma03:/etc/kubernetes/ssl/

cp /data/k8s-work/kube-apiserver/token.csv /etc/kubernetes/
scp /data/k8s-work/kube-apiserver/token.csv ma02:/etc/kubernetes/
scp /data/k8s-work/kube-apiserver/token.csv ma03:/etc/kubernetes/

cp /data/k8s-work/kube-apiserver/kube-apiserver*.pem /etc/kubernetes/ssl/
scp /data/k8s-work/kube-apiserver/kube-apiserver*.pem ma02:/etc/kubernetes/ssl/
scp /data/k8s-work/kube-apiserver/kube-apiserver*.pem ma03:/etc/kubernetes/ssl/

# Create the apiserver configuration file [ma01, ma02, ma03]

#ma01
cat > /etc/kubernetes/kube-apiserver.conf <<EOF
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=192.168.196.100 \
  --advertise-address=192.168.196.100 \
  --secure-port=9443 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.96.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-issuer=api \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://192.168.196.100:2379,https://192.168.196.101:2379,https://192.168.196.102:2379 \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --requestheader-allowed-names=aggregator \
  --requestheader-allowed-names=front-proxy-client \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --requestheader-client-ca-file=/etc/kubernetes/ssl/agg_ca.pem \
  --proxy-client-cert-file=/etc/kubernetes/ssl/metrics-server.pem \
  --proxy-client-key-file=/etc/kubernetes/ssl/metrics-server-key.pem \
  --enable-aggregator-routing=true \
  --event-ttl=1h \
  --v=4"
EOF

#ma02 and #ma03 use exactly the same file, changing only --bind-address and --advertise-address to 192.168.196.101 (ma02) and 192.168.196.102 (ma03).

Note on the aggregation layer:
# Aggregation-layer flags
--requestheader-allowed-names=aggregator
--requestheader-allowed-names=front-proxy-client
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--requestheader-client-ca-file=/etc/kubernetes/ssl/agg_ca.pem
--proxy-client-cert-file=/etc/kubernetes/ssl/metrics-server.pem
--proxy-client-key-file=/etc/kubernetes/ssl/metrics-server-key.pem

# Official Kubernetes docs on the aggregation layer
https://v1-28.docs.kubernetes.io/zh-cn/docs/tasks/extend-kubernetes/configure-aggregation-layer/
# Configure the dedicated aggregation-layer certificates and keys
Generate the dedicated agg CA and the certificate from metrics-server-csr.json
cd /data/k8s-work/agg_cert

cat > agg-ca-csr.json <<"EOF"
{
  "CN": "agg",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "kubemsb",
      "OU": "CN"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
EOF

cfssl gencert -initca agg-ca-csr.json | cfssljson -bare agg_ca

cat > agg_ca-config.json <<"EOF"
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "agg": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

# The CN here must match a name in the apiserver's --requestheader-allowed-names flag
cat > metrics-server-csr.json << EOF
{
  "CN": "aggregator",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
cfssl gencert -ca=agg_ca.pem -ca-key=agg_ca-key.pem -config=agg_ca-config.json -profile=agg metrics-server-csr.json | cfssljson -bare metrics-server

# Distribute
cp /data/k8s-work/agg_cert/agg_ca*.pem /etc/kubernetes/ssl/
cp /data/k8s-work/agg_cert/metrics-server*.pem /etc/kubernetes/ssl/
scp /data/k8s-work/agg_cert/agg_ca*.pem ma02:/etc/kubernetes/ssl/
scp /data/k8s-work/agg_cert/metrics-server*.pem ma02:/etc/kubernetes/ssl/
scp /data/k8s-work/agg_cert/agg_ca*.pem ma03:/etc/kubernetes/ssl/
scp /data/k8s-work/agg_cert/metrics-server*.pem ma03:/etc/kubernetes/ssl/
ls -l

# Create the apiserver systemd unit [ma01, ma02, ma03]

cat > /etc/systemd/system/kube-apiserver.service << "EOF"
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

# Start and test kube-apiserver

systemctl daemon-reload
systemctl enable --now kube-apiserver
systemctl status kube-apiserver
# Test
curl --insecure https://192.168.196.100:9443/
curl --insecure https://192.168.196.101:9443/
curl --insecure https://192.168.196.102:9443/
curl --insecure https://192.168.196.200:6443/
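Because the apiserver runs with --anonymous-auth=false, each of these curls should return an HTTP 401 Unauthorized JSON body rather than a connection error; getting a 401 through the VIP on port 6443 confirms that both the TLS listener and the HAProxy path work.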

4.2 Deploy kubectl

# Create the kubectl (admin) certificate request file
cat > admin-csr.json << "EOF"
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}
EOF

# Generate the certificate files
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

# Copy the files into place
cp admin*.pem /etc/kubernetes/ssl/
scp admin*.pem ma02:/etc/kubernetes/ssl/
scp admin*.pem ma03:/etc/kubernetes/ssl/

# Gather all keys back under /data/k8s-work: they were sorted into subdirectories with mv earlier, so copy them back now
cd /data/k8s-work
cp admin/* ./
cp agg_cert/* ./
cp etcd_cert/* ./
cp kube-apiserver/* ./

# Generate the context configuration; this file can also be used to switch between cluster instances

# Generate the kube.config file
# This command defines a cluster named "kubernetes": it sets the cluster CA file (ca.pem), embeds the certificate in the kubeconfig (--embed-certs=true),
# and sets the API server address (https://192.168.196.200:6443). The result is written to the specified kube.config file.
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.196.200:6443 --kubeconfig=kube.config

# Set the user credentials
# This command defines a user named "admin" with the client certificate (admin.pem) and client key (admin-key.pem); the credentials are likewise stored in kube.config.
kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config

# Configure the context
# This command creates a context named "kubernetes" that links the cluster (kubernetes) and user (admin) defined above. A context tells kubectl which cluster and user to use.
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config

# Use the context
# This command makes kubectl use the "kubernetes" context for all subsequent commands.
kubectl config use-context kubernetes --kubeconfig=kube.config

# Inspect the result
cat kube.config

# Prepare the kubectl config file and bind roles
# ma01, ma02, ma03
# Copy kube.config into /root/.kube and rename it to config
# .kube/config is the credential file: a server needs it to run kubectl against (and switch between) clusters
# The same config file can be used to switch between different environments

mkdir -p /root/.kube    # create this directory on all three nodes first
cp kube.config ~/.kube/config
scp kube.config ma02:/root/.kube/config
scp kube.config ma03:/root/.kube/config

The following command creates a ClusterRoleBinding in the Kubernetes cluster. Specifically:

ClusterRoleBinding: an object that binds a ClusterRole to users, groups, or service accounts. Creating one grants the bound subjects the permissions of that ClusterRole.

kube-apiserver:kubelet-apis: the name of this ClusterRoleBinding; a name like this makes the binding's purpose easy to recognize.

--clusterrole=system:kubelet-api-admin: the ClusterRole to bind. system:kubelet-api-admin is a built-in ClusterRole that grants administrative access to the kubelet API.

--user kubernetes: binds the ClusterRole to the user named kubernetes, which then receives the system:kubelet-api-admin permissions.

--kubeconfig=/root/.kube/config: the kubeconfig file used to reach the Kubernetes API; it contains the cluster connection info and credentials.

[Run this on ma01 only]

kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes --kubeconfig=/root/.kube/config

# View cluster info

kubectl cluster-info

# View component status [the other components are not installed yet]

# Command
kubectl get componentstatuses
# Result
controller-manager   Unhealthy   Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused
scheduler            Unhealthy   Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused
etcd-0               Healthy     ok

# View resource objects across all namespaces

kubectl get all --all-namespaces

4.3 Deploy kube-controller-manager

# Create the kube-controller-manager certificate request file
cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "192.168.196.100",
    "192.168.196.101",
    "192.168.196.102"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "system"
    }
  ]
}
EOF

Generate the kube-controller-manager certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

# Files generated:
kube-controller-manager.csr
kube-controller-manager-csr.json
kube-controller-manager-key.pem
kube-controller-manager.pem

# Copy the files into place

mv kube-controller-manager* ./kube-controller-manager
cp kube-controller-manager/* ./

cp /data/k8s-work/kube-controller-manager*.pem /etc/kubernetes/ssl/
scp /data/k8s-work/kube-controller-manager*.pem ma02:/etc/kubernetes/ssl/
scp /data/k8s-work/kube-controller-manager*.pem ma03:/etc/kubernetes/ssl/

# Create kube-controller-manager.kubeconfig

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.196.200:6443 --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

# Distribute

cp /data/k8s-work/kube-controller-manager.kubeconfig /etc/kubernetes/
scp /data/k8s-work/kube-controller-manager.kubeconfig ma02:/etc/kubernetes/
scp /data/k8s-work/kube-controller-manager.kubeconfig ma03:/etc/kubernetes/

# Create the kube-controller-manager config file [ma01, ma02, ma03]

cat > /etc/kubernetes/kube-controller-manager.conf << "EOF"
KUBE_CONTROLLER_MANAGER_OPTS=" \
  --secure-port=10257 \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.96.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.244.0.0/16 \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --v=2"
EOF

# Create the systemd unit [ma01, ma02, ma03]

cat > /usr/lib/systemd/system/kube-controller-manager.service << "EOF"
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

# Copy the certificates and configs into place

cp kube-controller-manager*.pem /etc/kubernetes/ssl/
cp kube-controller-manager.kubeconfig /etc/kubernetes/
cp kube-controller-manager.conf /etc/kubernetes/
cp kube-controller-manager.service /usr/lib/systemd/system/

scp kube-controller-manager*.pem ma02:/etc/kubernetes/ssl/
scp kube-controller-manager*.pem ma03:/etc/kubernetes/ssl/
scp kube-controller-manager.kubeconfig kube-controller-manager.conf ma02:/etc/kubernetes/
scp kube-controller-manager.kubeconfig kube-controller-manager.conf ma03:/etc/kubernetes/
scp kube-controller-manager.service ma02:/usr/lib/systemd/system/
scp kube-controller-manager.service ma03:/usr/lib/systemd/system/

# Start

systemctl daemon-reload
systemctl enable --now  kube-controller-manager
systemctl status kube-controller-manager

4.4 Deploy kube-scheduler

# Configure the certificate request json file

cat > kube-scheduler-csr.json << "EOF"
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "192.168.196.100",
    "192.168.196.101",
    "192.168.196.102"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "system"
    }
  ]
}
EOF

# Generate the kube-scheduler certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
# Files generated:
kube-scheduler.csr
kube-scheduler-csr.json
kube-scheduler-key.pem
kube-scheduler.pem

# Distribute

cp /data/k8s-work/kube-scheduler*.pem /etc/kubernetes/ssl/
scp /data/k8s-work/kube-scheduler*.pem ma02:/etc/kubernetes/ssl/
scp /data/k8s-work/kube-scheduler*.pem ma03:/etc/kubernetes/ssl/

# Create the kube-scheduler kubeconfig

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.196.200:6443 --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

# Distribute

cp /data/k8s-work/kube-scheduler.kubeconfig /etc/kubernetes/
scp /data/k8s-work/kube-scheduler.kubeconfig ma02:/etc/kubernetes/
scp /data/k8s-work/kube-scheduler.kubeconfig ma03:/etc/kubernetes/

# Create the config file

cat > /etc/kubernetes/kube-scheduler.conf << "EOF"
KUBE_SCHEDULER_OPTS=" \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--v=2"
EOF

# Create the systemd unit

cat > /usr/lib/systemd/system/kube-scheduler.service << "EOF"
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

# Start the service

systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl status kube-scheduler

# Check

kubectl get cs
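With all control-plane components running, the output should now show every entry Healthy, roughly:

NAME                 STATUS    MESSAGE   ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   ok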

Step 5: Worker Node Deployment

# Deploy cri-dockerd (this assumes Docker Engine is already installed on each node)

cd /soft
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.9/cri-dockerd-0.3.9-3.el7.x86_64.rpm
yum install -y cri-dockerd-0.3.9-3.el7.x86_64.rpm
vi /usr/lib/systemd/system/cri-docker.service

# Edit line 10: the default pause image is too old, so pin it to 3.9 and pull from the Aliyun registry, which is much faster inside China

ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 --container-runtime-endpoint fd://

# Start

systemctl enable --now cri-docker
systemctl status cri-docker
ll /run/cri-dockerd.sock

5.1 Deploy kubelet

# ma01, ma02, ma03
# Run on the node side
# Create kubelet-bootstrap.kubeconfig
# This command extracts the bootstrap token

BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)

# The next command defines the cluster in the kubeconfig: it names the cluster kubernetes and points at the CA file (--embed-certs embeds the certificate into the kubeconfig),
# sets the Kubernetes API server address,
# and names the kubeconfig file to write.

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.196.200:6443 --kubeconfig=kubelet-bootstrap.kubeconfig

# Set the user credentials

kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig

# Set the context

kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig

# Switch the current context to "default".

kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig

# Create a cluster role binding

kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=kubelet-bootstrap

# Create another cluster role binding

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig

# Create the kubelet config file
# Run on every node: ma01, ma02, ma03

mkdir -p /etc/kubernetes/ssl

# Change the "address" field to each server's own IP.

cat > /etc/kubernetes/kubelet.json << "EOF"
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.196.100",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.96.0.2"]
}
EOF

# Create the kubelet systemd unit
mkdir /var/lib/kubelet

cat > /usr/lib/systemd/system/kubelet.service << "EOF"
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet.json \
  --container-runtime-endpoint=unix:///run/cri-dockerd.sock \
  --rotate-certificates \
  --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

# Distribute the files and start

for i in ma01 ma02 ma03;do scp /usr/lib/systemd/system/kubelet.service $i:/usr/lib/systemd/system/;done
for i in ma01 ma02 ma03;do scp kubelet-bootstrap.kubeconfig $i:/etc/kubernetes/;done
for i in ma01 ma02 ma03;do scp ca.pem $i:/etc/kubernetes/ssl;done

# Copy the kubelet and kube-proxy binaries to each node (kube-proxy is used in the next section)
cd /soft/kubernetes/server/bin/
for i in ma01 ma02 ma03;do scp kubelet kube-proxy $i:/usr/local/bin/;done

# Start the service
systemctl daemon-reload
systemctl enable --now kubelet
systemctl status kubelet

# Verify
kubectl get nodes
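If a node is missing here, check for pending bootstrap CSRs and approve them manually; whether this is needed depends on the RBAC auto-approval bindings in place:

kubectl get csr
kubectl certificate approve <csr-name>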

5.2 Deploy kube-proxy

# Create the kube-proxy certificate request file

cat > kube-proxy-csr.json << "EOF"
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "kubemsb",
      "OU": "CN"
    }
  ]
}
EOF

# Generate the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

# Check

ls -l kube-proxy*

# Files generated:
kube-proxy.csr
kube-proxy-csr.json
kube-proxy-key.pem
kube-proxy.pem

# Create the kubeconfig file

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.196.200:6443 --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# Create the service config file
# ma01, ma02, ma03

Change bindAddress (and the other per-host addresses) to each host's own IP.

cat > /etc/kubernetes/kube-proxy.yaml << "EOF"
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.196.100
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.244.0.0/16
healthzBindAddress: 192.168.196.100:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.196.100:10249
mode: "ipvs"
EOF

# Create the systemd unit
# Create kube-proxy's working directory, matching the unit file below

mkdir -p /var/lib/kube-proxy

# Configure the unit file

cat > /usr/lib/systemd/system/kube-proxy.service << "EOF"
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

# Distribute the files

for i in ma01 ma02 ma03;do scp kube-proxy.kubeconfig $i:/etc/kubernetes/; done
for i in ma01 ma02 ma03;do scp kube-proxy*pem $i:/etc/kubernetes/ssl; done

# Start the service

systemctl daemon-reload
systemctl enable --now kube-proxy
systemctl status kube-proxy

# After editing the per-node addresses, reload and restart
systemctl daemon-reload
systemctl restart kube-proxy
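To confirm kube-proxy really runs in ipvs mode, query its metrics endpoint and list the ipvs rules (adjust the IP to each node's metricsBindAddress):

curl 192.168.196.100:10249/proxyMode
ipvsadm -Ln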

Step 6: Deploy the Calico Network Component

Official site: https://docs.tigera.io/calico/latest/about

# Download the manifests with wget first and check the files before deploying

wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/custom-resources.yaml

# custom-resources.yaml defaults the pod network to 192.168.0.0/16; this cluster's pod network is 10.244.0.0/16, so edit it before applying

cat custom-resources.yaml
Change "cidr: 192.168.0.0/16" to "cidr: 10.244.0.0/16"
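The edit can also be made with sed (a sketch; re-check the file afterwards):

sed -i 's|cidr: 192.168.0.0/16|cidr: 10.244.0.0/16|' custom-resources.yaml
grep cidr custom-resources.yaml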

# Apply the operator manifest

kubectl create -f tigera-operator.yaml

# List the namespaces

kubectl get ns

# List the pods in the operator namespace

kubectl get pod -n tigera-operator

# Apply the custom resources [this pulls images that may need a proxy to reach; resource pressure during the pulls caused several HAProxy failovers]

kubectl create -f custom-resources.yaml
kubectl get ns

# Watch the pods in the calico-system namespace download, install, and start

kubectl get pod -n calico-system
kubectl get nodes

Step 7: Deploy CoreDNS

cat > coredns.yaml << "EOF"
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: coredns/coredns:1.10.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF

# Apply the manifest

kubectl apply -f coredns.yaml

# Check

kubectl get pods -o wide
kubectl get pod -n kube-system -o wide
# Verify that DNS resolution works
dig -t a www.baidu.com @10.96.0.2
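DNS can also be tested from inside the cluster (a sketch; busybox:1.28 is used because nslookup is broken in many newer busybox images):

kubectl run -it --rm --restart=Never dns-test --image=busybox:1.28 -- nslookup kubernetes.default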

# Deploy a test application

kubectl create ns my-nginx
kubectl create deploy my-nginx --image=nginx:1.23.0 -n my-nginx --dry-run=client -o yaml > my-nginx.yaml
kubectl apply -f my-nginx.yaml
kubectl expose deployment my-nginx --port=80 --target-port=80 --type=NodePort -n my-nginx --dry-run=client -o yaml > nginx-svc.yaml
kubectl apply -f nginx-svc.yaml
# Look up the NodePort assigned to nginx
kubectl get all -n my-nginx
# Browse to the assigned NodePort (32111 in this example); nginx has no TLS certificate, so plain http is used
http://192.168.196.200:32111

# Install the bash-completion package

yum install bash-completion -y
# Enable completion in the current shell
source <(kubectl completion bash)
# If that does not take effect, also run:
source /usr/share/bash-completion/bash_completion

# List all resource types
kubectl api-resources
# Component status (cs is short for componentstatuses)
kubectl get cs

Step 8: Install and Deploy metrics-server

# The legacy monitoring component was heapster
# Newer clusters use the metrics-server aggregator to collect cAdvisor data; cAdvisor is embedded in the kubelet
# metrics-server must be installed to monitor cluster resources
# Monitoring flow: kubectl top -> apiserver -> metrics-server pod -> kubelet (cAdvisor)
# Install metrics-server

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Edit components.yaml
The main change is adding: - --kubelet-insecure-tls

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=10250
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: registry.k8s.io/metrics-server/metrics-server:v0.7.2
        imagePullPolicy: IfNotPresent

# Install and verify

kubectl apply -f components.yaml
kubectl get deployment metrics-server -n kube-system
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods
kubectl logs -n kube-system -l k8s-app=metrics-server

# If the image pull fails because of network problems, delete the pod and apply again

kubectl delete pod metrics-server-75bf97fcc9-ch4dh -n kube-system

# List the pods in the kube-system namespace to get the pod name

kubectl get pods -n kube-system

# Inspect the pod state

kubectl describe pod metrics-server-75bf97fcc9-g5r9v -n kube-system
kubectl logs -n kube-system -l k8s-app=metrics-server
# The lines below are commented out and kept for reference only
#git clone https://github.com/kubernetes-incubator/metrics-server
#cd metrics-server/
#cat metrics-server-deployment.yaml

# Deleting a pod only makes k8s recreate it by default, so to remove metrics-server delete the deployment

kubectl delete deployment metrics-server -n kube-system

# Verify

kubectl --help
# Node resource consumption
kubectl top node
# Pod resource consumption
kubectl top pod

# Miscellaneous notes
For k8s cpu(core) values, 1000m = 1 core
kubectl top node ma01 --sort-by='cpu'
kubectl top pods --sort-by='memory'
ps -ef|grep kubelet
# View a component's logs with journalctl -u
journalctl -u kubelet
journalctl -u kube-apiserver
# Show the most recent 100 lines
kubectl logs -f metrics-server-65bc69d777-77zds -n kube-system --tail=100
# --tail=-1 shows the whole log from the beginning
kubectl logs -f metrics-server-65bc69d777-77zds -n kube-system --tail=-1
# Show the k8s version, OS version, and kernel of each node
kubectl get nodes -o wide

Step 9: Install and Deploy Helm

Official site: https://helm.sh/
Helm's three key concepts: Chart, Repository, and Release

wget https://get.helm.sh/helm-v3.13.3-linux-amd64.tar.gz
tar zxvf helm-v3.13.3-linux-amd64.tar.gz
cp /soft/linux-amd64/helm /usr/local/bin/helm
scp /soft/linux-amd64/helm ma02:/usr/local/bin/
scp /soft/linux-amd64/helm ma03:/usr/local/bin/

Other commands and activation

# Check the version
helm version
# Command completion
source <(helm completion bash)
# or persist it:
echo "source <(helm completion bash)" >> ~/.bashrc
source ~/.bashrc

Step 10: Install and Deploy Dashboard

Official site: https://github.com/kubernetes/dashboard
# Install the dashboard

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard

# Check

kubectl get serviceAccount,svc,deploy,pod -n kubernetes-dashboard

# If a pod's image fails to finish pulling, delete the pod as follows

kubectl get pods -n kubernetes-dashboard
kubectl delete pod kubernetes-dashboard-kong-76dff7b666-hdmgd  -n kubernetes-dashboard

# If the previous command does not trigger a re-pull and reinstall, run the helm upgrade again:

helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard

# Access the dashboard
# List the services
kubectl get services -n kubernetes-dashboard
# Change the service type. There are several options; any k8s exposure technique can be used to open external access.

kubectl patch service kubernetes-dashboard-kong-proxy -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl get services -n kubernetes-dashboard

# Create a ServiceAccount and Secret

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: dashboard-admin
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: "dashboard-admin"
EOF

# Bind to the cluster-admin role

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

# View the token

kubectl describe secrets  dashboard-admin -n kube-system
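The token can also be extracted directly, which is handier for scripting:

kubectl -n kube-system get secret dashboard-admin -o jsonpath='{.data.token}' | base64 -d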

# Access the page at the exposed NodePort; the login screen asks for the token above

https://192.168.196.100:30788/#/workloads?namespace=default
