Deploying a Highly Available Kubernetes v1.27.2 Cluster from Binaries

Table of Contents

  • Environment
  • Software Versions
  • Server OS Initialization
  • etcd Signing Certificates
  • etcd Cluster Deployment
  • Load Balancer Installation
  • Kubernetes Self-Signed Certificates
    • Self-signed CA
    • kube-apiserver certificate
    • kube-controller-manager certificate
    • kube-scheduler certificate
    • kube-proxy certificate
    • admin certificate
  • Control Plane Component Deployment
    • **Deploy kube-apiserver**
    • **Deploy kube-controller-manager**
    • **Deploy kube-scheduler**
    • **Check Cluster Status**
  • Worker Node Component Deployment
    • Container runtime installation
    • Deploy kubelet
    • Deploy kube-proxy
  • Calico Network Plugin Deployment
  • CoreDNS Deployment
  • Dashboard Deployment
  • Managing the Cluster with Rancher
  • metrics-server Deployment
  • Ingress Deployment
  • helm, kubens, crictl, ctr Tools
  • NFS StorageClass Dynamic PV Storage
  • Loki Log Collection Deployment
  • Prometheus Deployment
  • Argo CD Deployment
  • FAQ

Environment

Note: this lab uses five hosts. The three master nodes (os128, os129, os130) also serve as workers and use containerd as their container runtime; worker131 and worker132 use Docker with cri-dockerd.

| Hostname  | IP              | Components                                                                                     | OS         |
| --------- | --------------- | ---------------------------------------------------------------------------------------------- | ---------- |
| os128     | 192.168.177.128 | etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, containerd | CentOS 7.9 |
| os129     | 192.168.177.129 | etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, containerd | CentOS 7.9 |
| os130     | 192.168.177.130 | etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, containerd | CentOS 7.9 |
| worker131 | 192.168.177.131 | haproxy, keepalived, kubelet, kube-proxy, docker, cri-dockerd                                  | CentOS 7.9 |
| worker132 | 192.168.177.132 | haproxy, keepalived, kubelet, kube-proxy, docker, cri-dockerd                                  | CentOS 7.9 |
| VIP       | 192.168.177.127 |                                                                                                |            |

Software Versions

Software version details:

| Software | Version | Download URL | Notes |
| --- | --- | --- | --- |
| CentOS | 7.9.2009 | https://mirrors.aliyun.com/centos/7.9.2009/isos/x86_64/ | CentOS-7-x86_64-Minimal-2009.iso |
| kernel | 3.10.0-1160.105.1.el7.x86_64 | (system default) | |
| kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy | v1.27.2 | https://dl.k8s.io/v1.27.2/kubernetes-server-linux-amd64.tar.gz | |
| etcd | v3.5.5 | https://github.com/etcd-io/etcd/releases/download/v3.5.5/etcd-v3.5.5-linux-amd64.tar.gz | |
| cfssl | v1.6.1 | https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64 | |
| cfssljson | v1.6.1 | https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64 | |
| cfssl-certinfo | v1.6.1 | https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64 | |
| containerd | v1.6.6 | https://github.com/containerd/containerd/releases/download/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz | |
| runc | v1.1.11 | https://github.com/opencontainers/runc/releases/download/v1.1.11/runc.amd64 | the runc bundled with containerd is broken and must be replaced |
| docker | 20.10.24 | https://download.docker.com/linux/static/stable/x86_64/docker-20.10.24.tgz | |
| cri-dockerd | 0.3.6 | https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.6/cri-dockerd-0.3.6.amd64.tgz | |
| crictl | v1.29.0 | https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.29.0/crictl-v1.29.0-linux-amd64.tar.gz | must be installed separately when docker is the runtime; the containerd bundle ships it |
| haproxy | 1.5 | default system yum repo | |
| keepalived | 1.3.5 | default system yum repo | |
| calico | v3.25.0 | https://docs.tigera.io/archive/v3.25/manifests/calico.yaml | |
| coredns | v1.11.1 | https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/coredns.yaml.base | |
| dashboard | v2.7 | https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml | |
| metrics-server | 0.6.1 | https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml | |

Server OS Initialization

# Install dependency packages
yum -y install epel-release
yum -y update
yum -y install wget jq psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl bash-completion lrzsz sysstat openssh-clients
# Disable the firewall and SELinux, and tune sshd
systemctl stop firewalld
systemctl disable firewalld
yum install iptables* -y
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
sed -i '/^#UseDNS/s/#UseDNS yes/UseDNS no/g' /etc/ssh/sshd_config
sed -i 's/#PermitEmptyPasswords no/PermitEmptyPasswords no/g' /etc/ssh/sshd_config
sed -i 's/^GSSAPIAuthentication yes/GSSAPIAuthentication no/g' /etc/ssh/sshd_config
systemctl restart sshd
# Disable swap
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0
# Raise the file-handle limits
ulimit -SHn 655350
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
cat >> /etc/security/limits.d/20-nproc.conf << EOF
*  soft    nproc     unlimited
*  hard    nproc     unlimited
EOF
# Install the ipvs management tools and load the modules
yum -y install ipvsadm ipset sysstat conntrack libseccomp
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
# Make the file executable, run it, and check that the modules loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
# Kernel tuning for Kubernetes (k8s.conf)
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 131072
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
# Apply the settings
sysctl --system
# Load br_netfilter
modprobe br_netfilter
# Verify it is loaded
lsmod | grep br_netfilter

etcd Signing Certificates

  • Prepare the signing tools cfssl, cfssljson, and cfssl-certinfo (one host is enough; all certificate work here is done on os128)
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64
mv cfssl_1.6.1_linux_amd64 /usr/bin/cfssl
mv cfssljson_1.6.1_linux_amd64 /usr/bin/cfssljson
mv cfssl-certinfo_1.6.1_linux_amd64 /usr/bin/cfssl-certinfo
chmod +x /usr/bin/cfssl*
  • Self-sign the etcd CA
mkdir -p ~/TLS/{etcd,k8s}
cd ~/TLS/etcd
# Self-signed CA:
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
  "CA": {
    "expiry": "87600h"
  },
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
# Generate the CA certificate:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# This produces ca.pem and ca-key.pem.
  • Issue the etcd HTTPS server certificate with the self-signed CA

# Create the certificate signing request:
cd ~/TLS/etcd
cat > server-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.177.128",
    "192.168.177.129",
    "192.168.177.130"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF
# Note: the hosts field must contain the internal IP of every etcd node -- none can be missing!
# To simplify future expansion, add a few spare IPs as well.
# Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
# This produces server.pem and server-key.pem.

etcd Cluster Deployment

  • What is etcd?
    etcd is a distributed key-value store that Kubernetes uses as its backing datastore, so it must be prepared first. To avoid a single point of failure, deploy it as a cluster: three members tolerate one failure; five members tolerate two.
  • The following is done on os128; to keep things simple, the generated files will later be copied to os129 and os130.
# Download the etcd release
wget https://github.com/etcd-io/etcd/releases/download/v3.5.5/etcd-v3.5.5-linux-amd64.tar.gz
mkdir -pv /opt/etcd/{bin,cfg,ssl}
tar zxvf etcd-v3.5.5-linux-amd64.tar.gz
mv etcd-v3.5.5-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
  • Prepare the etcd configuration files
# etcd configuration on os128
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.177.128:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.177.128:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.177.128:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.177.128:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.177.128:2380,etcd-2=https://192.168.177.129:2380,etcd-3=https://192.168.177.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
---
ETCD_NAME: member name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
ETCD_ADVERTISE_CLIENT_URLS: advertised client address
ETCD_INITIAL_CLUSTER: addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: join state; "new" for a new cluster, "existing" to join an existing one
---
# Manage etcd with systemd
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • Install the etcd cluster
# Copy the certificates generated earlier into the path referenced by the config:
cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/
# Sync everything to the other hosts
scp -r /opt/etcd/ root@192.168.177.129:/opt/
scp -r /opt/etcd/ root@192.168.177.130:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.177.129:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.177.130:/usr/lib/systemd/system/
# etcd configuration on os129
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.177.129:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.177.129:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.177.129:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.177.129:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.177.128:2380,etcd-2=https://192.168.177.129:2380,etcd-3=https://192.168.177.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
# etcd configuration on os130
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.177.130:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.177.130:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.177.130:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.177.130:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.177.128:2380,etcd-2=https://192.168.177.129:2380,etcd-3=https://192.168.177.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
  • Start etcd and enable it at boot
Start etcd on all three nodes (the first member blocks until a second one joins, so start them in quick succession):
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
  • Verify the cluster with etcdctl
 ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.177.128:2379,https://192.168.177.129:2379,https://192.168.177.130:2379" endpoint health --write-out=table
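
Besides the health check, the endpoint status subcommand shows which member is currently the leader (same TLS flags as above):

 ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.177.128:2379,https://192.168.177.129:2379,https://192.168.177.130:2379" endpoint status --write-out=table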


Load Balancer Installation

Run the following on worker131 and worker132.

  • Install haproxy and keepalived
 yum install haproxy keepalived -y
  • haproxy configuration
cat > /etc/haproxy/haproxy.cfg <<EOF
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     6000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
defaults
    mode                    tcp
    log                     global
    option                  tcplog
    option                  dontlognull
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
listen stats
    bind 0.0.0.0:9100
    mode  http
    option httplog
    stats uri /status
    stats refresh 30s
    stats realm "Haproxy Manager"
    stats auth admin:password
    stats hide-version
    stats admin if TRUE
#---------------------------------------------------------------------
frontend  k8s-master-default-nodepool-apiserver
    bind *:6443
    mode tcp
    default_backend             k8s-master-default-nodepool
#---------------------------------------------------------------------
backend k8s-master-default-nodepool
    balance     roundrobin
    mode tcp
    server  k8s-apiserver-1 192.168.177.128:6443 check weight 1 maxconn 2000 check inter 2000 rise 2 fall 3
    server  k8s-apiserver-2 192.168.177.129:6443 check weight 1 maxconn 2000 check inter 2000 rise 2 fall 3
    server  k8s-apiserver-3 192.168.177.130:6443 check weight 1 maxconn 2000 check inter 2000 rise 2 fall 3
EOF
  • keepalived configuration
    • worker131 configuration

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    # Non-preemptive VIP mode
    nopreempt
    # Unicast
    unicast_src_ip 192.168.177.131
    unicast_peer {
        192.168.177.132
    }
    virtual_router_id 51
    # Priority 100, higher than the peer's 99
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        # The planned virtual IP
        192.168.177.127
    }
    # Track the haproxy health-check script on worker131
    track_script {
        # Name of the script defined in vrrp_script above
        check_haproxy
    }
}
EOF

    • worker132 configuration

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    nopreempt
    unicast_src_ip 192.168.177.132
    unicast_peer {
        192.168.177.131
    }
    virtual_router_id 51
    priority 99
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.177.127
    }
    # Track the haproxy health-check script on worker132
    track_script {
        check_haproxy
    }
}
EOF
  • Health-check script (note the quoted heredoc delimiter, so $(...) and $err are written literally instead of being expanded)
cat > /etc/keepalived/check_haproxy.sh <<'EOF'
#!/bin/bash
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done
if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_haproxy.sh
  • Enable at boot and verify the HA VIP
systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived
# Check service status
systemctl status keepalived haproxy
# Check that the virtual IP is configured
ip address show

To test failover, manually stop the haproxy service on worker131 to simulate a failure. The keepalived tracking script detects this and stops keepalived on worker131, so the VIP drifts to worker132 almost without packet loss (only a brief network flutter). When the services on worker131 recover, the VIP is not taken back, because non-preemptive mode is configured.
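
A minimal sketch of that failover test (run each command on the host named in the comment):

# worker131: simulate a haproxy failure
systemctl stop haproxy
# worker132: the VIP should appear within a few seconds
ip address show ens33 | grep 192.168.177.127
# worker131: restore the services afterwards; with nopreempt the VIP stays on worker132
systemctl start haproxy && systemctl start keepalived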

Kubernetes Self-Signed Certificates

  • Self-signed CA

# Create the CA for the Kubernetes components
cd ~/TLS/k8s
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
  "CA": {
    "expiry": "87600h"
  },
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# Generate the CA:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# This produces ca.pem and ca-key.pem.
  • kube-apiserver certificate

# Create the certificate signing request:
cat > server-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.177.127",
    "192.168.177.128",
    "192.168.177.129",
    "192.168.177.130",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# Note: the hosts field must contain every master/LB/VIP IP -- none can be missing! Add spare IPs for future expansion.
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
# This produces server.pem and server-key.pem.
  • kube-controller-manager certificate

# Create the certificate signing request
cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
  • kube-scheduler certificate

# Create the certificate signing request
cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
  • kube-proxy certificate

# Create the certificate signing request
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
  • admin certificate

# Generate the certificate kubectl uses to connect to the cluster:
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

At this point /root/TLS/k8s contains all the generated files.
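As a quick check, cfssl emits a .csr, .pem, and -key.pem for every request, so the listing should look roughly like this:

ls ~/TLS/k8s
# admin.csr  admin-csr.json  admin-key.pem  admin.pem
# ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
# kube-controller-manager.csr  kube-controller-manager-csr.json  kube-controller-manager-key.pem  kube-controller-manager.pem
# kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem
# kube-scheduler.csr  kube-scheduler-csr.json  kube-scheduler-key.pem  kube-scheduler.pem
# server.csr  server-csr.json  server-key.pem  server.pem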

Control Plane Component Deployment

  • Preparation (on os128)
# Deploy Kubernetes v1.27.2
# Download the server binaries
wget https://dl.k8s.io/v1.27.2/kubernetes-server-linux-amd64.tar.gz
# Unpack the binaries
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar -zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager kubelet kube-proxy /opt/kubernetes/bin
cp kubectl /usr/bin/
cp kubectl /usr/local/bin/
# Copy the certificates
cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/
  • Deploy kube-apiserver

  • Create the kube-apiserver configuration file
# Create the kube-apiserver configuration file
cat > /opt/kubernetes/cfg/kube-apiserver.conf <<EOF
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--v=2 \\
--etcd-servers=https://192.168.177.128:2379,https://192.168.177.129:2379,https://192.168.177.130:2379 \\
--bind-address=192.168.177.128 \\
--secure-port=6443 \\
--advertise-address=192.168.177.128 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--service-account-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--proxy-client-cert-file=/opt/kubernetes/ssl/server.pem \\
--proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem \\
--requestheader-allowed-names=kubernetes \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF
  • Enable TLS bootstrapping
Once the apiserver enables TLS authentication, the kubelet and kube-proxy on every node must present valid CA-signed client certificates to communicate with it. Issuing those client certificates by hand becomes a lot of work as the node count grows, and it complicates scaling the cluster. To simplify this, Kubernetes provides TLS bootstrapping: the kubelet uses a low-privilege bootstrap identity to request its certificate from the apiserver automatically, and the apiserver signs it dynamically. This approach is strongly recommended on worker nodes. Currently it is used only for the kubelet; kube-proxy still uses a centrally issued certificate.
  • Create the token file
cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
# Format: token,username,UID,group
# Generate your own token with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
  • Manage kube-apiserver with systemd
# Manage the apiserver with systemd
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
  • Distribute the following paths to the other master hosts
 /opt/kubernetes/bin /opt/kubernetes/ssl /opt/kubernetes/cfg /usr/lib/systemd/system/kube-apiserver.service

In each host's /opt/kubernetes/cfg/kube-apiserver.conf, change the IPs to that host's own address.

  • Start and enable at boot
systemctl daemon-reload
systemctl start kube-apiserver 
systemctl enable kube-apiserver
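
A quick sanity check against one apiserver, reusing the admin client certificate generated earlier (a healthy server answers with "ok"):

curl --cacert /opt/kubernetes/ssl/ca.pem \
  --cert ~/TLS/k8s/admin.pem \
  --key ~/TLS/k8s/admin-key.pem \
  https://192.168.177.128:6443/healthz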
  • Deploy kube-controller-manager

  • Create the configuration file
# Create the configuration file
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS=" \\
--v=2 \\
--leader-elect=true \\
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--cluster-signing-duration=87600h0m0s"
EOF
# --kubeconfig: kubeconfig used to connect to the apiserver
# --leader-elect: enable automatic leader election when multiple replicas run (HA)
# --cluster-signing-cert-file / --cluster-signing-key-file: the CA that auto-issues kubelet certificates; must match the apiserver's CA

Note: --bind-address must listen on 127.0.0.1.

  • Generate kube-controller-manager.kubeconfig
KUBE_CONFIG="/opt/kubernetes/cfg/kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://192.168.177.127:6443"
cd  ~/TLS/k8s
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-controller-manager \
  --client-certificate=./kube-controller-manager.pem \
  --client-key=./kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-controller-manager \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
  • Manage controller-manager with systemd
# Manage controller-manager with systemd
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
  • Distribute the following files to the other master hosts
/opt/kubernetes/bin/kube-controller-manager
/usr/lib/systemd/system/kube-controller-manager.service 
/opt/kubernetes/cfg/kube-controller-manager.conf  
/opt/kubernetes/cfg/kube-controller-manager.kubeconfig 
  • Start and enable at boot
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
  • Deploy kube-scheduler

  • Create the configuration file
cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS=" \\
--v=2 \\
--leader-elect \\
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \\
--bind-address=127.0.0.1"
EOF
# --kubeconfig: kubeconfig used to connect to the apiserver
# --leader-elect: enable automatic leader election when multiple replicas run (HA)

Note: --bind-address must listen on 127.0.0.1.

  • Generate kube-scheduler.kubeconfig
cd ~/TLS/k8s
KUBE_CONFIG="/opt/kubernetes/cfg/kube-scheduler.kubeconfig"
KUBE_APISERVER="https://192.168.177.127:6443"
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-scheduler \
  --client-certificate=./kube-scheduler.pem \
  --client-key=./kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-scheduler \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
  • Manage kube-scheduler with systemd
# Manage the scheduler with systemd
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
  • Distribute the following files to the other master hosts
/opt/kubernetes/bin/kube-scheduler
/usr/lib/systemd/system/kube-scheduler.service 
/opt/kubernetes/cfg/kube-scheduler.conf  
/opt/kubernetes/cfg/kube-scheduler.kubeconfig 
  • Start and enable at boot
# Start and enable at boot
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
  • Check Cluster Status

  • Generate the kubeconfig used to administer the cluster

# Generate the kubeconfig used by kubectl:
cd ~/TLS/k8s
mkdir /root/.kube
KUBE_CONFIG="/root/.kube/config"
KUBE_APISERVER="https://192.168.177.127:6443"
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials cluster-admin \
  --client-certificate=./admin.pem \
  --client-key=./admin-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
  • Check cluster state with kubectl
# Cluster info
kubectl cluster-info
# Component status
kubectl get cs

The coredns entry in the component output can be ignored for now; CoreDNS is deployed later.

  • Authorize the kubelet-bootstrap user to request certificates
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

Worker Node Component Deployment

  • Container runtime installation

    • Install Docker (worker131, worker132)
# Static binaries: https://download.docker.com/linux/static/stable/x86_64/
wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.24.tgz
# Unpack
tar xvf docker-20.10.24.tgz
# Copy the binaries
cp docker/* /usr/bin/
# Create the containerd service file and start it
cat >/etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=1048576
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
EOF
systemctl enable --now containerd.service
# Prepare the docker service file
cat > /etc/systemd/system/docker.service <<EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service
[Service]
Type=notify
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
EOF
# Prepare the docker socket file
cat > /etc/systemd/system/docker.socket <<EOF
[Unit]
Description=Docker Socket for the API
[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF
# Create the docker group
groupadd docker
# Start docker
systemctl enable --now docker.socket  && systemctl enable --now docker.service
# Verify
docker info
# Configure the daemon: systemd cgroup driver (to match the kubelet), registry mirrors, and log rotation
mkdir -p /etc/docker
cat >/etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://docker.mirrors.ustc.edu.cn",
    "http://hub-mirror.c.163.com"
  ],
  "max-concurrent-downloads": 10,
  "log-driver": "json-file",
  "log-level": "warn",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "data-root": "/var/lib/docker"
}
EOF
systemctl restart docker
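
After the restart, confirm that Docker picked up the systemd cgroup driver; it must match the kubelet's --cgroup-driver=systemd configured later:

docker info | grep -i 'cgroup driver'
# Cgroup Driver: systemd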
  • Install cri-dockerd (worker131, worker132)
 Kubernetes 1.24 and later no longer supports Docker directly (the dockershim was removed), so cri-dockerd is installed as the CRI shim.
# Download cri-dockerd
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.6/cri-dockerd-0.3.6.amd64.tgz
# Unpack
tar -zxvf cri-dockerd-0.3.6.amd64.tgz
cp cri-dockerd/cri-dockerd /usr/bin/
chmod +x /usr/bin/cri-dockerd
# Write the service unit
cat > /usr/lib/systemd/system/cri-docker.service <<EOF
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
# Write the socket unit
cat > /usr/lib/systemd/system/cri-docker.socket <<EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF
# Start cri-docker
systemctl daemon-reload ; systemctl enable cri-docker --now
  • Install containerd (os128, os129, os130)
wget  https://github.com/containerd/containerd/releases/download/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz
tar  xvf cri-containerd-cni-1.6.6-linux-amd64.tar.gz  -C /
# Kernel modules required by containerd
cat > /etc/modules-load.d/containerd.conf << EOF
overlay
br_netfilter
EOF
# Load the modules
systemctl restart systemd-modules-load.service
# Generate the default config and point the pause image at a domestic mirror
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
sed  -i  's/\(sandbox_image\) =.*/\1 = "registry.aliyuncs.com\/google_containers\/pause:3.9"/g'  /etc/containerd/config.toml
systemctl daemon-reload
systemctl enable --now containerd
systemctl status containerd
# Check that the containerd-related modules are loaded:
lsmod | egrep 'br_netfilter|overlay'
  • Install runc (os128, os129, os130)
    The runc bundled with this containerd build fails at runtime with: runc: symbol lookup error: runc: undefined symbol, so replace it:
wget https://github.com/opencontainers/runc/releases/download/v1.1.11/runc.amd64
mv runc.amd64 /usr/local/bin/runc
chmod +x /usr/local/bin/runc
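
A quick check that the replacement binary runs (the bundled one failed with the symbol lookup error above):

runc --version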
  • Deploy kubelet

  • Preparation

# Create the working directories on every worker node:
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs,manifests}
  • Create the configuration file
cat  > /opt/kubernetes/cfg/kubelet.conf <<EOF
KUBELET_OPTS=" \\
--v=2 \\
--hostname-override=$(hostname) \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--runtime-request-timeout=15m  \\
--container-runtime-endpoint=unix:///run/cri-dockerd.sock \\
--cgroup-driver=systemd \\
--node-labels=node.kubernetes.io/node='Linux'"
EOF
# --container-runtime-endpoint depends on the runtime in use:
#   docker (cri-dockerd): unix:///run/cri-dockerd.sock
#   containerd:           unix:///run/containerd/containerd.sock
  • Generate the kubelet-config.yml parameter file (the filename must match the --config flag above)
cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /opt/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF
  • Generate bootstrap.kubeconfig for the kubelet's initial cluster join
# Generate the kubelet bootstrap kubeconfig
KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
KUBE_APISERVER="https://192.168.177.127:6443"
# Must match the token in token.csv
TOKEN="c47ffb939f5ca36231d9e3121a252940"
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
  • Manage kubelet with systemd
# Manage kubelet with systemd
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
# Keep docker.service when the runtime is docker; change to containerd.service on containerd nodes
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • Start and enable at boot
# Start and enable at boot
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
  • Approve the kubelet certificate request and join the node to the cluster

# List pending kubelet certificate requests
[root@os128 system]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           REQUESTEDDURATION   CONDITION
node-csr-wgtllX256bvfMUN-ym0_JW4X0kigCvfDDUTysVAmlrQ   14s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending
# Approve the request
kubectl certificate approve node-csr-wgtllX256bvfMUN-ym0_JW4X0kigCvfDDUTysVAmlrQ
# List nodes
kubectl get node
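
When several nodes bootstrap at once, approving requests one by one is tedious. A convenience one-liner that approves every pending CSR (use only when all pending requests are expected):

kubectl get csr -o name | xargs kubectl certificate approve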
  • Install kubelet on the remaining worker nodes
# Sync the following files from a master node, adjust them for the target host, then start kubelet:
/opt/kubernetes/cfg/kubelet.conf # check hostname-override (must be unique in the cluster) and container-runtime-endpoint (depends on which runtime the node uses)
/usr/lib/systemd/system/kubelet.service # the After= value depends on which runtime the node uses
/opt/kubernetes/cfg/kubelet-config.yml # no changes needed
/opt/kubernetes/cfg/bootstrap.kubeconfig # no changes needed
/opt/kubernetes/ssl/ca.pem # no changes needed
/opt/kubernetes/bin/kubelet # no changes needed
# Note: do not reuse the master's kubelet.kubeconfig or its kubelet client certificates; they are generated per node during TLS bootstrap, so remove them on the new node if they were copied over.
Start kubelet, enable it at boot, and approve the node's certificate request as shown above to join it to the cluster.
  • Verify that all nodes have joined
    kubectl get node
  • Deploy kube-proxy

  • Generate the configuration file
cat > /opt/kubernetes/cfg/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 10.244.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: $(hostname)
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF
  • Generate kube-proxy.kubeconfig
cd ~/TLS/k8s
KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://192.168.177.127:6443"
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
  • Manage kube-proxy with systemd
# Manage kube-proxy with systemd; ExecStart points directly at the kube-proxy.yaml generated above
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-proxy --config=/opt/kubernetes/cfg/kube-proxy.yaml
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • Start and enable at boot
# Start and enable at boot
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
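
Because kube-proxy runs in ipvs mode, the virtual-server table should start filling once services exist; a quick check:

ipvsadm -Ln
# e.g. a TCP 10.0.0.1:443 rr entry with the three apiserver endpoints as real servers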
  • Install kube-proxy on the remaining worker nodes
# Sync the following files from a master node
/opt/kubernetes/bin/kube-proxy
/usr/lib/systemd/system/kube-proxy.service
/opt/kubernetes/cfg/kube-proxy.kubeconfig
/opt/kubernetes/cfg/kube-proxy.yaml # confirm hostnameOverride matches the local hostname
Start the service and enable it at boot.

Calico Network Plugin Deployment

  • Download Calico
wget https://docs.tigera.io/archive/v3.25/manifests/calico.yaml
  • Change the default pod CIDR
# In calico.yaml, change the pod network to the CIDR passed earlier as --cluster-cidr=10.244.0.0/16.
# Open the file in vim and search for 192; the block ships commented out:
#             # no effect. This should fall within `--cluster-cidr`.
#             # - name: CALICO_IPV4POOL_CIDR
#             #   value: "192.168.1.0/16"
# Remove the leading "# " from the two env lines and change 192.168.1.0/16 to 10.244.0.0/16, so the block reads:
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
  • Deploy Calico
    kubectl apply -f calico.yaml
  • Verify Calico
    kubectl get pods -n kube-system
  • Authorize the apiserver to access the kubelet
# Needed for operations such as kubectl logs
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
kubectl apply -f apiserver-to-kubelet-rbac.yaml

CoreDNS Deployment

  • Prepare coredns.yml from the upstream template at https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/coredns.yaml.base, with its placeholders filled in: cluster domain cluster.local, DNS service IP 10.0.0.2 (matching the kubelet's clusterDNS), and a 170Mi memory limit
cat > coredns.yml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: registry.k8s.io/coredns/coredns:v1.11.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF
  • Deploy CoreDNS
    kubectl apply -f coredns.yml
  • Check the CoreDNS deployment
    kubectl get pod -n kube-system | grep coredns
    In production, tune the CoreDNS resource requests/limits and add an HPA.
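
A quick in-cluster resolution test; busybox:1.28 is used because its nslookup is known to work reliably:

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default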

Dashboard Deployment

  • Deploy the dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
# Change the Service to NodePort
vim recommended.yaml
----
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
----
kubectl apply -f recommended.yaml
# Check the dashboard pods and service
kubectl get pods,svc -n kubernetes-dashboard
  • Create a service account bound to the cluster-admin role
# Create a service account and bind it to the built-in cluster-admin ClusterRole:
cat > dashadmin.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
kubectl apply -f dashadmin.yaml
# Create a login token; the generated token is used to log in to the dashboard
kubectl -n kubernetes-dashboard create token admin-user
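
Tokens from kubectl create token expire after a short period. If a long-lived login token is preferred, one option (the Secret name here is arbitrary) is a service-account token Secret:

cat > admin-user-token.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: admin-user-token
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: admin-user
type: kubernetes.io/service-account-token
EOF
kubectl apply -f admin-user-token.yaml
# Read the token back out:
kubectl -n kubernetes-dashboard get secret admin-user-token -o jsonpath='{.data.token}' | base64 -d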
  • Verify the login at https://192.168.177.128:30001, using the token generated above or a kubeconfig file.

Managing the Cluster with Rancher

The cluster can also be managed with Rancher instead of the dashboard; its UI offers friendlier account and per-project permissions and more features.

  • A quick Docker-based Rancher deployment
    For production, deploy Rancher inside a Kubernetes cluster and expose it through an Ingress instead.
    docker run -d --restart=always --privileged=true -p 443:443 -v /data/rancher:/var/lib/rancher/ --name rancher-server -e CATTLE_SYSTEM_CATALOG=bundled rancher/rancher:stable

  • Import the binary-deployed cluster by following the step-by-step wizard in the Rancher web UI.

metrics-server Deployment

  • Deploy metrics-server
# Download
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml
# Point the image at a domestic mirror
sed -i 's/\(image:\).*/\1 registry.aliyuncs.com\/google_containers\/metrics-server:v0.6.1/g' components.yaml
# Deploy
kubectl apply -f components.yaml
# Verify: if kubectl top returns data, the deployment is working
kubectl top node
kubectl top pod -A

Ingress Deployment

  • Deploy ingress-nginx
# Download
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.0/deploy/static/provider/baremetal/deploy.yaml -O ingress-nginx-deploy.yaml
# Check the image references
grep "image:" ingress-nginx-deploy.yaml
# image: registry.k8s.io/ingress-nginx/controller:v1.8.0@sha256:744ae2afd433a395eeb13dc03d3313facba92e96ad71d9feaafc85925493fee3
# image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b
# image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b
# Swap in mirror images
sed -i '/controller/s/\(image:\).*/\1 registry.cn-hangzhou.aliyuncs.com\/google_containers\/nginx-ingress-controller:v1.8.0/' ingress-nginx-deploy.yaml
sed -i '/kube-webhook-certgen/s/\(image:\).*/\1 registry.cn-hangzhou.aliyuncs.com\/google_containers\/kube-webhook-certgen:v20230407/' ingress-nginx-deploy.yaml
# Deploy ingress-nginx
kubectl apply -f ingress-nginx-deploy.yaml
# Check the ingress-nginx resources
kubectl get all -n ingress-nginx
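
A minimal smoke test for the controller; the deployment name and host below are illustrative:

kubectl create deployment web-demo --image=nginx
kubectl expose deployment web-demo --port=80
kubectl create ingress web-demo --class=nginx --rule="demo.example.com/*=web-demo:80"
# Look up the controller's HTTP NodePort, then curl any node with a Host header:
kubectl get svc -n ingress-nginx ingress-nginx-controller
curl -H "Host: demo.example.com" http://192.168.177.128:<nodePort>/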


helm, kubens, crictl, ctr Tools

  • helm
    Helm is a package manager for Kubernetes applications. It lets you define, install, and upgrade applications packaged as predefined bundles called "charts"; each chart is a set of files describing Kubernetes resources such as Deployments, Services, and ConfigMaps.
# Download and install (the tarball unpacks into linux-amd64/)
wget https://get.helm.sh/helm-v3.14.0-linux-amd64.tar.gz
tar xvf helm-v3.14.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin
chmod +x /usr/local/bin/helm
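
A typical chart workflow; the bitnami repository and nginx chart are only examples:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo bitnami/nginx
helm install my-nginx bitnami/nginx
helm list -A
helm uninstall my-nginx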
  • kubens
    kubens is a command-line tool for quickly switching Kubernetes namespaces. It is part of the kubectx project, which manages Kubernetes contexts and namespaces.
# Download
wget https://github.com/ahmetb/kubectx/releases/download/v0.9.5/kubens_v0.9.5_linux_x86_64.tar.gz
# Unpack
tar xvf kubens_v0.9.5_linux_x86_64.tar.gz
mv kubens /usr/local/bin
chmod +x /usr/local/bin/kubens
# kubens usage
kubens: list all namespaces in the current context
kubens <namespace>: switch to the given namespace
kubens -: switch back to the previous namespace
kubens -c: show the current namespace
  • crictl
    crictl is a command-line client for CRI-compatible container runtimes; its default configuration file is /etc/crictl.yaml (a sample config follows the command list below).
# Download
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.29.0/crictl-v1.29.0-linux-amd64.tar.gz
tar xvf crictl-v1.29.0-linux-amd64.tar.gz
mv crictl /usr/local/bin
chmod +x /usr/local/bin/crictl
# crictl usage
crictl version: show version information
crictl pods: list the pods on this host
crictl images: list images known to the runtime
crictl ps: list running containers
crictl create: create a container
crictl start: start a created container
crictl stop: stop a running container
crictl rm: remove a container
crictl logs: show a container's logs
crictl inspect: show details of a container or image
crictl pull: pull an image from a registry
crictl rmi: remove an image
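
A sample /etc/crictl.yaml pointing crictl at containerd; on the docker nodes, use unix:///run/cri-dockerd.sock for both endpoints instead:

cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF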
  • ctr
    ctr is containerd's own command-line tool for managing containers, images, and other resources. Every containerd resource belongs to a namespace; the default namespace is "default" (Kubernetes uses k8s.io).
# List namespaces (default namespace: default)
ctr namespaces list
# List containers/tasks/images in the k8s.io namespace
ctr -n k8s.io containers list
ctr -n k8s.io tasks list
ctr -n k8s.io images list

NFS StorageClass Dynamic PV Storage

  • See my earlier blog post on this topic.

Loki Log Collection Deployment

  • To be completed.

Prometheus Deployment

  • To be completed.

Argo CD Deployment

  • To be completed.

FAQ

  • kubelet fails to start with:
    validate CRI v1 runtime API for endpoint "unix:///run/cri-dockerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService
    Cause and fix: cri-dockerd v0.2.6 has this problem; upgrading to cri-dockerd v0.3.6 resolves it.
  • calico-node on one node fails with:
    ERROR][1] cni-installer/<nil> <nil>: Unable to create token for CNI kubeconfig error=Post "https://10.0.0.1:443/api/v1/namespaces/kube-system/serviceaccounts/calico-node/token": dial tcp 10.0.0.1:443: connect: connection refused
    Cause and fix: kube-proxy had not been started on that node; starting the kube-proxy service resolves it.
