Deploying Kubernetes with kubeadm
master | 20.0.0.71 (4 cores, 8 GB) | docker, kubelet, kubectl, kubeadm, flannel
node1  | 20.0.0.73 (at least 2 cores, 4 GB) | docker, kubelet, kubectl, kubeadm, flannel
node2  | 20.0.0.74 (at least 2 cores, 4 GB) | docker, kubelet, kubectl, kubeadm, flannel
harbor | 20.0.0.72 (at least 2 cores, 4 GB) | docker, docker-compose, harbor
1. Environment preparation
(1) iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
(2) swapoff -a
(3) Load the ip_vs kernel modules (on all nodes except the harbor node)
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done
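An optional sanity check (not in the original notes) to confirm the modules actually loaded:
lsmod | grep -e ip_vs -e nf_conntrack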
(4) Tune kernel parameters (on all nodes except the harbor node): vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
sysctl --system
2. Configure time synchronization
yum install ntpdate -y
ntpdate ntp.aliyun.com
3. Install Docker
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://pkm63jfy.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
}
}
EOF
systemctl daemon-reload
systemctl restart docker.service
systemctl enable docker.service
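Optionally (not in the original notes), confirm Docker picked up the systemd cgroup driver and the registry mirror:
docker info | grep -i -e "cgroup driver" -e "registry mirrors"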
4. Configuration applied to both the master and node hosts
(1) Configure hostname mapping (a sketch follows)
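The original entry is empty; a minimal /etc/hosts sketch run on every host. The hostnames master01/node01/node02 are assumptions; hub.test.com is the registry name used in section 8:
cat >> /etc/hosts << EOF
20.0.0.71 master01
20.0.0.73 node01
20.0.0.74 node02
20.0.0.72 hub.test.com
EOF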
5. On all nodes: install kubeadm, kubelet, and kubectl
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.20.15 kubeadm-1.20.15 kubectl-1.20.15
systemctl enable kubelet.service
6. Configure the master node
(1) List the images needed for initialization: kubeadm config images list --kubernetes-version 1.20.15
pause | a special pod |
pause creates a network namespace on the node that the other containers of the pod join. The containers in a pod may be built from different code and architectures; sharing one network namespace lets them communicate and coordinates the resources in that namespace (this is what ties the pod's containers together).
All of the k8s components installed by kubeadm run as pods in the kube-system namespace.
kubelet: the node manager; it is the component that operates directly on the host system.
(2) Initialize the cluster with kubeadm
kubeadm init \
--apiserver-advertise-address=20.0.0.71 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.20.15 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16 \
--token-ttl=0
--apiserver-advertise-address | the address the master's apiserver advertises and listens on
--image-repository registry.aliyuncs.com/google_containers | the registry to pull the control-plane images from (here the Aliyun mirror)
--kubernetes-version=v1.20.15 | the Kubernetes version of the cluster
--service-cidr=10.96.0.0/16 | the CIDR for Service cluster IPs; all service proxy addresses come from 10.96.0.0/16
--pod-network-cidr=10.244.0.0/16 | the CIDR for pod IP addresses
--token-ttl=0 | the token lifetime; the default is 24 hours, 0 means the token never expires
① The join command printed by kubeadm init:
kubeadm join 20.0.0.71:6443 --token vb4lva.7gp3nbupnv0hk0pq \
--discovery-token-ca-cert-hash sha256:7ffb5e8fbc8914d1a5ac0fcf62bb6a9bdd1bd549230dfbcbeffbbfd1107ca65f
② Have the node hosts join the cluster (a sketch follows)
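The notes don't repeat the command here: each node runs the same kubeadm join that kubeadm init printed on the master (shown above), after which the master can list them:
kubeadm join 20.0.0.71:6443 --token vb4lva.7gp3nbupnv0hk0pq \
--discovery-token-ca-cert-hash sha256:7ffb5e8fbc8914d1a5ac0fcf62bb6a9bdd1bd549230dfbcbeffbbfd1107ca65f
kubectl get nodes    # on the master; nodes stay NotReady until flannel is deployed in section 7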
(4) Set up kubectl
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
systemctl restart kubelet
① Change the kube-proxy mode to ipvs:
kubectl edit cm kube-proxy -n=kube-system
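Inside the ConfigMap opened by kubectl edit, the relevant lines of config.conf look roughly like this (a hedged excerpt; only mode is changed):
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"    # empty string "" by default, which means iptables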
systemctl restart kubelet
kubectl get nodes
kubectl get cs
② Change the listening addresses:
vim /etc/kubernetes/manifests/kube-scheduler.yaml
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
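The exact edit isn't shown in the notes; a common change (hedged) is to comment out the --port=0 flag in the command: list of both manifests, which re-enables the local health port that kubectl get cs probes:
# under spec.containers[0].command in both files:
#    - --port=0    <- comment out or delete this line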
systemctl restart kubelet
7. Deploy the flannel network
(1) Configure the node (worker) hosts
docker load -i flannel.tar
mv /opt/cni /opt/cni_bak
mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin
(2) Configure the master node
kubectl apply -f kube-flannel.yml
docker load -i flannel.tar
mv /opt/cni /opt/cni_bak
mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin
kubectl get pods -n kube-system
(3) Extend the api-server certificate validity
① Check the current certificate validity
openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -text | grep Not
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep Not
② Extend the api-server certificate validity to 10 years
chmod 777 update-kubeadm-cert.sh
./update-kubeadm-cert.sh all
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep Not
kubectl get nodes
kubectl get pods -n kube-system
kubectl get cs
(4) Enable kubectl auto-completion (a sketch follows)
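The original entry is empty; a typical setup, assuming bash:
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc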
(5) Verify the cluster with a test service
① Create pods: kubectl create deployment nginx --image=nginx --replicas=3
② Configure a service (a sketch follows)
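The original entry is empty; one way to expose the deployment, as a sketch (the NodePort value is assigned by Kubernetes):
kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
kubectl get svc nginx    # note the assigned NodePort, e.g. 80:3xxxx/TCP
curl http://20.0.0.73:<NodePort>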
8. Set up the image registry (harbor)
(1) Install docker-compose and harbor
① docker-compose
cd /opt
mv docker-compose-linux-x86_64 docker-compose
cp docker-compose /usr/local/bin/
chmod +x /usr/local/bin/docker-compose
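Optionally verify the binary works:
docker-compose version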
② harbor
tar zxvf harbor-offline-installer-v2.8.1.tgz
cd harbor/
(2) Set up harbor with an HTTPS key pair
① Edit the configuration file harbor.yml (a hedged example follows)
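A hedged sketch of the fields that change in harbor.yml, assuming the certificate paths created in the next steps and the admin password used by the docker login later:
hostname: hub.test.com
https:
  port: 443
  certificate: /data/cert/server.crt
  private_key: /data/cert/server.key
harbor_admin_password: 123456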
② Generate the private key:
mkdir -p /data/cert
cd /data/cert
openssl genrsa -des3 -out server.key 2048
③ Generate the certificate signing request: openssl req -new -key server.key -out server.csr
④ Back up the key and strip its passphrase:
cp server.key server.key.old
openssl rsa -in server.key.old -out server.key
⑤ Generate the self-signed certificate
openssl x509 -req -days 1000 -in server.csr -signkey server.key -out server.crt
chmod +x /data/cert/*
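The notes omit the actual install step; with the offline installer it is typically run from the harbor directory once harbor.yml points at the certificates:
cd /opt/harbor
./install.sh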
⑥ Log in at: https://20.0.0.72
⑦ On the harbor node, copy the whole certificate directory to the node hosts
scp -r /data/ root@20.0.0.73:/
scp -r /data/ root@20.0.0.74:/
⑧ On the node hosts
mkdir -p /etc/docker/certs.d/hub.test.com/
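The notes create the directory but don't show the copy; a hedged step that trusts the self-signed certificate delivered by the scp above:
cp /data/cert/server.crt /etc/docker/certs.d/hub.test.com/ca.crt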
vim /lib/systemd/system/docker.service
Append to the ExecStart line: --insecure-registry=hub.test.com
systemctl daemon-reload
systemctl restart docker
⑨ Log in to the registry and push an image (using node1 as the example; run on node1)
Create a public project named k8s in the harbor web UI (to match the push path below), then:
docker login -u admin -p 123456 https://hub.test.com
docker tag nginx:latest hub.test.com/k8s/nginx:v1
docker push hub.test.com/k8s/nginx:v1
⑩ Test (create pods from the nginx image in the harbor registry)
kubectl create deployment nginx1 --image=hub.test.com/k8s/nginx:v1 --replicas=3
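To confirm the pods were created from the harbor image:
kubectl get pods -o wide | grep nginx1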
(3) Deploy the dashboard (configured on the master node)
①kubectl apply -f recommended.yaml
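For the NodePort URL used in ③ below, the kubernetes-dashboard Service in recommended.yaml is usually edited before running the apply above; a hedged sketch of the changed section:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort          # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001     # matches the URL in ③
  selector:
    k8s-app: kubernetes-dashboard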
② Create a service account and obtain its token
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
③ Access: https://20.0.0.73:30001