Setting Up a Highly Available Kubernetes Cluster with kubeadm
Step 1: Set up the etcd cluster
yum install -y etcd
Configuration file: /etc/etcd/etcd.conf
ETCD_NAME=infra1
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://172.20.0.113:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.20.0.113:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.20.0.113:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://172.20.0.113:2379,http://127.0.0.1:2379"
# Cluster member list (https to match the peer URLs above)
ETCD_INITIAL_CLUSTER="infra1=https://172.20.0.113:2380,infra2=https://172.20.0.114:2380,infra3=https://172.20.0.115:2380"
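The other two members (infra2, infra3) need the same file with their own ETCD_NAME and IP, while ETCD_INITIAL_CLUSTER is identical on all three. A minimal sketch of building that shared string from the node list above (names and IPs are the ones from the example):

```shell
# Build the ETCD_INITIAL_CLUSTER value shared by all three members.
# Node names/IPs match the example configuration above.
nodes="infra1=172.20.0.113 infra2=172.20.0.114 infra3=172.20.0.115"
cluster=""
for n in $nodes; do
  name=${n%%=*}                      # part before '='
  ip=${n#*=}                         # part after '='
  entry="${name}=https://${ip}:2380"
  cluster="${cluster:+$cluster,}$entry"   # comma-join
done
echo "ETCD_INITIAL_CLUSTER=\"$cluster\""
```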
Step 2: Install keepalived as the virtual IP proxy
yum install -y keepalived
# Add the following (e.g. to /etc/sysctl.conf)
net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
# Apply the settings
$ sysctl -p
# Verify that the setting took effect
$ cat /proc/sys/net/ipv4/ip_forward
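Both keys can be checked in one pass by reading /proc directly; a small sketch (each line should report a value of 1 once sysctl -p has run):

```shell
# Read both kernel settings straight from /proc; each should be 1.
status=$(for key in ip_forward ip_nonlocal_bind; do
  f="/proc/sys/net/ipv4/$key"
  if [ -r "$f" ]; then
    echo "$key=$(cat "$f")"
  else
    echo "$key: not available on this host"
  fi
done)
echo "$status"
```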
Configuration file: /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {}
    router_id <cluster-difference-name>
}

vrrp_script check_haproxy {
    # Local health check: signal 0 probes whether haproxy is running
    script "killall -0 haproxy"
    interval 3
    weight 5
}

vrrp_instance haproxy-vip {
    # Use unicast instead of the default multicast
    unicast_src_ip 192.168.1.137
    unicast_peer {
        192.168.1.138
    }
    # Initial state: MASTER on this node, BACKUP on the peer
    state MASTER
    # NIC the virtual IP binds to (choose according to your own setup)
    interface eth0
    # Must be identical on the BACKUP node
    virtual_router_id 51
    # Must be higher than the BACKUP node's priority, but by a small
    # enough margin that the health-check weight can still demote it
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        # The virtual IP address
        192.168.1.139
    }
    track_script {
        check_haproxy
    }
}

virtual_server 192.168.1.139 80 {
    delay_loop 5
    lvs_sched wlc
    lvs_method NAT
    persistence_timeout 1800
    protocol TCP
    real_server 192.168.1.137 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }
}

virtual_server 192.168.1.139 443 {
    delay_loop 5
    lvs_sched wlc
    lvs_method NAT
    persistence_timeout 1800
    protocol TCP
    real_server 192.168.1.137 443 {
        weight 1
        TCP_CHECK {
            connect_port 443
            connect_timeout 3
        }
    }
}
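The health check relies on the fact that signal 0 only tests for process existence and permission; no signal is actually delivered, and the command exits 0 when the process is alive. A quick illustration of the same mechanism using `kill -0` against the current shell's PID:

```shell
# Signal 0 probes for process existence without affecting the target,
# which is exactly what keepalived's "killall -0 haproxy" check does.
is_alive() {
  if kill -0 "$1" 2>/dev/null; then
    echo "pid $1 alive"
  else
    echo "pid $1 gone"
  fi
}
is_alive $$   # the current shell is certainly running
```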
Step 3: Install Docker and prepare the required images
Stop the firewall:
systemctl stop firewalld
Disable SELinux:
setenforce 0
sed -i -e 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
Install Docker:
yum install -y docker
# Required images
etcd-amd64_v3.1.11
flannel:v0.9.1-amd64_1.14.7
k8s-dns-dnsmasq-nanny-amd64_1.14.7
k8s-dns-sidecar-amd64_1.14.7
kube-apiserver-amd64-v1.9.2
kube-controller-manager-amd64-v1.9.2
kube-proxy-amd64-v1.9.2
kube-scheduler-amd64-v1.9.2
pause-amd64_3.0
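When the nodes cannot reach k8s.gcr.io directly, these images are typically pulled once and retagged into a private registry. A dry-run sketch that only prints the commands (registry.example.com is a placeholder; the image names/tags follow the list above):

```shell
# Print (not run) the pull/tag/push commands for mirroring the images
# into a private registry. REGISTRY is a hypothetical placeholder.
REGISTRY="registry.example.com/k8s"
images="etcd-amd64:3.1.11 pause-amd64:3.0 \
kube-apiserver-amd64:v1.9.2 kube-controller-manager-amd64:v1.9.2 \
kube-proxy-amd64:v1.9.2 kube-scheduler-amd64:v1.9.2"
cmds=$(for img in $images; do
  echo "docker pull k8s.gcr.io/$img"
  echo "docker tag  k8s.gcr.io/$img $REGISTRY/$img"
  echo "docker push $REGISTRY/$img"
done)
echo "$cmds"
```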
Step 4: Configure kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 192.168.4.24
etcd:
  endpoints:
  - http://192.168.4.24:2379
  - http://192.168.4.25:2379
  - http://192.168.4.26:2379
imageRepository: k8s.gcr.io   # point this at your private registry, if any
networking:
  podSubnet: 10.1.0.0/16      # must match flanneld's network range
apiServerCertSANs:
- 192.168.4.24
- 192.168.4.25
- 192.168.4.26
- 192.168.4.27
- 192.168.4.40
apiServerExtraArgs:
  endpoint-reconciler-type: lease

Initialize the Kubernetes cluster:
kubeadm init --config kubeadm-config.yaml
Note: if flanneld is installed as a system service, you also need to create its network configuration in etcd:
etcdctl --endpoints=https://172.20.0.113:2379,https://172.20.0.114:2379,https://172.20.0.115:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mkdir /kube-centos/network
etcdctl --endpoints=https://172.20.0.113:2379,https://172.20.0.114:2379,https://172.20.0.115:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mk /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'
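It is worth sanity-checking that the JSON parses before writing it into etcd; a small local sketch using python3's stdlib json module (python3 is assumed to be available on the host):

```shell
# Validate the flannel network config JSON locally before storing it
# in etcd; a parse error here would fail loudly instead of silently.
config='{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'
parsed=$(echo "$config" | python3 -c \
  'import json,sys; c=json.load(sys.stdin); print(c["Network"], c["Backend"]["Type"])')
echo "$parsed"
```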
Error: NetworkPlugin cni failed to set up pod
Stop the cluster and delete the flannel state to avoid leftover network configuration:
rm -rf /var/lib/cni/flannel/* && rm -rf /var/lib/cni/networks/cbr0/* && ip link delete cni0
rm -rf /var/lib/cni/networks/cni0/*