Kubernetes Cluster Installation
Environment Preparation
192.168.1.53 k8s-master
192.168.1.52 k8s-node-1
192.168.1.51 k8s-node-2
Set the hostname on each of the three machines.
On the master:
[root@localhost ~]# hostnamectl --static set-hostname k8s-master
On node 1:
[root@localhost ~]# hostnamectl --static set-hostname k8s-node-1
On node 2:
[root@localhost ~]# hostnamectl --static set-hostname k8s-node-2
Add the hosts entries on all three machines by running:
echo '192.168.1.53 k8s-master
192.168.1.53 etcd
192.168.1.53 registry
192.168.1.52 k8s-node-1
192.168.1.51 k8s-node-2' >> /etc/hosts
cat /etc/hosts
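The plain append above duplicates the entries if it is ever re-run. A guarded loop keeps the update idempotent; this is a sketch that writes to a temp file so it can be dry-run (on the real machines HOSTS_FILE would be /etc/hosts):

```shell
# Sketch: append each hosts entry only if it is not already present.
# HOSTS_FILE points at a temp file for illustration; on the real machines
# it would be /etc/hosts.
HOSTS_FILE=$(mktemp)
while read -r entry; do
  grep -qxF "$entry" "$HOSTS_FILE" || printf '%s\n' "$entry" >> "$HOSTS_FILE"
done <<'EOF'
192.168.1.53 k8s-master
192.168.1.53 etcd
192.168.1.53 registry
192.168.1.52 k8s-node-1
192.168.1.51 k8s-node-2
EOF
cat "$HOSTS_FILE"
```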
Disable the firewall on all three machines:
systemctl disable firewalld.service
systemctl stop firewalld.service
Install the Required Tools
Run on all of k8s-master, k8s-node-1, and k8s-node-2:
yum install -y kubelet kubeadm kubectl kubernetes-cni
If the default repositories are unreachable, add mirror repositories.
# Docker yum repository
cat > /etc/yum.repos.d/docker.repo <<EOF
[docker-repo]
name=Docker Repository
baseurl=http://mirrors.aliyun.com/docker-engine/yum/repo/main/centos/7
enabled=1
gpgcheck=0
EOF
# Kubernetes yum repository
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
If name resolution fails, edit /etc/resolv.conf and append at the end:
nameserver 8.8.8.8
Then install again:
yum install -y kubelet kubeadm kubectl kubernetes-cni
# To remove all nodes: kubectl delete node --all
# To uninstall: sudo yum remove kubelet kubeadm kubectl kubernetes-cni
kubelet --version
kubeadm version
systemctl enable docker.service
systemctl enable kubelet.service
systemctl start kubelet
systemctl status kubelet   # check the status; the start fails at this stage, which is expected until kubeadm has generated the kubelet config
Disable SELinux so containers can read the host filesystem:
setenforce 0
To make this persistent, edit /etc/sysconfig/selinux and set:
SELINUX=disabled
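The same edit can be made non-interactively with sed. A sketch against a temp copy (on the real machines the target would be /etc/sysconfig/selinux):

```shell
# Sketch: flip SELINUX=... to disabled in place. CFG is a temp copy here;
# on the real machines it would be /etc/sysconfig/selinux.
CFG=$(mktemp)
echo 'SELINUX=enforcing' > "$CFG"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$CFG"
cat "$CFG"
```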
The kubelet's cgroup driver only needs to match the one Docker uses (systemd, as set in /usr/lib/systemd/system/docker.service); no change is needed here.
--exec-opt native.cgroupdriver=systemd
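To confirm the two drivers match, the value can be pulled out of the docker unit's ExecStart line. A sketch, with the ExecStart line inlined as a string standing in for /usr/lib/systemd/system/docker.service:

```shell
# Sketch: extract native.cgroupdriver=... from a docker.service ExecStart line.
unit='ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd'
driver=$(printf '%s\n' "$unit" | sed -n 's/.*native\.cgroupdriver=\([a-z]*\).*/\1/p')
echo "docker cgroup driver: $driver"   # must match the kubelet's --cgroup-driver
```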
Run on k8s-master:
kubeadm init --apiserver-advertise-address=192.168.1.53 --kubernetes-version=v1.11.3 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=swap
If it fails the bridge-nf-call-iptables preflight check, run:
echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
systemctl enable docker.service    # prints a warning; this is fine
systemctl enable kubelet.service   # prints a warning; this is fine
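The echo into /proc/sys does not survive a reboot; a sysctl drop-in makes it permanent. A sketch that writes into a temp directory so it can be dry-run (on the real machines SYSCTL_DIR would be /etc/sysctl.d):

```shell
# Sketch: persist the bridge netfilter sysctls as a drop-in file.
# SYSCTL_DIR is a temp dir here; on the real machines it would be /etc/sysctl.d.
SYSCTL_DIR=$(mktemp -d)
cat > "$SYSCTL_DIR/k8s.conf" <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
cat "$SYSCTL_DIR/k8s.conf"
# then: sysctl --system   # reloads every /etc/sysctl.d drop-in
```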
On k8s-master, edit the following (compare against the node version of this file further below; some of the settings here may be redundant):
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
#Environment="KUBELET_EXTRA_ARGS=--v=2 --fail-swap-on=false --pod-infra-container-image=k8s.gcr.io/pause-amd64:3.1"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
#EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
After editing, reload and restart:
systemctl daemon-reload
systemctl start kubelet
systemctl status kubelet
journalctl -xefu kubelet   # view the logs
Add a registry mirror for image downloads:
cat /etc/docker/daemon.json
{
  "registry-mirrors": ["http://68e02ab9.m.daocloud.io"]
}
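A malformed daemon.json stops the Docker daemon from starting, so it is worth syntax-checking the file before the restart. A sketch against a temp copy, assuming python3 is available (on a stock CentOS 7 box substitute python):

```shell
# Sketch: validate a daemon.json before restarting docker.
# f is a temp copy here; on the real machines it would be /etc/docker/daemon.json.
f=$(mktemp)
cat > "$f" <<'EOF'
{"registry-mirrors": ["http://68e02ab9.m.daocloud.io"]}
EOF
python3 -m json.tool < "$f" > /dev/null && echo "daemon.json: valid JSON"
```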
systemctl restart docker
https://hub.docker.com/r/warrior/
Pull the images first. (Use the image versions in the next section instead; the versions here do not match Kubernetes v1.11.3.)
docker pull warrior/pause-amd64:3.0
docker tag warrior/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
docker pull warrior/etcd-amd64:3.0.17
docker tag warrior/etcd-amd64:3.0.17 gcr.io/google_containers/etcd-amd64:3.0.17
docker pull warrior/kube-apiserver-amd64:v1.6.0
docker tag warrior/kube-apiserver-amd64:v1.6.0 gcr.io/google_containers/kube-apiserver-amd64:v1.6.0
docker pull warrior/kube-scheduler-amd64:v1.6.0
docker tag warrior/kube-scheduler-amd64:v1.6.0 gcr.io/google_containers/kube-scheduler-amd64:v1.6.0
docker pull warrior/kube-controller-manager-amd64:v1.6.0
docker tag warrior/kube-controller-manager-amd64:v1.6.0 gcr.io/google_containers/kube-controller-manager-amd64:v1.6.0
docker pull warrior/kube-proxy-amd64:v1.6.0
docker tag warrior/kube-proxy-amd64:v1.6.0 gcr.io/google_containers/kube-proxy-amd64:v1.6.0
docker pull gysan/dnsmasq-metrics-amd64:1.0
docker tag gysan/dnsmasq-metrics-amd64:1.0 gcr.io/google_containers/dnsmasq-metrics-amd64:1.0
docker pull warrior/k8s-dns-kube-dns-amd64:1.14.1
docker tag warrior/k8s-dns-kube-dns-amd64:1.14.1 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
docker pull warrior/k8s-dns-dnsmasq-nanny-amd64:1.14.1
docker tag warrior/k8s-dns-dnsmasq-nanny-amd64:1.14.1 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
docker pull warrior/k8s-dns-sidecar-amd64:1.14.1
docker tag warrior/k8s-dns-sidecar-amd64:1.14.1 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1
docker pull awa305/kube-discovery-amd64:1.0
docker tag awa305/kube-discovery-amd64:1.0 gcr.io/google_containers/kube-discovery-amd64:1.0
docker pull gysan/exechealthz-amd64:1.2
docker tag gysan/exechealthz-amd64:1.2 gcr.io/google_containers/exechealthz-amd64:1.2
To list the images a given version needs:
kubeadm config images list --kubernetes-version=v1.11.3
Then run the init again:
kubeadm init --apiserver-advertise-address=192.168.1.53 --kubernetes-version=v1.11.3 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all
Pull the following images on all three machines:
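The pull/tag pairs below all follow the same mirror-to-k8s.gcr.io mapping, so they can equally be driven by a loop. A sketch covering the core images; DRY_RUN defaults to on, replacing docker with echo, so the mapping can be inspected without a Docker daemon (unset DRY_RUN on the real hosts):

```shell
# Sketch: pull each image from the mirror and retag it under k8s.gcr.io.
# With DRY_RUN set (the default here), "echo" stands in for "docker".
DRY_RUN=${DRY_RUN:-1}
run=${DRY_RUN:+echo}
mirror=mirrorgooglecontainers
for img in kube-apiserver-amd64:v1.11.3 kube-controller-manager-amd64:v1.11.3 \
           kube-scheduler-amd64:v1.11.3 kube-proxy-amd64:v1.11.3 \
           etcd-amd64:3.2.18 pause:3.1; do
  ${run:-docker} pull "$mirror/$img"
  ${run:-docker} tag "$mirror/$img" "k8s.gcr.io/$img"
done
```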
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.11.3
docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.11.3 k8s.gcr.io/kube-apiserver-amd64:v1.11.3
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.3
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.3 k8s.gcr.io/kube-controller-manager-amd64:v1.11.3
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.11.3
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.11.3 k8s.gcr.io/kube-scheduler-amd64:v1.11.3
docker pull mirrorgooglecontainers/etcd-amd64:3.2.18
docker tag mirrorgooglecontainers/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.3
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.11.3 k8s.gcr.io/kube-proxy-amd64:v1.11.3
docker pull mirrorgooglecontainers/pause:3.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0
docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker pull coredns/coredns:1.1.3
docker tag coredns/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
docker pull mirrorgooglecontainers/k8s-dns-sidecar-amd64:1.14.11
docker tag mirrorgooglecontainers/k8s-dns-sidecar-amd64:1.14.11 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.11
docker pull mirrorgooglecontainers/k8s-dns-kube-dns-amd64:1.14.11
docker tag mirrorgooglecontainers/k8s-dns-kube-dns-amd64:1.14.11 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.11
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
When output like the following appears, kubeadm init has succeeded:
kubeadm join 192.168.1.53:6443 --token e1d3u3.ilw4fb5cpt51xjf0 --discovery-token-ca-cert-hash sha256:72d07cb010102ae7f1733753a1ac07d0a402125f3326a41056174c69de6fe228
After the init succeeds, run the following commands as prompted:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run on k8s-node-1 and k8s-node-2:
echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptableskubeadm join 192.168.1.53:6443 --token e1d3u3.ilw4fb5cpt51xjf0 --discovery-token-ca-cert-hash sha256:72d07cb010102ae7f1733753a1ac07d0a402125f3326a41056174c69de6fe228 --ignore-preflight-errors=swap因为测试主机上还运行其他服务,关闭swap可能会对其他服务产生影响,所以这里修改kubelet的启动参数 --fail-swap-on=false 去掉这个限制
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
#EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_DNS_ARGS $KUBELET_CGROUP_ARGS $KUBELET_EXTRA_ARGS
After editing, reload:
systemctl daemon-reload
systemctl start kubelet   # a failed start here is fine; the join command below starts it and generates the config files
kubeadm join 192.168.1.53:6443 --token e1d3u3.ilw4fb5cpt51xjf0 --discovery-token-ca-cert-hash sha256:72d07cb010102ae7f1733753a1ac07d0a402125f3326a41056174c69de6fe228 --ignore-preflight-errors=swap
Then check the status again; this time the kubelet is running:
systemctl status kubelet
journalctl -xefu kubelet   # view the logs
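Join tokens expire after 24 hours by default, so a saved join line can stop working; on the master, `kubeadm token create --print-join-command` prints a fresh one. For scripting the node side, the token can also be parsed out of a saved join line; a sketch:

```shell
# Sketch: extract the --token value from a saved "kubeadm join" line
# (the line captured from the kubeadm init output above).
join_cmd='kubeadm join 192.168.1.53:6443 --token e1d3u3.ilw4fb5cpt51xjf0 --discovery-token-ca-cert-hash sha256:72d07cb010102ae7f1733753a1ac07d0a402125f3326a41056174c69de6fe228'
token=$(printf '%s\n' "$join_cmd" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
echo "token: $token"
```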
Run on the master.
Install flannel (reference: https://blog.csdn.net/zhuchuangang/article/details/76572157/).
kubectl get nodes   # the nodes show NotReady because the pod network is not yet up
mkdir /docker
cd /docker/
yum -y install wget
kubectl --namespace kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel-rbac.yml
wget https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel.yml
cat kube-flannel.yml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.8.0-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
        image: quay.io/coreos/flannel:v0.8.0-amd64
        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ]
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
Apply it:
kubectl --namespace kube-system apply -f ./kube-flannel.yml
kubectl get cs
kubectl get nodes   # after a short while the nodes show Ready
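The "wait a short while" step can be scripted as a poll. A sketch where get_nodes is a stub standing in for `kubectl get nodes --no-headers`, so the loop logic itself is runnable without a cluster:

```shell
# Sketch: poll until every node's STATUS column reads Ready.
get_nodes() {   # stub for: kubectl get nodes --no-headers
  cat <<'EOF'
k8s-master   Ready   master   5m   v1.11.3
k8s-node-1   Ready   <none>   2m   v1.11.3
k8s-node-2   Ready   <none>   2m   v1.11.3
EOF
}
# Succeeds only when no node has a status other than Ready.
all_ready() { ! get_nodes | awk '{print $2}' | grep -qv '^Ready$'; }
until all_ready; do sleep 5; done
echo "all nodes Ready"
```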
Reference:
https://blog.csdn.net/zhuchuangang/article/details/76572157/