Getting Started with K8s: Building a Linux Server Cluster with One-Click Scripts

Preface

It's been a while since my last series post. This article is mainly a roundup of articles from around the web: I took the installation steps and code and ran them through a real-machine test, so everything here is personally verified to work. The scripts and code in this article mostly come from the internet, with some adjustments and configuration of my own; the references are listed at the end of the article for convenience.

The process is simple: just read and follow along, next-next-next style. This article doesn't explain how the scripts and code work under the hood; some important caveats are called out along the way, and parts of the scripts are commented, so you can consult those. I may do a video walkthrough when I get the chance.

Core Steps

Since this is a next-next-next walkthrough, this section is mainly the sequence of operations. The code and scripts each step needs can be found further down, e.g. Appendix Code 1, Appendix Code 2, and so on; the scripts are simply too long to inline into the steps.

1. Configure the node01 master node (2 files)

Copy the k8s script (Appendix Code 1: kubernetes_node01.sh) and the flannel network manifest (Appendix Code 2: kube-flannel.yml) into the /root directory;

Then grant the script execute permission: chmod +x kubernetes_node01.sh

Finally, run the script: ./kubernetes_node01.sh

PS:

1. The node IPs configured in the .sh script are internal (private network) addresses.

2. Make sure all the nodes can ping each other;

3. The script may need you to cooperate along the way, e.g. typing y to confirm a prompt.

When it finishes, you'll see a key.txt file in the current directory holding the installation output and credentials; see Appendix Code 3: key.txt for the result of my own install, which includes the statement for joining the master node.
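If all you want from that file is the join command, a quick grep pulls it out (a small convenience, assuming the default key.txt path used by the script):

grep -A 1 "kubeadm join" /root/key.txt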

View all the nodes and pods:

[root@node01 ~]# kubectl get nodes 
NAME     STATUS   ROLES    AGE   VERSION 
node01   Ready    master   26h   v1.18.0

All the pods:

[root@node01 ~]# kubectl get pods -A
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
kube-system            coredns-7ff77c879f-6m6fl                     1/1     Running   0          25m
kube-system            coredns-7ff77c879f-dkd56                     1/1     Running   0          25m
kube-system            etcd-node01                                  1/1     Running   0          26m
kube-system            kube-apiserver-node01                        1/1     Running   0          26m
kube-system            kube-controller-manager-node01               1/1     Running   0          26m
kube-system            kube-flannel-ds-amd64-sdv2h                  1/1     Running   0          25m
kube-system            kube-proxy-vgf4r                             1/1     Running   0          25m
kube-system            kube-scheduler-node01                        1/1     Running   0          26m

If everything is started and READY, the installation succeeded.
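If you'd rather not poll by hand, kubectl can block until the node reports Ready; this is plain kubectl, nothing specific to this setup:

kubectl wait --for=condition=Ready node/node01 --timeout=300s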

 

2. Configure the dashboard (2 files)

With kubectl, kubeadm and kubelet installed above, we can now connect with a client; let me plug a k8s client here: Lens, it's great.

If you don't want a desktop client, you'll need to install the dashboard instead.

1. Copy two files to the Linux root directory: Appendix Code 4: recommended.yaml (installs the dashboard) and Appendix Code 5: dashboard-svc-account.yaml (configures the admin account).

2. Run this command to expose the dashboard on NodePort 30001:

sed -i '/targetPort: 8443/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' recommended.yaml
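This sed inserts a fixed nodePort and switches the Service type; once it has run, the dashboard Service section of recommended.yaml should look roughly like this:

spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard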

3. Start the dashboard service:

kubectl apply -f recommended.yaml

4. Apply the admin account configuration:

kubectl apply -f dashboard-svc-account.yaml

 

Once both succeed, a token string is generated; it's the login token for the web UI. If you didn't copy it, or lost it, no problem: you can look it up with this command:

kubectl describe secrets -n kube-system `kubectl get secret -n kube-system | grep admin | awk '{print $1}'` | grep '^token'|awk '{print $2}'

The token looks something like this:

eyJhbGciOiJSUzI1NiIsImtpZCI6Ikl5SE00cXFZR1V2cWstQURVcGlUOGk4cTBY
ekZMV0VmNDEwRy14UTd1d2sifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2N
vdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrd
WJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5
hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tY3JnejYiLCJrdWJlcm5ldGVzLmlvL
3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWF
kbWliwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQ
udWlkIjoiMjYwMGQ0ZjctM2ZhOS00ODIwLWFmMmUtZTJlZDMxYWMyYWFhIiwic3ViI
joic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1
pbiJ9.BBtdG-S2kHEwRbWIAf6DiUgC3ILUOStPATyWfvxcQs5VJBtLRyMGqQ-AfkUo
VLuhZdUv-CGoEJ1OYA00M6MwoehDdkhLFbXF7Xx1IPyhFTHxZ_oXHBPyjEREkTEera
rZnvgt0ufU4g_Eqn91jdHet73itz-0abgmLMPkRl5YYjlh36Ivwq9IjKgujLwTNisU
FckLuHOscHtQIrjIvAZlWTRh_awMsDHvemAKG_YIjMbyQnXi6VfN3rTW869DA0XAGO
F2t7cWBtMmHvmLxVYqpOauUzwXXeYbO9eP0_d9JtVwKv6R0Q7sexRFZ-iTdZBOJDuj
FI3UT2jsqgVdbagA
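With the token in hand, open https://<node-ip>:30001 (the nodePort set above) and paste it on the login page; the certificate is self-signed, so a browser warning is expected. A quick reachability check from the shell (substitute your own node IP):

curl -k https://172.17.10.4:30001/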

 

Now let's double-check:

List all the pods:

[root@node01 ~]# kubectl get pods -A
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
kube-system            coredns-7ff77c879f-6m6fl                     1/1     Running   0          25m
kube-system            coredns-7ff77c879f-dkd56                     1/1     Running   0          25m
kube-system            etcd-node01                                  1/1     Running   0          26m
kube-system            kube-apiserver-node01                        1/1     Running   0          26m
kube-system            kube-controller-manager-node01               1/1     Running   0          26m
kube-system            kube-flannel-ds-amd64-sdv2h                  1/1     Running   0          25m
kube-system            kube-proxy-vgf4r                             1/1     Running   0          25m
kube-system            kube-scheduler-node01                        1/1     Running   0          26m
kubernetes-dashboard   dashboard-metrics-scraper-78f5d9f487-ldswx   1/1     Running   0          12m
kubernetes-dashboard   kubernetes-dashboard-577bd97bc-szvwt         1/1     Running   0          12m

Two new pods have appeared under the kubernetes-dashboard namespace.

 

 

3. Configure the node02 worker node (1 file)

If you don't have a spare server, you can also schedule your own pods on the master node; you just need to enable that by removing the master taint so it becomes schedulable:

sudo kubectl taint nodes --all node-role.kubernetes.io/master-
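To confirm the taint is gone (a quick check, assuming your master is named node01):

kubectl describe node node01 | grep Taints
# expected output: Taints: <none>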

 

If you want to configure more worker nodes, just write an .sh script modeled on the master's (Appendix Code 6: kubernetes_node02.sh); the steps are the same as for the master:

1. Copy it to the worker server;

2. Grant execute permission and run the file: ./kubernetes_node02.sh

3. The flannel manifest is not applied here;

4. After installation, join the node to the master; the join command is in the master's key.txt file, assuming your install succeeded (see the note right after the command below if the token has expired):

kubeadm join 172.17.10.4:6443 --token q3uu1o.4rdfkcyzxjhawvk1 \
    --discovery-token-ca-cert-hash sha256:a755d8f56733ba8f9d1951298b200202fce7b84389954bf7a38558fa6ce2a9c9
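Note that the bootstrap token in key.txt is only valid for 24 hours by default; if it has expired by the time you join, regenerate a fresh join command on the master:

kubeadm token create --print-join-command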

 

If everything is normal, go to the master node and view all the nodes:

NAME     STATUS   ROLES    AGE   VERSION
node01   Ready    master   26h   v1.18.0
node02   Ready    <none>   25h   v1.18.0

This shows our worker node is fully configured.

 

 

4. Configure the ASP.NET Core service

The Deployment + Service here is fairly simple, so I'll paste it as-is without much explanation.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: laozhang-op2
  name: laozhang-op2
spec:
  selector:
    matchLabels:
      app: laozhang-op2
  replicas: 2
  template:
    metadata:
      labels:
        app: laozhang-op2
    spec:
      containers:
        - name: laozhang-op2
          image: laozhangisphi/apkimg315
          imagePullPolicy: IfNotPresent # when to pull the image
---
apiVersion: v1
kind: Service
metadata:
  name: laozhang-op2
spec:
  type: NodePort
  ports:
    - name: default
      protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30099
  selector:
    app: laozhang-op2
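A quick smoke test after applying the manifests might look like this (laozhang-op2.yaml is a hypothetical file name for the two manifests above, and the IP is this article's master address; substitute your own):

kubectl apply -f laozhang-op2.yaml
kubectl get pods -l app=laozhang-op2   # both replicas should reach Running
curl http://172.17.10.4:30099/         # the nodePort exposed by the Service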

But this is the nodePort approach; day to day, Ingress is used far more often, and to use Ingress you first have to set up the ingress service.

 

5. Configure Ingress-nginx (1 file)

Copy the file to the root directory, Appendix Code 7: mandatory.yaml, which sets up the Ingress-Nginx service.

One thing to note: if nginx has already been configured on the server, you need to change the http-port in mandatory.yaml; see the commented lines in the code below for details.

Apply the yaml directly:

kubectl apply -f mandatory.yaml
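One more caveat: the controller Deployment in mandatory.yaml pins itself to nodes carrying the label Ingress: nginx (see the nodeSelector in Appendix Code 7), so the pod will sit in Pending until some node has that label. Assuming you want it on node01:

kubectl label node node01 Ingress=nginx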

 

If there were no errors, you can list all the pods:

[root@node01 ~]# kubectl get pods -A
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
default                laozhang-op2-5cf487b57f-pdvfg                1/1     Running   0          4h29m
default                laozhang-op2-5cf487b57f-vtgwc                1/1     Running   0          4h29m
ingress-nginx          nginx-ingress-controller-557475687f-rfl98    1/1     Running   0          122m
kube-system            coredns-7ff77c879f-gj4sl                     1/1     Running   0          26h
kube-system            coredns-7ff77c879f-mqp2q                     1/1     Running   0          26h
kube-system            etcd-node01                                  1/1     Running   0          26h
kube-system            kube-apiserver-node01                        1/1     Running   0          26h
kube-system            kube-controller-manager-node01               1/1     Running   0          26h
kube-system            kube-flannel-ds-amd64-nmnj2                  1/1     Running   0          26h
kube-system            kube-proxy-wcjb8                             1/1     Running   0          26h
kube-system            kube-scheduler-node01                        1/1     Running   2          26h
kubernetes-dashboard   dashboard-metrics-scraper-78f5d9f487-qp2fw   1/1     Running   0          26h
kubernetes-dashboard   kubernetes-dashboard-577bd97bc-2tsj7         1/1     Running   0          26h

If it matches the above, congratulations, the whole setup is done.
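To actually route traffic through the controller, you still need an Ingress resource pointing at the laozhang-op2 Service from section 4; the article stops short of that, so here is a minimal sketch (k8s.laozhang.example is a hypothetical host name, and extensions/v1beta1 is the Ingress apiVersion still current on this v1.18 cluster):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: laozhang-op2
  annotations:
    kubernetes.io/ingress.class: "nginx"  # make sure the nginx controller claims it
spec:
  rules:
    - host: k8s.laozhang.example          # hypothetical host; map it to a node IP in DNS or /etc/hosts
      http:
        paths:
          - path: /
            backend:
              serviceName: laozhang-op2   # the Service defined in section 4
              servicePort: 80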

 

 

Appendix Code 1: kubernetes_node01.sh

#!/bin/bash
##############
## master node ##
##############

#### Part 1: environment initialization ####
# k8s version
version=v1.18.0
kubelet=kubelet-1.18.0-0.x86_64
kubeadm=kubeadm-1.18.0-0.x86_64
kubectl=kubectl-1.18.0-0.x86_64
# file that will capture the cluster join command
key=/root/key.txt
# flannel network manifest
flannel=/root/kube-flannel.yml
# install required dependencies
yum -y install vim wget git cmake make gcc gcc-c++ net-tools lrzsz

#### Part 2: node configuration ####
# host resolution and passwordless SSH
# internal IPs; configure the extra nodes here, or skip and join them later
node01=172.21.10.4
#node02=192.168.10.7
#node03=192.168.1.30
hostnamectl set-hostname node01 
echo  '172.21.10.4 node01
#192.168.10.7 node02
#192.168.1.30 node03' >> /etc/hosts
ssh-keygen
ssh-copy-id  -i $node01
#ssh-copy-id  -i $node02
#ssh-copy-id  -i $node03
#scp /etc/hosts node02:/etc/hosts
#scp /etc/hosts node03:/etc/hosts
# disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# disable swap
swapoff -a 
sed -i 's/.*swap.*/#&/' /etc/fstab
# disable SELinux
setenforce  0 
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux 
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config 
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux 
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config  
# load kernel modules and sysctl settings for bridged traffic
modprobe br_netfilter
modprobe  ip_vs_rr
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0 
EOF
sysctl -p /etc/sysctl.d/k8s.conf
ls /proc/sys/net/bridge

#### Part 3: parameters and repos ####
# install the EPEL repo
yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim  ntpdate libseccomp libtool-ltdl 
# time synchronization
systemctl enable ntpdate.service
echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp
crontab /tmp/crontab2.tmp
systemctl start ntpdate.service
# raise system resource limits
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536"  >> /etc/security/limits.conf
echo "* hard nproc 65536"  >> /etc/security/limits.conf
echo "* soft  memlock  unlimited"  >> /etc/security/limits.conf
echo "* hard memlock  unlimited"  >> /etc/security/limits.conf
# add the Kubernetes yum repo (Aliyun mirror)
echo '[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg' > /etc/yum.repos.d/kubernetes.repo
# add the docker-ce repo
sudo yum-config-manager --add-repo https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
yum makecache fast

#### Part 4: installation ####
yum -y install docker-ce
yum install --enablerepo="kubernetes" $kubelet $kubeadm  $kubectl
systemctl enable kubelet.service && systemctl start kubelet.service
systemctl start docker.service &&  systemctl enable docker.service
# install bash tab completion for kubectl
yum -y  install bash-completion && source /usr/share/bash-completion/bash_completion && source <(kubectl completion bash) && echo "source <(kubectl completion bash)" >> ~/.bashrc
# create the cluster
kubeadm init --apiserver-advertise-address $node01 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version $version --pod-network-cidr=10.244.0.0/16 >> $key  2>&1
export KUBECONFIG=/etc/kubernetes/admin.conf
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
docker pull quay.io/coreos/flannel:v0.12.0-amd64
kubectl apply -f $flannel
echo "Check the join command in $key to add the other nodes to the cluster"

 

 

Appendix Code 2: kube-flannel.yml

##############
## flannel network ##
##############
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.12.0-amd64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.12.0-amd64
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.12.0-arm64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.12.0-arm64
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.12.0-arm
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.12.0-arm
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.12.0-ppc64le
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.12.0-ppc64le
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.12.0-s390x
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.12.0-s390x
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

 

Appendix Code 3: key.txt

W0526 16:17:20.680490   13760 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.17.10.4]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node01 localhost] and IPs [172.17.10.4 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node01 localhost] and IPs [172.17.10.4 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0526 16:18:02.560249   13760 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0526 16:18:02.561130   13760 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 26.504466 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: q3uu1o.4rdfkcyzxjhawvk1
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.17.10.4:6443 --token q3uu1o.4rdfkcyzxjhawvk1 \
    --discovery-token-ca-cert-hash sha256:a755d8f56733ba8f9d1951298b200202fce7b84389954bf7a38558fa6ce2a9c9


Appendix Code 4: recommended.yaml

 

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

##############
## install dashboard ##
##############

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.2.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

 

 

 


Appendix Code 5: dashboard-svc-account.yaml

##############
## dashboard admin account ##
##############

apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kube-system

 

Appendix Code 6: kubernetes_node02.sh

 

#!/bin/bash
##############
## worker node ##
##############

#### Part 1: environment initialization ####
# k8s version
version=v1.18.0
kubelet=kubelet-1.18.0-0.x86_64
kubeadm=kubeadm-1.18.0-0.x86_64
kubectl=kubectl-1.18.0-0.x86_64
# file that captures the cluster join command (lives on the master)
key=/root/key.txt
# flannel network manifest (the worker only pre-pulls the image)
flannel=/root/kube-flannel.yml
# install required dependencies
yum -y install vim wget git cmake make gcc gcc-c++ net-tools lrzsz

#### Part 2: node configuration ####
# node configuration: host resolution and passwordless SSH
node01=172.17.10.4
node02=172.17.10.7
# node03=192.168.1.30
hostnamectl set-hostname node02 
echo  '172.17.10.4 node01
172.17.10.7 node02' >> /etc/hosts
ssh-keygen
ssh-copy-id  -i $node01
ssh-copy-id  -i $node02
# ssh-copy-id  -i $node03
scp /etc/hosts node02:/etc/hosts
# scp /etc/hosts node03:/etc/hosts
# disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# disable swap
swapoff -a 
sed -i 's/.*swap.*/#&/' /etc/fstab
# disable SELinux
setenforce  0 
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux 
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config 
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux 
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config  
# load kernel modules and sysctl settings for bridged traffic
modprobe br_netfilter
modprobe  ip_vs_rr
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0 
EOF
sysctl -p /etc/sysctl.d/k8s.conf
ls /proc/sys/net/bridge

#### Part 3: parameters and repos ####
# install the EPEL repo
yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim  ntpdate libseccomp libtool-ltdl 
# time synchronization
systemctl enable ntpdate.service
echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp
crontab /tmp/crontab2.tmp
systemctl start ntpdate.service
# raise system resource limits
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536"  >> /etc/security/limits.conf
echo "* hard nproc 65536"  >> /etc/security/limits.conf
echo "* soft  memlock  unlimited"  >> /etc/security/limits.conf
echo "* hard memlock  unlimited"  >> /etc/security/limits.conf
# add the Kubernetes yum repo (Aliyun mirror)
echo '[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg' > /etc/yum.repos.d/kubernetes.repo
# add the docker-ce repo
sudo yum-config-manager --add-repo https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
yum makecache fast

#### Part 4: installation ####
yum -y install docker-ce
yum install --enablerepo="kubernetes" $kubelet $kubeadm  $kubectl
systemctl enable kubelet.service && systemctl start kubelet.service
systemctl start docker.service &&  systemctl enable docker.service
# install bash tab completion for kubectl
yum -y  install bash-completion && source /usr/share/bash-completion/bash_completion && source <(kubectl completion bash) && echo "source <(kubectl completion bash)" >> ~/.bashrc
# pre-pull the flannel image; the node itself joins the cluster via kubeadm join
docker pull quay.io/coreos/flannel:v0.12.0-amd64
echo "Check $key on the master node for the kubeadm join command to add this node to the cluster"

 

 


Appendix Code 7: mandatory.yaml

##############
## ingress-nginx service ##
##############
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        Ingress: nginx
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.29.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --http-port=8080  # if nginx is already installed on the master server, change this port, otherwise the ingress-nginx service will not start
            - --https-port=8443 # same as above
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---

apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
    - min:
        memory: 90Mi
        cpu: 100m
      type: Container

 

References:

https://blog.csdn.net/qq_37746855/article/details/116173976

https://blog.csdn.net/weixin_46152207/article/details/111355788

https://blog.csdn.net/catcher92/article/details/116207040

https://blog.51cto.com/u_14306186/2523096
