Table of Contents
- Installing k8s with kubeadm on the arm architecture
- Chapter 1: Installing k8s and middleware
- 1. Hostname resolution
- 2. Setting hostnames
- 3. Disable iptables and firewalld
- 4. Disable selinux (a Linux security service; it must be disabled)
- 5. Disable the swap partition (comment out the last line of fstab)
- 6. Adjust kernel parameters
- 7. Configure ipvs
- 8. Install docker
- 9. Install kubernetes 1.23.9
- 10. Cluster initialization
- 11. Install ingress
- 12. NAS file storage
- 13. mysql
- 14. nacos installation
- 15. redis installation
- 16. rabbitmq installation
- 17. rocketmq installation
- 18. pgsql console (pgadmin) web UI installation
- 19. mysql console (phpmyadmin) web UI installation
- 20. redis console (RedisInsight) web UI installation
- Chapter 2: Offline package download
- Chapter 3: Local YUM repository configuration
Installing k8s with kubeadm on the arm architecture
Chapter 1: Installing k8s and middleware
1. Hostname resolution
Add the following entries to /etc/hosts on every node:
10.129.148.4 hangkong-k8s-node01
10.129.148.5 hangkong-k8s-node02
10.129.148.6 hangkong-k8s-node03
10.129.148.4 hangkong-k8s.vip.com
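The entries above can also be applied with a small idempotent script — a sketch under the assumption that the node list matches the table above. It writes to a staging file (hosts.new) by default; set HOSTS_FILE=/etc/hosts to edit the real file in place.

```shell
#!/bin/sh
# Append each entry only if the hostname is not already present.
# HOSTS_FILE defaults to a staging copy; override to edit /etc/hosts directly.
HOSTS_FILE="${HOSTS_FILE:-hosts.new}"

add_host() {
  ip="$1"; name="$2"
  # match the hostname at end of line, preceded by a space
  grep -q " $name\$" "$HOSTS_FILE" 2>/dev/null || echo "$ip $name" >> "$HOSTS_FILE"
}

add_host 10.129.148.4 hangkong-k8s-node01
add_host 10.129.148.5 hangkong-k8s-node02
add_host 10.129.148.6 hangkong-k8s-node03
add_host 10.129.148.4 hangkong-k8s.vip.com
```

Running it a second time adds nothing, so it is safe to re-run on every node.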
2. Setting hostnames
Write the hostname on each node (one line per node):
echo 'hangkong-k8s-node01' > /etc/hostname
echo 'hangkong-k8s-node02' > /etc/hostname
echo 'hangkong-k8s-node03' > /etc/hostname
Apply it to the running session:
hostname hangkong-k8s-node01
hostname hangkong-k8s-node02
hostname hangkong-k8s-node03
3. Disable iptables and firewalld
systemctl stop firewalld
systemctl disable firewalld
systemctl stop iptables
systemctl disable iptables
4. Disable selinux (a Linux security service; it must be disabled)
vim /etc/selinux/config
SELINUX=disabled
setenforce 0   # takes effect immediately; the config edit persists across reboots
5. Disable the swap partition (comment out the last line)
vim /etc/fstab
UUID=455cc753-7a60-4c17-a424-7741728c44a1 /boot xfs defaults 0 0
/dev/mapper/centos-home /home xfs defaults 0 0
# /dev/mapper/centos-swap swap swap defaults 0 0   # comment out this line
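The same edit can be scripted instead of done by hand — a sketch that comments out any uncommented swap entry in an fstab-format file (kubelet refuses to start while swap is enabled, so the running system should also be switched off with `swapoff -a`):

```shell
#!/bin/sh
# Comment out uncommented swap entries in the given fstab file.
comment_swap() {
  sed -i 's/^\([^#].*[[:space:]]swap[[:space:]].*\)$/# \1/' "$1"
}

if [ -w /etc/fstab ]; then
  comment_swap /etc/fstab
fi
# Also disable swap for the running system:
# swapoff -a
```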
6. Adjust kernel parameters
Edit /etc/sysctl.conf and add:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Reload the configuration:
sysctl -p
Load the bridge netfilter module:
modprobe br_netfilter
Verify that the module loaded:
lsmod | grep br_netfilter
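A module loaded with modprobe does not survive a reboot. The usual way to persist both the module and the sysctls is via /etc/modules-load.d and /etc/sysctl.d (standard systemd locations; the file name k8s.conf is an arbitrary choice). A sketch:

```shell
#!/bin/sh
# Persist br_netfilter and the k8s sysctls so they survive a reboot.
persist_k8s_netconf() {
  modules_dir="$1"; sysctl_dir="$2"
  mkdir -p "$modules_dir" "$sysctl_dir"
  # load br_netfilter at boot
  echo br_netfilter > "$modules_dir/k8s.conf"
  # apply the bridge/forwarding sysctls at boot
  cat > "$sysctl_dir/k8s.conf" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
}

# On a real node, run as root:
# persist_k8s_netconf /etc/modules-load.d /etc/sysctl.d
```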
7. Configure ipvs
Install ipvsadm:
dnf install ipvsadm
Write the modules to load into a script file:
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
Make the script executable:
chmod +x /etc/sysconfig/modules/ipvs.modules
Run it:
/bin/bash /etc/sysconfig/modules/ipvs.modules
Verify that the modules loaded:
lsmod | grep -e ip_vs -e nf_conntrack
8. Install docker
Download the static binary package:
wget https://download.docker.com/linux/static/stable/aarch64/docker-20.10.19.tgz
Extract it:
tar -xzf docker-20.10.19.tgz
Move all extracted binaries to /usr/bin/:
mv docker/* /usr/bin/
Create the docker.service unit file:
vi /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
Set permissions on the unit file:
chmod +x /usr/lib/systemd/system/docker.service
systemctl daemon-reload
Create the daemon.json file:
mkdir /etc/docker
vim /etc/docker/daemon.json
{
  "live-restore": true,
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "graph": "/data/docker/graph",
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.8.73:18888", "uat-harbor.bigfintax.com"],
  "log-opts": {"max-size": "100m"},
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"]
}
Reload, start docker, and enable it at boot:
systemctl daemon-reload
systemctl start docker
systemctl enable docker
Verify the installation:
docker -v && docker info
9. Install kubernetes 1.23.9
Install the downloaded rpm packages:
[root@hangkong-k8s-node02 kubernetes]# pwd
/root/package/kubernetes
[root@hangkong-k8s-node02 kubernetes]#
[root@hangkong-k8s-node02 kubernetes]# ls -l
total 68408
-rw-r--r-- 1 root root 9014454 May 10 13:54 3f5ba2b53701ac9102ea7c7ab2ca6616a8cd5966591a77577585fde1c434ef74-cri-tools-1.26.0-0.x86_64.rpm
-rw-r--r-- 1 root root 9921370 May 10 13:54 49658d033fddfa48e1345c21498197642b376412bfa4ba72ce36eb3f360f81d7-kubectl-1.23.9-0.x86_64.rpm
-rw-r--r-- 1 root root 9476670 May 10 13:54 4f2cd27ecd6913e34408df70f465a104feb1fbe1f73c8d828ce5bd0ab9c37c3c-kubeadm-1.23.9-0.x86_64.rpm
-rw-r--r-- 1 root root 208824 May 10 13:53 conntrack-tools-1.4.4-10.el8.x86_64.rpm
-rw-r--r-- 1 root root 21510866 May 10 13:56 d3abccc1e93912e877085abf9e1daa3e2b3b2bb360df93eb6411510e81c9399c-kubelet-1.23.9-0.x86_64.rpm
-rw-r--r-- 1 root root 19487362 May 10 13:57 db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm
-rw-r--r-- 1 root root 24660 May 10 13:53 libnetfilter_cthelper-1.0.0-15.el8.x86_64.rpm
-rw-r--r-- 1 root root 24700 May 10 13:53 libnetfilter_cttimeout-1.0.0-11.el8.x86_64.rpm
-rw-r--r-- 1 root root 31976 May 10 13:53 libnetfilter_queue-1.0.4-3.el8.x86_64.rpm
-rw-r--r-- 1 root root 330692 May 10 13:53 socat-1.7.4.1-1.el8.x86_64.rpm
[root@hangkong-k8s-node02 kubernetes]#
[root@hangkong-k8s-node02 kubernetes]# yum localinstall *.rpm -y
[root@hangkong-k8s-node02 kubernetes]#
[root@hangkong-k8s-node02 kubernetes]# rpm -qa|grep kube
kubectl-1.23.9-0.x86_64
kubelet-1.23.9-0.x86_64
kubernetes-cni-0.8.7-0.x86_64
kubeadm-1.23.9-0.x86_64
10. Cluster initialization
kubeadm init --control-plane-endpoint hangkong-k8s.vip.com:6443 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16 --kubernetes-version=1.23.9 --upload-certs
Allow pods to be scheduled on the master:
kubectl taint node hangkong-k8s-node01 node-role.kubernetes.io/master-
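After kubeadm init succeeds, it prints instructions for setting up kubectl access; the usual steps (using kubeadm's default admin.conf path) can be sketched as:

```shell
#!/bin/sh
# Copy the admin kubeconfig so kubectl works for the current user.
setup_kubeconfig() {
  src="$1"; dest_dir="$2"
  mkdir -p "$dest_dir"
  cp "$src" "$dest_dir/config"
}

if [ -f /etc/kubernetes/admin.conf ]; then
  setup_kubeconfig /etc/kubernetes/admin.conf "$HOME/.kube"
fi
# Then verify the control plane registers:
# kubectl get nodes
```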
11. Install ingress
Create the ingress yaml (save it as ingress-deploy.yaml, which is applied below):
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: 'true'
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv4
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          #image: registry.baidubce.com/k8s.gcr.io/ingress-nginx/controller:v1.1.0
          image: aaa.big.com/ingress-nginx-arm/ingress-nginx-controller:v1.1.1
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              hostPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              hostPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: nginx
  namespace: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-4.0.15
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: aaa.big.com/ingress-nginx-arm/kube-webhook-certgen:v1.1.1
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-4.0.15
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: aaa.big.com/ingress-nginx-arm/kube-webhook-certgen:v1.1.1
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
# Change the image addresses to point at your own registry; here they point at our company harbor.
# The images can be found on dockerhub by searching for ingress-nginx-controller:v1.1.1;
# ours were pulled from upstream unmodified and pushed to harbor. Change both occurrences:
# - image: aaa.com/ingress-nginx-arm/ingress-nginx-controller:v1.1.1
# - image: aaa.bigfintax.com/ingress-nginx-arm/kube-webhook-certgen:v1.1.1
# Install ingress:
kubectl apply -f ingress-deploy.yaml
At this point the k8s installation is complete. The sections below cover the installation of our other middleware; skip them if you only need k8s.
12. NAS file storage
Install the nfs utilities:
dnf -y install \
  nfs-utils-2.5.1-5.ky10.x86_64 \
  nfs-utils-help-2.5.1-5.ky10.x86_64
Create the export directories:
mkdir /data/nfs/cge/
mkdir /data/nfs/cbest/
mkdir /data/nfs/package/
Edit /etc/exports:
vim /etc/exports
/data/nfs/cge/ *(insecure,rw,sync,no_root_squash,no_subtree_check)
/data/nfs/cbest/ *(insecure,rw,sync,no_root_squash,no_subtree_check)
/data/nfs/package *(insecure,rw,sync,no_root_squash,no_subtree_check)
Whenever /etc/exports is modified, re-export to apply the changes:
exportfs -ra
Enable and start rpcbind:
sudo systemctl enable rpcbind
sudo systemctl restart rpcbind
Enable and start the nfs server:
sudo systemctl enable nfs-server
sudo systemctl start nfs-server
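Since every export uses the same option string, the /etc/exports lines can be generated rather than typed — a sketch that writes to a staging file (exports.new) by default; set EXPORTS_FILE=/etc/exports to append directly, then run exportfs -ra:

```shell
#!/bin/sh
# Emit one /etc/exports line per directory, with the options used above.
gen_exports() {
  for dir in "$@"; do
    echo "$dir *(insecure,rw,sync,no_root_squash,no_subtree_check)"
  done
}

EXPORTS_FILE="${EXPORTS_FILE:-exports.new}"
gen_exports /data/nfs/cge/ /data/nfs/cbest/ /data/nfs/package >> "$EXPORTS_FILE"
# After updating /etc/exports, re-export:
# exportfs -ra
```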
13. mysql
# Change the image address:
- image: aaa.big.com/store-arm/mysql:5.7.43 # update the image tag
# Label the node:
kubectl label node zhongliang-k8s-node1 mysql=
# Create the mount directory and set its owner:
mkdir /data/mysql
chown -R 1001 /data/mysql/
# Edit 02-mysql-dep.yaml:
        volumeMounts:
          - name: mysql-data
            mountPath: /bitnami/mysql/data # data inside the pod is mounted here
          - name: localtime # added: mount the host time into the pod
            mountPath: /etc/localtime
            readOnly: true # added
      volumes:
        - hostPath:
            path: /data/mysql # persist to the host
            type: "DirectoryOrCreate"
          name: mysql-data
        - name: localtime # added: mount the host time into the pod
          hostPath:
            path: /etc/localtime
# After mounting /data/mysql, mysql may lack permission to write. Mount a temporary, more permissive directory first, check the owner of the created files, then set the same owner on /data/mysql: chown -R 1001 /data/mysql
kubectl apply -f 01-mysql-svc.yaml
kubectl apply -f 02-mysql-dep.yaml
14. nacos installation
# Change the image address:
- image: aaa.big.com/store-arm/nacos:1.4.2
# Get the mysql pod:
kubectl get pods -n store | grep mysql
# Import the sql into mysql:
bash 05-nacos-mysql-import.sh mysql-64846d7d58-f47pg
# Log in to the mysql pod and check that the data was imported:
kubectl exec -it MYSQLPOD -n store bash
mysql -uroot -pMhxzKhl@123 -e "show databases;"
# Label the node:
kubectl label node zhongliang-k8s-node3 nacos=
# Create the headless service and nodeport:
kubectl apply -f 01-nacos-cluster.yml
# Create the configmap:
kubectl apply -f 02-nacos-configmap.yaml
# Create the pods:
kubectl apply -f 03-nacos-deployment.yml
# Edit the ingress:
- host: "nacos.cofco.com" # change the domain
# Create the ingress:
kubectl apply -f 04-nacos-ingress.yaml
# Check that the logs look healthy:
kubectl logs -f nacos-0 -n store
15. redis installation
# 03-redis-master-sts.yaml: change the image address to
- image: aaa.big.com/store-arm/redis:4.0.14
# 07-create-redis-cluster.yaml: change the image address to
- image: aaa.big.com/store-arm/redis:6.0
# Label the nodes:
kubectl label node zhongliang-k8s-node1 redis-cluster=
kubectl label node zhongliang-k8s-node2 redis-cluster=
kubectl label node zhongliang-k8s-node3 redis-cluster=
# After deployment, log in to the redis6 pod and use its built-in redis-cli to join the other six pods into a cluster (replace the addresses below with the actual pod IPs).
# Look up the IPs of the six pods:
kubectl get pods -A -o wide | grep redis
redis-cli --cluster create 192.168.210.108:6379 192.168.210.250:6379 192.168.210.170:6379 192.168.210.108:7379 192.168.210.250:7379 192.168.210.170:7379 --cluster-replicas 1
# Log in to the redis6 pod to verify:
kubectl exec -it redis-6b4bbf7bd8-dv5zf -n store bash
redis-cli -h 192.168.210.250 -c   # connect to the redis cluster
cluster info    # show cluster information
cluster nodes   # show master and replica nodes
# Once everything checks out, delete the standalone redis6 pod:
kubectl delete -f 07-create-redis-cluster.yaml
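The long cluster-create argument list above is error-prone to type; it can be assembled from the three pod IPs instead. A sketch (the port pair 6379/7379 matches the layout above; the IPs are examples):

```shell
#!/bin/sh
# Build the redis-cli --cluster create argument list from node IPs:
# first all masters on 6379, then all replicas on 7379.
build_cluster_args() {
  masters=""; replicas=""
  for ip in "$@"; do
    masters="$masters $ip:6379"
    replicas="$replicas $ip:7379"
  done
  args="$masters$replicas --cluster-replicas 1"
  # strip the leading space before printing
  echo "${args# }"
}

# IPs come from: kubectl get pods -A -o wide | grep redis
build_cluster_args 192.168.210.108 192.168.210.250 192.168.210.170
```

The printed string can then be passed straight to `redis-cli --cluster create`.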
16. rabbitmq installation
# 03-ss.yaml: change the image address to
- image: aaa.big.com/store-arm/rabbitmq:3.8
# Label the nodes:
kubectl label node zhongliang-k8s-node1 rabbitmq=
kubectl label node zhongliang-k8s-node2 rabbitmq=
kubectl label node zhongliang-k8s-node3 rabbitmq=
# Create the rbac:
kubectl apply -f 00-rabc.yaml
# Create the config file:
kubectl apply -f 01-cm.yaml
# Create the svc:
kubectl apply -f 02-svc.yaml
# Create the statefulset:
kubectl apply -f 03-ss.yaml
# Edit the ingress:
- host: "rabbitmq.cofco.com" # change the domain
# Create the ingress:
kubectl apply -f 04-ingress.yaml
# Check the logs:
kubectl logs -f rabbitmq-0 -n store
kubectl logs -f rabbitmq-1 -n store
17. rocketmq installation
# 02-rocketmq-namesrv-prod.yaml: change the image address to
- image: aaa.big.com/store-arm/rocketmq-namesrv:4.5.1_centos8
# 04-rocketmq-broker-master-prod.yaml: change the image address to
- image: aaa.big.com/store-arm/rocketmq-broker:4.5.1_centos8
# Label the nodes:
kubectl label node zhongliang-k8s-node1 node-role.kubernetes.io/rocketmq="true"
kubectl label node zhongliang-k8s-node2 node-role.kubernetes.io/rocketmq-master="true"
# Create the namesrv pods:
kubectl apply -f 02-rocketmq-namesrv-prod.yaml
# Create the svc for service discovery:
kubectl apply -f 03-rocketmq-broker-master-svc.yaml
# Create the broker pods:
kubectl apply -f 04-rocketmq-broker-master-prod.yaml
# Create the console:
kubectl apply -f 07-rocketmq-console-ng-prod-ingress.yaml
# Edit the ingress:
- host: "rocketmq.cofco.com" # change the domain
# Create the ingress:
kubectl apply -f 07-rocketmq-console-ng-prod-ingress.yaml
# Apply these only when the broker needs a slave/backup node:
05-rocketmq-broker-slave-svc.yaml
06-rocketmq-broker-slave-prod.yaml
# Verify:
kubectl logs -f rocketmq-broker-master-0 -n store
kubectl logs -f namesrv-0 -n store
18. pgsql console (pgadmin) web UI installation
# 02-pgadmin-dep.yaml: change the image address to
- image: aaa.big.com/tool-arm/pgadmin4:8.6
# 03-pgadmin-ingress.yaml: change the domain
- host: "pgadmin.cofco.com" # example; replace the quoted value
# Create the svc, pod, and ingress with kubectl:
kubectl apply ...
# Verify: edit your local hosts file:
123.249.91.174 pgadmin.cofco.com # 123.249.91.174 is the public address; then open pgadmin.cofco.com in a browser to test
19. mysql console (phpmyadmin) web UI installation
# 02-phpmyadmin-dep.yaml: change the image address to
- image: aaa.big.com/tool-arm/phpmyadmin:latest
# 03-phpmyadmin-ingress.yaml: change the domain
- host: "phpadmin.cofco.com" # example; replace the quoted value
# Create the svc, pod, and ingress with kubectl:
kubectl apply ...
# Verify: edit your local hosts file:
123.249.91.174 phpadmin.cofco.com # 123.249.91.174 is the public address; then open phpadmin.cofco.com in a browser to test
20. redis console (RedisInsight) web UI installation
# 01-redis-sinsight-dep.yaml: change the image address to
- image: aaa.big.com/tool-arm/redisinsight:1.13.1
# 03-redis-sinsight-ingress.yaml: change the domain
- host: "redisinsight.cofco.com" # example; replace the quoted value
# Create the svc, pod, and ingress with kubectl:
kubectl apply ...
# Verify: edit your local hosts file:
123.249.91.174 redisinsight.cofco.com # 123.249.91.174 is the public address; then open redisinsight.cofco.com in a browser to test
Chapter 2: Offline package download
Use repotrack to download a given rpm package together with its full dependency tree.
First add kubernetes.repo:
vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-aarch64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
Verified on Kylin SP1. Download the packages:
dnf -y install kubeadm-1.23.9-0 kubernetes-cni-0.8.7 kubelet-1.23.9-0 kubectl-1.23.9-0 kubernetes-cni-0.8.7-0 --downloadonly --destdir=/root/package/kubernetes/
Install the packages:
cd /root/package/kubernetes/
yum localinstall *.rpm -y
Chapter 3: Local YUM repository configuration
Create a directory to hold your RPM packages:
mkdir /path/to/myrepo
Copy your RPM packages into it. Install the createrepo tool if it is not already installed:
yum install createrepo
Run createrepo to generate the repository metadata:
createrepo /path/to/myrepo/
Create a new repo file under /etc/yum.repos.d/:
vi /etc/yum.repos.d/myrepo.repo
Add the following to myrepo.repo:
[myrepo]
name=My Local Repository
baseurl=file:///path/to/myrepo/
enabled=1
gpgcheck=0
Then use yum to install, update, or search packages from the repository:
yum install package-name
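The repo file above follows a fixed template, so it can be generated with a small helper — a sketch that writes the file locally (move it into /etc/yum.repos.d/ afterwards; the repo name and path are the placeholders used above):

```shell
#!/bin/sh
# Write a yum .repo file for a local directory repository.
write_repo_file() {
  name="$1"; dir="$2"; out="$3"
  cat > "$out" <<EOF
[$name]
name=Local Repository ($name)
baseurl=file://$dir
enabled=1
gpgcheck=0
EOF
}

# Generate locally, then copy into /etc/yum.repos.d/:
write_repo_file myrepo /path/to/myrepo myrepo.repo
```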