This deployment does not use the traditional manual PV/PVC approach for persistent data storage. Instead, it uses a StorageClass that invokes a provisioner, which automatically creates and binds a PV for each PVC a Pod requests, achieving persistent storage. You can still create PVs and PVCs by hand if your needs require it.
Install the NFS Service
NFS server IP (server side): 172.30.93.2
NFS client IP (client side): 172.30.93.3
Install NFS on the NFS Server
Host to operate on: 172.30.93.2
# 1. Install NFS and rpcbind
yum install -y nfs-utils rpcbind
# Verify the installation
rpm -qa | grep nfs
rpm -qa | grep rpcbind
# 2. Create the shared storage directory and open up its permissions
mkdir -p /nfs/k8s_data
chmod 777 /nfs/k8s_data/
# 3. Configure the NFS export; add the following line:
vim /etc/exports
# rw = read-write; sync = commit writes to disk before replying;
# no_root_squash / no_all_squash = do not map remote root/users to anonymous users
/nfs/k8s_data 172.30.93.0/24(rw,no_root_squash,no_all_squash,sync)
# 4. Start the services (rpcbind first, since nfs depends on it)
systemctl start rpcbind
systemctl start nfs
# Enable them at boot
systemctl enable rpcbind
systemctl enable nfs
# 5. Reload the export table so the configuration takes effect
exportfs -r
# 6. Check what is exported
showmount -e localhost
# Output like the following indicates everything is working
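With the export configured above, the output should look roughly like this (the exact list depends on your /etc/exports):

Export list for localhost:
/nfs/k8s_data 172.30.93.0/24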
Install NFS on the NFS Clients
Hosts to operate on: every host except the NFS server
yum -y install nfs-utils
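Optionally, verify from any client that the server's export is visible before moving on (the mount itself is handled by Kubernetes later):

# Run on a client; it should list /nfs/k8s_data exported by the server
showmount -e 172.30.93.2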
Create Persistent Volumes (PV/PVC)
When many volumes need to be created or managed, Kubernetes solves the problem with dynamic provisioning, which creates PVs automatically. An administrator deploys a PV provisioner and defines a matching StorageClass; a developer then simply selects that storage class when creating a PVC. The PVC passes the StorageClass to the PV provisioner, and the provisioner creates the PV automatically.
That is why a StorageClass is used here as the persistence solution.
1. Create a persistent volume
pv-nfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
  labels:
    pv: nfs
spec:
  capacity:
    storage: 2G          # capacity of 2G
  accessModes:           # allow multiple nodes to mount read-write; several modes are possible
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # reclaim policy
  nfs:                   # NFS server details
    server: 172.30.93.2
    path: /nfs/k8s_data
    readOnly: false
Set storage to the size you need.
Replace the server IP with your NFS server's IP, and set path to the directory exported above (/nfs/k8s_data here).
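As with the other manifests in this walkthrough, create the resource with kubectl and check it:

# Create the resource
kubectl apply -f pv-nfs.yaml
# The PV should appear with status Available until a claim binds it
kubectl get pv pv-nfs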
2. Create the ServiceAccount
rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
# Create the resources
kubectl apply -f rbac.yaml
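You can confirm the account and roles were created:

kubectl get serviceaccount nfs-client-provisioner
kubectl get clusterrole nfs-client-provisioner-runner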
3. Create the provisioner
The provisioner (also known as a supplier, provisioning program, or storage allocator) is defined in nfs-client-provisioner.yaml.
Change the IPs in the YAML to your own NFS server's address.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner   # the ServiceAccount created above
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME   # must match the provisioner field of the StorageClass below
              value: nfs-client-provisioner
            - name: NFS_SERVER         # IP address of the NFS server
              value: 172.30.93.2
            - name: NFS_PATH           # shared directory on the NFS server
              value: /nfs/k8s_data
      volumes:
        - name: nfs-client-root
          nfs:                         # NFS server IP and shared directory
            server: 172.30.93.2
            path: /nfs/k8s_data
# Create the resource
kubectl apply -f nfs-client-provisioner.yaml
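The provisioner runs as an ordinary Deployment; before creating the StorageClass, make sure its pod has started:

# The pod should reach the Running state
kubectl get pods -l app=nfs-client-provisioner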
4. Create the StorageClass
nfs-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storageclass
provisioner: nfs-client-provisioner   # must match PROVISIONER_NAME in the Deployment above
reclaimPolicy: Retain                 # reclaim policy; the default is Delete
parameters:
  archiveOnDelete: "false"            # "false" = do not archive the backing directory on delete
# Create the resource
kubectl apply -f nfs-storageclass.yaml
Once everything above is created, the provisioner pod should show a Running status and the new StorageClass should be listed, which you can check as below.
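A quick status check (the exact output will vary with your cluster):

kubectl get pods | grep nfs-client-provisioner
kubectl get storageclass nfs-storageclass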
5. Test that the StorageClass works
Create a PVC that references the StorageClass; if the PVC reaches the Bound state, dynamic provisioning is working.
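A minimal test sketch; the file and claim names (test-pvc.yaml, test-pvc) are placeholders, and the StorageClass name matches the one created above:

# test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc                         # hypothetical name, only for this test
spec:
  storageClassName: nfs-storageclass     # the StorageClass created above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

# Create the claim and watch its status; Bound means the provisioner
# automatically created a PV and bound it to the claim
kubectl apply -f test-pvc.yaml
kubectl get pvc test-pvc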