1. Introduction
NFS (Network File System) is a protocol that lets different computers share files over the network.
2. Deployment
2.1. Configure the NFS Server
# install nfs-utils on every node
yum install -y nfs-utils
Then pick one machine to act as the NFS server:
# we will use /root/kubernetes/data/nfs as the NFS shared directory
mkdir -p /root/kubernetes/data/nfs
echo "/root/kubernetes/data/nfs *(insecure,rw,no_root_squash)" > /etc/exports
systemctl enable rpcbind --now
systemctl enable nfs-server --now
exportfs -r
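A note on the export options used above: `rw` allows writes, `no_root_squash` keeps root's UID unmapped (the provisioner creates subdirectories as root), and `insecure` accepts client source ports above 1024. As a minimal sketch (assuming the same exports entry as in this guide), you can sanity-check the line before reloading the export table:

```shell
#!/bin/sh
# Sketch: sanity-check an /etc/exports entry before running `exportfs -r`.
# The path and options below assume the entry used in this guide.
entry='/root/kubernetes/data/nfs *(insecure,rw,no_root_squash)'

dir=${entry%% *}          # exported directory
opts=${entry##*\(}        # strip everything up to the opening parenthesis
opts=${opts%\)}           # drop the trailing parenthesis

# The provisioner writes as root, so no_root_squash must be present.
case ",$opts," in
  *,no_root_squash,*) echo "ok: $dir exported with no_root_squash" ;;
  *) echo "warning: root writes to $dir will be squashed to nobody" ;;
esac
```

Without `no_root_squash`, the provisioner pod would fail to create per-PVC subdirectories on the share.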
2.2. Deploy the StorageClass
HINT: replace 192.168.31.175 in the manifests below with your NFS server's IP address.
HINT: since I used /root/kubernetes/data/nfs as the NFS export path, the manifests below must use the same path.
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default  # replace with the namespace where the provisioner is deployed
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: willdockerhub/nfs-subdir-external-provisioner:v4.0.2
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-provisioner  # must match the StorageClass provisioner field
            - name: NFS_SERVER
              value: 192.168.31.175  # replace with your NFS server's IP address
            - name: NFS_PATH
              value: /root/kubernetes/data/nfs  # replace with the directory exported by your NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.31.175  # replace with your NFS server's IP address
            path: /root/kubernetes/data/nfs
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # makes this the default StorageClass
provisioner: nfs-provisioner  # or choose another name; must match the Deployment's PROVISIONER_NAME env value
parameters:
  archiveOnDelete: "false"
3. Usage
To use it, explicitly set storageClassName when declaring a PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-storage-pvc
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-client
  resources:
    requests:
      storage: 5Gi
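For completeness, a hypothetical Pod that consumes this claim might look like the following (the Pod name, container image, and command are illustrative, not part of the original guide):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo            # illustrative name
  namespace: monitoring
spec:
  containers:
    - name: app
      image: busybox:1.36   # illustrative image
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: storage
          mountPath: /data
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: prometheus-storage-pvc
```

Once the Pod is scheduled, the provisioner creates a per-claim subdirectory under /root/kubernetes/data/nfs on the NFS server and binds the PVC to it automatically.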