1. Start/stop/check the glusterd service
# Start:
systemctl daemon-reload
systemctl start glusterd
# Start glusterd automatically at boot:
systemctl enable glusterd
# Stop:
systemctl stop glusterd
# Check status:
systemctl status glusterd
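A minimal sketch, assuming passwordless root SSH and the hypothetical node names gfs-1 to gfs-3, that enables and starts glusterd on every node in one pass:
for node in gfs-1 gfs-2 gfs-3; do    # hypothetical hostnames
    ssh root@"$node" "systemctl enable --now glusterd && systemctl is-active glusterd"
done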
2. Add/remove server nodes in the storage pool
Run these commands on any one node of the pool:
gluster peer probe <SERVERNAME>
#eg: gluster peer probe gfs-6
# Note: before detaching a node, remove the bricks hosted on it first:
gluster peer detach <SERVERNAME>
#eg: gluster peer detach gfs-6
# Show the current trusted storage pool (the local node is not listed):
gluster peer status
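A minimal sketch, assuming the hypothetical peer names gfs-2 to gfs-4, that probes several peers from one node and then checks the pool:
for peer in gfs-2 gfs-3 gfs-4; do    # hypothetical hostnames
    gluster peer probe "$peer"
done
gluster peer status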
3. Create/start/stop/delete a volume
gluster volume create <NEW-VOLNAME> [stripe <COUNT> | replica <COUNT>] [transport [tcp | rdma | tcp,rdma]] <NEW-BRICK> ...
gluster volume start <VOLNAME>
gluster volume stop <VOLNAME>
gluster volume delete <VOLNAME>
#eg: gluster volume [start | stop | delete] test-volume
Note: a volume must be stopped before it can be deleted. To remove the data in the volume as well, delete the data from a client first, then stop the volume, then delete it.
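A minimal sketch, assuming the hypothetical nodes gfs-1 to gfs-3 and brick path /glusterfs/brick1, that creates a 3-way replicated volume and starts it:
gluster volume create test-volume replica 3 transport tcp \
    gfs-1:/glusterfs/brick1 gfs-2:/glusterfs/brick1 gfs-3:/glusterfs/brick1
gluster volume start test-volume
gluster volume info test-volume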
4. View volume information
# List all volumes in the cluster:
gluster volume list
# View volume information:
gluster volume info <VOLNAME>   # a specific volume
gluster volume info all         # all volumes
# View volume status:
gluster volume status <VOLNAME>
gluster volume status all
# Add a metric argument to see specific details:
gluster volume status <VOLNAME> [detail | clients | mem | inode | fd | callpool]
gluster volume status all [detail | clients | mem | inode | fd | callpool]
#eg: gluster volume status all detail
#eg: gluster volume status test-volume detail
#eg: gluster volume status test-volume server4:/exp4 detail
5. 配置卷参数
gluster volume set <VOLNAME> <OPTION> <PARAMETER>#eg: gluster volume quota gfs-data enable #磁盘配额开关
gluster volume set <VOLNAME> cluster.nufa enable #开启NUFA(在卷中创建任何数据之前,应启用 NUFA)
6. Quota limits
# Enable quota (quota must be enabled before disk limits can be set):
gluster volume quota <VOLNAME> enable
#eg: gluster volume quota test-volume enable
# Disable quota:
gluster volume quota <VOLNAME> disable
#eg: gluster volume quota test-volume disable
# Set a disk limit:
gluster volume quota <VOLNAME> limit-usage <DIR> <HARD_LIMIT>
#eg: gluster volume quota test-volume limit-usage /data 10GB
# Show disk limit information:
gluster volume quota <VOLNAME> list
#eg: gluster volume quota test-volume list
#eg: gluster volume quota test-volume list /data
# Remove a disk limit:
gluster volume quota <VOLNAME> remove <DIR>
#eg: gluster volume quota test-volume remove /data
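A minimal sketch of a typical quota workflow on test-volume; the /projects/* directories and the sizes are hypothetical:
gluster volume quota test-volume enable
gluster volume quota test-volume limit-usage /projects/alpha 50GB   # hypothetical directory
gluster volume quota test-volume limit-usage /projects/beta 20GB    # hypothetical directory
gluster volume quota test-volume list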
7. Expand/shrink a volume
# Expand a volume: add new bricks to an existing volume
gluster volume add-brick <VOLNAME> <NEW-BRICK>
#eg: gluster volume add-brick test-volume a5000-glusterfs-server2:/glusterfs/test a5000-glusterfs-server1:/glusterfs/test a5000-glusterfs-server3:/glusterfs/test
# Shrink a volume (data is migrated to the remaining bricks first; the bricks are only removed after migration finishes):
gluster volume remove-brick <VOLNAME> <BRICKNAME> start
#eg: gluster volume remove-brick test-volume a5000-glusterfs-server2:/glusterfs/test a5000-glusterfs-server1:/glusterfs/test a5000-glusterfs-server3:/glusterfs/test start
# Check the progress of the brick removal:
gluster volume remove-brick <VOLNAME> <BRICKNAME> status
#eg: gluster volume remove-brick test-volume server2:/exp2 status
# Once the status shows "completed", commit the remove-brick operation:
gluster volume remove-brick <VOLNAME> <BRICKNAME> commit
#eg: gluster volume remove-brick test-volume server2:/exp2 commit
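A minimal sketch, assuming a hypothetical new brick server4:/exp4, of expanding a distributed volume and then rebalancing so existing data spreads onto the new brick (rebalance is covered in section 10):
gluster volume add-brick test-volume server4:/exp4   # hypothetical brick
gluster volume rebalance test-volume start
gluster volume rebalance test-volume status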
8. Change the transport type
# For example, to enable tcp and rdma together, or either one alone, run:
gluster volume set test-volume config.transport tcp,rdma OR tcp OR rdma
9. Mount/unmount a volume
# Mount on the client with the glusterfs (FUSE) client:
mount -t glusterfs <SERVER>:/<VOLNAME> <MOUNTDIR>
# Mount with a specific transport:
mount -t glusterfs -o transport=rdma <SERVER>:/<VOLNAME> <MOUNTDIR>
# For example, to mount using the rdma transport:
#eg: mount -t glusterfs -o transport=rdma server1:/test-volume /mnt/glusterfs
# Unmount the volume:
umount <MOUNTDIR>
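A minimal sketch of making the mount persistent across reboots via /etc/fstab, reusing the server1:/test-volume and /mnt/glusterfs names from the example above; _netdev delays the mount until the network is up:
# /etc/fstab entry (one line):
server1:/test-volume /mnt/glusterfs glusterfs defaults,_netdev 0 0
# Apply it without rebooting:
mkdir -p /mnt/glusterfs
mount /mnt/glusterfs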
10. Rebalance a volume
# Start a fix-layout rebalance (layout only, no data migration) on any Gluster server:
gluster volume rebalance <VOLNAME> fix-layout start
#eg: gluster volume rebalance test-volume fix-layout start
# Start a full rebalance (layout fix plus data migration) on any server:
gluster volume rebalance <VOLNAME> start
#eg: gluster volume rebalance test-volume start
# Force-start the migration on any server:
gluster volume rebalance <VOLNAME> start force
#eg: gluster volume rebalance test-volume start force
# Check the status of the rebalance operation:
gluster volume rebalance <VOLNAME> status
#eg: gluster volume rebalance test-volume status
# While the status column shows "in progress", the rebalance has not finished yet
# When the status column shows "completed", the rebalance is done
# Stop a rebalance operation:
gluster volume rebalance <VOLNAME> stop
#eg: gluster volume rebalance test-volume stop
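A minimal sketch that waits for a rebalance on test-volume to finish; it assumes the status output contains the "in progress" wording noted above:
gluster volume rebalance test-volume start
while gluster volume rebalance test-volume status | grep -q "in progress"; do
    sleep 30
done
gluster volume rebalance test-volume status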
11. Trigger self-heal on replicated volumes
# Trigger self-heal only on files that need healing:
gluster volume heal <VOLNAME>
#eg: gluster volume heal test-volume
# Trigger self-heal on all files of the volume:
gluster volume heal <VOLNAME> full
#eg: gluster volume heal test-volume full
# List the files that need healing:
gluster volume heal <VOLNAME> info
#eg: gluster volume heal test-volume info
# List the files that have been self-healed:
gluster volume heal <VOLNAME> info healed
#eg: gluster volume heal test-volume info healed
# List the files of a volume for which self-heal failed:
gluster volume heal <VOLNAME> info failed
#eg: gluster volume heal test-volume info failed
# List the files of a volume that are in split-brain state:
gluster volume heal <VOLNAME> info split-brain
#eg: gluster volume heal test-volume info split-brain
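A minimal sketch that runs the heal-info and split-brain checks above over every volume in the cluster, using gluster volume list from section 4:
for vol in $(gluster volume list); do
    echo "== $vol =="
    gluster volume heal "$vol" info
    gluster volume heal "$vol" info split-brain
done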
12. Replace a faulty brick
# Step 1 - add the new brick:
gluster volume add-brick <VOLNAME> <NEW-BRICK>
#eg: gluster volume add-brick test-volume server4:/exp4
# Step 2 - start removing the faulty brick:
gluster volume remove-brick <VOLNAME> <BRICKNAME> start
#eg: gluster volume remove-brick test-volume server2:/exp2 start
# Step 3 - check the progress of the brick removal:
gluster volume remove-brick <VOLNAME> <BRICKNAME> status
#eg: gluster volume remove-brick test-volume server2:/exp2 status
# Step 4 - once the status shows "completed", commit the remove-brick operation:
gluster volume remove-brick <VOLNAME> <BRICKNAME> commit
#eg: gluster volume remove-brick test-volume server2:/exp2 commit
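A minimal sketch that chains the four steps, using the hypothetical bricks server4:/exp4 (new) and server2:/exp2 (faulty) from the examples; it assumes the status output contains the word "completed" when migration is done, and some gluster subcommands may ask for an interactive y/n confirmation:
VOL=test-volume
NEW_BRICK=server4:/exp4     # hypothetical replacement brick
BAD_BRICK=server2:/exp2     # hypothetical faulty brick
gluster volume add-brick "$VOL" "$NEW_BRICK"
gluster volume remove-brick "$VOL" "$BAD_BRICK" start
until gluster volume remove-brick "$VOL" "$BAD_BRICK" status | grep -q "completed"; do
    sleep 30
done
gluster volume remove-brick "$VOL" "$BAD_BRICK" commit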
13. Profiling commands
# Start profiling; profiling must be started before the commands below will work:
gluster volume profile <VOLNAME> start
#eg: gluster volume profile test-volume start
# Display I/O information:
gluster volume profile <VOLNAME> info
#eg: gluster volume profile test-volume info
# Stop profiling:
gluster volume profile <VOLNAME> stop
#eg: gluster volume profile test-volume stop
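A minimal sketch that captures a profile over a fixed window and saves it to a file; the 60-second window and the output path are arbitrary choices, not part of the original commands:
gluster volume profile test-volume start
sleep 60    # let the workload run for a while
gluster volume profile test-volume info > /tmp/test-volume-profile.txt
gluster volume profile test-volume stop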
14. Monitoring with top
# Full syntax:
gluster volume top <VOLNAME> {open|read|write|opendir|readdir|clear} [nfs|brick <brick>] [list-cnt <value>] | {read-perf|write-perf} [bs <size> count <count>] [brick <brick>] [list-cnt <value>]
# Show the count of open fds and the maximum fd count, listing the top 10 entries:
gluster volume top <VOLNAME> open [brick <BRICK>] [list-cnt <COUNT>]
#eg: gluster volume top test-volume open brick a5000-glusterfs-server2:/glusterfs/sunwenbo-test list-cnt 10
# Show the highest file read calls:
gluster volume top <VOLNAME> read [brick <BRICK>] [list-cnt <COUNT>]
#eg: gluster volume top test-volume read brick server2:/exp2 list-cnt 10
# Show the highest file write calls:
gluster volume top <VOLNAME> write [brick <BRICK>] [list-cnt <COUNT>]
#eg: gluster volume top test-volume write brick server2:/exp2 list-cnt 10
# Show the highest open calls on directories:
gluster volume top <VOLNAME> opendir [brick <BRICK>] [list-cnt <COUNT>]
#eg: gluster volume top test-volume opendir brick server2:/exp2 list-cnt 10
# Show the highest read calls on directories:
gluster volume top <VOLNAME> readdir [brick <BRICK>] [list-cnt <COUNT>]
#eg: gluster volume top test-volume readdir brick server2:/exp2 list-cnt 10
# Show the read performance of each brick:
gluster volume top <VOLNAME> read-perf [bs <BLOCK-SIZE> count <COUNT>] [brick <BRICK>] [list-cnt <COUNT>]
#eg: gluster volume top test-volume read-perf bs 256 count 1 brick server2:/exp2 list-cnt 10
# Show the write performance of each brick:
gluster volume top <VOLNAME> write-perf [bs <BLOCK-SIZE> count <COUNT>] [brick <BRICK>] [list-cnt <COUNT>]
#eg: gluster volume top test-volume write-perf bs 256 count 1 brick server2:/exp2 list-cnt 10
15. Updating the memory cache size
# Refresh the volume's memory cache every N seconds on a soft timeout:
gluster volume set <VOLNAME> features.soft-timeout <time>
# Refresh the volume's memory cache every N seconds on a hard timeout:
gluster volume set <VOLNAME> features.hard-timeout <time>
# eg: update the volume's memory cache every 5 seconds on a hard timeout
# gluster volume set test-volume features.hard-timeout 5
16. Setting the alert time
The alert time is how often usage information is logged after the soft limit is reached.
# Set the alert time (the default is one week):
gluster volume quota <VOLNAME> alert-time <time>
# To set the alert time to one day:
# eg: gluster volume quota test-volume alert-time 1d
17. Snapshot commands
GlusterFS volume snapshots are built on thin-provisioned LVM snapshots.
The official documentation lists the following prerequisites for using snapshots (a sketch of preparing such a brick follows this list):
Each brick should reside on an independent thin-provisioned LVM volume.
The brick LVM should contain no data other than the brick itself.
No brick should be placed on a thick-provisioned LVM volume.
The Gluster version should be 3.6 or later.
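A minimal sketch of preparing a thin-provisioned LVM brick that satisfies the prerequisites above; the device /dev/sdb, the VG/pool/LV names, the sizes, and the mount point are all hypothetical:
pvcreate /dev/sdb                                                    # hypothetical dedicated disk
vgcreate gfs_vg /dev/sdb
lvcreate --size 100G --thinpool gfs_pool gfs_vg                      # thin pool
lvcreate --virtualsize 100G --thin gfs_vg/gfs_pool --name brick1     # thin LV for the brick
mkfs.xfs -i size=512 /dev/gfs_vg/brick1
mkdir -p /glusterfs/brick1
mount /dev/gfs_vg/brick1 /glusterfs/brick1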
# Create a snapshot:
gluster snapshot create <snapname> <volname> [no-timestamp] [description <description>]
# Clone a snapshot:
gluster snapshot clone <clonename> <snapname>
# Restore a snapshot:
gluster snapshot restore <snapname>
# Delete snapshots:
gluster snapshot delete (all | <snapname> | volume <volname>)
# List snapshots:
gluster snapshot list [volname]
# Show snapshot information:
gluster snapshot info [(snapname | volume <volname>)]
# Show snapshot status:
gluster snapshot status [(snapname | volume <volname>)]
# Configure snapshots:
gluster snapshot config [volname] ([snap-max-hard-limit <count>] [snap-max-soft-limit <percent>]) | ([auto-delete <enable|disable>]) | ([activate-on-create <enable|disable>])
# Activate a snapshot:
gluster snapshot activate <snapname>
# Deactivate a snapshot:
gluster snapshot deactivate <snapname>
# Two ways to access a snapshot. First, mount it on a client:
mount -t glusterfs <hostname>:/snaps/<snap-name>/<volume-name> <mount-path>
#eg: mount -t glusterfs host1:/snaps/my-snap/vol /mnt/snapshot
# Second, user-serviceable snapshots: by default snapshots are accessed through the hidden .snaps directory; the directory name can be changed with:
gluster volume set <volname> snapshot-directory <new-name>
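A minimal sketch, assuming a volume named test-volume and a hypothetical snapshot name nightly, that creates, activates, and then mounts a snapshot with the commands above (host1 and the mount path are placeholders):
gluster snapshot create nightly test-volume no-timestamp
gluster snapshot activate nightly
gluster snapshot list test-volume
# On a client:
mkdir -p /mnt/snapshot
mount -t glusterfs host1:/snaps/nightly/test-volume /mnt/snapshot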