I. Pool Management (can be done on the admin node or on any of the three node hosts)
1. Introduction to Pools
- We have finished deploying the Ceph cluster above, but how do we actually store data in it? First we need to define a Pool. A Pool is Ceph's abstraction for storing Objects: think of it as a logical partition carved out of the Ceph storage. A Pool consists of multiple PGs, each PG is mapped onto different OSDs by the CRUSH algorithm, and a Pool also carries a replica size setting (the default replica count is 3).
- A Ceph client asks the monitors for the cluster state and writes data into a Pool; based on the number of PGs, the CRUSH algorithm maps the data onto different OSD nodes, which is how it ends up stored. So a Pool can be understood as the logical unit in which Object data lives. The cluster currently has no pool, so we need to define one.
- Next we create a Pool named mypool with its PG count set to 64; when setting the PG count you must also set PGP (PGs and PGP are usually given the same value). PG (Placement Group) is a virtual grouping used to hold objects; PGP (Placement Group for Placement purpose) determines the permutations of OSDs across which the PGs are laid out.
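How large should pg_num be? A common rule of thumb (the upstream PGCalc guideline) is (number of OSDs × 100) / replica size, rounded to a nearby power of two. The figures below are only an illustrative sketch, assuming the small 3-OSD lab cluster this series deploys:
[root@admin ceph]# OSDS=3; SIZE=2
[root@admin ceph]# echo $(( OSDS * 100 / SIZE ))    # 150 -> round to a power of two, e.g. 128
150
With only a few OSDs, a smaller value such as the 64 used below is also fine; pg_num can be raised later (as the Modify step does), but on releases before Nautilus it can never be lowered.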
2. Command-Line Operations
- Create
[root@admin ~]# cd /etc/ceph
[root@admin ceph]# ceph osd pool create mypool 64 64
pool 'mypool' created
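To confirm that objects can actually be stored in the new pool, upload and list one with rados; test.txt and test-object are hypothetical names used only for this demonstration:
[root@admin ceph]# echo "hello ceph" > test.txt
[root@admin ceph]# rados -p mypool put test-object test.txt    # store the local file as object "test-object"
[root@admin ceph]# rados -p mypool ls
test-object
[root@admin ceph]# ceph osd map mypool test-object    # show which PG and which OSDs CRUSH maps this object to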
- Query
[root@admin ceph]# ceph osd pool ls    # list the Pools in the cluster
mypool
[root@admin ceph]# rados lspools
mypool
[root@admin ceph]# ceph osd lspools
1 mypool
[root@admin ceph]# ceph osd pool get mypool size    # check the pool's replica count
size: 3
[root@admin ceph]# ceph osd pool get mypool pg_num    # check the PG and PGP counts
pg_num: 64
[root@admin ceph]# ceph osd pool get mypool pgp_num
pgp_num: 64
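Rather than querying one parameter at a time, all gettable parameters can be dumped with a single command:
[root@admin ceph]# ceph osd pool get mypool all    # prints size, pg_num, pgp_num, crush_rule, and the rest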
- Modify
[root@admin ceph]# ceph osd pool set mypool pg_num 128    # raise pg_num and pgp_num to 128
set pool 1 pg_num to 128
[root@admin ceph]# ceph osd pool set mypool pgp_num 128
set pool 1 pgp_num to 128
[root@admin ceph]# ceph osd pool get mypool pg_num
pg_num: 128
[root@admin ceph]# ceph osd pool get mypool pgp_num
pgp_num: 128
[root@admin ceph]# ceph osd pool set mypool size 2    # change the Pool's replica count to 2
set pool 1 size to 2
[root@admin ceph]# ceph osd pool get mypool size
size: 2
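Raising pg_num splits existing PGs and moves data between OSDs, so after these changes it is worth watching the cluster until it settles; both commands below are standard status queries:
[root@admin ceph]# ceph -s    # overall status; wait for HEALTH_OK once peering and backfill finish
[root@admin ceph]# ceph osd pool stats mypool    # per-pool client and recovery I/O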
[root@admin ceph]# vim ceph.conf    # change the default replica count for new pools to 2
......
osd_pool_default_size = 2
[root@admin ceph]# ceph-deploy --overwrite-conf config push node01 node02 node03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy --overwrite-conf config push node01 node02 node03
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : True
[ceph_deploy.cli][INFO ] subcommand : push
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f4ed78e7c68>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : ['node01', 'node02', 'node03']
[ceph_deploy.cli][INFO ] func : <function config at 0x7f4ed7d261b8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.config][DEBUG ] Pushing config to node01
[node01][DEBUG ] connected to host: node01
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.config][DEBUG ] Pushing config to node02
[node02][DEBUG ] connected to host: node02
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[node02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.config][DEBUG ] Pushing config to node03
[node03][DEBUG ] connected to host: node03
[node03][DEBUG ] detect platform information from remote host
[node03][DEBUG ] detect machine type
[node03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
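Note that osd_pool_default_size only affects pools created after the mons have reread the configuration (for example after the restart shown in the Delete step below); existing pools such as mypool keep whatever size they already have. A quick check, using a hypothetical throwaway pool named testpool (it can be removed once pool deletion is enabled below):
[root@admin ceph]# ceph osd pool create testpool 16 16
pool 'testpool' created
[root@admin ceph]# ceph osd pool get testpool size    # a new pool should now default to 2
size: 2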
- Delete
① Deleting a storage pool carries the risk of data loss, so Ceph forbids the operation by default; an administrator must first enable pool deletion in the ceph.conf configuration file.
[root@admin ceph]# vim ceph.conf    # allow Pool deletion
......
[mon]
mon allow pool delete = true
② Push the ceph.conf configuration file to all mon nodes
[root@admin ceph]# ceph-deploy --overwrite-conf config push node01 node02 node03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy --overwrite-conf config push node01 node02 node03
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : True
[ceph_deploy.cli][INFO ] subcommand : push
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f0dc767fc68>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : ['node01', 'node02', 'node03']
[ceph_deploy.cli][INFO ] func : <function config at 0x7f0dc7abe1b8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.config][DEBUG ] Pushing config to node01
[node01][DEBUG ] connected to host: node01
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.config][DEBUG ] Pushing config to node02
[node02][DEBUG ] connected to host: node02
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[node02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.config][DEBUG ] Pushing config to node03
[node03][DEBUG ] connected to host: node03
[node03][DEBUG ] detect platform information from remote host
[node03][DEBUG ] detect machine type
[node03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
③ Restart the ceph-mon service on all mon nodes
[root@admin ceph]# systemctl restart ceph-mon.target    # run this on each mon node
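If the mons run on node01–node03 rather than on admin, the same restart can be driven from the admin node over ssh; a minimal sketch, assuming the passwordless ssh that a ceph-deploy setup normally already has:
[root@admin ceph]# for host in node01 node02 node03; do ssh $host 'systemctl restart ceph-mon.target'; done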
④ Run the command to delete the Pool
[root@admin ceph]# ceph osd pool rm pool01 pool01 --yes-i-really-really-mean-it    # the pool name must be typed twice to confirm
pool 'pool01' removed
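As an alternative to editing ceph.conf and restarting the mons, on Luminous and later the switch can be flipped at runtime with injectargs; this takes effect immediately but does not survive a mon restart, so keep the ceph.conf entry as well:
[root@admin ceph]# ceph tell mon.\* injectargs --mon_allow_pool_delete=true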