Pools
To create a pool with a custom CRUSH rule, first create the rule:
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
ex: ceph osd crush rule create-replicated SSD default osd ssd
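To double-check that the rule was created as expected, the existing rules can be listed and inspected (a quick sanity check, not part of the original steps):
ceph osd crush rule ls
ceph osd crush rule dump SSD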
# create a pool with the SSD CRUSH rule:
ceph osd pool create {pool-name} [{pg-num} [{pgp-num}]] [replicated] \
[crush-rule-name] [expected-num-objects] #replicated
ceph osd pool create {pool-name} [{pg-num} [{pgp-num}]] erasure \
[erasure-code-profile] [crush-rule-name] [expected_num_objects] [--autoscale-mode=<on,off,warn>] #erasure
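For example, a replicated pool using the SSD rule from above could be created and then verified like this (the pool name mypool and the PG count of 128 are only placeholders, pick values that suit your cluster):
ceph osd pool create mypool 128 128 replicated SSD
ceph osd pool ls detail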
After creating an RGW service, Ceph creates five extra pools: ZONENAME.rgw.log, ZONENAME.rgw.buckets.index, ZONENAME.rgw.meta, ZONENAME.rgw.control, and .rgw.root.
buckets.index: This pool stores bucket indexes, including the shard indexes created whenever sharding is needed.
log: This pool stores more than just logs. It also holds objects that are flagged as deleted but still on the garbage collector list, as well as the list of orphan objects.
control: This pool stores notify objects (for example, notify.x).
meta: This pool stores metadata for data objects. Best practice is to place this pool on all-flash NVMe storage.
mgr: This pool is used by the manager daemons; all manager data is stored in it.
rgw.root: This pool stores RGW configuration objects: realms, zonegroups, and zones.
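To see which of these pools actually exist on a given cluster, the pool list can be filtered for RGW (a quick check, not part of the original text):
ceph osd pool ls | grep rgw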
To delete a pool, first set mon_allow_pool_delete to true in the mon config:
ceph config set mon mon_allow_pool_delete true
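With that flag enabled, a pool can then be removed; the pool name must be given twice together with a confirmation flag (shown here for a hypothetical pool named mypool):
ceph osd pool delete mypool mypool --yes-i-really-really-mean-it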
Change the Pool Replica
To change the replica count (for example, from 3 to 2), run:
ceph osd pool set <pool_name> size NUM
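For example, to reduce a hypothetical pool named mypool to 2 replicas and confirm the change:
ceph osd pool set mypool size 2
ceph osd pool get mypool size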