Ceph Learning: Pools

The pool is a logical partition of the data Ceph stores, and it acts as a namespace. Other distributed storage systems, such as MogileFS, Couchbase, and Swift, have the same concept, just under different names. Each pool contains a certain number of placement groups (PGs); objects are mapped into PGs, and PGs are in turn mapped to different OSDs, so a pool is distributed across the whole cluster.
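
To see this mapping in action, you can ask the cluster where a given object would land: ceph osd map prints the pool, the PG the object hashes to, and the OSDs that PG maps onto. A minimal sketch, using the testpool pool that appears later in this article and a hypothetical object name:

[root@mon1 ~]# ceph osd map testpool testobject    # testobject is a hypothetical object name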

Apart from isolating data, we can also apply different optimization strategies to different pools, such as the number of replicas, the number of placement groups, scrubbing settings, and chunk/object sizes.
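
Each of these settings can be read back per pool with ceph osd pool get; a small sketch (testpool is simply one of the pools listed below):

[root@mon1 ~]# ceph osd pool get testpool size        # number of replicas
[root@mon1 ~]# ceph osd pool get testpool min_size    # minimum replicas needed to serve I/O
[root@mon1 ~]# ceph osd pool get testpool pg_num      # number of placement groups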

View Pool

There are several ways to view the pool:

[root@mon1 ~]# rados lspools
rbd
testpool
testpool2

[root@mon1 ~]# ceph osd lspools
0 rbd,1 testpool,2 testpool2,
[root@mon1 ~]# ceph osd dump | grep pool
pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
pool 1 'testpool' replicated size 2 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 36 flags hashpspool stripe_width 0
pool 2 'testpool2' replicated size 2 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 39 flags hashpspool crash_replay_interval 45 stripe_width 0
[root@mon1 ~]#

There is no doubt that the output of ceph osd dump is the most detailed: it includes the pool ID, the replica count, the CRUSH rule set, the pg_num and pgp_num, and so on.

Create Pool

Typically, before creating a pool, you should override the default pg_num. The common recommendation is: with fewer than 5 OSDs, set pg_num to 128; with 5 to 10 OSDs, set it to 512; with 10 to 50 OSDs, set it to 4096; with more than 50 OSDs, use the PGCalc tool to calculate a value (a rough sketch of the rule of thumb behind it follows below).
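
The rule of thumb PGCalc is built on is, roughly: total PGs ≈ (number of OSDs × 100) / replica count, rounded up to the next power of two and then split across the pools. A quick sketch of the arithmetic, assuming an illustrative cluster of 60 OSDs with 3 replicas:

# (60 OSDs * 100) / 3 replicas = 2000, round up to the next power of two -> 2048 PGs in total
[root@mon1 ~]# echo $(( 60 * 100 / 3 ))
2000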

[root@mon1 ~]# ceph osd pool create pool1 128
pool 'pool1' created
[root@mon1 ~]#

The pg_num must be specified when the pool is created. To adjust the number of replicas (the size) of an existing pool:

[root@mon1 ~]# ceph osd pool set pool1 size 2
set pool 3 size to 2
[root@mon1 ~]#
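
If a pool was created with too few PGs, pg_num can also be raised afterwards (in releases of this era it can only be increased, never decreased); pgp_num should be raised to the same value so that the data actually rebalances. A sketch, assuming we want 256 PGs on pool1:

[root@mon1 ~]# ceph osd pool set pool1 pg_num 256
[root@mon1 ~]# ceph osd pool set pool1 pgp_num 256    # keep pgp_num in step with pg_num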
Delete Pool
[root@mon1 ~]# ceph osd pool delete pool1
Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool pool1.  If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.
[root@mon1 ~]# ceph osd pool delete pool1 pool1 --yes-i-really-really-mean-it
pool 'pool1' removed

Note: to delete a pool, the pool name must be entered twice, followed by the --yes-i-really-really-mean-it parameter.

Set Pool Quota

[root@mon1 ~]# ceph osd pool set-quota pool1 max_objects 100                      # at most 100 objects
set-quota max_objects = 100 for pool pool1
[root@mon1 ~]# ceph osd pool set-quota pool1 max_bytes $((10 * 1024 * 1024 * 1024))    # maximum capacity of 10 GB
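
The quotas currently in force can be read back with get-quota, and a quota is lifted again by setting it back to 0; a sketch:

[root@mon1 ~]# ceph osd pool get-quota pool1                  # show max_objects / max_bytes for the pool
[root@mon1 ~]# ceph osd pool set-quota pool1 max_bytes 0      # 0 means no quota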
Rename pool
[root@mon1 ~]# ceph osd pool rename pool1 pool2
pool 'pool1' renamed to 'pool2'
[root@mon1 ~]#
View pool status information
[root@mon1 ~]# rados df
pool name                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
pool2                      0            0            0            0            0            0            0            0            0
rbd                        0            0            0            0            0            0            0            0            0
testpool                   0            0            0            0            0            0            0            0            0
testpool2                  0            0            0            0            0            0            0            0            0
  total used          118152            0
  total avail       47033916
  total space       47152068
[root@mon1 ~]#
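
ceph df gives a complementary view: global raw usage plus a per-pool summary, and the detail variant adds extra per-pool columns such as object counts and quotas:

[root@mon1 ~]# ceph df
[root@mon1 ~]# ceph df detail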
Create a snapshot

Ceph supports taking a snapshot of an entire pool (note the distinction from an OpenStack Cinder consistency group), which acts on all objects in the pool. But be aware that Ceph has two pool snapshot modes: Pool Snapshot, the mode we are going to use here, which is also the default for a newly created pool; and Self Managed Snapshot, i.e. user-managed snapshots, where the "user" is librbd; that is, as soon as an RBD image is created in the pool, the pool automatically switches to this mode.

The two modes are mutually exclusive; a pool can only use one of them. Therefore, if an RBD image has ever been created in the pool (even if every image has since been deleted), the pool can no longer be snapshotted. Conversely, once you have taken a snapshot of a pool, you can no longer create an RBD image in it.

[root@mon1 ~]# ceph osd pool mksnap pool2 pool2_snap
created pool pool2 snap pool2_snap
[root@mon1 ~]#
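
The snapshots of a pool can be listed, and an individual object can be rolled back to one of them, with the rados tool; a sketch, where myobject is a hypothetical object already stored in pool2:

[root@mon1 ~]# rados -p pool2 lssnap                          # list the pool's snapshots
[root@mon1 ~]# rados -p pool2 rollback myobject pool2_snap    # roll one object back to the snapshot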
Deleting a snapshot
[root@mon1 ~]# ceph osd pool rmsnap <poolname> <snap>         # remove snapshot <snap> from <pool>
[root@mon1 ~]# ceph osd pool rmsnap pool2 pool2_snap
removed pool pool2 snap pool2_snap
[root@mon1 ~]#
