Ceph: mix SATA and SSD within the same box

The use case is simple: I want to use both SSD disks and SATA disks within the same machine and ultimately create pools pointing at either the SSD or the SATA disks. To achieve this, we need to modify the CRUSH map. In my example, each host has 2 SATA disks and 2 SSD disks, and there are 3 hosts in total.
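
Before editing the map, it helps to confirm which block devices on each host are SSDs and which are spinning SATA disks. A minimal check, assuming the OSD data disks are sdb through sde (adjust the device names to your hosts), is to read the kernel's rotational flag:

# 0 = non-rotational (SSD), 1 = rotational (spinning SATA disk)
$ for disk in sdb sdc sdd sde; do echo -n "$disk: "; cat /sys/block/$disk/queue/rotational; done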

I. CRUSH Map

CRUSH is very flexible and topology-aware, which is extremely useful in our scenario. We are about to create two different roots, or entry points, from which the CRUSH algorithm will descend when placing our objects. We will have one root for our SSD disks and another for our SATA disks. Looking at the CRUSH map below, you will see that we duplicated our topology: it is as if we made CRUSH believe we have two separate platforms, which is not entirely true. We simply represent a logical view of what we want to accomplish.

Here is the CRUSH map:

##
# OSD SATA DECLARATION
##
host ceph-osd2-sata {
  id -2   # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0  # rjenkins1
  item osd.0 weight 1.000
  item osd.3 weight 1.000
}
host ceph-osd1-sata {
  id -3   # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0  # rjenkins1
  item osd.2 weight 1.000
  item osd.5 weight 1.000
}
host ceph-osd0-sata {
  id -4   # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0  # rjenkins1
  item osd.1 weight 1.000
  item osd.4 weight 1.000
}

##
# OSD SSD DECLARATION
##
host ceph-osd2-ssd {
  id -22    # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0  # rjenkins1
  item osd.6 weight 1.000
  item osd.9 weight 1.000
}
host ceph-osd1-ssd {
  id -23    # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0  # rjenkins1
  item osd.8 weight 1.000
  item osd.11 weight 1.000
}
host ceph-osd0-ssd {
  id -24    # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0  # rjenkins1
  item osd.7 weight 1.000
  item osd.10 weight 1.000
}

Now we create our two roots containing our OSDs:

##
# SATA ROOT DECLARATION
##
root sata {
  id -1   # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0  # rjenkins1
  item ceph-osd2-sata weight 4.000
  item ceph-osd1-sata weight 4.000
  item ceph-osd0-sata weight 4.000
}

##
# SSD ROOT DECLARATION
##
root ssd {
  id -21    # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0  # rjenkins1
  item ceph-osd2-ssd weight 4.000
  item ceph-osd1-ssd weight 4.000
  item ceph-osd0-ssd weight 4.000
}

I create 2 new rules:

##
# SSD RULE DECLARATION
##

# rules
rule ssd {
  ruleset 0
  type replicated
  min_size 1
  max_size 10
  step take ssd
  step chooseleaf firstn 0 type host
  step emit
}

##
# SATA RULE DECLARATION
##
rule sata {
  ruleset 1
  type replicated
  min_size 1
  max_size 10
  step take sata
  step chooseleaf firstn 0 type host
  step emit
}
Compile and inject the new map:
$ crushtool -c lamap.txt -o lamap.coloc
$ sudo ceph osd setcrushmap -i lamap.coloc
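
If you are modifying the map of a running cluster rather than writing lamap.txt from scratch, you can first dump and decompile the current map, edit it, and then recompile and inject it as shown above. A sketch (the file names are just examples):

$ sudo ceph osd getcrushmap -o lamap.bin   # dump the compiled map from the cluster
$ crushtool -d lamap.bin -o lamap.txt      # decompile it into editable text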

Then see the result:

$ sudo ceph osd tree
# id    weight  type name                       up/down reweight
-21     12      root ssd
-22     4               host ceph-osd2-ssd
6       1                       osd.6           up      1
9       1                       osd.9           up      1
-23     4               host ceph-osd1-ssd
8       1                       osd.8           up      1
11      1                       osd.11          up      1
-24     4               host ceph-osd0-ssd
7       1                       osd.7           up      1
10      1                       osd.10          up      1
-1      12      root sata
-2      4               host ceph-osd2-sata
0       1                       osd.0           up      1
3       1                       osd.3           up      1
-3      4               host ceph-osd1-sata
2       1                       osd.2           up      1
5       1                       osd.5           up      1
-4      4               host ceph-osd0-sata
1       1                       osd.1           up      1
4       1                       osd.4           up      1
II. CRUSH Rules and Pools Configuration

Create pools:

$ sudo ceph osd pool create ssd 128 128
pool 'ssd' created
$ sudo ceph osd pool create sata 128 128
pool 'sata' created

Assign rules to the pools:

$ sudo ceph osd pool set ssd crush_ruleset 0
set pool 8 crush_ruleset to 0
$ sudo ceph osd pool set sata crush_ruleset 1
set pool 9 crush_ruleset to 1

Result from ceph osd dump:

pool 8 'ssd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 116 flags hashpspool stripe_width 0
pool 9 'sata' replicated size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 128 pgp_num 128 last_change 117 flags hashpspool stripe_width 0
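
To double check that each pool really ends up on the intended disks, you can ask Ceph where it would place an object (the object name below is arbitrary); the OSD ids returned for the ssd pool should all belong to the ssd root shown in the tree above (osd.6 to osd.11 in this example):

$ sudo ceph osd map ssd someobject
$ sudo ceph osd map sata someobject
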
III. OSDs Configuration

By default, each OSD updates its own location in the CRUSH map when its daemon starts, which would move our OSDs back under their real hostname and break the layout above. You can disable this behaviour in ceph.conf:

[osd]
osd crush update on start = false
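
With automatic updates disabled, OSDs stay wherever you place them in the map, but any new OSD then has to be positioned by hand. A sketch of such a manual placement, assuming a hypothetical new osd.12 that should land under the ceph-osd0-ssd bucket with an example weight of 1.0:

# place the hypothetical new SSD OSD under its dedicated host bucket
$ sudo ceph osd crush create-or-move osd.12 1.0 root=ssd host=ceph-osd0-ssd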
