0. Introduction
This article describes how to configure cache pool tiering. A cache pool provides a scalable cache for Ceph hotspot data, or it can be used directly as a high-speed pool. The steps to create one are: first build a virtual bucket tree out of the SSD disks, then create a cache pool and set its CRUSH rule and related configuration, and finally attach the cache pool to the pool that needs to use it.
1. Build the SSD bucket tree
Below is the OSD tree after adding the new SSD bucket (vrack); osd.0, osd.1, and osd.2 each use an SSD disk. Building it is straightforward: it is just a matter of moving or adding OSDs into the bucket tree, for example with the CRUSH commands sketched after the tree output.
# ceph osd tree
ID  WEIGHT  TYPE NAME               UP/DOWN REWEIGHT PRIMARY-AFFINITY
 -1 6.00000 root default
 -2 6.00000     room test
 -3 3.00000         rack r1
 -7 1.00000             host H09
  3 1.00000                 osd.3        up  1.00000          1.00000
 -9 1.00000             host H07
  5 1.00000                 osd.5        up  1.00000          1.00000
-10 1.00000             host H06
  6 1.00000                 osd.6        up  1.00000          1.00000
 -4 3.00000         rack vrack
 -6 1.00000             host vh06
  1 1.00000                 osd.1        up  1.00000          1.00000
 -8 1.00000             host vh07
  2 1.00000                 osd.2        up  1.00000          1.00000
 -5 1.00000             host vh09
  0 1.00000                 osd.0        up  1.00000          1.00000
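If the SSD OSDs are not already arranged like this, the bucket tree can be built with the standard CRUSH commands. The sketch below only mirrors the names in the tree above (vrack, vh06, osd.1); substitute your own bucket names, weights, and OSD ids:

# ceph osd crush add-bucket vrack rack
# ceph osd crush move vrack room=test
# ceph osd crush add-bucket vh06 host
# ceph osd crush move vh06 rack=vrack
# ceph osd crush set osd.1 1.0 host=vh06

Repeat the last two commands for the remaining SSD hosts and OSDs (vh07/osd.2, vh09/osd.0).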
2. Modify Crushmap
# ceph osd getcrushmap -o map
# crushtool -d map -o map.txt
# vi map.txt

Add a replicated_ruleset_cache CRUSH rule that selects OSDs from the vrack rack:

rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take r1
        step chooseleaf firstn 0 type host
        step emit
}
rule replicated_ruleset_cache {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take vrack
        step chooseleaf firstn 0 type host
        step emit
}

# crushtool -c map.txt -o map.new
# ceph osd setcrushmap -i map.new
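Before injecting the new map, it can be worth dry-running the rule with crushtool to confirm that it only maps placements onto the vrack hosts. This check is optional, and the rule number and replica count below are just an example:

# crushtool -i map.new --test --rule 1 --num-rep 2 --show-mappings

After the setcrushmap step, the new rule should also be listed by:

# ceph osd crush rule ls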
3. Create the Cache pool
Create the pool and set its CRUSH rule to replicated_ruleset_cache (ruleset 1). Note that ceph osd pool create requires a placement-group count; the 128 below is only an example, pick a value that fits your cluster.

# ceph osd pool create rbd.cache 128 128
# ceph osd pool set rbd.cache crush_ruleset 1
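To double-check that the new pool actually uses the SSD rule (assuming the pool was created as above), the ruleset can be read back:

# ceph osd pool get rbd.cache crush_ruleset
# ceph osd dump | grep rbd.cache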
4. Add the cache pool to the RBD pool
# ceph osd tier add rbd rbd.cache
# ceph osd tier cache-mode rbd.cache writeback
# ceph osd tier set-overlay rbd rbd.cache
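Once the overlay is set, client I/O against the rbd pool is served through rbd.cache transparently. The relationship can be verified in the OSD map dump, where the base pool should list the cache pool in its tiers/read_tier/write_tier fields and the cache pool should show tier_of and cache_mode writeback:

# ceph osd dump | grep rbd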
5. Set the cache pool parameters
Refer to the official documentation for the meaning of each parameter.
# ceph osd pool set rbd.cache hit_set_type bloom
# ceph osd pool set rbd.cache hit_set_count 1
# ceph osd pool set rbd.cache hit_set_period 1800
# ceph osd pool set rbd.cache target_max_bytes 30000000000
# ceph osd pool set rbd.cache min_read_recency_for_promote 1
# ceph osd pool set rbd.cache min_write_recency_for_promote 1
# ceph osd pool set rbd.cache cache_target_dirty_ratio .4
# ceph osd pool set rbd.cache cache_target_dirty_high_ratio .6
# ceph osd pool set rbd.cache cache_target_full_ratio .8
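As a rough worked example of how these ratios interact with target_max_bytes (30 GB here): the tiering agent starts flushing dirty objects at about 0.4 × 30 GB ≈ 12 GB of dirty data, flushes more aggressively above 0.6 × 30 GB ≈ 18 GB, and evicts objects to keep total cache usage below 0.8 × 30 GB ≈ 24 GB. If the cache ever needs to be drained manually (for example before removing the tier), the pool can be flushed and evicted in one go with:

# rados -p rbd.cache cache-flush-evict-all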
6. Reference documentation
"CACHE POOL" http://docs.ceph.com/docs/master/dev/cache-pool/