Edit the CRUSH Map:
1. Obtain the CRUSH map;
2. Decompile the CRUSH map;
3. Edit at least one device, bucket, or rule;
4. Recompile the CRUSH map;
5. Re-inject the CRUSH map.
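The five steps above can be sketched as a shell round trip. The filenames here (crushmap.bin, crushmap.txt, crushmap-new.bin) are placeholders of my choosing; running this requires a live Ceph cluster:

```shell
# 1. Obtain the compiled (binary) CRUSH map from the cluster
ceph osd getcrushmap -o crushmap.bin
# 2. Decompile it into an editable text file
crushtool -d crushmap.bin -o crushmap.txt
# 3. Edit devices, buckets, or rules in crushmap.txt with any text editor
# 4. Recompile the edited text back into binary form
crushtool -c crushmap.txt -o crushmap-new.bin
# 5. Inject the new compiled map into the cluster
ceph osd setcrushmap -i crushmap-new.bin
```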
Get the CRUSH Map
To get the CRUSH map of the cluster, execute the command:
ceph osd getcrushmap -o {compiled-crushmap-filename}
Ceph will output (-o) the CRUSH map to the file you specify. Because the CRUSH map is in compiled form, it must be decompiled before you can edit it.
Decompile the CRUSH Map
To decompile the CRUSH map, execute the command:
crushtool -d {compiled-crushmap-filename} -o {decompiled-crushmap-filename}
Ceph will decompile (-d) the binary CRUSH map and output (-o) it to the file you specify.
Compile the CRUSH Map
To compile the CRUSH map, execute the command:
crushtool -c {decompiled-crushmap-filename} -o {compiled-crushmap-filename}
Ceph will save the compiled CRUSH map to the file you specify.
Inject the CRUSH Map
To apply the CRUSH map to the cluster, execute the command:
ceph osd setcrushmap -i {compiled-crushmap-filename}
Ceph will read the compiled CRUSH map you specified as input (-i) and install it in the cluster.
CRUSH Map Parameters:
The CRUSH map consists of four main sections.
- Devices: consist of any object storage device, i.e. the storage backing one ceph-osd daemon. Each OSD defined in the Ceph configuration file should have one device entry.
- Bucket types: define the types of buckets used in the CRUSH hierarchy. Buckets aggregate storage locations (such as rows, racks, chassis, hosts, etc.) together with their weights.
- Bucket instances: once you have defined the bucket types, you must declare bucket instances for your hosts, as well as for any other failure domains you plan for.
- Rules: consist of the methods for selecting buckets.
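A decompiled CRUSH map lays out those four sections in order. The following is a minimal sketch, not a map from a real cluster: the host name node1, the IDs, and the weights are all illustrative, and the syntax follows the classic decompiled-map format:

```
# devices
device 0 osd.0
device 1 osd.1

# types
type 0 osd
type 1 host
type 2 rack
type 10 root

# buckets
host node1 {
    id -2
    alg straw
    hash 0    # rjenkins1
    item osd.0 weight 1.000
    item osd.1 weight 1.000
}
root default {
    id -1
    alg straw
    hash 0
    item node1 weight 2.000
}

# rules
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
```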
If you used one of our "Getting Started" guides to set up Ceph, note that you do not need to create a CRUSH map yourself. The Ceph deployment tools generate a default CRUSH map that lists the OSD devices you defined in the Ceph configuration file and declares each OSD host defined in the configuration file's [osd] sections as a bucket. To keep your data safe and available, you should create your own CRUSH map to reflect the failure domains of your cluster.
Note: The generated CRUSH map does not account for larger-granularity failure domains, so when you modify the CRUSH map you should consider domains such as racks, rows, and data centers.
Devices in the CRUSH Map:
To map placement groups to OSDs, the CRUSH map requires a list of OSD devices (named after the OSD daemons defined in the configuration file), so they appear first in the CRUSH map. To declare a device in the CRUSH map, add a new line under the device list: enter the keyword device, then a unique numeric ID, and then the name of the corresponding ceph-osd daemon instance.
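For example, a cluster whose configuration file defines three OSD daemons would declare them in the device list as follows (the IDs and the osd.N names follow Ceph's standard naming convention):

```
# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
```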