1. Stop the Ceph OSD process:
   service ceph stop osd
2. Let the Ceph cluster rebalance the data.
3. Delete the OSD node once all PGs are active+clean.
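The precondition in step 3 can be checked mechanically. Below is a minimal sketch of a wait loop; all_clean is a hypothetical helper, and the "... pgs: N active+clean; ..." summary format is an assumption based on classic `ceph pg stat` output, not something shown in this article.

```shell
# all_clean: succeed only when a "ceph pg stat"-style summary line
# reports every PG as active+clean (hypothetical helper; the
# "... pgs: N active+clean; ..." line format is assumed).
all_clean() {
  # If any PG is in another state, the state list reads e.g.
  # "180 active+clean, 12 active+remapped;" and the match below fails.
  echo "$1" | grep -q 'pgs: [0-9][0-9]* active+clean;'
}

# On a real cluster, poll until rebalancing finishes:
# until all_clean "$(ceph pg stat)"; do sleep 10; done
```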
Ceph cluster status before deletion:

# ceph osd tree
# id    weight  type name       up/down reweight
-1      4       root default
-2      1               host os-node3
0       1                       osd.0   down    0
-3      1               host os-node4
1       1                       osd.1   up      1
-4      1               host os-node5
2       1                       osd.2   up      1
-5      1               host os-node6
3       1                       osd.3   up      1
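Given output in the layout above, the failed disk can be picked out automatically. A small sketch (down_osds is a hypothetical helper; the column positions are taken from the tree shown above):

```shell
# down_osds: print the names of OSDs whose up/down column reads "down",
# reading old-style "ceph osd tree" output (layout as above) on stdin.
down_osds() {
  awk '$3 ~ /^osd\./ && $4 == "down" { print $3 }'
}

# Usage on a live cluster: ceph osd tree | down_osds
# For the tree above this would print osd.0.
```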
1) Remove the OSD disk from the cluster:

# ceph osd rm 0
removed osd.0

# ceph osd tree
# id    weight  type name       up/down reweight
-1      4       root default
-2      1               host os-node3
0       1                       osd.0   DNE
-3      1               host os-node4
1       1                       osd.1   up      1
-4      1               host os-node5
2       1                       osd.2   up      1
-5      1               host os-node6
3       1                       osd.3   up      1
2) Remove the OSD's entry from the cluster's CRUSH map:

# ceph osd crush rm osd.0
removed item id 0 name 'osd.0' from crush map

# ceph osd tree
# id    weight  type name       up/down reweight
-1      3       root default
-2      0               host os-node3
-3      1               host os-node4
1       1                       osd.1   up      1
-4      1               host os-node5
2       1                       osd.2   up      1
-5      1               host os-node6
3       1                       osd.3   up      1
3) Remove the OSD node from the cluster:

# ceph osd crush rm os-node3
removed item id -2 name 'os-node3' from crush map

# ceph osd tree
# id    weight  type name       up/down reweight
-1      3       root default
-3      1               host os-node4
1       1                       osd.1   up      1
-4      1               host os-node5
2       1                       osd.2   up      1
-5      1               host os-node6
3       1                       osd.3   up      1
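The three removal commands above can be wrapped into a single sketch. remove_osd is a hypothetical helper, and the optional runner argument (not part of the original walkthrough) allows a dry run that only prints the ceph commands:

```shell
# remove_osd: sketch of the removal sequence from this walkthrough.
# $1 = OSD id, $2 = host bucket to drop once it is empty,
# $3 = command runner ("ceph" on a real cluster; "echo ceph" to dry-run).
remove_osd() {
  id=$1; host=$2; run=${3:-ceph}
  # $run is intentionally unquoted so "echo ceph" splits into two words.
  $run osd rm "$id"            # remove the OSD from the cluster
  $run osd crush rm "osd.$id"  # remove its CRUSH map entry
  $run osd crush rm "$host"    # remove the now-empty host bucket
}

# Dry run, printing the commands instead of executing them:
remove_osd 0 os-node3 "echo ceph"
```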
This article is from the "zhanguo1110" blog; please retain the source: http://zhanguo1110.blog.51cto.com/5750817/1535781