This section covers:
Adding monitor nodes
Adding OSD nodes
Removing an OSD node
1: Adding a monitor node
Here we continue with the environment from before; adding a monitor node is very simple.
First get the new monitor nodes ready: set their hostnames and hosts files, and update the hosts file on the deploy node as well.
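Concretely, that prep work means setting each new node's hostname and making the names resolvable from every node. For example, entries like the following would be added to /etc/hosts on every machine, deploy node included (the IPs and names here are hypothetical placeholders; the article does not state the new monitors' addresses):

```
10.0.0.28 mon2
10.0.0.29 mon3
```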
On the deploy node:
cd first-ceph/
ceph-deploy new mon2 mon3    # this only declares which nodes will become monitors
or edit the configuration file directly
vim ceph.conf
...
mon_host = 10.0.0.25 10.0.0.26 10.0.0.27
mon_initial_members = master, osd1, osd2
public_network = 10.0.0.0/24    # you must declare the public network address here, otherwise the next step will fail with an error
...
ceph-deploy --overwrite-conf mon create mon2
ceph-deploy --overwrite-conf mon create mon3
Check the result:
[root@master first-ceph]# ceph mon dump
dumped monmap epoch 3
epoch 3
fsid 31f7ec02-3d25-4d62-a42d-ee3c3dd242db
last_changed 2015-09-07 08:42:23.514826
created 0.000000
0: 10.0.0.25:6789/0 mon.master
1: 10.0.0.26:6789/0 mon.osd1
2: 10.0.0.27:6789/0 mon.osd2
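Beyond ceph mon dump, you can also confirm that the new monitors actually joined the quorum. A minimal check (a sketch, assuming the standard ceph CLI with an admin keyring on the node):

```shell
# List the monitors currently in quorum; every monitor you added
# should appear in the "quorum_names" field of the output.
ceph quorum_status --format json-pretty

# The monmap line of the cluster status should likewise show
# the increased monitor count.
ceph -s
```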
2: Adding OSD nodes
Adding an OSD node is even easier. As before, get the hosts files and hostnames set up first.
Here we add one disk to each of the two new nodes; do not partition or format it.
ceph-deploy osd prepare osd3:/dev/vdb osd4:/dev/vdb
ceph-deploy osd activate osd3:/dev/vdb1 osd4:/dev/vdb1
Copy the configuration file and key files:
ceph-deploy admin osd3 osd4
Run ceph -s to check the result.
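Besides ceph -s, ceph osd tree gives a per-OSD view of the cluster; the new OSDs should appear under their hosts, marked "up". A sketch (run on any node that has the admin keyring):

```shell
# Overall health; the osdmap line should now count the two new OSDs
ceph -s

# Per-OSD view: osd3 and osd4 should each host one OSD in state "up"
ceph osd tree
```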
3: Removing an OSD node
Removing an OSD daemon takes 4 steps:
(1. Mark the OSD to be removed as out of the cluster:
ceph osd out {osd-num}
(2. Watch the cluster migrate the data on that OSD to the other OSDs:
ceph -w
You will see the PG states go from "active+clean" to "active, some degraded objects" and eventually back to "active+clean".
Once everything is back to active+clean, the OSD's data has been redistributed and we can move on to step 3.
(3. Stop the OSD daemon/process being removed:
sudo /etc/init.d/ceph stop osd.{osd-num}
or
ps -ef | grep ceph    # find the process, then kill it
(4. Remove the OSD daemon's information from the cluster: the crush map entry, its key, data, and journal, and then update the configuration on the remaining OSD nodes.
Run on any ceph osd/mon node:
ceph osd crush remove {name}
ceph auth del osd.{osd-num}
Remove the OSD from the cluster:
ceph osd rm {osd-num}
# for example
ceph osd rm 1
Modify the configuration on the remaining nodes, e.g. vi {cluster_name}.conf:
vim ceph.conf
Remove the section:
[osd.1]
host = {hostname}
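Put together, the whole removal (steps 1 through 4) can be sketched as a short script. The OSD number is a placeholder, and the init-script path matches the sysvinit-style setup used in this article:

```shell
#!/bin/sh
# Sketch of removing one OSD; osd.1 is a placeholder example.
OSD_NUM=1

# (1) Mark the OSD out so data starts migrating off it
ceph osd out ${OSD_NUM}

# (2) Wait here until "ceph -w" shows everything back at active+clean

# (3) Stop the daemon, on the host that runs osd.${OSD_NUM}
sudo /etc/init.d/ceph stop osd.${OSD_NUM}

# (4) Remove it from the crush map, delete its key, remove the OSD entry
ceph osd crush remove osd.${OSD_NUM}
ceph auth del osd.${OSD_NUM}
ceph osd rm ${OSD_NUM}
# Finally, delete the [osd.1] section from ceph.conf on the remaining nodes.
```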
This article is from the "Zhengerniu" blog; please keep this source: http://liufu1103.blog.51cto.com/9120722/1693820