1. Current status
2. Add another MON (mon.node2): SSH to node2 (172.10.2.172)
vim /etc/ceph/ceph.conf    # add the mon.node2-related configuration
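The mon.node2 stanza added in this step would mirror the existing mon.node1 stanza, using node2's address; it matches the [mon.node2] section in the attached configuration file at the end of this article:

```
[mon.node2]
host = node2
mon addr = 172.10.2.172:6789
```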
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
monmaptool --create --add node2 172.10.2.172 --fsid e3de7f45-c883-4e2c-a26b-210001a7f3c2 /tmp/monmap
mkdir -p /var/lib/ceph/mon/ceph-node2
ceph-mon --mkfs -i node2 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
touch /var/lib/ceph/mon/ceph-node2/done
/etc/init.d/ceph start
3. Add an MDS on 172.10.2.172 (node2)
vim /etc/ceph/ceph.conf    # add the mds.node2-related configuration
/etc/init.d/ceph restart
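The mds.node2-related configuration referred to here corresponds to the [mds.node2] and [mds] sections of the attached configuration file at the end of this article; max mds = 2 is what allows a second MDS to participate:

```
[mds.node2]
host = node2

[mds]
max mds = 2
```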
Viewing the cluster status now shows more than one MDS, with the extra one in standby. If one MDS is shut down (/etc/init.d/ceph stop mds), the standby will take over.
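The status can be checked with the standard ceph CLI; a sketch of the usual commands (exact output format varies by release, and these are not shown in the original steps):

```
# overall cluster status, including the MON quorum and the MDS map
ceph -s
# summary of the monitor quorum
ceph mon stat
# MDS map: shows which daemon is active and which are standby
ceph mds stat
```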
Attached: the configuration file /etc/ceph/ceph.conf:

[global]
fsid = 5e3a1bf3-9777-4311-a308-67a8c4b8fece
mon initial members = node1
mon host = 172.10.2.171
public network = 172.10.2.0/24
auth cluster required = none
auth service required = none
auth client required = none
osd journal size = 1024
filestore xattr use omap = true
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 128
osd pool default pgp num = 128
osd crush chooseleaf type = 1
[mon.node1]
host = node1
mon addr = 172.10.2.171:6789

[mon.node2]
host = node2
mon addr = 172.10.2.172:6789

[osd.0]
host = node1
addr = 172.10.2.171:6789
osd data = /var/lib/ceph/osd/ceph-0

[osd.1]
host = node1
addr = 172.10.2.171:6789
osd data = /var/lib/ceph/osd/ceph-1

[osd.2]
host = node2
addr = 172.10.2.172:6789
osd data = /var/lib/ceph/osd/ceph-2

[osd.3]
host = node2
addr = 172.10.2.172:6789
osd data = /var/lib/ceph/osd/ceph-3

[mds.node1]
host = node1

[mds.node2]
host = node2

[mds]
max mds = 2
Ceph: multiple MONs and multiple MDSes