yum install -y wget
wget https://pypi.python.org/packages/source/p/pip/pip-1.5.6.tar.gz#md5=01026f87978932060cc86c1dc527903e
tar zxvf pip-1.5.6.tar.gz
cd pip-1.5.6
python setup.py build
python setup.py install
ssh-keygen

# Set the hostname on each machine, then reboot:
echo "ceph-admin" > /etc/hostname
# echo "ceph-node1" > /etc/hostname
# echo "ceph-node2" > /etc/hostname
# echo "ceph-node3" > /etc/hostname
reboot

cat > /etc/hosts << EOF
192.168.55.185 ceph-admin
192.168.55.186 ceph-node1
192.168.55.187 ceph-node2
192.168.55.188 ceph-node3
EOF

ssh-copy-id root@ceph-node1 && ssh-copy-id root@ceph-node2 && ssh-copy-id root@ceph-node3

ssh root@ceph-node1 systemctl stop firewalld && setenforce 0
ssh root@ceph-node2 systemctl stop firewalld && setenforce 0
ssh root@ceph-node3 systemctl stop firewalld && setenforce 0

cat > /root/.ssh/config << EOF
Host ceph-node1
    Hostname ceph-node1
    User root
Host ceph-node2
    Hostname ceph-node2
    User root
Host ceph-node3
    Hostname ceph-node3
    User root
EOF

mkdir ~/my-cluster
cd ~/my-cluster
pip install ceph-deploy
ceph-deploy new ceph-node1 ceph-node2 ceph-node3
ceph-deploy install ceph-node1 ceph-node2 ceph-node3
ceph-deploy mon create-initial
ceph-deploy mon create ceph-node1 ceph-node2 ceph-node3
ceph-deploy gatherkeys ceph-node1 ceph-node2 ceph-node3
# If the monitors were already created once, rerun with --overwrite-conf:
# ceph-deploy --overwrite-conf mon create ceph-node1 ceph-node2 ceph-node3

# The OSDs can use a dedicated disk instead of a directory:
# mkfs.xfs /dev/sdb
# mount /dev/sdb /opt/ceph/
ssh root@ceph-node1 mkdir /opt/ceph
ssh root@ceph-node2 mkdir /opt/ceph
ssh root@ceph-node3 mkdir /opt/ceph
ceph-deploy osd prepare ceph-node1:/opt/ceph ceph-node2:/opt/ceph ceph-node3:/opt/ceph
ceph-deploy osd activate ceph-node1:/opt/ceph ceph-node2:/opt/ceph ceph-node3:/opt/ceph

# Add a metadata (MDS) node
ceph-deploy mds create ceph-node1

# Distribute the key files
# ceph-deploy admin ceph-admin ceph-node1 ceph-node2 ceph-node3

# Check the cluster
ceph health
ceph -s
ceph -w
ceph quorum_status --format json-pretty

# Mount from a client
yum install -y ceph-fuse
mkdir /mnt/ceph
[root@ceph-admin ~]# ceph osd pool create metadata <pg_num> <pgp_num>
[root@ceph-admin ~]# ceph osd pool create data <pg_num> <pgp_num>
[root@ceph-admin ~]# ceph fs new filesystemnew metadata data
[root@ceph-admin ceph]# ceph fs ls
name: filesystemnew, metadata pool: metadata, data pools: [data]
[root@ceph-admin ceph]# ceph mds stat
e5: 1/1/1 up {0=ceph-node1=up:active}
ceph-fuse -m 192.168.55.186:6789 /mnt/ceph
### end ###

# Add an OSD node
ssh ceph-node1
sudo mkdir /var/local/osd2
exit
[root@ceph-admin my-cluster]# ceph-deploy osd prepare ceph-node1:/var/local/osd2
[root@ceph-admin my-cluster]# ceph-deploy osd activate ceph-node1:/var/local/osd2
[root@ceph-admin my-cluster]# ceph -w
[root@ceph-admin my-cluster]# ceph -s
    cluster 8f7a79b6-ab8d-40c7-abfa-6e6e23d9a26d
     health HEALTH_OK
     monmap e1: 1 mons at {ceph-node1=192.168.55.186:6789/0}, election epoch 2, quorum 0 ceph-node1
     osdmap e13: 3 osds: 3 up, 3 in
      pgmap v38: - pgs, 1 pools, 0 bytes data, 0 objects
            18600 MB used, 35153 MB / 53754 MB avail
                 - active+clean

# Add monitor nodes
[root@ceph-admin my-cluster]# ceph-deploy new ceph-node2 ceph-node3
[root@ceph-admin my-cluster]# ceph-deploy mon create-initial
[root@ceph-admin my-cluster]# ceph-deploy --overwrite-conf mon create ceph-node2 ceph-node3
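The per-node bookkeeping above (one /etc/hosts entry and one /root/.ssh/config stanza per machine) can be generated from a single node list so the two never drift apart. A minimal sketch, assuming this four-node layout; the NODES list and the gen_hosts/gen_ssh_config names are illustrative helpers, not part of ceph-deploy:

```shell
#!/bin/sh
# name:ip pairs for the cluster described in this tutorial (assumed layout)
NODES="ceph-admin:192.168.55.185 ceph-node1:192.168.55.186 ceph-node2:192.168.55.187 ceph-node3:192.168.55.188"

# Emit /etc/hosts lines: "ip name"
gen_hosts() {
    for n in $NODES; do
        echo "${n#*:} ${n%%:*}"
    done
}

# Emit /root/.ssh/config stanzas for the worker nodes only
gen_ssh_config() {
    for n in $NODES; do
        name=${n%%:*}
        case $name in ceph-admin) continue ;; esac  # admin node needs no stanza
        printf 'Host %s\n    Hostname %s\n    User root\n' "$name" "$name"
    done
}

gen_hosts
gen_ssh_config
```

Redirect the output into the target files (e.g. `gen_hosts >> /etc/hosts`) instead of maintaining the lists by hand.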
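`ceph osd pool create` needs a pg_num (and pgp_num) argument; a common rule of thumb is roughly 100 placement groups per OSD divided by the replica count, rounded up to the next power of two. A hedged sketch of that arithmetic; suggest_pg_num is a hypothetical helper, not a ceph command:

```shell
#!/bin/sh
# Suggest a pg_num from OSD count and replica count:
# round (osds * 100 / replicas) up to the next power of two.
suggest_pg_num() {
    osds=$1
    replicas=$2
    target=$(( osds * 100 / replicas ))
    pg=1
    while [ "$pg" -lt "$target" ]; do
        pg=$(( pg * 2 ))
    done
    echo "$pg"
}

# 3 OSDs with 3-way replication, as in this cluster
suggest_pg_num 3 3
```

For the three-OSD cluster built here with the default 3-way replication, this yields 128, which could be passed as both pg_num and pgp_num.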
Installation of the Ceph file system