Note: all of the operations below are performed on the admin node
1. Prepare three virtual machines: one as the admin node and the other two as OSD nodes. Use the hostname command to set their hostnames to admin, osd0, and osd1, then edit the /etc/hosts file as shown below
127.0.0.1     localhost
10.10.102.85  admin
10.10.102.86  osd0
10.10.102.87  osd1
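To confirm that the entries resolve, a quick check from the admin node can look like the following (the hostnames and IPs are the ones assumed above):
ping -c 1 osd0
ping -c 1 osd1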
2. Configure password-free access
ssh-keygen                                   // press Enter at each prompt to generate a key pair
ssh-copy-id -i /root/.ssh/id_rsa.pub osd0    // copy the local public key to osd0 for password-free access
ssh-copy-id -i /root/.ssh/id_rsa.pub osd1
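If the keys were copied correctly, commands like the following should print the remote hostname without asking for a password (just a verification step, not required by the installation):
ssh osd0 hostname    // should print osd0 with no password prompt
ssh osd1 hostname    // should print osd1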
3. Install ceph-deploy
apt-get install ceph-deploy
4. Create the cluster directory and enter it
mkdir my-cluster
cd my-cluster
5. Create the cluster; three files will appear in the current directory: ceph.conf, ceph.log, and ceph.mon.keyring
ceph-deploy new admin
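Optionally, since this cluster starts with only two OSDs while Ceph defaults to three replicas, the degraded-PG warning seen in step 13 can also be avoided by lowering the default pool size in the freshly generated ceph.conf; a minimal sketch (the steps below instead fix the warning by adding a third OSD):
// append to the [global] section of ceph.conf in the cluster directory
osd pool default size = 2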
6. Install Ceph
ceph-deploy install admin osd0 osd1
However, this command is often very slow and unreliable, so I usually run apt-get install ceph on each node instead, as sketched below.
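A rough sketch of that workaround, run on each of the three nodes (package name as available in the Ubuntu 14.04 repositories):
apt-get install -y ceph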
7. Add a Ceph cluster monitor; it can be created on the admin node
ceph-deploy mon create admin
8. Gather the keys; several new files will appear in the directory, including ceph.bootstrap-mds.keyring, ceph.bootstrap-osd.keyring, and ceph.client.admin.keyring
ceph-deploy gatherkeys admin
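A quick way to confirm the keys were collected is to list the keyring files in the cluster directory:
ls *.keyring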
9. Add two OSDs. For a quick installation, use a directory rather than a whole disk for each Ceph OSD daemon
ssh osd0
sudo mkdir /tmp/osd0
exit
ssh osd1
sudo mkdir /tmp/osd1
exit
10. Prepare the OSDs
ceph-deploy osd prepare osd0:/tmp/osd0 osd1:/tmp/osd1
11. Activate the OSDs
ceph-deploy osd activate osd0:/tmp/osd0 osd1:/tmp/osd1
12. Copy the configuration file and admin key to the admin node and the Ceph nodes; after this, the ceph command-line interface no longer needs the cluster monitor address or ceph.client.admin.keyring specified on every invocation
ceph-deploy admin osd0 osd1
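If the ceph CLI later complains about permissions, a commonly suggested fix is to make the copied admin keyring readable (run on each node that will use the ceph command):
sudo chmod +r /etc/ceph/ceph.client.admin.keyring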
13. Check the health status of the cluster
ceph health
However, it does not return a healthy state but rather HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean, so an additional OSD is added in the next step.
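For more detail on what is behind the warning, the overall status and OSD layout can be inspected with the standard ceph subcommands:
ceph -s          // full cluster status, including PG states
ceph osd tree    // shows which OSDs are up and in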
14. Expand the cluster by adding an OSD on the admin node; after that, ceph health returns HEALTH_OK.
mkdir /tmp/osd2
ceph-deploy osd prepare admin:/tmp/osd2
ceph-deploy osd activate admin:/tmp/osd2
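Once the third OSD is active, the health check can be repeated and should report the cluster as healthy:
ceph health      // should now return HEALTH_OK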