1. Modify /etc/hosts so that the hostname maps to the machine's real IP address (if you use the loopback address 127.0.0.1, the hostname apparently cannot be resolved). Note: the hostname used below is monster; replace it with your own hostname.
10.10.105.78 monster
127.0.0.1 localhost
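Whether the entry took effect can be sanity-checked before continuing. A minimal sketch, reusing the 10.10.105.78/monster pair from above; `check_host` is a hypothetical helper, and the throwaway file stands in for the real /etc/hosts:

```shell
# check_host FILE NAME: succeed only if NAME maps to a non-loopback IPv4 address in FILE
check_host() {
  awk -v name="$2" '$2 == name && $1 !~ /^127\./ { found = 1 } END { exit !found }' "$1"
}

# Demo against a throwaway hosts file (entries taken from the article).
tmp=$(mktemp)
printf '10.10.105.78 monster\n127.0.0.1 localhost\n' > "$tmp"
check_host "$tmp" monster   && echo "monster ok"
check_host "$tmp" localhost || echo "localhost is loopback"
rm -f "$tmp"
```

On the real system the same check would be run against /etc/hosts itself.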
2. Create a ceph directory and enter it
3. Prepare two block devices (hard disks or LVM volumes); here we use LVM
dd if=/dev/zero of=ceph-volumes.img bs=1M count=8192 oflag=direct
sgdisk -g --clear ceph-volumes.img
sudo vgcreate ceph-volumes $(sudo losetup --show -f ceph-volumes.img)
sudo lvcreate -L2G -nceph0 ceph-volumes
sudo lvcreate -L2G -nceph1 ceph-volumes
sudo mkfs.xfs -f /dev/ceph-volumes/ceph0
sudo mkfs.xfs -f /dev/ceph-volumes/ceph1
mkdir -p /srv/ceph/{osd0,osd1,mon0,mds0}
sudo mount /dev/ceph-volumes/ceph0 /srv/ceph/osd0
sudo mount /dev/ceph-volumes/ceph1 /srv/ceph/osd1
With the commands above we created two logical volumes, ceph0 and ceph1, and mounted them at /srv/ceph/osd0 and /srv/ceph/osd1 respectively.
4. Install ceph-deploy
sudo apt-get install ceph-deploy
5. Create a working directory, enter it, and create a cluster
mkdir ceph-cluster
cd ceph-cluster
ceph-deploy new monster   # creates a fresh cluster and writes the cluster config, keyring, etc.
Because we are deploying on a single node, we need to modify the configuration file:
echo "osd crush chooseleaf type = 0" >> ceph.conf
echo "osd pool default size = 1" >> ceph.conf
echo "osd journal size = " >> ceph.conf
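For context: the first line tells CRUSH to spread replicas across OSDs rather than hosts (chooseleaf type 0), and the second lowers the replica count to 1; both are needed when everything runs on one machine. A small sketch that applies the two known overrides to a throwaway file rather than the real ceph.conf:

```shell
# Append the single-node overrides to a scratch copy instead of the live ceph.conf.
conf=$(mktemp)
echo "osd crush chooseleaf type = 0" >> "$conf"   # replicas may land on the same host
echo "osd pool default size = 1"     >> "$conf"   # one replica is enough for a test cluster
grep -c '^osd' "$conf"                            # -> 2
rm -f "$conf"
```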
6. Install the Ceph base packages (ceph, ceph-common, ceph-fs-common, ceph-mds)
ceph-deploy install monster
However, I ran into problems with that installation method, so simply running apt-get install ceph works as well.
7. Create a cluster monitor
ceph-deploy mon create monster
8. Collect the key from the remote node to the current folder
ceph-deploy gatherkeys monster
9. Add OSDs in the directories where we mounted the virtual disks
ceph-deploy osd prepare monster:/srv/ceph/osd0
ceph-deploy osd prepare monster:/srv/ceph/osd1
10. Activate the OSDs
sudo ceph-deploy osd activate monster:/srv/ceph/osd0
sudo ceph-deploy osd activate monster:/srv/ceph/osd1
[ceph_deploy][ERROR] RuntimeError: Failed to execute command: ceph-disk -v activate --mark-init upstart --mount /srv/ceph/osd0
If you encounter an error like the one above when activating, fix the ownership with sudo chown ceph:ceph /srv/ceph/osd0 (the OSD directory prepared earlier seems to be fixed by this method as well).
11. Copy the admin key to other nodes
ceph-deploy admin monster
12. Verification
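The article stops short of showing the verification commands; typically this step is ceph health or ceph -s, which should report HEALTH_OK once the monitor and both OSDs are up. A sketch of interpreting the health string (`health_ok` is a hypothetical helper, and the HEALTH_WARN line is an invented sample, not output captured from this cluster):

```shell
# health_ok STATUS: succeed only if the reported status is exactly HEALTH_OK
health_ok() { [ "$1" = "HEALTH_OK" ]; }

health_ok "HEALTH_OK"                       && echo "cluster healthy"
health_ok "HEALTH_WARN clock skew detected" || echo "needs attention"
```

On the real node the status would come from the cluster itself, e.g. status=$(ceph health).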
Ubuntu 14.04 Single-Node Ceph Installation