Based on the Ceph storage cluster environment from the earlier quick-start configuration, we can now perform some related object operations:
1. Set the OSD pool min_size
First, list the pools with the rados command:
# rados lspools
data
metadata
rbd
The default min_size of a pool is 2; since this example runs with a single OSD, it needs to be set to 1:
ceph osd pool get {pool-name} {key}
ceph osd pool set {pool-name} {key} {value}
# ceph osd pool set data min_size 1
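If you want to confirm the change took effect, you can read the value back with the get form shown above; this is just an optional sanity check:
# ceph osd pool get data min_size    (should now report min_size: 1)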
2. Test uploading an object
Prepare a test.txt file and upload it with rados:
rados put {object-name} {file-path} --pool=data
# rados put test.txt test.txt --pool=data
View the results of the upload
# rados -p data ls    (the names of the objects in the pool are listed here)
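As an optional extra check, the object can be read back out with rados get and compared to the original file; /tmp/test.txt.out here is just an arbitrary output path chosen for illustration:
# rados get test.txt /tmp/test.txt.out --pool=data
# diff test.txt /tmp/test.txt.out    (no output means the contents match)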
To view the location of an object:
ceph osd map {pool-name} {object-name}
# ceph osd map data test.txt
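The output of this command shows which placement group and OSD the object maps to. The line below is only illustrative of the general shape (the epoch and OSD list will differ on your cluster); the PG 0.8 and hash 8b0b6108 are the values referred to later in this article:
osdmap e13 pool 'data' (0) object 'test.txt' -> pg 0.8b0b6108 (0.8) -> up [0] acting [0]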
Based on this mapping, we can look at how the content is stored on the OSD:
# cd /srv/ceph/osd0
The OSD's related files are here. The current directory stores the contents of the data objects; inside it you can see a number of *_head directories as well as an omap directory (metadata stored with LevelDB).
According to the mapping, the object holding the file test.txt should be in the directory for PG 0.8, i.e. 0.8_head, and the file test.txt__head_8b0b6108__0 there is the object we just stored.
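As an optional check, you can list that PG directory to locate the object file; the exact filename depends on your object's hash:
# ls /srv/ceph/osd0/current/0.8_head/    (the test.txt__head_... file described above should appear here)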
The rados command can also be used for benchmarking, fetching objects, deleting objects, and so on; those are not covered here.
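For reference, a couple of those operations might look like the following; the 10-second write benchmark is just an example choice of parameters:
# rados bench -p data 10 write --no-cleanup    (run a 10-second write benchmark against the data pool)
# rados rm test.txt --pool=data    (delete the object uploaded above)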
3. Expand the cluster by adding a new OSD
Add another ceph OSD process to the current node:
# sudo mkdir -p /srv/ceph/osd1
Go back to the cluster's working directory:
# cd /root/ceph-cluster
# ceph-deploy osd prepare apusapp:/srv/ceph/osd1
# ceph-deploy osd activate apusapp:/srv/ceph/osd1
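Once the new OSD is activated, one optional way to confirm it has joined the cluster is to check the OSD tree:
# ceph osd tree    (the new osd.1 should be listed as up and in)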
Using the command
# ceph -w
you can watch the cluster migrating data internally.
Looking into osd1's current directory, you can see that the 0.8_temp and 0.8_head directories for the object we just uploaded (PG 0.8) have appeared, and the object content from osd0 has been copied over as well.
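After the migration finishes, the overall cluster state can be checked again; this is just the standard status command, nothing specific to this setup:
# ceph -s    (the cluster should report HEALTH_OK once data migration completes)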
Getting Started with Ceph Configuration on Ubuntu (Part II)