Ceph's data management begins with a write from the Ceph client. Because Ceph uses multiple replicas and a strong-consistency policy to guarantee data safety and integrity, the data of a write request is first written to the primary OSD; the primary OSD then copies it to the secondary and tertiary OSDs and waits for their completion notifications before sending the final acknowledgement back to the client. This article walks through a concrete example of how to find where a piece of data is stored in Ceph.
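Because the replica count is a per-pool setting, you can check how many copies a pool keeps, and how many replicas must be available before the pool accepts I/O, with the two queries below (a small aside, using the helloceph pool that is created in the next step):
$ ceph osd pool get helloceph size        # number of replicas kept for each object
$ ceph osd pool get helloceph min_size    # minimum replicas that must be up to serve I/O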
1. First, create a test file containing some data, create a Ceph pool, and set the pool's replica count to 3
$ echo "Hello Ceph, I ' m learning the data management part." >/tmp/testfile
$ cat/tmp/testfile
Hello ceph, I ' m Learning the data management part.
$ ceph OSD Pool Create helloceph 192 192
pool ' Helloceph ' created
$ ceph OSD Pool Set Helloceph size 3
set poo L 3 Size to 3
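To verify both settings at once, ceph osd dump lists every pool together with its replica size and PG count; this extra check is not part of the original walkthrough:
$ ceph osd dump | grep helloceph
# the pool line should contain something like "replicated size 3 ... pg_num 192 pgp_num 192"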
2. Write the file to the pool you created
$ rados -p helloceph put object1 /tmp/testfile
$ rados -p helloceph ls
object1
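As a sanity check, the object can be read back from the pool with rados before we go looking for it on disk (the output path /tmp/testfile.out is just an arbitrary name for this example):
$ rados -p helloceph stat object1              # prints the object's size and mtime
$ rados -p helloceph get object1 /tmp/testfile.out
$ cat /tmp/testfile.out                        # same "Hello Ceph ..." content as /tmp/testfile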
3. View the PG map of object1
$ ceph osd map helloceph object1
osdmap e8 pool 'helloceph' (3) object 'object1' -> pg 3.bac5debc (3.bc) -> up ([0,1,2], p0) acting ([0,1,2], p0)
where:
osdmap e8               OSD map version number (epoch 8)
pool 'helloceph' (3)    pool name and pool ID
object 'object1'        name of the object
pg 3.bac5debc (3.bc)    PG number, i.e. 3.bc
up ([0,1,2], p0)        the up set of OSDs; since the pool keeps 3 replicas, each PG is stored on 3 OSDs
acting ([0,1,2], p0)    the acting set, i.e. osd.0 (primary), osd.1 (secondary) and osd.2 (tertiary)
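The same placement can also be confirmed from the PG side: ceph pg map prints the up and acting sets for a given PG id, and ceph pg query shows the detailed PG state (both use the PG id 3.bc from the output above):
$ ceph pg map 3.bc        # should report the same up [0,1,2] and acting [0,1,2] sets
$ ceph pg 3.bc query      # detailed PG state, including the acting primary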
4. View the information for the three OSDs; the host entries show which machine each OSD runs on
[root@admin-node osd]# ceph osd tree
ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.05589 root default
-2 0.02190     host node2
 0 0.01700         osd.0       up  1.00000          1.00000
 3 0.00490         osd.3       up  1.00000          1.00000
-3 0.01700     host node3
 1 0.01700         osd.1       up  1.00000          1.00000
-4 0.01700     host node1
 2 0.01700         osd.2       up  1.00000          1.00000
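When you only care about a single OSD, ceph osd find returns its address and CRUSH location (including the host) as JSON, which is quicker than reading the whole tree:
$ ceph osd find 0         # should report node2 as the host for osd.0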
5. Find the testfile data on an OSD (here we take osd.0, which the tree above places on node2)
$ ssh node2               # the machine hosting osd.0
$ cd /var/lib/ceph/osd/ceph-0/current
$ cd 3.bc_head
$ cat object1__head_BAC5DEBC__3
Hello Ceph, I'm learning the data management part.
Similarly, we can find object1 on osd.1 (node3) and osd.2 (node1).
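Instead of walking the PG directory by hand on each node, a single find over the local OSD data directories will show the object's on-disk path. This assumes the FileStore backend used in this example, where objects are stored as plain files; with BlueStore the object is not visible in the filesystem and you would inspect it with ceph-objectstore-tool on a stopped OSD instead.
# run on each storage node (node1, node2, node3)
$ find /var/lib/ceph/osd/ceph-*/current -name 'object1__head_*' 2>/dev/null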