Ceph block device: installation, creation, mapping, mounting, details, resizing, unmounting, unmapping, and deletion
Make sure your Ceph storage cluster is in an active + clean state before working with Ceph block devices.
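A quick way to verify this (a minimal check, assuming the admin keyring is available on the node where you run it) is:
ceph health
ceph -s
Both should report HEALTH_OK, with all placement groups active+clean.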
Add the client to /etc/hosts:
vim /etc/hosts
172.16.66.144 ceph-client
Perform the following quick-start steps on the admin node.
1. On the admin node, install Ceph on your ceph-client node with ceph-deploy:
ceph-deploy install ceph-client
2. On the admin node, use ceph-deploy to copy the Ceph configuration file and ceph.client.admin.keyring to your ceph-client node:
ceph-deploy admin ceph-client
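ceph-deploy admin places ceph.conf and the admin keyring under /etc/ceph on the client. If the rbd commands below fail with a permission error on the keyring, a common extra step (adjust the path if your layout differs) is to make it readable on the ceph-client node:
sudo chmod +r /etc/ceph/ceph.client.admin.keyring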
1. On the ceph-client node, create a 100 GB block device image named foo:
rbd create foo --size 102400
2. On the ceph-client node, before mapping the image, load the Ceph RBD kernel module and have the kernel rescan for new devices:
modprobe rbd
partprobe
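You can confirm that the module loaded (a simple sanity check, not part of the original walkthrough) with:
lsmod | grep rbd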
3. On the ceph-client node, map the image to a block device:
rbd map foo --pool rbd --name client.admin
"Show mapped block devices"
root@ceph-client:~# rbd showmapped
id pool image snap device
0  rbd  foo   -    /dev/rbd0
4. On the ceph-client node, use the block device to create a file system:
mkfs.ext4 -m0 /dev/rbd/rbd/foo
5. Mount the file system on your ceph-client node
mkdir /mnt/ceph-block-device
mount /dev/rbd/rbd/foo /mnt/ceph-block-device
cd /mnt/ceph-block-device
root@ceph-client:/mnt/ceph-block-device# ll
total 24
drwxr-xr-x 3 root root  4096 Nov  4 09:56 ./
drwxr-xr-x 3 root root  4096 Nov  4 09:56 ../
drwx------ 2 root root 16384 Nov  4 09:56 lost+found/
root@ceph-client:/mnt/ceph-block-device# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/rbd0  ext4  99G  60M   99G   1% /mnt/ceph-block-device
Test
root@ceph-client:/mnt/ceph-block-device# dd if=/dev/zero bs=100M count=5 of=haha.tar.gz
5+0 records in
5+0 records out
524288000 bytes (524 MB) copied, 47.5169 s, 11.0 MB/s
root@ceph-client:/mnt/ceph-block-device# ll -h
total 501M
-rw-r--r-- 1 root root 500M Nov  4 10:22 haha.tar.gz
drwx------ 2 root root  16K Nov  4 09:56 lost+found/
"Unmap block Device", Unmount mount point first
[Email protected]:~# umount/dev/rbd/rbd/foorbd UNMAP/DEV/RBD/{POOLNAME}/{IMAGENAME}RBD Unmap/dev/rbd/rbd/foo[email protected]:~# Mount/dev/rbd/rbd/foo/mnt/ceph-block-devicemount: Special Equipment/dev/rbd/rbd/foo not present
Before you can add a block device to a node, you must first create an image for it in the Ceph storage cluster.
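The general form of the create command, per standard rbd syntax (the size is given in megabytes, and the pool defaults to rbd when --pool is omitted), is:
rbd create {image-name} --size {megabytes} --pool {pool-name}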
For example, to create a 1 GB image named foo in the default rbd pool, execute the following command:
rbd create foo --size 1024
To create a 1 GB image named haha stored in a pool named swimmingpool, first create the pool and then the image, as shown below.
Create a swimmingpool pool with 10 PGs and 10 PGPs:
root@ceph-client:~# ceph osd pool create swimmingpool 10 10
pool 'swimmingpool' created
"Create a 1 GB image named haha in the swimmingpool pool"
root@ceph-client:~# rbd create haha --size 1024 --pool swimmingpool
"List block devices in a specific pool"
root@ceph-client:~# rbd ls swimmingpool
haha
Querying image information
To query information about a specific image, execute the following command, replacing {image-name} with the name of the image:
rbd --image {image-name} info
root@ceph-client:~# rbd --image foo info
rbd image 'foo':
        size 200 GB in 51200 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.5b607.238e1f29
        format: 1
To query information about an image within a specific pool, execute the following command, replacing {image-name} with the image name and {pool-name} with the pool name:
rbd --image {image-name} -p {pool-name} info
For example:
rbd --image bar -p swimmingpool info
root@ceph-client:~# rbd --image haha -p swimmingpool info
rbd image 'haha':
        size 1024 MB in 256 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.5b67f.74b0dc51
        format: 1
"Resize block device mirror Size"
CEPH's block device image is thin provisioned. They don't actually use any physical storage until you start to save the data. However, they have a maximum capacity-size option set. If you want to increase (or decrease) the maximum size of a ceph mount device image, execute the following command:
Adjust the block to 400G
[Email protected]:~# RBD resize--image foo--size 409600
Resizing image:100% Complete...done.
[Email protected]:~# RBD--image foo-p RBD Info
rbd image 'foo':
        size 400 GB in 102400 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.5b607.238e1f29
        format: 1
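Shrinking works the same way with a smaller --size value. On newer rbd releases a safety flag, --allow-shrink, is required before an image will be shrunk (check the behaviour of your installed version); for example, shrinking foo back to 200 GB would look like:
rbd resize --image foo --size 204800 --allow-shrink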
"Remove a block device image" (prerequisite: unmap the block device first)
To remove a block device image, execute the following command, replacing {image-name} with the name of the image you want to remove:
rbd rm {image-name}
root@ceph-client:~# rbd rm foo
Removing image: 100% complete...done.
To remove a block device image from a specific pool, execute the following command, replacing {image-name} with the image name and {pool-name} with the pool name:
rbd rm {image-name} -p {pool-name}
root@ceph-client:~# rbd rm haha -p swimmingpool
Removing image: 100% complete...done.
===================================================
root@ceph-client:~# rbd ls
root@ceph-client:~# rbd ls swimmingpool
The returned results are all empty
View storage Details
root@ceph-client:~# ceph df detail
GLOBAL:
    SIZE   AVAIL  RAW USED  %RAW USED  OBJECTS
    2940G  2748G  42920M    1.43       46
POOLS:
    NAME          ID  CATEGORY  USED    %USED  MAX AVAIL  OBJECTS  DIRTY  READ   WRITE
    rbd           0   -         8       0      1374G      1        1      533    35899
    .rgw.root     1   -         848     0      1374G      3        3      6      3
    .rgw.control  2   -         0       0      1374G      8        8      0      0
    .rgw          3   -         0       0      1374G      0        0      0      0
    .rgw.gc       4   -         0       0      1374G      32       32     20034  13376
    .users.uid    5   -         0       0      1374G      0        0      0      0
    test          6   -         44892k  0      1374G      1        1      0      11
    swimmingpool  7   -         8       0      1374G      1        1      13     4
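For a shorter summary without the per-pool detail columns, the plain form can be used instead:
ceph df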
"Create snapshot" =================================
Use rbd to create a snapshot with the snap create option, filling in the appropriate pool name and image name in the curly braces.
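The general form, following the documented pool/image@snapshot convention, is:
rbd snap create {pool-name}/{image-name}@{snap-name}
Existing snapshots of an image can then be listed with:
rbd snap ls {pool-name}/{image-name}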
First, create a swimmingpool pool with 10 PGs and 10 PGPs:
root@ceph-client:~# ceph osd pool create swimmingpool 10 10
Create a 1 GB image named my_image in swimmingpool:
root@ceph-client:~# rbd create my_image --size 1024 --pool swimmingpool
Create a snapshot of my_image named first_snap in swimmingpool:
root@ceph-client:~# rbd snap create swimmingpool/my_image@first_snap
Display the snapshot information:
root@ceph-client:~# rbd snap ls swimmingpool/my_image
SNAPID NAME       SIZE
     3 first_snap 1024 MB
root@ceph-client:~# modprobe rbd
root@ceph-client:~# rbd map my_image --pool swimmingpool --name client.admin
/dev/rbd0
"Show mapped block devices"
root@ceph-client:~# rbd showmapped
id pool         image    snap device
0  swimmingpool my_image -    /dev/rbd0
"Create File System"
root@ceph-client:~# mkfs.ext4 /dev/rbd/swimmingpool/my_image
root@ceph-client:~# mount /dev/rbd/swimmingpool/my_image /mnt/ceph-block-device/
root@ceph-client:~# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/rbd0  ext4 976M 1.3M  908M   1% /mnt/ceph-block-device
"Make a change."
root@ceph-client:/mnt/ceph-block-device# dd if=/dev/zero bs=10M count=5 of=test.tar.gz
root@ceph-client:/mnt/ceph-block-device# ll -h
total 51M
-rw-r--r-- 1 root root 50M Nov  4 16:25 test.tar.gz
"Rollback SNAPSHOT" =================================
root@ceph-client:/mnt/ceph-block-device# rbd snap rollback swimmingpool/my_image@first_snap
Rolling back to snapshot: 100% complete...done.
Unmount the mount point and remount:
root@ceph-client:/mnt/ceph-block-device# cd
root@ceph-client:~# umount /dev/rbd/swimmingpool/my_image
root@ceph-client:~# mount /dev/rbd/swimmingpool/my_image /mnt/ceph-block-device/
The image is now back in the first_snap state.
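The generic form, using the same pool/image@snap naming as above, is:
rbd snap rollback {pool-name}/{image-name}@{snap-name}
Rolling back overwrites the current contents of the image with the snapshot data, so the file system should not be in active use while it runs.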
"Delete Snapshot" =======================================
root@ceph-client:/mnt/ceph-block-device# cd
root@ceph-client:~# umount /dev/rbd/swimmingpool/my_image
root@ceph-client:~# rbd snap purge swimmingpool/my_image
Removing all snapshots: 100% complete...done.
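rbd snap purge removes every snapshot of the image at once. To delete a single snapshot instead, rbd provides snap rm with the same naming convention, for example:
rbd snap rm swimmingpool/my_image@first_snap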
View Block devices
[Email protected]:~# rbd-p swimmingpool--image my_image inforbd image ' My_image ': size 1024x768 in + objects or Der (4096 KB objects) block_name_prefix:rb.0.5b6fd.74b0dc51 format:1
This article comes from the "Life is endless, tinkering never stops" blog; please do not reprint it.