I. Devicemapper Introduction
Device Mapper is a kernel-based framework for advanced volume management on Linux. Docker's devicemapper storage driver builds on the framework's thin provisioning and snapshot capabilities to manage images and containers. Note: "device mapper" refers to the Linux kernel framework, while "devicemapper" is the storage driver the Docker engine built on top of it. Early Docker ran on Ubuntu and Debian and used AUFS as its backing storage. As Docker became popular, more and more companies wanted to run it on enterprise-class operating systems such as Red Hat Enterprise Linux, but unfortunately the RHEL kernel does not support AUFS. Red Hat therefore worked with Docker to develop a backend based on device mapper technology, which became today's devicemapper driver. The devicemapper driver stores each Docker image and container in its own virtual device, with thin provisioning, copy-on-write, and snapshotting. Because device mapper operates at the block level rather than the file level, the devicemapper storage driver stores data on block devices rather than in a file system.
II. Devicemapper Modes
Devicemapper is the default storage driver for the Docker engine on RHEL, and it has two configuration modes: loop-lvm and direct-lvm. loop-lvm is the default mode; it builds its thin pool on top of loopback-mounted sparse files in the OS file system. The mode is designed so that Docker works "out of the box" without any extra configuration, but the official documentation clearly recommends against it for production deployments. The docker info command shows the following warning:

WARNING: Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.

direct-lvm is the mode Docker recommends for production environments; it builds the thin pool on block devices to hold image and container data. Some time ago there was a very good article, one veteran's painful history of digging out of devicemapper pitfalls; on careful reading, they had been using loop-lvm mode, which was probably the cause, and they finally resolved the problems by switching to the overlayfs storage driver. Note: overlayfs requires Linux kernel 3.18 or above, while RHEL 7.2 ships kernel 3.10, so it is not supported natively. Some people have nonetheless run the overlayfs driver successfully on RHEL 7.2; my personal guess is that they manually loaded the overlay module into the kernel.
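Before reconfiguring anything, it helps to confirm which mode a host is in. A minimal sketch (the function name is illustrative, not from the article): "Data loop file" / "Metadata loop file" entries appear in docker info output only under loop-lvm.

```shell
# Hedged sketch: classify the devicemapper mode from `docker info` output
# read on stdin. Only loop-lvm mode reports "loop file" entries.
classify_dm_mode() {
    if grep -qi 'loop file'; then
        echo "loop-lvm"
    else
        echo "direct-lvm (or another driver)"
    fi
}

# Typical use on a Docker host:
# docker info 2>/dev/null | classify_dm_mode
```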
III. Configuring DIRECT-LVM Mode
1. Stop Docker and back up
If the Docker service is already running and has images and containers that need to be retained, back up the data before stopping the service. If you use Docker in a production environment, it is also strongly recommended to configure direct-lvm mode the first time you provision the host. (Of course, you can choose another storage driver instead.)
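The backup step can be sketched as a small helper that tars up the Docker data directory before the driver is reconfigured (the function name and paths are illustrative, not from the article):

```shell
# Hedged sketch: archive a directory (e.g. /var/lib/docker) to a tarball
# before switching storage drivers. Run while the Docker service is stopped
# so the data on disk is consistent.
backup_dir() {
    src="$1"; dest="$2"
    tar czf "$dest" -C "$(dirname "$src")" "$(basename "$src")"
}

# Typical use:
# systemctl stop docker
# backup_dir /var/lib/docker /root/docker-backup.tar.gz
```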
2. View the current devicemapper mode
[[email protected] ~]# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 17.12.0-ce
Storage Driver: devicemapper
 Pool Name: docker-8:3-1073035-pool
 Pool Blocksize: 65.54kB
 Base Device Size: 10.74GB
 Backing Filesystem: xfs
 Udev Sync Supported: true
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Data Space Used: 11.8MB
 Data Space Total: 107.4GB
 Data Space Available: 51.36GB
 Metadata Space Used: 581.6kB
 Metadata Space Total: 2.147GB
 Metadata Space Available: 2.147GB
 Thin Pool Minimum Free Space: 10.74GB
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Library Version: 1.02.140-RHEL7 (2017-05-03)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 89623f28b87a6004d4b785663257362d1658a729
runc version: b2567b37d7b75eb4cf325b77297b140ea686ce8f
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-327.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.954GiB
Name: docker
ID: immy:ylyx:lf5e:gzid:accp:4v43:2ipt:mcsd:dinh:mkfj:dsdv:twf4
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
         Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
[[email protected] ~]#
From the docker info output (note the Data loop file and Metadata loop file entries), you can see that the current mode is loop-lvm.
3. Stop the Docker service
[[email protected] ~]# systemctl stop docker
IV. Allocating a Raw Device
In this example a new hard drive is added to the Docker host. Using an externally shared storage device is generally recommended over this approach, but it depends on your environment.
Add a 200 GB hard drive
Create a volume group
Attach the volume group to the Docker host
Create VG
1. View the device
[[email protected] ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[[email protected] ~]#
2. Create PV
[[email protected] ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.
[[email protected] ~]#
3. Create VG
[[email protected] ~]# vgcreate docker /dev/sdb
  Volume group "docker" successfully created
[[email protected] ~]#
4. View VG Information
[[email protected] ~]# vgs
  VG     #PV #LV #SN Attr   VSize    VFree
  docker   1   0   0 wz--n- <200.00g <200.00g
[[email protected] ~]# vgdisplay docker
  --- Volume group ---
  VG Name               docker
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <200.00 GiB
  PE Size               4.00 MiB
  Total PE              51199
  Alloc PE / Size       0 / 0
  Free  PE / Size       51199 / <200.00 GiB
  VG UUID               m6or9g-k3ff-s5ex-w9yz-mjyn-oaa5-l2vwsw
[[email protected] ~]#
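The vgdisplay numbers can be sanity-checked with a little arithmetic (an illustrative sketch using this example's 200 GiB disk and the default 4 MiB physical extent size):

```shell
# Sanity check of the extent count reported by vgdisplay above.
DISK_GIB=200   # size of /dev/sdb in GiB
PE_MIB=4       # default LVM physical extent size in MiB
# 200 GiB / 4 MiB = 51200 raw extents; vgdisplay reports 51199 and a VG size
# of just under 200 GiB because LVM metadata consumes part of the device.
echo $(( DISK_GIB * 1024 / PE_MIB ))
```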
Create Thinpool
1. Create Pool
[[email protected] ~]# lvcreate --wipesignatures y -n thinpool docker -l 95%VG
  Logical volume "thinpool" created.
[[email protected] ~]# lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
  Logical volume "thinpoolmeta" created.
[[email protected] ~]#
The data LV is sized at 95% of the VG and the metadata LV at 1%; the remaining space is reserved for automatic expansion.
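For this example's 200 GiB VG, the sizing works out as follows (illustrative arithmetic, values taken from the steps above):

```shell
# Illustrative arithmetic for the 95% / 1% split on a 200 GiB VG.
VG_GIB=200
DATA_GIB=$(( VG_GIB * 95 / 100 ))              # thinpool data LV
META_GIB=$(( VG_GIB * 1 / 100 ))               # thinpoolmeta LV
FREE_GIB=$(( VG_GIB - DATA_GIB - META_GIB ))   # headroom kept for autoextend
echo "data=${DATA_GIB}GiB meta=${META_GIB}GiB free=${FREE_GIB}GiB"
```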
2. Convert Pool to Thinpool
[[email protected] ~]# lvconvert -y --zero n -c 512K --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta
  Thin pool volume with chunk size 512.00 KiB can address at most 126.50 TiB of data.
  WARNING: Converting logical volume docker/thinpool and docker/thinpoolmeta to thin pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted docker/thinpool_tdata to thin pool.
[[email protected] ~]#
Configure Thinpool
1. Configure automatic extension of the pool
[[email protected] ~]# cat /etc/lvm/profile/docker-thinpool.profile
activation {
    thin_pool_autoextend_threshold=80
    thin_pool_autoextend_percent=20
}
[[email protected] ~]#
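To make the two settings concrete, here is what they mean for this example's 190 GiB data LV (illustrative arithmetic; the exact values depend on your LV size): when usage crosses the threshold, LVM grows the LV by the given percentage of its current size, as long as free space remains in the VG.

```shell
# Illustrative arithmetic for thin_pool_autoextend_threshold=80 and
# thin_pool_autoextend_percent=20, applied to a 190 GiB thinpool data LV.
LV_GIB=190
THRESHOLD_PCT=80
EXTEND_PCT=20
TRIGGER_GIB=$(( LV_GIB * THRESHOLD_PCT / 100 ))  # usage that triggers autoextend
GROW_GIB=$(( LV_GIB * EXTEND_PCT / 100 ))        # amount added per extension
echo "trigger=${TRIGGER_GIB}GiB grow=${GROW_GIB}GiB"
```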
2. Apply Configuration changes
[[email protected] ~]# lvchange --metadataprofile docker-thinpool docker/thinpool
  Logical volume docker/thinpool changed.
[[email protected] ~]#
3. Check that monitoring is enabled
[[email protected] ~]# lvs -o+seg_monitor
  LV       VG     Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Monitor
  thinpool docker twi-a-t--- <190.00g             0.00   0.01                             monitored
[[email protected] ~]#
Configure Docker
1. Modify the service configuration file
[[email protected] ~]# vim /usr/lib/systemd/system/docker.service
--storage-driver=devicemapper --storage-opt dm.thinpooldev=/dev/mapper/docker-thinpool --storage-opt dm.use_deferred_removal=true
Add the storage-related parameters after ExecStart; if the unit references a variable such as $OPTIONS, they can instead be added in the corresponding EnvironmentFile.
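As an alternative to editing the unit file, the same options can be placed in /etc/docker/daemon.json (a hedged sketch; this config file is supported by the Docker version used here, and it survives package upgrades that might overwrite the unit file):

```shell
# Hedged sketch: equivalent devicemapper configuration via the daemon
# config file instead of the systemd unit. Back up any existing
# /etc/docker/daemon.json before overwriting it.
cat <<'EOF' > /etc/docker/daemon.json
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.thinpooldev=/dev/mapper/docker-thinpool",
    "dm.use_deferred_removal=true"
  ]
}
EOF
```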
2. Clear Graphdriver
[[email protected] ~]# rm -rf /var/lib/docker/*
This is why you were warned to back up your data earlier: clearing the graphdriver here removes all image, container, and volume data. If you skip this deletion, you will hit the following error and the Docker service will fail to start.
Error starting daemon: error initializing graphdriver: devmapper: Base Device UUID and Filesystem verification failed: devicemapper: Error running deviceCreate (ActivateDevice) dm_task_run failed
Start the Docker service
[[email protected] ~]# systemctl daemon-reload
[[email protected] ~]# systemctl start docker
Check the Devicemapper configuration
[[email protected] ~]# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 17.12.0-ce
Storage Driver: devicemapper
 Pool Name: docker-thinpool
 Pool Blocksize: 524.3kB
 Base Device Size: 10.74GB
 Backing Filesystem: xfs
 Udev Sync Supported: true
 Data Space Used: 20.45MB
 Data Space Total: 204GB
 Data Space Available: 204GB
 Metadata Space Used: 266.2kB
 Metadata Space Total: 2.143GB
 Metadata Space Available: 2.143GB
 Thin Pool Minimum Free Space: 20.4GB
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Library Version: 1.02.140-RHEL7 (2017-05-03)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 89623f28b87a6004d4b785663257362d1658a729
runc version: b2567b37d7b75eb4cf325b77297b140ea686ce8f
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-327.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.954GiB
Name: docker
ID: immy:ylyx:lf5e:gzid:accp:4v43:2ipt:mcsd:dinh:mkfj:dsdv:twf4
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[[email protected] ~]#
From the docker info output (the Pool Name is docker-thinpool and the loop file entries are gone), you can see that the current mode is direct-lvm.
Test
Pull an image and check whether its data is written to the thin pool:
[[email protected] ~]# lvs
  LV       VG     Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  thinpool docker twi-a-t--- <190.00g             0.01   0.01
[[email protected] ~]# docker pull centos
Using default tag: latest
latest: Pulling from library/centos
af4b0a2388c6: Pull complete
Digest: sha256:2671f7a3eea36ce43609e9fe7435ade83094291055f1c96d9d1d1d7c0b986a5d
Status: Downloaded newer image for centos:latest
[[email protected] ~]# lvs
  LV       VG     Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  thinpool docker twi-a-t--- <190.00g             0.13   0.01
[[email protected] ~]#
After pulling the centos image, Data% rose from 0.01 to 0.13, which shows that the direct-lvm configuration succeeded and is working correctly.
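Once direct-lvm is in use, the pool's Data% is worth watching over time, because a full thin pool leaves containers unusable. A minimal monitoring sketch (the function name and 80% default threshold are illustrative, not from the article):

```shell
# Hedged sketch: warn when thin-pool data usage crosses a threshold.
# In practice the percentage would come from:
#   lvs --noheadings -o data_percent docker/thinpool
check_pool_usage() {
    pct="$1"; limit="${2:-80}"
    whole="${pct%%.*}"   # integer part of e.g. "0.13"
    if [ "${whole:-0}" -ge "$limit" ]; then
        echo "WARNING: thin pool ${pct}% full"
    else
        echo "OK: thin pool ${pct}% full"
    fi
}
```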
Docker Storage Driver Devicemapper Introduction and configuration