Docker: How to Configure Docker with Devicemapper


Background:

Device Mapper is a kernel-based framework that underpins many advanced volume management technologies on Linux. Docker's devicemapper storage driver leverages the thin provisioning and snapshotting capabilities of this framework for image and container management. This article refers to the Device Mapper storage driver as devicemapper, and to the kernel framework as Device Mapper.


Docker originally ran on Ubuntu and Debian Linux and used AUFS for its storage backend. As Docker became popular, many of the companies that wanted to use it were using Red Hat Enterprise Linux (RHEL). Unfortunately, because the upstream mainline Linux kernel did not include AUFS, RHEL did not use AUFS either.

To correct this, Red Hat developers investigated getting AUFS into the mainline kernel. Ultimately, though, they decided that a better idea was to develop a new storage backend, and to base it on the existing Device Mapper technology.

Red Hat collaborated with Docker Inc. to contribute this new driver. As a result of this collaboration, Docker's Engine was re-engineered to make the storage backend pluggable, and devicemapper became the second storage driver Docker supported.

Device Mapper has long been included in the mainline Linux kernel, and it is a core part of the RHEL family of Linux distributions. This means that the devicemapper storage driver is based on stable code with many real-world production deployments and strong community support.


The devicemapper driver stores every image and container on its own virtual device. These devices are thin-provisioned copy-on-write snapshot devices. Device Mapper technology works at the block level rather than the file level. This means that the devicemapper storage driver's thin provisioning and copy-on-write operations work with blocks rather than entire files.


How to configure it

devicemapper is the default Docker storage driver on some Linux distributions, including RHEL and most of its forks. Currently, the following distributions support the driver:

    • RHEL/CentOS/Fedora
    • Ubuntu 12.04
    • Ubuntu 14.04
    • Debian
    • Arch Linux

Docker hosts running the devicemapper storage driver default to a configuration mode known as loop-lvm. This mode uses sparse files to build the thin pool used by image and container snapshots. The mode is designed to work out-of-the-box with no additional configuration. However, production deployments should not run under loop-lvm mode.

You can detect the mode by viewing the output of the docker info command:

$ sudo docker info
Containers: 0
Images: 0
Storage Driver: devicemapper
 Pool Name: docker-202:2-25220302-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: xfs
 [...]
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.93-RHEL7 (2015-01-28)
 [...]

The output above shows a Docker host running the devicemapper storage driver in loop-lvm mode. This is indicated by the Data loop file and Metadata loop file entries, which point to files under /var/lib/docker/devicemapper/devicemapper. These are loopback-mounted sparse files.
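
If you want a quick scripted check, you can grep the same output for the loopback files (a minimal sketch; it assumes the English field names shown above):

$ sudo docker info 2>/dev/null | grep -i 'loop file'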

Configure direct-lvm mode for production

The preferred configuration for production deployments is direct-lvm. This mode uses block devices to create the thin pool. The following procedure shows you how to configure a Docker host to use the devicemapper storage driver in a direct-lvm configuration.

Caution: If you have already run the Docker daemon on your Docker host and have images you want to keep, push them to Docker Hub or your private Docker Trusted Registry before attempting this procedure.
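
For example, preserving an image might look like the following sketch (the image and repository names are illustrative):

$ docker tag myimage myuser/myimage
$ docker push myuser/myimage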

The procedure below creates a logical volume configured as a thin pool to use as backing for the storage pool. It assumes that there is a spare block device at /dev/xvdf with enough free space to complete the task. The device identifier and volume sizes may be different in your environment, and you should substitute your own values throughout the procedure. The procedure also assumes that the Docker daemon is in the stopped state.

  1. Log in to the Docker host you want to configure and stop the Docker daemon.
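
    On a systemd-based host, stopping the daemon typically looks like the following sketch (assuming the service is named docker):

    $ sudo systemctl stop docker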

  2. Install the LVM2 and thin-provisioning-tools packages.

    The LVM2 package includes the userspace toolset that provides logical volume management facilities on Linux.

    The thin-provisioning-tools package lets you activate and manage your thin pool.

    $ sudo yum install -y lvm2
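
    On Debian or Ubuntu hosts, the equivalent would likely be the following (lvm2 and thin-provisioning-tools are the package names on those distributions):

    $ sudo apt-get install -y lvm2 thin-provisioning-tools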
  3. Create a physical volume, replacing /dev/xvdf with your block device.

    $ pvcreate /dev/xvdf
  4. Create a volume group named docker.

    $ vgcreate docker /dev/xvdf
  5. Create two logical volumes named thinpool and thinpoolmeta.

    In this example, the data logical volume is 95% of the docker volume group's size. Leaving this free space allows for auto-expansion of either the data or metadata volumes if space runs low, as a temporary stopgap.

    $ sudo lvcreate --wipesignatures y -n thinpool docker -l 95%VG
    $ sudo lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
  6. Convert the pool to a thin pool.

    $ sudo lvconvert -y --zero n -c 512K --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta
  7. Configure autoextension of thin pools via an lvm profile.

    $ vi /etc/lvm/profile/docker-thinpool.profile
  8. Specify the thin_pool_autoextend_threshold value.

    The value should be the percentage of space used before lvm attempts to autoextend the available space (100 = disabled).

    thin_pool_autoextend_threshold = 80
  9. Modify the thin_pool_autoextend_percent for when thin pool autoextension occurs.

    The value's setting is the percentage of space by which to increase the thin pool when autoextension occurs.

    thin_pool_autoextend_percent = 20
  10. Check your work. Your docker-thinpool.profile file should appear similar to the following:

    An example /etc/lvm/profile/docker-thinpool.profile file:

    activation {
      thin_pool_autoextend_threshold=80
      thin_pool_autoextend_percent=20
    }
  11. Apply your new LVM profile.

    $ sudo lvchange --metadataprofile docker-thinpool docker/thinpool
  12. Verify the lv is monitored (the Monitor column in the output should report the thin pool as monitored).

    $ lvs -o+seg_monitor
  13. If the Docker daemon was previously started, move your existing graph driver directory out of the way.

    Moving the graph driver removes any images, containers, and volumes from your Docker installation. These commands move the contents of the /var/lib/docker directory to a new directory named /var/lib/docker.bk. If any of the following steps fail and you need to restore, you can remove /var/lib/docker and replace it with /var/lib/docker.bk.

    $ mkdir /var/lib/docker.bk
    $ mv /var/lib/docker/* /var/lib/docker.bk
  14. Configure the Docker daemon with specific devicemapper options.

    Now that your storage is configured, configure the Docker daemon to use it. There are two ways to do this. You can set options on the command line if you start the daemon there:

    Note: The deferred deletion option, dm.use_deferred_deletion=true, is not yet supported on CentOS, RHEL, or Ubuntu 14.04 when using the default kernel. Support was added in upstream kernel version 3.18.

    --storage-driver=devicemapper \
    --storage-opt=dm.thinpooldev=/dev/mapper/docker-thinpool \
    --storage-opt=dm.use_deferred_removal=true \
    --storage-opt=dm.use_deferred_deletion=true

    You can also set them for startup in the daemon configuration file, which defaults to /etc/docker/daemon.json, for example:

    {  "storage-driver": "devicemapper",   "storage-opts": [     "dm.thinpooldev=/dev/mapper/docker-thinpool",     "dm.use_deferred_removal=true",     "dm.use_deferred_deletion=true"   ]}
  15. If using systemd and modifying the daemon configuration via a unit or drop-in file, reload systemd to scan for changes.
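
    A drop-in file might look like the following sketch. The path, unit name, and daemon binary (dockerd here) depend on your Docker version and packaging; the storage options are the same ones shown above:

    # /etc/systemd/system/docker.service.d/devicemapper.conf
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd --storage-driver=devicemapper \
        --storage-opt=dm.thinpooldev=/dev/mapper/docker-thinpool \
        --storage-opt=dm.use_deferred_removal=true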

    $ systemctl daemon-reload
  16. Start the Docker daemon.

    $ systemctl start docker
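
    Once the daemon is running, you can confirm it picked up the thin pool by checking docker info again; the Pool Name field should now reference your thinpool device instead of loopback files:

    $ sudo docker info | grep -i 'pool name'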

After you start the Docker daemon, ensure you monitor your thin pool and volume group free space. While the volume group auto-extends, it can still fill up. To monitor logical volumes, use lvs without options, or use lvs -a to see the data and metadata sizes. To monitor volume group free space, use the vgs command.
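
In practice, the monitoring loop is just the three commands named above, for example:

$ sudo lvs        # summary of logical volumes
$ sudo lvs -a     # also shows the hidden data and metadata volumes and their sizes
$ sudo vgs        # volume group free space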

Logs show the auto-extension of the thin pool when it hits the threshold. To view the logs, use:

$ journalctl -fu dm-event.service


Examine devicemapper structures on the host

You can use the lsblk command to see the device files created above and the pool that the devicemapper storage driver creates on top of them.

$ sudo lsblk
NAME                       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda                       202:0    0    8G  0 disk
└─xvda1                    202:1    0    8G  0 part /
xvdf                       202:80   0   10G  0 disk
├─vg--docker-data          253:0    0   90G  0 lvm
│ └─docker-202:1-1032-pool 253:2    0   10G  0 dm
└─vg--docker-metadata      253:1    0    4G  0 lvm
  └─docker-202:1-1032-pool 253:2    0   10G  0 dm
[Diagram: the devicemapper image from the prior examples, updated with the detail from the lsblk command above.]


Device Mapper and Docker performance

It's important to understand the impact that allocate-on-demand and copy-on-write operations can have on overall container performance.

Allocate-on-demand Performance Impact

The devicemapper storage driver allocates new blocks to a container via an allocate-on-demand operation. This means that each time the app writes to somewhere new inside a container, one or more empty blocks have to be located from the pool and mapped into the container.

All blocks are 64KB. A write that uses less than 64KB still results in a single 64KB block being allocated, and writing more than 64KB of data uses multiple 64KB blocks; a 200KB write, for example, allocates four blocks (256KB). This can impact container performance, especially in containers that perform lots of small writes. However, once a block is allocated to a container, subsequent reads and writes can operate directly on that block.

Copy-on-write Performance Impact

Each time a container updates existing data for the first time, the devicemapper storage driver has to perform a copy-on-write operation. This copies the data from the image snapshot to the container's snapshot. This process can have a noticeable impact on container performance.

All copy-on-write operations have a 64KB granularity. As a result, updating 32KB of a 1GB file causes the driver to copy a single 64KB block into the container's snapshot. This has obvious performance advantages over file-level copy-on-write operations, which would require copying the entire 1GB file into the container layer.

In practice, however, containers that perform lots of small block writes (<64KB) can perform worse with devicemapper than with AUFS.

Other Device Mapper Performance considerations

There are several other things that impact the performance of the devicemapper storage driver.

  • The mode. The default mode for Docker running the devicemapper storage driver is loop-lvm. This mode uses sparse files and suffers from poor performance, so it is not recommended for production. The recommended mode for production environments is direct-lvm, where the storage driver writes directly to raw block devices.

  • High speed storage. For best performance, you should place the Data file and Metadata file on high speed storage such as SSD. This can be direct-attached storage or storage from a SAN or NAS array.

  • Memory usage. devicemapper isn't the most memory-efficient Docker storage driver. Launching n copies of the same container loads n copies of its files into memory. This can have a memory impact on your Docker host. As a result, the devicemapper storage driver may not be the best choice for PaaS and other high-density use cases.

One final point: data volumes provide the best and most predictable performance. This is because they bypass the storage driver and do not incur any of the potential overheads introduced by thin provisioning and copy-on-write. For this reason, you should place heavy write workloads on data volumes.
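
For example, a write-heavy service could keep its hot directory on a volume rather than in the container's writable layer (a sketch; the image name and path are illustrative):

$ docker run -d -v /var/lib/myapp --name myapp myorg/myapp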


Related information
    • Understand images, containers, and storage drivers
    • Select a storage driver
    • AUFS Storage Driver in practice
    • Btrfs Storage Driver in practice
    • Daemon Reference

Reference:

https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/#other-device-mapper-performance-considerations


