Docker Advanced (1): Volume (Data Volume)

Introduction

A Docker image is built from multiple read-only filesystem layers stacked on top of each other. When we start a container, Docker loads the read-only layers and adds a read-write layer above them (at the top of the stack). If you modify an existing file in a running container, the file is copied from the read-only layer into the read-write layer; the read-only version still exists, but it is hidden by the copy in the read-write layer above it. When the container is deleted or recreated, those changes disappear. In Docker, this combination of read-only layers plus a top read-write layer is called a union file system (UnionFS).
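
The copy-on-write behaviour is easy to observe from the command line. A minimal sketch, assuming the standard ubuntu image (the file and container name here are only illustrative):

# Change a file inside a container; the change lives only in its read-write layer
docker run --name cow-demo ubuntu bash -c 'echo changed > /etc/os-release'
# Remove the container; the read-write layer and the change are discarded
docker rm cow-demo
# A fresh container from the same image sees the original, unchanged file
docker run --rm ubuntu cat /etc/os-release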

To persist and share data properly, Docker introduced the concept of a volume: a normal file or directory on the host that simply bypasses the default union file system. It is also known as a data volume.

The role of volumes
    • Data volumes can be shared and reused between containers
    • Changes to a data volume take effect immediately (ideal for a development environment)
    • Updates to a data volume do not affect the image
    • A volume persists until no container uses it

Initialize Volume

When using docker run, we can create a data volume with the -v flag and mount it into the container; -v can be repeated to mount multiple volumes in a single run.

If you initialize via a Dockerfile, you can use the VOLUME instruction to add one or more new volumes to any container created from the image.
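
A minimal Dockerfile sketch (the base image and paths are illustrative, not from the original):

FROM ubuntu
# Declare two anonymous volumes; every container created from this image gets both
VOLUME ["/data/db", "/data/logs"]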

Create a data volume
docker run -d -p 8080:80 --name shanlei-nginx -v /usr/share/nginx/html nginx

The above command creates a container called shanlei-nginx, maps port 8080 of the host to port 80 (the default web port of the nginx server in the container), creates a data volume, and mounts it at the container's /usr/share/nginx/html directory.

At this point we can bypass the union file system and manipulate the directory directly on the host; any files under /usr/share/nginx/html in the image are copied into the volume.

We can find where the volume is stored on the host with the docker inspect command.

docker inspect shanlei-nginx

docker inspect takes the container name as its argument and returns all of the container's information. The part we care about is the Mounts section:

......"Mounts": [            {                "Type":"Volume",                "Name":"057f911105d4c77d2cfe16ee6acb7f5a43f2643d571708da40f5db55e27b1155",                "Source":"/var/lib/docker/volumes/057f911105d4c77d2cfe16ee6acb7f5a43f2643d571708da40f5db55e27b1155/_data",                "Destination":"/usr/share/nginx/html",                "Driver":"Local",                "Mode":"",                "RW":true,                "Propagation":""            }],......

This means that "Source" is the host directory Docker has mapped, namely

/var/lib/docker/volumes/057f911105d4c77d2cfe16ee6acb7f5a43f2643d571708da40f5db55e27b1155/_data

and "Destination" is the path inside the container where that directory is mounted, /usr/share/nginx/html.

On Linux we can go directly to the path that "Source" points to (note that there may be permission issues when accessing the directory).
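
As a convenience, the source path can also be extracted directly with a Go template; this sketch assumes the volume is the container's first (and only) mount:

docker inspect -f '{{ (index .Mounts 0).Source }}' shanlei-nginx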

As long as a host directory is attached to a container directory, changes take effect immediately. In a Dockerfile, we can achieve the same goal with the VOLUME instruction:

VOLUME /usr/share/nginx/html

Directory as a data volume

With the -v flag, you can also mount a host directory into a container.

sudo docker run -d -p 8080:80 -v $PWD/html:/usr/share/nginx/html nginx

The above command mounts the host's $PWD/html directory onto the container's /usr/share/nginx/html directory. $PWD is a shell environment variable holding the current working directory. This feature is handy for testing, for example placing a program in a local directory to check whether the container works properly. The local path must be an absolute path; if the directory does not exist, Docker creates it automatically.

Note: this host-path syntax is not supported in a Dockerfile.

By default Docker mounts data volumes read-write, but we can append :ro to the mount to make it read-only.

sudo docker run -d -p 8080:80 -v $PWD/html:/usr/share/nginx/html:ro nginx

In $PWD/html on my host there is an index.html file with the following contents:

This was old
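
If you are following along, one quick way to create that file on the host (paths are illustrative) is:

mkdir -p $PWD/html
echo "This was old" > $PWD/html/index.html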

We run the docker run command above, mapping the host's port 8080 to port 80, the default web port of the nginx server in the container. The page is now reachable at localhost:8080.

Now we modify $PWD/html/index.html on the host:

This is new

Visiting localhost:8080 again now shows the updated content.
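
One way to check this from the command line, assuming curl is installed on the host (this check is not part of the original walkthrough):

# Should now print the updated index.html ("This is new")
curl http://localhost:8080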

File as a data volume

We can also use the -v flag to mount a single file from the host into the container.

sudo docker run -d -p 8080:80 -v $PWD/html/index.html:/usr/share/nginx/html/index.html nginx

When you modify $PWD/html/index.html on the host, /usr/share/nginx/html/index.html in the container changes as well. The effect is the same as above, so it is not repeated here.

Note: if you mount a single file directly, many file-editing tools, including vi and sed --in-place, replace the file and therefore change its inode; starting with Docker 1.1.0 this can cause an error to be reported. The simplest workaround is to mount the file's parent directory instead.

Data sharing

To share data between containers, you need to give one container access to another container's volume. This is done with the --volumes-from parameter of docker run.

docker run -it -h newcontainer --volumes-from shanlei-nginx ubuntu /bin/bash

Note: this works whether or not shanlei-nginx is running. As long as some container still references the volume, the volume will not be deleted.
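
Because --volumes-from mounts the volume at the same path it had in the source container, a quick illustrative check from inside the new container is:

# Should list the files from shanlei-nginx's volume, e.g. index.html
ls /usr/share/nginx/html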

Data Volume container

If we have some continuously updated data that needs to be shared between containers, it's a good idea to create a data volume container. A common usage scenario is to use a pure data container to persist a database, configuration file, or data file.

A data volume container, in fact, is a normal container designed to provide data volumes for other containers to mount.

First we need to create a data volume container:

docker run -d -v /dbdata --name dbdata training/postgres echo "Data-only container for postgres"

The data volume in the dbdata container can then be mounted in other containers with the --volumes-from parameter.

docker run -d --volumes-from dbdata --name db1 training/postgres

Of course, we can also pass --volumes-from multiple times to mount data volumes from several containers. We can even mount the data volume from a container that itself mounts it from another container.

docker run -d --name db3 --volumes-from db1 training/postgres

Now we enter the container to check whether the data volume was mounted successfully:

docker exec -it db1 /bin/bash
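
Inside db1, the volume should appear at its original mount point; an illustrative check (not shown in the original) is:

ls /dbdata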

If the /dbdata directory is present, the data volume container has been mounted successfully.

Note: if you delete a container that mounts the volume (dbdata, db1, db3, and so on), the data volume is not deleted automatically. If you want to delete a data volume, you must use the docker rm -v command when deleting the last container that still references it.
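
For example, assuming db3 is the last container still referencing the volume, the container and its anonymous volume could be removed together like this (a sketch, not from the original):

docker rm -v db3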

Backup and Recovery

We can also back up, restore, and migrate the data held in data volumes.

First, use the --volumes-from flag to create a container that mounts the dbdata container's volume, and mount the current host directory into the container as /backup. The command is as follows:

docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata

After the container starts, it runs tar to back up the dbdata volume into /backup/backup.tar, which is backup.tar in the current directory on the host.
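
To confirm the archive was written, you can list its contents on the host (this verification step is not in the original):

tar tvf backup.tar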

Recovery

If you want to restore the data into a new container, first create a container dbdata2 with a data volume:

docker run -v /dbdata --name dbdata2 ubuntu /bin/bash

Then create another container that mounts dbdata2's volume, and use tar to extract the backup file into the mounted volume:

docker run --volumes-from dbdata2 -v $(pwd):/backup ubuntu tar xvf /backup/backup.tar
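
A hypothetical way to verify the restore is to list the restored files from another throwaway container:

docker run --rm --volumes-from dbdata2 ubuntu ls /dbdata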

That's all for this article.
