Linux Learning Summary (70) docker-2

Source: Internet
Author: User
Tags: git clone, docker ps, docker run

I. Docker Data Management

1 Mounting a local directory into the container
docker run -itd -v /data/:/data centos bash
The -v option specifies the mount: the /data/ before the colon is the host's local directory, and the /data after it is the directory inside the container, which is created automatically.
2 Mounting a data volume
In fact, when we mount a directory we can also give the container a name (with --name); if we do not, a random name is generated. For example, since we did not specify one here, the name relaxed_franklin was generated; container names appear in the rightmost column of docker ps output.
docker run -itd --volumes-from relaxed_franklin lv bash
In this way, we create a new container from the lv image that uses the data volumes of the relaxed_franklin container.
3 Defining a data volume container
Sometimes we need to share data among multiple containers, similar to NFS in Linux. We can build a dedicated data volume container and then have the other containers mount its data volume directly.
1) First, set up a data volume container
docker run -itd -v /data/ --name testvol centos bash
Note that /data/ here is the container's /data directory, not the local /data/ directory.
2) Then let the other containers mount the data volume
docker run -itd --volumes-from testvol lv bash
Note:
The essence of the above operations: when starting a container, the -v option followed by a single directory marks that directory as shared, so the container can serve as a data volume container (the shared directory is similar to an NFS share). --volumes-from names the mount source, i.e. the data volume container. The -v option can be followed either by just a directory inside the container, or by a host directory and a container directory separated by a colon, with the host directory in front, meaning the host directory is mounted into the container.
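The two colon-separated forms of the -v option can be illustrated with plain shell string splitting. This is a toy sketch of how the host and container paths separate at the colon, not Docker's actual parser:

```shell
#!/bin/sh
# Split a "-v" bind-mount spec into its two halves (illustrative only).
# Everything before the first colon is the host path; the rest is the
# path inside the container.
spec="/data/:/data"
host_dir="${spec%%:*}"       # host's local directory
container_dir="${spec#*:}"   # directory created inside the container
echo "host=$host_dir container=$container_dir"
```

When the spec contains no colon (e.g. plain /data/), Docker instead treats the path as a data volume inside the container, as in the testvol example below.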

II. Backup and Recovery of Docker Data Volumes

1 Backup
mkdir /data/backup
docker run --volumes-from testvol -v /data/backup/:/backup centos tar cvf /backup/data.tar /data/
Note: First we start a new container that uses the testvol data volumes, and we also bind the local /data/backup/ directory to the container's /backup/, so that any new file created under /backup/ inside the container appears directly under /data/backup/ on the host. The container then packs the files under /data/ into data.tar and places it under /backup/.
2 Recovery
Idea: create a new data volume container, then start another container that mounts that data volume container, and unpack the tar archive in it.
Create the new data volume container:
docker run -itd -v /data/ --name testvol2 centos bash
Mount the data volume, create the new container, and unpack the archive:
docker run --volumes-from testvol2 -v /data/backup/:/backup centos tar xf /backup/data.tar

Note:
The above is somewhat roundabout. The object we are backing up is the data volume container, just as we would back up an NFS server. The reason we introduce a new container to do the backup is that we did not mount a host directory when the data volume container was created, and there is no way to add a mount to an existing container, hence the detour. The new container's /data/ directory mounts the data volume container's /data/ directory, and its /backup/ mounts the host's /data/backup/ directory; so as long as we back up /data/ into /backup/ inside the new container, the data volume container's data ends up on the host disk. Recovery is simply the reverse process; and if the original data volume container still exists, there is no need to create a new one.
The new container uses both the --volumes-from and -v options: it acts as an ordinary container (relative to the data volume container) whose /data/ mounts the data container's shared /data/ directory, and at the same time it mounts the host directory /data/backup/ onto its own shared /backup/. When backing up we specified the target path /backup/data.tar; when extracting we did not specify a target because the container's bash starts in the root directory, so extracting into the current directory lands the data exactly in /data/.
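The tar round trip at the heart of this backup can be rehearsed outside Docker. The sketch below (using throwaway directories in a mktemp sandbox; all names are illustrative) packs a data directory into a .tar and unpacks it elsewhere, which is exactly what the backup and recovery containers do with /data/ and /backup/:

```shell
#!/bin/sh
# Rehearse the backup/restore tar round trip locally (no Docker needed).
work=$(mktemp -d)                      # throwaway sandbox
mkdir -p "$work/data" "$work/backup" "$work/restore"
echo "hello" > "$work/data/file.txt"   # pretend this is the volume's data

# Backup: pack data/ into backup/data.tar (mirrors the "tar cvf" step)
tar cf "$work/backup/data.tar" -C "$work" data

# Recovery: unpack the archive into a fresh location (mirrors "tar xf")
tar xf "$work/backup/data.tar" -C "$work/restore"

cat "$work/restore/data/file.txt"
```

In the real commands the two directories happen to live in different containers; the mechanics are the same.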

III. Docker Network Modes

Host mode, specified with --net=host in docker run
The container uses the same network as the host; the NIC and IP seen inside the container are the host's.
Container mode, specified with --net=container:container_id (or container_name)
Multiple containers share one network and see the same IP.
None mode, specified with --net=none
In this mode, no network is configured.
Bridge mode, specified with --net=bridge. This is the default: if no mode is given, this one is used. It assigns a separate network namespace to each container, similar to VMware's NAT network mode. All containers on the same host sit in the same network segment and can communicate with each other.

IV. Docker Network Management: External Access to Containers

First create a new container from the centos image, then install the httpd service in that container and start it.
Then commit the container as a new image (centos-httpd), create a container from the new image, and specify the port mapping:
docker run -itd -p 5123:80 centos-httpd bash
-p specifies the port mapping; here the container's port 80 is mapped to the local port 5123.
docker exec -it container_id bash
Start httpd: httpd -k start
Edit 1.html: vi /var/www/html/1.html and write something in it.
Exit the container: exit
Test: curl 127.0.0.1:5123/1.html
-p also supports the ip:host_port:container_port format, for example:
-p 127.0.0.1:8080:80
You can also omit the host port and write only the IP, in which case a host port is assigned at random:
-p 127.0.0.1::80  // note the two colons here
In a new container, starting the nginx or httpd service may fail with the error
Failed to get D-Bus connection: Operation not permitted
This is because dbus-daemon has not started. To resolve it, start the container with --privileged -e "container=docker" and change the final command to /usr/sbin/init:
docker run -itd -p 5123:80 --privileged -e "container=docker" imagename /usr/sbin/init

V. Docker Network Management: Configuring a Bridged Network

To make it easy for machines on the local network and Docker containers to communicate, we often need to put Docker containers on the same network segment as the host. This is easy to achieve: bridge the Docker container to the host's NIC, then assign the container an IP.

cd /etc/sysconfig/network-scripts/
cp ifcfg-eth0 ifcfg-br0
vi ifcfg-eth0  # add BRIDGE=br0; delete IPADDR, NETMASK, GATEWAY, DNS1
vi ifcfg-br0   # change DEVICE to br0 and TYPE to Bridge; move eth0's network settings here
systemctl restart network

At this point the br0 NIC is created, and the original eth0 NIC no longer has an IP assigned, which is the normal state. Then ping an external host; once that works, the network is up.
If there is a problem, edit ifcfg-eth0 again and check for UUID or MAC address conflicts.
The ifcfg-br0 configuration is as follows:

DEVICE=br0
HWADDR=00:0c:29:9b:78:e8
TYPE=Bridge
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.56.132
NETMASK=255.255.255.0
GATEWAY=192.168.56.2
DNS1=119.29.29.29

The ifcfg-eth0 configuration is as follows:

DEVICE=eth0
BRIDGE=br0
#HWADDR=00:0c:29:9b:78:e8
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
#IPADDR=192.168.56.132
#NETMASK=255.255.255.0
#GATEWAY=192.168.56.2
#DNS1=119.29.29.29
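A quick sanity check of the edited files can be scripted. This sketch writes a sample ifcfg-eth0 (values taken from the listing above) into a temporary directory and asserts the two properties that matter: eth0 points at br0, and eth0 no longer configures an IP of its own:

```shell
#!/bin/sh
# Sanity-check the bridged configuration: eth0 must reference br0
# and must not set an active IPADDR (br0 owns the address now).
dir=$(mktemp -d)
cat > "$dir/ifcfg-eth0" <<'EOF'
DEVICE=eth0
BRIDGE=br0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
#IPADDR=192.168.56.132
EOF

grep -q '^BRIDGE=br0' "$dir/ifcfg-eth0" && echo "eth0 bridged to br0"
grep -q '^IPADDR='    "$dir/ifcfg-eth0" || echo "eth0 has no active IPADDR"
```

Run against the real /etc/sysconfig/network-scripts/ifcfg-eth0 on the host, the same two greps catch the most common mistake: commenting out the wrong lines.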

Installing pipework

git clone https://github.com/jpetazzo/pipework
cp pipework/pipework /usr/local/bin/

Open a container

docker run -itd --net=none --name lv centos bash
pipework br0 lv 192.168.56.200/[email protected]  # 200 is the container's IP; the IP after @ is the gateway IP
docker exec -it lv bash  # run ifconfig inside to see the newly added IP

Note: the IP before the @ is an address in the subnet of the bridged NIC; it must not be the same as br0's own IP. The part after the @ is the gateway; fill in br0's gateway.
