Docker Network Modes
When using docker run to create a Docker container, you can use the --net option to specify the container's network mode. Docker supports the following four network modes:
· Host mode, specified with --net=host.
· Container mode, specified with --net=container:NAME_or_ID.
· None mode, specified with --net=none.
· Bridge mode, specified with --net=bridge; this is the default.
The following sections describe each of Docker's network modes in turn.
1 host Mode
As is well known, Docker uses Linux namespaces for resource isolation: a PID namespace isolates processes, a mount namespace isolates the file system, and a network namespace isolates the network. A network namespace provides an independent network environment, including network interfaces, routing tables, and iptables rules, isolated from other network namespaces. A Docker container is normally allocated its own network namespace. However, if host mode is used when the container is started, the container does not get an independent network namespace but instead shares one with the host. The container does not virtualize its own NIC or configure its own IP address; it uses the host's IP address and ports directly.
For example, suppose we start a Docker container running a web application in host mode on a host with address 10.10.101.105/24, listening on TCP port 80. Running ifconfig (or a similar command) inside the container shows the host's network information. External clients can reach the application directly at 10.10.101.105:80, without any NAT, just as if it were running on the host itself. Other aspects of the container, such as its file system and process list, remain isolated from the host.
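As a sketch of the example above (the nginx image is illustrative; any image serving on port 80 would behave the same way):

```shell
# Start a web container in host mode: it binds directly to the
# host's port 80, with no NAT and no port mapping.
docker run -d --name web-host --net=host nginx

# The container sees the host's network stack, not its own:
docker exec web-host ip addr    # shows the host's interfaces and IPs

# From any machine that can reach the host:
curl http://10.10.101.105:80/
```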
2 container Mode
Once host mode is understood, this mode is easy to grasp. In container mode, a newly created container shares a network namespace with an existing container rather than with the host. The new container does not create its own NIC or configure its own IP address; instead it shares the IP address and port range of the specified container. Apart from the network, the two containers remain isolated from each other in other respects, such as file systems and process lists. Processes in the two containers can communicate with each other through the lo loopback device.
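A minimal sketch of container mode (image and container names are illustrative):

```shell
# Start a first container normally (bridge mode by default).
docker run -d --name app nginx

# Start a second container that joins app's network namespace.
docker run -it --name sidecar --net=container:app busybox sh

# Inside "sidecar", eth0 and the IP address are identical to app's,
# and app's port 80 is reachable over the shared loopback device:
#   wget -qO- http://127.0.0.1:80/
```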
3 none Mode
This mode differs from the previous two. In none mode, the Docker container has its own network namespace, but Docker performs no network configuration for it. That is, the container has no NIC (other than loopback), no IP address, no routes, and so on. We must add NICs and configure IP addresses for the container ourselves.
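A quick sketch showing what none mode looks like from inside the container (the busybox image is illustrative):

```shell
# Start a container with no network configuration at all.
docker run -it --name isolated --net=none busybox sh

# Inside the container, only the loopback device exists:
#   ip addr    # shows lo only; no eth0, no IP address, no routes
```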
4 bridge Mode
Bridge mode is Docker's default network setting. In this mode, each container is assigned its own network namespace and IP address, and every Docker container on the host is attached to a virtual bridge. The mode is described in detail below.
4.1 bridge Mode Topology
When the Docker daemon starts, it creates a virtual bridge named docker0 on the host, and Docker containers started on that host are attached to this bridge. The bridge works much like a physical switch, so all containers on the host are connected to a layer-2 network through it. Next, each container must be assigned an IP address. Docker selects an unused subnet from the private address ranges defined in RFC 1918, assigns an address in that subnet to docker0, and each container attached to docker0 then picks an unused address from the same subnet. For example, Docker might use the CIDR block 172.17.0.0/16 and assign 172.17.42.1/16 to the docker0 bridge. Running ifconfig on the host shows docker0, which can be regarded as the bridge's management interface and acts as a virtual NIC on the host. The network topology in a standalone environment is as follows (the host's address is 10.10.101.105/24).
Docker's network configuration process is roughly as follows:
1. Create a veth pair (a pair of virtual network interfaces) on the host. veth devices always come in pairs and form a data channel: data that enters one end comes out the other. They are therefore often used to connect two network devices.
2. Docker places one end of the veth pair in the newly created container and names it eth0. The other end stays on the host with a name like veth65f9 and is added to the docker0 bridge; this can be seen with the brctl show command.
3. Assign the container an IP address from the docker0 subnet, and set docker0's IP address as the container's default gateway.
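The steps above can be sketched by hand with the ip and brctl tools (a rough, root-only approximation of what Docker does; the names veth-host, veth-cont, and the netns name demo are illustrative, and the addresses match the earlier example):

```shell
# 1. Create a veth pair on the host.
ip link add veth-host type veth peer name veth-cont

# 2. Move one end into the container's network namespace (a hand-made
#    namespace stands in for the container here), rename it eth0,
#    and attach the other end to the docker0 bridge.
ip netns add demo
ip link set veth-cont netns demo
ip netns exec demo ip link set veth-cont name eth0
brctl addif docker0 veth-host
ip link set veth-host up

# 3. Give the container an address from the docker0 subnet and use
#    docker0 (172.17.42.1) as its default gateway.
ip netns exec demo ip addr add 172.17.0.5/16 dev eth0
ip netns exec demo ip link set eth0 up
ip netns exec demo ip route add default via 172.17.42.1
```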
Having introduced the network topology, we now describe how containers communicate in bridge mode.
4.2 Container Communication in bridge Mode
In bridge mode, containers attached to the same bridge can communicate with one another. (For security, inter-container communication can be disabled by setting --icc=false in the DOCKER_OPTS variable; in that case, only containers connected explicitly with --link can communicate.)
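As a sketch, assuming the daemon runs with --icc=false (the image and container names are illustrative), two containers can still be connected explicitly:

```shell
# With inter-container communication disabled globally, link the
# "web" container to the "db" container by name:
docker run -d --name db redis
docker run -d --name web --link db:db nginx

# Inside "web", the alias "db" now resolves to the db container's IP.
```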
Containers can also communicate with the outside world. Looking at the iptables rules on the host, we find a rule like this:
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
This rule performs source NAT on packets whose source address is in 172.17.0.0/16 (that is, packets generated by Docker containers) and which do not leave via the docker0 interface, rewriting the source address to the host NIC's address. This may not be easy to follow, so here is an example. Suppose the host has a NIC eth0 with IP address 10.10.101.105/24 and gateway 10.10.101.254, and a container with IP address 172.17.0.1/16 on this host pings Baidu (180.76.3.151). The IP packet is first sent from the container to its default gateway docker0; once it reaches docker0, it is on the host. The host's routing table is then consulted, which says the packet should leave via eth0 toward the host's gateway 10.10.101.254. The packet is forwarded to eth0 and sent out (the host's ip_forward setting must already be enabled). At this point the iptables rule above takes effect: it applies SNAT, changing the source address to eth0's address. To the outside, the packet appears to come from 10.10.101.105, and the Docker container itself is invisible.
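The two host-side prerequisites mentioned above can be checked directly (output shown as it typically appears on a host running Docker):

```shell
# Confirm IP forwarding is enabled on the host.
sysctl net.ipv4.ip_forward
#   net.ipv4.ip_forward = 1

# Inspect the NAT table for the MASQUERADE (SNAT) rule.
iptables -t nat -S POSTROUTING
#   -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
```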
How, then, can external machines access a service in a Docker container? First, use the following command to create a container running a web application, mapping the container's port 80 to port 80 of the host:
docker run -d --name web -p 80:80 fmzhen/simpleweb
Looking at the iptables rules again, we find a new rule:
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.5:80
This rule performs destination NAT on TCP traffic arriving on the host (for example on eth0) with destination port 80, redirecting it to 172.17.0.5:80, that is, to the Docker container we just created. So accessing 10.10.101.105:80 reaches the service inside the container.
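The mapping can be verified end to end (the addresses are those of the running example above):

```shell
# On the host, inspect the DNAT rule Docker added for the mapping.
iptables -t nat -S DOCKER
#   -A DOCKER ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.5:80

# From an external machine, the container's service is reachable at
# the host's address and mapped port:
curl http://10.10.101.105:80/
```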
In addition, the IP addresses, DNS settings, and other parameters used by Docker can be customized, and even a custom bridge can be substituted for docker0, but the mode works in the same way.