Objective
Docker is currently the hottest lightweight container technology, and it has many commendable features, such as Docker image management. However, Docker is still imperfect in many areas, and networking is one of its relatively weak spots. It is therefore worth understanding Docker's networking in depth in order to meet more demanding network requirements.
Docker Network mode selection
Many articles have already introduced Docker's network models, but in practice there are still plenty of pitfalls and points to watch out for.
When Docker is used in a production environment, the choice of network model mainly comes down to the following:
1. The native bridge NAT mode
2. Linux bridge VLAN mode
3. A third-party network solution
Native Bridge NAT Mode
This is Docker's native network mode: the containers on each host live in a private subnet, and external access goes through ports mapped on the host. Likewise, containers on different hosts can only reach each other through these mapped host ports; in other words, a container on one host is not directly aware of containers on other hosts. At first I hesitated over whether this mode could be used in production, since articles at the time claimed the NAT performance penalty was significant, and without the resources for a complete test our original plan avoided this scheme. However, my own recent tests show that NAT performance is acceptable (QPS and latency are quite close to the VLAN mode), so this mode is usable as long as there is a suitable way to connect containers across hosts, for example with Mesos + Marathon + Bamboo + HAProxy.
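As a concrete illustration of NAT mode (the host IP, image name, and ports here are placeholders, not from the original setup): each host publishes a container port onto the host, and containers on other hosts reach the service through the host's address.

```shell
# On host A (say 10.0.0.11): publish container port 80 on host port 8080.
# With the default bridge network, Docker sets up a DNAT rule for this mapping.
docker run -d --name web -p 8080:80 nginx

# From a container on host B, the service is reached via host A's address,
# never via the container's private 172.17.x.x bridge address:
curl http://10.0.0.11:8080/
```

A service-discovery layer such as Bamboo + HAProxy then only needs to know host IPs and mapped ports, which is what makes cross-host access workable despite the NAT.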
Linux Bridge VLAN Mode
This is the network model I decided on for Docker from the outset, for several main reasons:
1. We did not want to use NAT at the beginning
2. The third-party tools were not yet very mature
3. The number of containers would not be too large at first (the VLAN approach is limited by the total number of VLAN IDs, giving only about 4,096 containers); at 10-16 containers per host, that still supports roughly 256 hosts, which is acceptable
4. Each container gets its own IP, and containers can interconnect directly
5. Operations and maintenance stay simple; after all, our operating model is still essentially that of physical machines
6. The host network and the VLAN network can be isolated from each other
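The capacity estimate in point 3 can be checked with a quick calculation (the 4,096-container ceiling and 16 containers per host are the figures assumed in the text):

```shell
# With at most 4096 containers overall and up to 16 containers per host,
# the VLAN scheme supports 4096 / 16 = 256 hosts.
max_containers=4096
containers_per_host=16
max_hosts=$((max_containers / containers_per_host))
echo "$max_hosts"   # 256
```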
Our hosts run the CentOS 7.x series, with the following host network setup: two 1G NICs bonded together, with a virtual interface for the host on one VLAN and the container docker0 bridge on another VLAN.
NIC configuration:
Note that the bridge-utils and NetworkManager packages must be installed.
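On CentOS 7.x, these packages can be installed with yum (standard package names from the base repositories; requires root):

```shell
# bridge-utils provides brctl for inspecting the docker0 bridge;
# NetworkManager is listed above as a required package.
sudo yum install -y bridge-utils NetworkManager
```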
Steps
Configure the two NICs without any network parameters such as IP/gateway, and add MASTER=bond0 and SLAVE=yes.
A complete example:
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=enp2s0f0
UUID=7f6fa8e9-0177-46a8-b8ea-55c2187bea11
DEVICE=enp2s0f0
ONBOOT=yes
MASTER=bond0
SLAVE=yes
Add the bond0 network configuration in /etc/sysconfig/network-scripts/ifcfg-bond0, choosing the bonding mode value to suit your usage, as follows. Because the VLAN will be configured on top of bond0, no IP-related parameters are set here; if you did configure an IP and related parameters, bond0 could be treated as a normal NIC.
DEVICE=bond0
NAME=bond0
TYPE=Bond
BONDING_MASTER=yes
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="miimon=100 mode=0"
You can check the bonding status with cat /proc/net/bonding/bond0.
The VLAN is configured on top of bond0; here the VLAN ID is 136, so the file name is /etc/sysconfig/network-scripts/ifcfg-bond0.136:
VLAN=yes
TYPE=Ethernet
DEVICE=bond0.136
NAME=bond0.136
PHYSDEV=bond0
ONBOOT=yes
BOOTPROTO=static
BRIDGE=docker0
Because the Docker bridge (docker0) will later be attached to this VLAN interface, TYPE must be Ethernet, and the extra line BRIDGE=docker0 is required.
Configure the Docker bridge (/etc/sysconfig/network-scripts/ifcfg-docker0). A different name could be used, but keeping the default docker0 avoids having to add the -b startup parameter to the Docker engine. No IP is configured on docker0, because the host is managed through an IP on another VLAN.
TYPE=Bridge
VLAN=yes
DEVICE=docker0
NAME=docker0
ONBOOT=yes
BOOTPROTO=none
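After writing the three ifcfg files, a rough sketch of applying and verifying the setup (requires root; command availability on CentOS 7 is assumed):

```shell
# Reload the network configuration so bond0, bond0.136, and docker0 come up.
sudo systemctl restart network

# docker0 should now list bond0.136 as an attached interface.
brctl show docker0

# The bonding driver reports the mode and slave state here.
cat /proc/net/bonding/bond0
```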
Configuring Docker
Modify the Docker startup file /usr/lib/systemd/system/docker.service according to the machine's usage type, as follows:
After the line ExecStart=/usr/bin/docker daemon -H fd://, append:
--fixed-cidr=172.20.56.16/28 --default-gateway=172.20.56.1 --registry-mirror=http://registry.xxxx.com:5000 --insecure-registry=docker.xxx.com:5000 --storage-driver=overlay --ip-forward=false --iptables=false --log-driver=journald
(Note: --fixed-cidr= takes this machine's container IP subnet; --default-gateway= takes the gateway of the container IP subnet; --registry-mirror= takes the Docker registry domain of the production environment.)
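For reference, the size of the /28 block handed to this host follows from the prefix length alone (pure arithmetic, no Docker needed):

```shell
# A /28 leaves 32 - 28 = 4 host bits, i.e. 2^4 = 16 addresses per host
# (network and broadcast addresses included), which lines up with the
# 10-16 containers per host assumed earlier.
prefix=28
addresses=$((1 << (32 - prefix)))
echo "$addresses"   # 16
```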
Here are a few pitfalls:
1. For Docker storage, do not use the default driver; use overlay instead, which also works well with CentOS 7.x.
2. Because the machines' initial network configuration was not a bonding setup, the original configuration needs to be cleaned up after the bond is configured:
Modify the /etc/sysctl.conf file and change the value of net.ipv4.ip_forward to 1.
There is no need to configure GATEWAY in the /etc/sysconfig/network file.
Create the file /etc/modules-load.d/bonding.conf with the following content: bonding
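The host-side changes above can be scripted roughly as follows (paths are the standard CentOS 7 locations; run as root):

```shell
# Enable IPv4 forwarding persistently and apply it immediately.
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p

# Load the bonding kernel module at every boot.
echo bonding > /etc/modules-load.d/bonding.conf
```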
3. By default, an installed Docker has no docker user group or docker user, so it must be used as root. If you do not want to use root:
Create a docker user group: sudo groupadd docker
Add the current user to the docker group, for example the apps user currently in use: sudo usermod -aG docker apps
Create a docker user and add it to the docker group: sudo useradd docker -g docker
Change the owner and group of the /var/lib/docker directory and its subdirectories: sudo chown -R docker:docker /var/lib/docker
Log out and log back in to refresh the current user's permissions.
The biggest pitfall was this: we run platform services such as ZooKeeper on some hosts, and containers could not access these services on their own host. This stumped us for a long time; colleagues from the network team checked everything from the switch configuration down and found no problem. Then, under Lao Luo's guidance, we tried the following:
- Start tcpdump on the host, then ping the host from a container running on it. You can see that the host receives the ICMP packets from the container, but the container never gets a response and the ping always times out. Is the host dropping the ICMP packets?
a. Turn on martian-packet logging to see whether the packets are coming in: sudo sysctl net.ipv4.conf.bond0/51.log_martians=1 (bond0/51 is how sysctl refers to the bond0.51 interface). If you can see the packets arriving in the log, it means the kernel considers them to come from an unknown source.
b. From the above we realized that we have two NICs configured on two different network segments; in principle they should be isolated, and traffic for segment 1 should arrive on segment 1's NIC. If it arrives via segment 2's NIC instead, Linux treats it as an illegal packet.
c. The cause, then, is Linux RP (reverse path) filtering. In this scenario, it is enough to turn off RP filtering on the host:
sudo sysctl net.ipv4.conf.bond0/51.rp_filter=0
sudo sysctl net.ipv4.conf.bond0/52.rp_filter=0
sudo sysctl net.ipv4.conf.all.rp_filter=0
sudo sysctl net.ipv4.conf.bond0.rp_filter=0
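To keep these settings across reboots, the same keys can be placed in a sysctl drop-in file (the interface names bond0.51/bond0.52 match the example above; a hypothetical file name is used, and the slash-separated key form lets the interface name keep its literal dot, per sysctl.conf(5)):

```
# /etc/sysctl.d/90-rp-filter.conf
net/ipv4/conf/all/rp_filter = 0
net/ipv4/conf/bond0/rp_filter = 0
net/ipv4/conf/bond0.51/rp_filter = 0
net/ipv4/conf/bond0.52/rp_filter = 0
```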
Third-party Network plugin
At present the better choices are the following, though we are still exploring them:
Calico, http://projectcalico.org/
Contiv, http://docs.contiv.io
Summary
The above is a detailed introduction to configuring Docker's VLAN network mode. I hope this article helps everyone learning or using Docker; if you have any questions, feel free to leave a comment.