"Editor's words" When you successfully run the Docker container on a host, and confidently intend to extend it to more than one host, but found that the previous attempt is equivalent to writing a Hello World entry program, the network settings of multiple hosts to the next threshold. When you try a variety of options may wish to look at this article, may be suddenly enlightened, found that the original is not complex. Well, yes, this paper uses the Openvswitch.
Running Docker is nothing new; there are plenty of introductory tutorials online to help you run a container on a single host. That host can be either a Linux server or a Mac (with the help of a project like Boot2Docker).
Running on multiple hosts is another matter ...
The available options:
- Run Docker separately on each host, exposing ports on the public or internal NICs so that the containers can communicate with each other. This can be cumbersome and can lead to security issues.
- Run a middleware solution like Weave to abstract the network away completely. The project looks promising, but it is still too young to integrate with orchestration tools such as Compose (formerly Fig) or maestro-ng.
- Run an all-in-one multi-host Docker solution like Deis or Flynn. This may not be what you are after.
- Create a shared network bridge spanning the hosts and let the Docker services run their containers on it. It sounds a little complicated, but... in this article we will see that it can be done quite easily!
Overview
Basically, we will perform the following steps:
- Install Docker on each server;
- Install Open vSwitch on each server;
- Add custom network settings (in /etc/network/interfaces on each server) that automatically create the bridges/tunnels between the hosts;
- Tweak each Docker service configuration so that it only hands out a small portion of the docker0 IP range, preventing the IP addresses of new containers from overlapping.
That's it. After restarting the services or rebooting the servers, you will have a fully meshed network with link redundancy; the Docker services can run containers on dedicated (non-overlapping) IP ranges, and the containers can reach each other without exposing all their ports on a public or internal NIC. Great, right?
Technology
Here is a quick list of the technologies we will use:
- Docker: well... this is an article about Docker and networking, so...
- Open vSwitch: a great virtual network switch project with very good scalability; according to this guide, you can run networks of "arbitrary" size on it.
We will assume the servers run Ubuntu Server 14.04.2 LTS x64; for other systems you may need to adapt the configuration given below.
Installation
Docker
Needless to say, follow the guidelines from the official website. Later we will dig into the configuration so that the Docker services running on the different servers can collaborate with each other.
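If you need a refresher, at the time of writing Docker provided an official install script; something like the following should work (check the official docs for the current method):

# Fetch and run the official install script (verify against the current docs)
wget -qO- https://get.docker.com/ | sh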
Open vSwitch
Unfortunately, the Open vSwitch packages in the default repositories are missing (or outdated), so we need to build the .deb files ourselves (once) and distribute them to the different hosts. To keep the production machines clean, find a small spare host to install the development packages on and build the installation packages there.
A detailed build manual is available in the Open vSwitch GitHub repository.
Execute the following commands to build the packages (for newer versions, adjust as required):
# Get the latest archive
wget http://openvswitch.org/releases/openvswitch-2.3.1.tar.gz
tar xzvf openvswitch-2.3.1.tar.gz
cd openvswitch-2.3.1

# Install the build dependencies
sudo apt-get install -y build-essential fakeroot debhelper \
    autoconf automake bzip2 libssl-dev \
    openssl graphviz python-all procps python-qt4 \
    python-zopeinterface python-twisted-conch libtool

# Build (parallel build, skip the checks)
DEB_BUILD_OPTIONS='parallel=8 nocheck' fakeroot debian/rules binary

# The fresh .deb files end up in the parent directory
cd ..
ls -al *deb
Now that you have fresh .deb packages, push them to all your hosts and install them.
# Copy the packages to each host and SSH in
scp -r *deb user@remote_host:~/.
ssh user@remote_host

# Install some dependencies (needed later) and the packages
sudo apt-get install -y bridge-utils
sudo dpkg -i openvswitch-common_2.3.1-1_amd64.deb \
    openvswitch-switch_2.3.1-1_amd64.deb
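A quick sanity check that the switch daemon is up (on a fresh install this should print an essentially empty configuration):

# Show the Open vSwitch configuration (empty for now)
sudo ovs-vsctl show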
Configuration
Network
You can build the mesh network using the various command-line tools provided by Open vSwitch (such as ovs-vsctl), but Ubuntu ships a helper that lets you define the network through the /etc/network/interfaces file.
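For reference, here is a rough sketch of what the equivalent manual setup would look like with ovs-vsctl on host1 (the interfaces file below automates exactly this, persistently):

# Manual equivalent (sketch) - run on host1
sudo ovs-vsctl add-br br0
sudo ovs-vsctl set bridge br0 stp_enable=true
sudo ovs-vsctl add-port br0 gre1 -- set interface gre1 type=gre options:remote_ip=2.2.2.2
sudo ovs-vsctl add-port br0 gre2 -- set interface gre2 type=gre options:remote_ip=3.3.3.3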
Let's assume three hosts: 1.1.1.1, 2.2.2.2 and 3.3.3.3, reachable from one another via these IPs; whether they are public or private addresses does not matter. Host1's /etc/network/interfaces would look roughly as follows.
...
# eth0, eth1 and lo configuration
...

# Auto: bring it up when the host boots
# br0=br0: keeps it out of "ifquery --list" (see the notes below)
auto br0=br0
allow-ovs br0
iface br0 inet manual
    ovs_type OVSBridge
    ovs_ports gre1 gre2
    ovs_extra set bridge ${IFACE} stp_enable=true
    mtu 1462

# No auto: these are extra configurations for OvS
# The GRE tunnel names must match on both ends
allow-br0 gre1
iface gre1 inet manual
    ovs_type OVSPort
    ovs_bridge br0
    ovs_extra set interface ${IFACE} type=gre options:remote_ip=2.2.2.2

allow-br0 gre2
iface gre2 inet manual
    ovs_type OVSPort
    ovs_bridge br0
    ovs_extra set interface ${IFACE} type=gre options:remote_ip=3.3.3.3

# Auto: create it at boot
# Define docker0 here and (when available) attach it to the br0
# bridge created by Open vSwitch
# Each host must use a different IP address (no conflicts!)
auto docker0=docker0
iface docker0 inet static
    address 172.17.42.1
    network 172.17.0.0
    netmask 255.255.0.0
    bridge_ports br0
    mtu 1462
Adapt this configuration on the other hosts: the remote_ip addresses must pair up across hosts.
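For instance, following the article's matching-name rule, host2's GRE stanzas might look like this (a sketch; note the hypothetical third tunnel, gre3, between host2 and host3, which must then also appear in host2's ovs_ports line and in host3's file):

# host2: gre1 pairs with host1's gre1, gre3 pairs with host3's gre3
allow-br0 gre1
iface gre1 inet manual
    ovs_type OVSPort
    ovs_bridge br0
    ovs_extra set interface ${IFACE} type=gre options:remote_ip=1.1.1.1

allow-br0 gre3
iface gre3 inet manual
    ovs_type OVSPort
    ovs_bridge br0
    ovs_extra set interface ${IFACE} type=gre options:remote_ip=3.3.3.3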
A few notes:
- Spanning Tree Protocol: applied as-is, this configuration creates a network loop across the 3 servers, which is not good at all. Adding stp_enable=true on the br0 bridge ensures that some of the GRE tunnels get cut off, and lets the network converge again when one of the hosts goes offline, while preserving the redundancy of the mesh.
- MTU: this is a critical setting! Without it you may get some unexpected "surprises": the network appears to work (ping succeeds, for example), but cannot carry large packets (iperf bandwidth tests, requests with large payloads, or even simple file copies all fail). Note that the GRE tunnel has to encapsulate several protocols:
- Ethernet: 14 bytes; we are talking layer 2 between the bridges;
- IPv4: 20 bytes, for container/host communication;
- GRE: 4 bytes, because, well, it is a GRE tunnel;
- That is, take the physical NIC's MTU and subtract 38 bytes, which gives 1462 for a regular 1500-MTU NIC (see the ping check after these notes).
- Use "=" in Auto definition: This is not required for servers with fixed IP, but some cloud service providers (this is not the case). Digital Ocean (translator: Soft wide again) uses an init service (--allow) that relies on the ifquery--list/etc/init/cloud-init-container.conf Auto. Not adding the "=" number will contain the Openvswitch network card and delay the entire boot process until the Init script fails and times out.
- The docker0 bridge: each server needs its own IP address on it (e.g. 172.17.42.1, 172.17.42.2). Since the docker0 bridges sit on the br0 bridge, they will (and should!) be able to reach each other. Imagine the mess an IP conflict would cause... That is why we define the bridge at boot instead of relying on the Docker service to create it for us.
- GRE tunnels: you could start at gre0 (instead of gre1) and it would work perfectly well. But for some reason, gre0 shows up in ifconfig while the other tunnels do not, probably a side effect of gre0 existing as a virtual NIC. Starting at gre1 makes all the GRE tunnels equally "invisible" to ifconfig (better than seeing only one of them). Don't worry, you can still display the tunnels and bridges with ovs-vsctl.
- More than 3 hosts: you can follow the same logic; in addition:
- add extra tunnels (iface greX stanzas) to reach the new hosts;
- update ovs_ports in the br0 bridge definition to include every GRE tunnel defined in the interfaces file;
- be smart... do not link every server to every other one. STP convergence will take longer, and beyond a couple of extra redundant links you gain nothing useful.
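A quick way to verify the MTU arithmetic (1500 - 14 - 20 - 4 = 1462) is to send non-fragmentable packets of exactly that size; with a 20-byte IP header and an 8-byte ICMP header, the ping payload is 1462 - 28 = 1434 bytes (using this example's addresses):

# Should succeed: 1434 + 28 = 1462 bytes, exactly the tunnel MTU
ping -M do -s 1434 172.17.42.2
# Should fail with "message too long": one byte over the MTU
ping -M do -s 1435 172.17.42.2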
If you reboot the servers now, you will have a redundant mesh network, and you can run the following tests:
- ping 172.17.42.2 (or the other IPs) from host1;
- run iperf between hosts (see the sketch below) and watch which links are used via ifconfig;
- while pinging the third host, stop the "middle" one and watch ping pause for a few seconds while the network converges (via STP).
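For the bandwidth test, assuming iperf is installed on the hosts, something like:

# On host1: start an iperf server
iperf -s
# On host2: measure throughput to host1 across the mesh
iperf -c 172.17.42.1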
Docker
We now have a perfect network, and each Docker service can hook its containers onto the docker0 bridge. But wouldn't it be great to let Docker do that automatically? The answer lies in Docker's ability to hand out only a fixed, smaller pool of IP addresses!
For this example, we assume that:
- each host (1.1.1.1, 2.2.2.2, 3.3.3.3) hooks into the previously created docker0 bridge with its own IP address (172.17.42.1, 172.17.42.2, 172.17.42.3 respectively);
- a /16 IP range is assigned to the docker0 NIC;
- each host gets a small /18 chunk of the docker0 range, stored in its Docker service configuration as fixed-cidr: 172.17.64.0/18, 172.17.128.0/18 and 172.17.192.0/18 respectively.
If you have more than 3 hosts, subdivide each range further, or reconsider the whole network topology to match your organization's needs.
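To make the arithmetic explicit, the /16 splits into four /18 blocks of 16,382 usable addresses each; one block is deliberately left unassigned because it contains the bridge IPs themselves:

# 172.17.0.0/16 - the whole docker0 range
# 172.17.0.0/18   : unassigned (holds the 172.17.42.x bridge IPs)
# 172.17.64.0/18  : host1 containers
# 172.17.128.0/18 : host2 containers
# 172.17.192.0/18 : host3 containers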
The configuration file on host1 (/etc/default/docker) looks like this:
BRIDGE=docker0
CIDR=172.17.64.0/18

wait_ip() {
  address=$(ip add show $BRIDGE | grep 'inet ' | awk '{ print $2 }')
  [ -z "$address" ] && sleep $1 || :
}

wait_ip 5
wait_ip 15

DOCKER_OPTS="
  -H unix:///var/run/docker.sock
  -H tcp://0.0.0.0:2375
  --fixed-cidr=$CIDR
  --bridge $BRIDGE
  --mtu 1462
"
You can adjust DOCKER_OPTS as needed, adding registry mirrors, insecure registries, DNS settings, and so on.
Notes:
- wait_ip: since the docker0 bridge is created last, getting its IP address may take some time. The wait_ip function lets us safely wait a few seconds before the Docker init script carries on. This configuration file is sourced by the actual init script (/etc/init/docker.conf).
- MTU: same reason as before; just a precaution to make sure every NIC gets created with the right MTU.
- -H tcp://...: if you do not want to "expose" the daemon on 0.0.0.0 (or bind it to one of the server's "real" NICs), you can also safely bind it to... the host's docker0 IP address (e.g. 172.17.42.2)! That way, the Docker service of any host can be reached from any other host over the private mesh network (see the example below).
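For example, assuming host2's docker0 address from above, any host in the mesh could then drive host2's daemon:

# Talk to the Docker daemon on host2 over the mesh
docker -H tcp://172.17.42.2:2375 info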
Conclusion
Reboot (or at the very least make sure everything comes up automatically at boot).
You can run the following commands to check that everything works:
# Get onto host1
ssh user@host1
# Run a new container
docker run -ti ubuntu bash
# Check its IP (run inside the container)
ip add | grep eth0

# In another window
# Get onto another host (host2 or 3)
ssh user@host2
# Run a new container
docker run -ti ubuntu bash
# Ping the other container!
ping $IP
This is not meant to be the authoritative guide to setting up Docker across multiple hosts, and criticism is welcome. Many ideas came up during the overall installation, and this article tries to explain, in as much detail as possible, why each option was chosen.
Things get more complicated once you add tiered bridges, VLANs and so on, but that is beyond the scope of this article. ;)
Clearly, there is demand for a more complete native networking solution, and it looks like one is already under development.