Docker 1.10 RC New Network: Overlay Network


Overlay Network


An overlay network encapsulates Layer 2 frames inside IP packets, so that hosts can communicate through a tunneling protocol without any change to the existing network infrastructure.

This not only takes full advantage of mature IP routing protocols to distribute the data; the extended isolation identifier bits used in overlay technology also break the roughly 4000-ID limit of VLANs, supporting up to 16M isolated segments, and where necessary broadcast traffic can be converted to multicast traffic to avoid broadcast flooding.

The overlay network is therefore currently the most mainstream scheme for cross-node container data transmission and routing.
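
For scale, the identifier arithmetic behind those limits works out as follows (these are the standard VLAN and VXLAN header field sizes, not anything Docker-specific):

VLAN ID: 12 bits -> 2^12 = 4,096 segments
VXLAN VNI: 24 bits -> 2^24 = 16,777,216 (about 16M) segments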


Overlay networks can be implemented in many ways; among them, the IETF (Internet Engineering Task Force) has developed three overlay implementation standards:


1. Virtual Extensible LAN (VXLAN)

2. Network Virtualization using Generic Routing Encapsulation (NVGRE)

3. Stateless Transport Tunneling (STT)


Docker's built-in overlay network uses the IETF-standard VXLAN approach and, among VXLAN deployments, adopts the SDN-controller pattern generally considered the best fit for large-scale cloud virtualization environments.



Docker's overlay network functionality is tightly integrated with its Swarm clustering, so the simplest way to use Docker's built-in cross-node communication capability is to adopt Swarm as the cluster solution.


In Docker 1.9, the following conditions must be met to use the Swarm + overlay network architecture:


1. The Linux kernel version on every Swarm node must be at least 3.16 (the Docker 1.10 RC already supports kernel 3.10, since upgrading the kernel is a hassle)

2. An additional configuration store service is required, such as Consul, etcd, or ZooKeeper

3. Every node must be able to reach the IP and port of the configuration store service

4. The Docker daemon on every node must be started with the --cluster-store parameter pointing at the configuration store address, along with --cluster-advertise (see the sketch after this list)
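
As a rough sketch of what condition 4 means in practice, the daemon invocation ends up looking like this (assuming a ZooKeeper store at 10.6.17.12:2181 and an advertise address of 10.6.17.12:2376; the sed edits later in this walkthrough produce an equivalent line):

/usr/bin/docker daemon -H tcp://10.6.17.12:2375 --cluster-store=zk://10.6.17.12:2181/store --cluster-advertise=10.6.17.12:2376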



-------------------------------------------------------------------------------------------

The 3 servers are as follows:


10.6.17.12


10.6.17.13


10.6.17.14



------------------------------------------------------------------------------------------

docker version

Client:
 Version:      1.10.0-rc1
 API version:  1.22
 Go version:   go1.5.3
 Git commit:   677c593
 Built:        Fri Jan 15 20:50:15 2016
 OS/Arch:      linux/amd64

------------------------------------------------------------------------------------------

The first thing to do is to modify the hostnames.

10.6.17.12 is the management node; its hostname is left unchanged.

10.6.17.13: hostnamectl --static set-hostname swarm-node-1

10.6.17.14: hostnamectl --static set-hostname swarm-node-2



------------------------------------------------------------------------------------------




Of the 4 conditions above, the first is already satisfied by default in the Docker 1.10 RC.


Next we set up the configuration store service from the second condition; choose whichever store suits your own usage habits.

Since our Java projects already use ZooKeeper, we choose ZooKeeper as the store service. To keep the test simple, only a single-machine ZooKeeper service is configured here.



-------------------------------------------------------------------------------------------


Pull down a CentOS image:

[10.6.17.12]# docker pull centos



The following is the Dockerfile for ZooKeeper:


-------------------------------------------------------------------------------------------

FROM centos

MAINTAINER [email protected]

USER root

# Add the elrepo source
RUN rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org && rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm

RUN yum -y install --enablerepo=base wget java tar.x86_64 && mkdir -p /opt/local && wget -q -O - http://apache.fayea.com/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz | tar -xzf - -C /opt/local/ && mv /opt/local/zookeeper-3.4.6 /opt/local/zookeeper && cp /opt/local/zookeeper/conf/zoo_sample.cfg /opt/local/zookeeper/conf/zoo.cfg && mkdir -p /opt/local/zookeeper/data && mkdir -p /opt/local/zookeeper/log

ENV JAVA_HOME /usr/

ADD start.sh /start.sh

WORKDIR /opt/local/zookeeper

# Modify the configuration file
RUN sed -i 's/dataDir=\/tmp\/zookeeper/dataDir=\/opt\/local\/zookeeper\/data/g' /opt/local/zookeeper/conf/zoo.cfg

ENTRYPOINT ["/start.sh"]

# Make sure ZooKeeper runs in the foreground
CMD ["start-foreground"]

-------------------------------------------------------------------------------------------
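
The Dockerfile above ADDs a start.sh that the original post does not show. A minimal hypothetical sketch that would satisfy the ENTRYPOINT/CMD pair (the CMD argument "start-foreground" is passed through to ZooKeeper's bundled control script, keeping the process in the foreground so the container stays alive):

#!/bin/bash
# start.sh (hypothetical sketch): forward whatever argument Docker passes
# (e.g. "start-foreground") to ZooKeeper's own control script
exec /opt/local/zookeeper/bin/zkServer.sh "$@"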



[10.6.17.12]# docker build -t zookeeper .

[10.6.17.12]# docker run --restart=always -d -v /opt/data/zookeeper/data:/opt/local/zookeeper/data -v /opt/data/zookeeper/log:/opt/local/zookeeper/log -p 2181:2181 zookeeper
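
Once the container is up, a quick sanity check (assuming nc is available on the host): "ruok" is a standard ZooKeeper four-letter command, and a healthy server answers "imok".

[10.6.17.12]# echo ruok | nc 10.6.17.12 2181
imok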



After ZooKeeper is up, we need to modify the variables in each host's Docker daemon boot script and configure the Swarm nodes.


[10.6.17.12]# sed -i 's/-H fd:\/\//-H tcp:\/\/10.6.17.12:2375 --cluster-store=zk:\/\/10.6.17.12:2181\/store --cluster-advertise=10.6.17.12:2376/g' /lib/systemd/system/docker.service

[10.6.17.13]# sed -i 's/-H fd:\/\//-H tcp:\/\/10.6.17.13:2375 --cluster-store=zk:\/\/10.6.17.12:2181\/store --cluster-advertise=10.6.17.13:2376/g' /lib/systemd/system/docker.service

[10.6.17.14]# sed -i 's/-H fd:\/\//-H tcp:\/\/10.6.17.14:2375 --cluster-store=zk:\/\/10.6.17.12:2181\/store --cluster-advertise=10.6.17.14:2376/g' /lib/systemd/system/docker.service
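
After the edit, the ExecStart line in docker.service should read roughly as follows (shown for 10.6.17.12; the other nodes differ only in their own IP):

ExecStart=/usr/bin/docker daemon -H tcp://10.6.17.12:2375 --cluster-store=zk://10.6.17.12:2181/store --cluster-advertise=10.6.17.12:2376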



After modifying the unit file, reload systemd:

systemctl daemon-reload

Then restart Docker:

systemctl restart docker.service
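
To confirm the daemon picked up the new flags, docker info should now report the cluster settings, along these lines:

[10.6.17.12]# docker -H tcp://10.6.17.12:2375 info | grep -i cluster
Cluster store: zk://10.6.17.12:2181/store
Cluster advertise: 10.6.17.12:2376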



Since modifying and restarting the Docker daemon is cumbersome, if your workloads may need cross-node network communication, it is recommended to prepare the configuration store service before building the Docker cluster; the corresponding parameters can then be added to Docker's boot configuration directly as each host node is added.



Next we create the overlay network. The network we want to build spans all nodes; that is, every node should have a network with exactly the same name, ID, and properties, and these networks should recognize each other as copies of one network on different nodes. How do we achieve this? The docker network command alone cannot do it yet, so we need Swarm. Let's create the Swarm cluster first.



First we choose the 10.6.17.12 machine as the master node and create the Swarm manager:


[10.6.17.12]# docker -H tcp://10.6.17.12:2375 run --name master --restart=always -d -p 8888:2375 swarm manage zk://10.6.17.12:2181/swarm



Run the Swarm agent service on the other two nodes, which will run the business containers:


[10.6.17.13]# docker -H tcp://10.6.17.13:2375 run --name node_1 --restart=always -d swarm join --addr=10.6.17.13:2375 zk://10.6.17.12:2181/swarm

[10.6.17.14]# docker -H tcp://10.6.17.14:2375 run --name node_2 --restart=always -d swarm join --addr=10.6.17.14:2375 zk://10.6.17.12:2181/swarm
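
Before going further, the agents' registration in ZooKeeper can be checked with the swarm image's list subcommand (run from any node; both agent addresses should appear):

[10.6.17.12]# docker run --rm swarm list zk://10.6.17.12:2181/swarm
10.6.17.13:2375
10.6.17.14:2375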



To view information on all nodes:


[10.6.17.12]# docker -H tcp://10.6.17.12:8888 ps -a


CONTAINER ID   IMAGE   COMMAND                  CREATED                  STATUS                  PORTS      NAMES
5fc7753caa2c   swarm   "/swarm join --addr=1"   Less than a second ago   Up Less than a second   2375/tcp   swarm-node-1/node_1
330b964ba732   swarm   "/swarm join --addr=1"   Less than a second ago   Up Less than a second   2375/tcp   swarm-node-2/node_2



At this point the Swarm cluster is built.

Swarm provides an API that is fully compatible with the Docker API, so you can work with it directly using ordinary docker commands.

Note the external port 8888 specified when the master service was created in the command above; that is the address used to connect to the Swarm service.
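
Pointing docker info at that port gives the cluster-wide view, listing both agents (output abridged; exact fields vary by Swarm version):

[10.6.17.12]# docker -H tcp://10.6.17.12:8888 info
Nodes: 2
 swarm-node-1: 10.6.17.13:2375
 swarm-node-2: 10.6.17.14:2375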



Now we can create an overlay-type network:


[10.6.17.12]# docker -H tcp://10.6.17.12:8888 network create --driver=overlay ovr0


This command is sent to the Swarm service, and Swarm adds an overlay network with identical properties on every agent node.


Run docker network ls on each node and you will see that an ovr0 overlay network now exists.



In the Swarm view, each network name is prefixed with its node name, for example:

swarm-node-1/node_1
swarm-node-2/node_2

The overlay network, however, carries no such prefix, which also shows that this kind of network is shared by all nodes.
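
Listed through the Swarm endpoint, that looks roughly like this (the IDs here are illustrative; note the node-prefixed local networks versus the unprefixed ovr0):

[10.6.17.12]# docker -H tcp://10.6.17.12:8888 network ls
NETWORK ID          NAME                   DRIVER
f1e0deb2bd63        ovr0                   overlay
5ab9f5eb1a3d        swarm-node-1/bridge    bridge
a4a3e4aa697d        swarm-node-2/bridge    bridge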



Below we create two containers in Swarm connected to the overlay network, and use a Swarm constraint filter to force the two containers onto separate nodes.


-------------------------------------------------------------------------------------------


FROM centos

MAINTAINER [email protected]

RUN yum -y update; yum clean all
RUN yum -y install epel-release; yum clean all
RUN yum -y install wget; yum clean all

ADD ./nginx.sh /root/
RUN /bin/bash /root/nginx.sh
RUN rm -rf /root/nginx.sh
RUN rm -rf /opt/local/nginx/conf/nginx.conf
ADD ./nginx.conf /opt/local/nginx/conf/
RUN mkdir -p /opt/local/nginx/conf/vhost
ADD ./docker.conf /opt/local/nginx/conf/vhost/
RUN chown -R upload:upload /opt/htdocs/web

EXPOSE 80 443

CMD ["/opt/local/nginx/sbin/nginx", "-g", "daemon off;"]


-------------------------------------------------------------------------------------------



[10.6.17.12]# docker -H tcp://10.6.17.12:8888 run --name nginx_web_1 --net ovr0 --env="constraint:node==swarm-node-1" -d -v /opt/data/nginx/logs:/opt/local/nginx/logs nginx

[10.6.17.12]# docker -H tcp://10.6.17.12:8888 run --name nginx_web_2 --net ovr0 --env="constraint:node==swarm-node-2" -d -v /opt/data/nginx/logs:/opt/local/nginx/logs nginx
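
To confirm that Swarm honored the constraints, list the containers through the Swarm endpoint; the node-prefixed NAMES column shows where each container landed (output abridged to that column):

[10.6.17.12]# docker -H tcp://10.6.17.12:8888 ps
...   swarm-node-1/nginx_web_1
...   swarm-node-2/nginx_web_2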




Once the two containers are created, let's test the connectivity of the ovr0 network.


[10.6.17.12]# docker -H tcp://10.6.17.12:8888 exec -it nginx_web_1 ping nginx_web_2


PING nginx_web_2 (10.0.0.3) 56(84) bytes of data.
64 bytes from nginx_web_2.ovr0 (10.0.0.3): icmp_seq=1 ttl=64 time=0.360 ms
64 bytes from nginx_web_2.ovr0 (10.0.0.3): icmp_seq=2 ttl=64 time=0.247 ms
64 bytes from nginx_web_2.ovr0 (10.0.0.3): icmp_seq=3 ttl=64 time=0.234 ms
64 bytes from nginx_web_2.ovr0 (10.0.0.3): icmp_seq=4 ttl=64 time=0.241 ms
64 bytes from nginx_web_2.ovr0 (10.0.0.3): icmp_seq=5 ttl=64 time=0.212 ms



As shown above, we have successfully achieved cross-node communication over Docker's overlay network.




This article is from the "Learning Path" blog; please keep the source: http://jicki.blog.51cto.com/1323993/1738371
