Docker Swarm Learning Course


Swarm Introduction

Swarm is a relatively simple set of tools, released by the Docker company in early December 2014, for managing a Docker cluster: it turns a group of Docker hosts into a single, virtual host. Swarm uses the standard Docker API as its front-end access point, which means that every form of Docker client (the Docker client in Go, docker-py, the Docker CLI, etc.) can communicate with Swarm directly. Swarm is developed almost entirely in Go. On Friday, April 17, Swarm 0.2 was released; compared with version 0.1, version 0.2 adds a new strategy for scheduling containers in the cluster so that they can be spread across the available nodes, and supports more Docker commands and cluster drivers.

The Swarm daemon is only a scheduler and router; Swarm itself does not run containers. It simply accepts requests sent by Docker clients and dispatches them to appropriate nodes, which run the containers. This means that even if Swarm goes down for some reason, the nodes in the cluster keep running as usual, and when Swarm is restored it collects the information needed to rebuild the cluster. (Figure: Swarm architecture diagram)

How to use Swarm

There are three machines: sclu083 with IP address 10.13.181.83, sclu084 with IP address 10.13.181.84, and atsg124 with IP address 10.32.105.124. These three machines are used to create a Docker cluster, in which sclu083 serves as the Swarm manager that manages the cluster.

Swarm installation

The simplest way to install Swarm is to use the swarm image provided officially by Docker:

sudo docker pull swarm
Docker cluster management requires a service discovery backend. Swarm supports the following built-in discovery backends: Docker Hub's hosted discovery service, a local static file describing the cluster, etcd (incidentally, etcd looks very popular and promising; worth studying when there is time), Consul, ZooKeeper, and a static list of IPs. This article describes the first two methods in detail.
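For reference, the discovery backend is selected by the URL scheme passed to swarm manage. The following is a sketch of the common forms; the addresses and paths are placeholders, not values from this article:

swarm manage token://<cluster_id>            # Docker Hub hosted discovery
swarm manage file:///tmp/cluster             # static file describing the cluster
swarm manage etcd://<etcd_ip>/<path>         # etcd
swarm manage consul://<consul_ip>/<path>     # Consul
swarm manage zk://<zookeeper_ip>/<path>      # ZooKeeper
swarm manage nodes://<ip1>:2375,<ip2>:2375   # static list of IPs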

Before using Swarm for cluster management, you need to change the listening address of the Docker daemon on every node that will join the cluster to 0.0.0.0:2375. You can either start the daemon directly with the command sudo docker -H tcp://0.0.0.0:2375 & or modify it in the configuration file:

sudo vim /etc/default/docker

Add the following line at the end of the file:

DOCKER_OPTS="-H 0.0.0.0:2375 -H unix:///var/run/docker.sock"

Note: Be sure to make this change on all nodes, and then restart the Docker daemon:

sudo service docker restart
The first approach: using the service discovery feature built into Docker Hub

Step one: Execute the swarm create command on any node to create a cluster token. When this command completes, Swarm goes to the discovery service built into Docker Hub and obtains a globally unique token that uniquely identifies the Docker cluster managed by Swarm.

sudo docker run --rm swarm create

We execute the above command on the sclu084 machine. The returned token is d947b55aa8fb9198b5d13ad81f61ac4d; this token must be remembered, because the following operations all use it.
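For reference, the exchange looks roughly like this (a sketch of the output; the token value is the one returned in this article):

$ sudo docker run --rm swarm create
d947b55aa8fb9198b5d13ad81f61ac4d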

Step two: Execute the swarm join command on each machine to add it to the cluster.

In this test, the command is executed on all three machines:

sudo docker run --rm swarm join --addr=ip_address:2375 token://d947b55aa8fb9198b5d13ad81f61ac4d

When executed on the machine with IP address 10.13.181.84, this command does not return immediately; we stop it manually with CTRL + C.

Step three: Start the Swarm manager

Because we want sclu083 to act as the Swarm management node, we execute the swarm manage command on this machine:

sudo docker run -d -p 2376:2375 swarm manage token://d947b55aa8fb9198b5d13ad81f61ac4d

Note two things about this command: first, -d runs Swarm as a daemon; second, in the port mapping, 2376 can be changed to any unoccupied port on the local machine, but it must not be 2375, otherwise problems will occur.

After executing this command, the entire cluster has been started.

You can now view all the nodes in the cluster from any node.

You can then run Docker container operations on this cluster from any machine that has Docker installed, by specifying the IP address and port of the Swarm manager machine in the command.
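For example, the following starts a Redis container somewhere on the cluster through the manager (the container name redis_test is just an illustration, not from the original article):

sudo docker -H 10.13.181.83:2376 run -d --name redis_test redis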

Now look at the cluster nodes from the 10.13.181.85 machine. The info command below can be replaced with any Docker command supported by Swarm; see the official documentation for the full list.

sudo docker -H 10.13.181.83:2376 info

From the output, we can spot a problem: this small cluster clearly has 3 nodes, but the info command shows only 2; the node 10.32.105.124 is missing. Why does this happen?

Because the Docker daemon on the 10.32.105.124 machine was not set up to listen on 0.0.0.0:2375 as described above, Swarm could not join this node to the cluster.
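A quick way to verify this (an added troubleshooting step, not from the original article) is to point a Docker client directly at the node's daemon port; if the daemon is not listening there, the connection fails:

# should print daemon info if 0.0.0.0:2375 is configured correctly
sudo docker -H tcp://10.32.105.124:2375 info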

Also, when using the discovery service built into Docker Hub, a problem can arise when you run swarm create:

Time= "2015-04-21t08:56:25z"
Level=fatal msg= "Get
https://discovery-stage.hub.docker.com/v1/clusters/
D947b55aa8fb9198b5d13ad81f61ac4d:dial tcp:i/o Timeout "
Errors like this appear from time to time; the cause is not clear and remains to be resolved.

The second method below can be used when the service discovery feature built into Docker Hub has problems.

The second approach: using a file

The second method is simpler than the first and less prone to timeout problems.

Step one: Create a new file on the sclu083 machine and write into it the addresses of the machines that are to join the cluster:
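The original article showed the file contents in a screenshot. A sketch of what the file looks like for this article's nodes (classic Swarm expects one ip:port entry per line):

10.13.181.83:2375
10.13.181.84:2375
10.32.105.124:2375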

Step two: Execute the swarm manage command on the sclu083 machine:

sudo docker run -d -p 2376:2375 -v $(pwd)/cluster:/tmp/cluster swarm manage file:///tmp/cluster

Note: You must use the -v flag here, because the cluster file is on the host machine and the container cannot reach it by default, so we share it into the container with -v. Also, do not forget the file:/// prefix.

As you can see, Swarm is already running. Now you can view the cluster node information with the command:

sudo docker run --rm -v $(pwd)/cluster:/tmp/cluster swarm list file:///tmp/cluster

(When using a file for service discovery, it appears that the swarm list command can only be used on the Swarm manager node, not on other nodes.)

Well, now that the cluster is running, you can use it from other machines just as in the first method. We also test on the sclu085 machine:
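The command is the same info query as in method one, pointed at the Swarm manager:

sudo docker -H 10.13.181.83:2376 info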

You can see that access succeeds and the node information is correct. You can then replace the info command above with any other Docker command to use this Docker cluster.

Swarm scheduling strategies

When Swarm schedules a container onto a node, it computes the node best suited to run the container according to the specified strategy. The currently supported strategies are: spread, binpack, and random.

Random, as the name suggests, randomly selects a node to run the container and is generally used for debugging. The spread and binpack strategies compute the node that should run the container based on each node's available CPU and RAM and the number of containers it is already running.

Under the same conditions, the spread strategy selects the node running the fewest containers for the new container, while the binpack strategy selects the node where containers are most concentrated (the binpack strategy causes Swarm to optimize for the node which is most packed).

Using the spread strategy distributes containers evenly across the nodes in the cluster, so if a node goes down, only a small fraction of the containers is lost.

The binpack strategy avoids container fragmentation as much as possible: it leaves unused nodes available for containers that need more space, and packs running containers onto as few nodes as possible.
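The strategy is chosen when the manager starts, via the --strategy flag (spread is the default). A sketch, reusing the token from method one:

# start the manager with the binpack strategy instead of the default spread
sudo docker run -d -p 2376:2375 swarm manage --strategy binpack token://d947b55aa8fb9198b5d13ad81f61ac4d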

Constraint Filter

Constraint filters use labels to run a container on a specified node. These labels are specified when the Docker daemon starts, or can be written into the /etc/default/docker configuration file.
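A sketch of such a configuration, assuming sclu083's daemon is given a label matching the constraint used below (the label key and value are assumptions chosen to match this article's commands):

# in /etc/default/docker on sclu083
DOCKER_OPTS="-H 0.0.0.0:2375 -H unix:///var/run/docker.sock --label label=083"

With that label in place, the following command schedules the container onto sclu083: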

sudo docker -H 10.13.181.83:2376 run --name redis_083 -d -e constraint:label==083 redis
Affinity Filter

Using -e affinity:container==container_name/container_id together with --name container_1 makes the container container_1 run next to the container container_name/container_id, which means the two containers run on the same node (you can schedule 2 containers and make container #2 run next to container #1).

First start a container on one machine:

sudo docker -H 10.13.181.83:2376 run --name redis_085 -d -e constraint:label==085 redis
Next, start the container redis_085_1 and let redis_085_1 run next to the redis_085 container, that is, have the two containers run on the same node:

sudo docker -H 10.13.181.83:2376 run -d --name redis_085_1 -e affinity:container==redis_085 redis
The -e affinity:image==image_name option specifies that only machines that have already pulled image_name will run the container (you can schedule a container only on nodes where the image is already pulled).

The following command launches a Redis container on a node that already has the Redis image:

sudo docker -H 10.13.181.83:2376 run --name redis1 -d -e affinity:image==redis redis
The following command starts a container named redis on a node with the Redis image if one exists; if no node has the Redis image, the container is started according to the default strategy (the ~ makes this affinity a soft preference rather than a hard requirement).

sudo docker -H 10.13.181.83:2376 run -d --name redis -e affinity:image==~redis redis
Port Filter

Ports are also considered unique resources:

sudo docker -H 10.13.181.83:2376 run -d -p 80:80 nginx
After executing this command, another container that wants to use port 80 can only be scheduled to a node whose port 80 is still free; once no such node remains, the container fails to start.
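To illustrate (a sketch, assuming all three nodes have joined the cluster):

# each run binds port 80 on some node; Swarm picks a node where port 80 is free
sudo docker -H 10.13.181.83:2376 run -d -p 80:80 nginx
sudo docker -H 10.13.181.83:2376 run -d -p 80:80 nginx
sudo docker -H 10.13.181.83:2376 run -d -p 80:80 nginx
# every node's port 80 is now taken, so this one fails to start
sudo docker -H 10.13.181.83:2376 run -d -p 80:80 nginx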

Conclusion:

This article introduced in detail two methods of using Swarm to manage a Docker cluster. But Swarm is a relatively new project, still in the development phase; it iterates very quickly, and its functions and features change frequently. Therefore, Swarm is not yet recommended for production environments, but it is certain that Swarm is a promising technology.
