Swarm Introduction
Swarm is a relatively simple tool set released by Docker, Inc. in early December 2014 for managing Docker clusters; it turns a group of Docker hosts into a single virtual host. Swarm uses the standard Docker API as its front-end access point, which means that any form of Docker client (the Docker client written in Go, docker-py, Docker itself, etc.) can communicate directly with Swarm. Swarm is written almost entirely in Go. On Friday, April 17, Swarm 0.2 was released; compared with version 0.1, version 0.2 adds a new strategy for scheduling containers in the cluster so that they can be spread across the available nodes, and it supports more Docker commands and cluster drivers.
The Swarm daemon is just a scheduler plus a router. Swarm itself does not run containers; it merely accepts the requests sent by Docker clients and schedules suitable nodes to run the containers. This means that even if Swarm goes down for some reason, the nodes in the cluster keep running as usual, and when Swarm recovers it collects the information needed to rebuild the cluster.
How to use swarm
There are 3 machines:
sclu083:10.13.181.83
sclu084:10.13.181.84
atsg124:10.32.105.124
Use these three machines to create a Docker cluster, with sclu083 also acting as the Swarm manager that manages the cluster.

Swarm Installation
The simplest way to install swarm is to use the swarm image provided by the Docker official:
$ sudo docker pull swarm
Docker cluster management requires a service discovery (discovery service backend) function. Swarm supports the following discovery service backends: the service discovery built into Docker Hub, a local static file describing the cluster, etcd (which, incidentally, looks very popular and promising, and is worth studying when time allows), Consul, ZooKeeper, and a static list of IPs. This article covers the use of the first two backends in detail.
Before using Swarm for cluster management, change the listening address of the Docker daemon on every node that will join the cluster to 0.0.0.0:2375. You can either run the command sudo docker -d -H tcp://0.0.0.0:2375 & directly, or modify it in the configuration file:
$ sudo vim /etc/default/docker
Add the following line at the end of the file:
DOCKER_OPTS="-H 0.0.0.0:2375 -H unix:///var/run/docker.sock"
Note: be sure to make this change on every node, and then restart the Docker daemon:
$ sudo service docker restart
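To confirm that the daemon is actually reachable on the new port, a quick check (a sketch using the 10.13.181.83 address from above; any node's address works) might look like this:

```shell
# Ask the remote daemon for its version over TCP; getting a response
# confirms it is listening on 0.0.0.0:2375 as configured above.
sudo docker -H tcp://10.13.181.83:2375 version

# Alternatively, check locally that something is bound to port 2375.
sudo netstat -lntp | grep 2375
```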
The first approach: using the service discovery feature built into the Docker hub
First Step
Execute the swarm create command on any one node to create a cluster token. When this command completes, Swarm contacts the discovery service built into Docker Hub and obtains a globally unique token that uniquely identifies the Docker cluster managed by Swarm.
$ sudo docker run --rm swarm create
We executed the above command on the sclu084 machine.
The returned token is d947b55aa8fb9198b5d13ad81f61ac4d. This token must be saved, because every following step uses it.
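Since the token is needed in every later command, it can be captured into a shell variable instead of being copied by hand (a small convenience sketch, not part of the original workflow):

```shell
# Create the cluster and keep the returned token for later use.
TOKEN=$(sudo docker run --rm swarm create)
echo "cluster token: $TOKEN"

# Later commands can then reference it as token://$TOKEN.
```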
Second Step
Execute the swarm join command on every machine that should join the cluster, adding each machine to the cluster.
For this test, the command is executed on all three machines.
$ sudo docker run --rm swarm join --addr=ip_address:2375 token://d947b55aa8fb9198b5d13ad81f61ac4d
Here ip_address is the node's own address; for example, on the 10.13.181.84 machine it is --addr=10.13.181.84:2375.
This command does not return immediately; we stop it manually with Ctrl+C.
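If you prefer not to keep a foreground process around, the join agent can also be started as a detached container with -d, one per node (a sketch for the 10.13.181.84 node; substitute each node's own address):

```shell
# Run the swarm join agent as a background container on this node,
# so no terminal has to stay open.
sudo docker run -d swarm join --addr=10.13.181.84:2375 \
    token://d947b55aa8fb9198b5d13ad81f61ac4d
```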
Third Step
Start Swarm Manager.
Because we want sclu083 to act as the Swarm management node, we run the swarm manage command on that machine:
$ sudo docker run -d -p 2376:2375 swarm manage token://d947b55aa8fb9198b5d13ad81f61ac4d
Two points deserve attention in this command: first, Swarm is run in daemon form (the -d flag); second, the port mapping: 2376 can be replaced by any unoccupied local port, but it must not be 2375, otherwise there will be problems.
After this command finishes executing, the entire cluster is up and running.
You can now view all the nodes in the cluster from any node.
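One way to list the registered nodes is the swarm list subcommand against the same token (a sketch reusing the token created earlier):

```shell
# Print every node that has joined the cluster identified by this token,
# one <ip>:<port> entry per registered node.
sudo docker run --rm swarm list token://d947b55aa8fb9198b5d13ad81f61ac4d
```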
You can then run Docker container operations against this cluster from any machine with Docker installed, by pointing the command at the IP address and port of the Swarm manager machine.
Now look at the cluster nodes from the 10.13.181.85 machine. The info command below can be replaced with any Docker command that Swarm supports; see the official documentation.
$ sudo docker -H 10.13.181.83:2376 info
The output reveals a problem: this small cluster clearly has 3 nodes, but the info command shows only 2; node 10.32.105.124 is missing. Why is this happening?
Because the Docker daemon on the 10.32.105.124 machine was not configured to listen on 0.0.0.0:2375 as described above, Swarm could not add that node to the cluster.
When using the discovery service built into Docker Hub, swarm create sometimes fails with an error like:
time="2015-04-21T08:56:25Z" level=fatal msg="Get https://discovery-stage.hub.docker.com/v1/clusters/d947b55aa8fb9198b5d13ad81f61ac4d: dial tcp: i/o timeout"
The cause of this error is unclear (it may be a firewall problem).
The second method below can be used when the service discovery feature built into Docker Hub is not working.

The second approach: using a file
Compared with the first method, the second is simpler and less prone to timeout problems.
First Step
Create a new file on the sclu083 machine and write into it the addresses of the machines that should join the cluster.
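With the three machines from this article, the cluster file simply lists one <ip>:<port> daemon address per line; a sketch of creating it (the file name cluster matches the -v mount used below):

```shell
# Write the ./cluster file: one Docker daemon address per line.
cat > cluster <<'EOF'
10.13.181.83:2375
10.13.181.84:2375
10.32.105.124:2375
EOF
```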
Second Step
Execute the Swarm manage command on the sclu083 machine:
$ sudo docker run -d -p 2376:2375 -v $(pwd)/cluster:/tmp/cluster swarm manage file:///tmp/cluster
Note: the -v flag is required here, because the cluster file lives on the host machine and is not visible inside the started container by default, so it is shared into the container via -v. Also, do not forget the file:/// prefix.
Swarm is now running. You can view the cluster's node information with the command:
$ sudo docker run --rm -v $(pwd)/cluster:/tmp/cluster swarm list file:///tmp/cluster
(When a file is used for service discovery, the swarm list command apparently works only on the Swarm manager node, not on the other nodes.)
Now that the cluster is running, it can be used from other machines just as in the first method. Testing again from the sclu085 machine:
The access succeeds and the node information is correct. You can then replace the info command above with any other Docker command to use this Docker cluster.

Swarm Scheduling Strategies
When Swarm schedules a container onto a node, it computes the node best suited to run the container according to the specified strategy. Three strategies are currently supported: spread, binpack, and random.
Random, as the name suggests, picks a random node to run the container and is generally used only for debugging. The spread and binpack strategies compute which node should run the container based on each node's available CPU, RAM, and number of running containers.
Under otherwise equal conditions, the spread strategy selects the node running the fewest containers for the new container, while the binpack strategy selects the node whose containers are most densely packed (the binpack strategy causes Swarm to optimize for the container which is most packed).
The spread strategy distributes containers evenly across the nodes of the cluster, so that if one node goes down, only a small fraction of the containers is lost.
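The strategy is chosen when the manager is started, via the --strategy flag of swarm manage (a sketch reusing the token from earlier; spread is the default when the flag is omitted):

```shell
# Start the manager with the binpack strategy instead of the default spread.
sudo docker run -d -p 2376:2375 swarm manage --strategy binpack \
    token://d947b55aa8fb9198b5d13ad81f61ac4d
```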
The binpack strategy avoids container fragmentation as much as possible: it leaves unused nodes free for containers that need more space, and packs running containers onto as few nodes as possible.

Filters

Constraint Filter
Constraints use labels to run a container on a specified node. The labels are specified when the Docker daemon is started, and can also be written into the /etc/default/docker configuration file.
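For a constraint such as constraint:label==083 to match, the daemon on the target node must actually carry that label; a hedged sketch of the DOCKER_OPTS line in /etc/default/docker on the sclu083 node (the label name and value here mirror this article's examples and are otherwise arbitrary):

```shell
# /etc/default/docker on sclu083: listen on TCP and tag the node
# with the label "label=083" that the constraint filter matches against.
DOCKER_OPTS="-H 0.0.0.0:2375 -H unix:///var/run/docker.sock --label label=083"
```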
$ sudo docker -H 10.13.181.83:2376 run --name redis_083 -d -e constraint:label==083 redis
Affinity Filter
Using -e affinity:container==container_name/container_id with --name container_1 makes the container container_1 run next to the container container_name/container_id, which means the two containers run on the same node (you can schedule 2 containers and make container #2 run next to container #1).
First start a container on one machine:
$ sudo docker -H 10.13.181.83:2376 run --name redis_085 -d -e constraint:label==085 redis
Next, start a container redis_085_1 and let it run next to redis_085, i.e. on the same node:
$ sudo docker -H 10.13.181.83:2376 run -d --name redis_085_1 -e affinity:container==redis_085 redis
With -e affinity:image==image_name you can specify that the container runs only on machines that have already pulled image_name (you can schedule a container onto just the nodes where the image is already pulled).
The following command starts a Redis container only on a node that already has the Redis image:
$ sudo docker -H 10.13.181.83:2376 run --name redis1 -d -e affinity:image==redis redis
The following command starts a container named redis on a node that already has the Redis image; if no node has the Redis image, the container is started according to the default strategy (the ~ makes the affinity a soft preference rather than a hard requirement):
$ sudo docker -H 10.13.181.83:2376 run -d --name redis -e affinity:image==~redis redis
Port Filter
Ports are also treated as a unique resource:
$ sudo docker -H 10.13.181.83:2376 run -d -p 80:80 nginx
After this command runs, port 80 on that node is taken; Swarm will not schedule another container that binds port 80 onto the same node, and once no node has port 80 free, such a container fails to start.
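To see the port filter in action, repeating the same run places each new container on a node where port 80 is still free (a sketch; actual placement depends on the cluster's state):

```shell
# The first nginx binds host port 80 on some node...
sudo docker -H 10.13.181.83:2376 run -d -p 80:80 nginx
# ...a second one is scheduled to a different node with port 80 free.
sudo docker -H 10.13.181.83:2376 run -d -p 80:80 nginx
# Once every node has port 80 bound, further attempts are rejected
# because no node with the port available can be found.
```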
Reproduced from
Docker Swarm Learning Course