Docker Swarm Learning Tutorial

Tags: docker hub, docker swarm, etcd

This is an original work; please indicate the source when reproducing it.

Swarm Introduction

Swarm is a simple tool that Docker released in early December 2014 for managing Docker clusters: it turns a group of Docker hosts into a single, virtual host. Swarm uses the standard Docker API as its front-end access point, which means that Docker clients of every form (the Go dockerclient, docker-py, the Docker CLI itself, etc.) can communicate with Swarm directly. Swarm is written almost entirely in Go. On Friday, April 17, Swarm 0.2 was released; compared with version 0.1, version 0.2 adds a new scheduling strategy that spreads containers across the available nodes, and supports more Docker commands and cluster drivers.

The Swarm daemon is only a scheduler plus a router: Swarm itself does not run containers. It merely accepts requests sent by Docker clients and schedules suitable nodes to run the containers. This means that even if Swarm goes down for some reason, the nodes in the cluster keep running as usual; once Swarm is back up, it collects the information it needs to rebuild the cluster state. (The original article shows a diagram of the Swarm architecture here.)

How to use Swarm

There are three machines: sclu083 (IP 10.13.181.83), sclu084 (IP 10.13.181.84), and atsg124 (IP 10.32.105.124). These three machines form a Docker cluster, with sclu083 also acting as the Swarm manager of the cluster.

Swarm installation

The simplest way to install Swarm is to use the swarm image that Docker provides:

sudo docker pull swarm

Docker cluster management needs a service discovery backend. Swarm supports the following discovery backends: the discovery service built into Docker Hub, a local static file describing the cluster, etcd (which, incidentally, looks very promising and is worth studying when time permits), Consul, ZooKeeper, and a static list of IPs. This article covers the first two backends in detail.
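
Each backend is selected through a URL-style argument passed to the swarm subcommands. Roughly, the forms look like this (a sketch with placeholder addresses; check the Swarm documentation for the exact syntax of your version):

swarm manage token://<cluster_token>
swarm manage file:///path/to/cluster_file
swarm manage etcd://<etcd_ip>:<etcd_port>/<path>
swarm manage consul://<consul_ip>:<consul_port>/<path>
swarm manage zk://<zookeeper_ip>:<zookeeper_port>/<path>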

Before Swarm can manage the cluster, the Docker daemon on every node that will join the cluster must listen on 0.0.0.0:2375. You can either start the daemon directly with a command like sudo docker -d -H tcp://0.0.0.0:2375 &, or modify the configuration file:

sudo vim /etc/default/docker

Add the following line at the end of the file:

DOCKER_OPTS="-H 0.0.0.0:2375 -H unix:///var/run/docker.sock"

Note: this change must be made on every node, and the Docker daemon must be restarted after the change:

sudo service docker restart

First approach: Use the discovery service built into Docker Hub

Step one: Execute the swarm create command on any node to create a cluster token. When this command completes, Swarm contacts the discovery service built into Docker Hub and obtains a globally unique token that uniquely identifies the Docker cluster Swarm will manage.

sudo docker run --rm swarm create

We execute the above command on the sclu084 machine.

The token returned is d947b55aa8fb9198b5d13ad81f61ac4d. Remember this token: it is needed for the operations that follow.

Step two: Execute the swarm join command on every machine that should join the cluster.

In this test we execute the command on all three machines:

sudo docker run --rm swarm join --addr=ip_address:2375 token://d947b55aa8fb9198b5d13ad81f61ac4d

We run this, for example, on the machine with IP 10.13.181.84.

This command does not return after it starts; the agent keeps running in the foreground, so here we stop it manually with Ctrl+C.
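
In practice you would likely run the join agent detached instead, so it keeps sending heartbeats to the discovery service in the background. A minimal sketch, replacing --rm with -d and reusing the token from above:

sudo docker run -d swarm join --addr=ip_address:2375 token://d947b55aa8fb9198b5d13ad81f61ac4d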

Step three: Start the Swarm manager

Because sclu083 is to act as the Swarm management node, we execute the swarm manage command on that machine:

sudo docker run -d -p 2376:2375 swarm manage token://d947b55aa8fb9198b5d13ad81f61ac4d

Two things are important in this command. First, the -d flag runs Swarm as a daemon. Second, the host port 2376 in the mapping can be replaced by any unoccupied port on this machine, but it must not be 2375, which is already occupied by the local Docker daemon; otherwise there will be problems.

After executing this command, the entire cluster is up.

All of the cluster's nodes can now be viewed from any machine.

From any machine with Docker installed, you can now run container operations against the cluster simply by pointing the command at the IP address and port of the Swarm manager machine.
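
For example, starting a container somewhere on the cluster through the manager might look like this (a sketch; the container name web_test and the nginx image are illustrative, not from the original):

sudo docker -H 10.13.181.83:2376 run -d --name web_test nginx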

Now let's view the cluster's node information from the 10.13.181.85 machine. The info command below can be replaced with any Docker command that Swarm supports; see the official documentation for the full list.

sudo docker -H 10.13.181.83:2376 info

From the result we can spot a problem: the cluster clearly has 3 nodes, yet the info command shows only 2; node 10.32.105.124 is missing. Why does this happen?

Because the Docker daemon on 10.32.105.124 was not set to listen on 0.0.0.0:2375 as described above, Swarm was unable to add that node to the cluster.

When using the discovery service built into Docker Hub, you may run into a problem when executing swarm create:

time="2015-04-21T08:56:25Z" level=fatal msg="Get https://discovery-stage.hub.docker.com/v1/clusters/d947b55aa8fb9198b5d13ad81f61ac4d: dial tcp: i/o timeout"

I have hit errors like this one and have not yet figured out the cause.

When the discovery service built into Docker Hub misbehaves, the second method below can be used instead.

Second approach: Use a static file

The second method is simpler than the first and is less prone to timeout problems.

Step one: Create a new file named cluster on the sclu083 machine and write into it the addresses of the machines that will join the cluster.
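
For the three machines used in this article, the file might look like the following, one address:port per line (a sketch assembled from the IPs above):

10.13.181.83:2375
10.13.181.84:2375
10.32.105.124:2375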

Step two: Execute the swarm manage command on the sclu083 machine:

sudo docker run -d -p 2376:2375 -v $(pwd)/cluster:/tmp/cluster swarm manage file:///tmp/cluster

Note: be sure to use the -v flag here. The cluster file lives on the host machine, and the container cannot read it by default, so it has to be shared into the container via -v. Also, do not forget the file:/// prefix.

Swarm is now running. You can view the cluster's node information with the command:

sudo docker run --rm -v $(pwd)/cluster:/tmp/cluster swarm list file:///tmp/cluster

(When using a file for service discovery, it seems that the swarm list command only works on the Swarm manager node, not on the other nodes.)

Now that the cluster is up, you can use it from other machines just as in the first method. We test again on the sclu085 machine:

The access succeeds and the node information is correct. You can now replace the info command above with any other Docker command to use this Docker cluster.

Swarm Scheduling Strategies

When Swarm schedules a container onto a node, it computes the most suitable node according to the specified strategy. The currently supported strategies are spread, binpack, and random.

random, as the name implies, randomly selects a node to run the container and is typically used for debugging. The spread and binpack strategies compute which node should run the container based on each node's available CPU, available RAM, and number of running containers.

Under otherwise equal conditions, the spread strategy picks the node running the fewest containers for the new container, while the binpack strategy picks the node whose containers are most densely packed (the binpack strategy causes Swarm to optimize for the most packed node).

The spread strategy distributes containers evenly across the nodes in the cluster, so if a node goes down, only a small fraction of the containers is lost.

The binpack strategy avoids container fragmentation as much as possible: it leaves unused nodes free for containers that need more space, and packs running containers onto as few nodes as possible.
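
The strategy is chosen when the manager is started, via the --strategy flag. A sketch reusing the token from the first method (spread is the default when the flag is omitted):

sudo docker run -d -p 2376:2375 swarm manage --strategy=binpack token://d947b55aa8fb9198b5d13ad81f61ac4d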

Constraint Filter

Labels let you run a container on a specific node. Labels are specified when the Docker daemon is started, and can also be set in the /etc/default/docker configuration file.
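
For the constraint below to match, the Docker daemon on sclu083 would have to carry a label named label with value 083. A sketch of the corresponding /etc/default/docker line (the label name and value are inferred from the constraint expression; they are not spelled out in the original):

DOCKER_OPTS="-H 0.0.0.0:2375 -H unix:///var/run/docker.sock --label label=083"

After restarting that daemon, the following command lands the container on sclu083: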

sudo docker -H 10.13.181.83:2376 run --name redis_083 -d -e constraint:label==083 redis

Affinity Filter

Starting a container with -e affinity:container==container_name/container_id (for example together with --name container_1) keeps container_1 next to the container container_name/container_id, which means the two containers run on the same node (you can schedule two containers and make container #2 run next to container #1).

First start a container on one node:

sudo docker -H 10.13.181.83:2376 run --name redis_085 -d -e constraint:label==085 redis

Next, start the container redis_085_1 and make redis_085_1 run next to the redis_085 container, that is, on the same node:

sudo docker -H 10.13.181.83:2376 run -d --name redis_085_1 -e affinity:container==redis_085 redis

The -e affinity:image==image_name option specifies that the container may only run on machines that have already pulled image_name (you can schedule a container only on nodes where the image is already pulled).

The following command launches a redis container on a node that already has the redis image:

sudo docker -H 10.13.181.83:2376 run --name redis1 -d -e affinity:image==redis redis

The following command starts a container named redis from the redis image on a node that already has the redis image; if no node has the image, the container is started according to the default strategy (the ~ turns the affinity into a soft preference rather than a hard requirement):

sudo docker -H 10.13.181.83:2376 run -d --name redis -e affinity:image==~redis redis

Port Filter
Ports are also treated as a unique resource:

sudo docker -H 10.13.181.83:2376 run -d -p 80:80 nginx

After this command executes, no other container that needs host port 80 can be scheduled onto the same node; Swarm treats the port as taken there.
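
For example (a sketch, simply repeating the command above): each additional run is placed on a node whose port 80 is still free; once port 80 is occupied on all three nodes, the next run fails to schedule.

sudo docker -H 10.13.181.83:2376 run -d -p 80:80 nginx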

Concluding remarks

This article detailed two ways to use Swarm to manage a Docker cluster. Swarm is still a young project under active development; it moves fast, and its features change frequently. For that reason Swarm cannot yet be recommended for production environments, but it is certainly a promising technology.

I have recently been learning Go and plan to spend some time studying the Swarm source code. Go is a very promising language.

Reference: Docker official documentation
