Docker Swarm: usage experience, parts 1 and 2


Background

As agile development and deployment practices have taken hold, Docker should be no stranger to anyone interested in container technology. The Docker 1.12 engine has been out for about two months, and it packs in many new features, such as Swarm mode, health checks for container clusters, identity encryption for nodes, the Docker Service API, constraint-based scheduling filters, a built-in routing mesh, support for running Docker on more platforms (Mac, Windows, AWS, Azure), plug-in upgrades, and more. Even Docker's own product manager said the new version could be the biggest change in the company's history.

For a long time, Docker was widely criticized for its cluster management: the Docker engine itself could only be operated on a single host, and there was no real official cluster-management solution. Only now, with the 1.12 engine, has multi-host, multi-container cluster management been further improved and refined; the engine has Swarm mode built in.

This article introduces the new features of Swarm mode and shows how to build a cluster and deploy services with it.

Introduction to new features of Swarm cluster mode

  1. Batch creation of services

The docker service command in the 1.12 engine is similar to the earlier docker run command, with the difference that it can manage containers across multiple hosts at the same time. The features below are demonstrated on a swarm cluster with 1 manager node and 5 worker nodes.

First, look at how the containers are created:

  $ docker network create -d overlay mynet

  $ docker service create --replicas 3 --name frontend --network mynet --publish 80:80/tcp frontend_image:latest

  $ docker service create --name redis --network mynet redis:latest

An overlay network is created before the containers so that containers on different hosts share the same network. The two docker service create commands then start three identical replicas of the web container and one redis replica on the overlay network named mynet, and each web container publishes the same port mapping. Like this:

  2. Robust Cluster fault tolerance

Since this is a cluster, nodes will inevitably fail at some point:

When two of the three web replicas go down, the cluster relies on its own service registration and discovery mechanism, together with the previously declared --replicas 3, to bring up two new web replicas on the remaining idle nodes in the cluster. It is easy to see that docker service is not simply a batch start of containers; it defines a desired state for the cluster. The cluster continuously checks the health of the service and maintains its high availability.
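To watch this self-healing happen, you can repeatedly list the service's tasks; a minimal sketch, assuming the frontend service created above:

$ docker service ps frontend
# failed tasks are marked as shut down and replacement replicas appear on other nodes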

The new nodes are distributed as follows:

  

  3. Scalability of the service node

Swarm not only provides excellent high availability but also lets you scale service nodes elastically. When you want the web container group to scale dynamically to six replicas, running docker service scale frontend=6 creates three new replicas immediately.

Sharp-eyed readers may have noticed that all the newly added web replicas run on the nodes that already host the original web containers. If you need to run exactly one identical replica on every node, is there a way? The answer is yes:

  $ docker service create --mode=global --name extend_frontend frontend_image:latest

One command and it's done in minutes!

  4. Scheduling mechanism

Docker 1.12's scheduling mechanism is also worth mentioning.

Scheduling is how the cluster's server side decides on which server node a container instance should be created and started. It consists of a placement (bin-packing) strategy and filters. Each time a container is started, the Swarm cluster invokes the scheduling mechanism, filters out the servers that match the constraints, and runs the container on one of them.

Using the earlier example, adding the --constraint parameter makes the containers run only on nodes whose disks are SSDs (provided that, when the node joined the cluster and its daemon was started, the daemon was given the parameter --label com.example.storage=ssd):

  $ docker service create --replicas 3 --name frontend --network mynet --publish 80:80/tcp --constraint engine.labels.com.example.storage==ssd frontend_image:latest
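For reference, a minimal sketch of how such an engine label might be set when the daemon is started on a node; the exact unit file or config path depends on your distribution, so treat this as an assumption rather than the article's setup:

# pass the label directly to the daemon ...
$ dockerd --label com.example.storage=ssd

# ... or declare it in /etc/docker/daemon.json and restart the daemon:
# { "labels": ["com.example.storage=ssd"] }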

Build a swarm cluster

With these introductions out of the way, we should have a first impression of Swarm mode's new features. Next, let's look at an example of a rolling_update of a mock site, which is surely something DevOps teams want to see during the many routine releases.

  1. Build a swarm cluster

Prepare three machines.

  node1:192.168.133.129

  node2:192.168.133.137

  node3:192.168.133.139

Before building the swarm cluster, open 2377/tcp (cluster management), 7946/tcp and 7946/udp (inter-node communication), and 4789/udp (overlay network) in the firewall on every cluster node.
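On a node that uses firewalld, for example, this could look like the following sketch (adapt it to whatever firewall you actually run):

$ firewall-cmd --permanent --add-port=2377/tcp
$ firewall-cmd --permanent --add-port=7946/tcp
$ firewall-cmd --permanent --add-port=7946/udp
$ firewall-cmd --permanent --add-port=4789/udp
$ firewall-cmd --reload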

First run $ docker swarm init on node1 to start a cluster manager node, then run docker swarm join --token *** 192.168.133.129:2377 on every node you want to add, and the node joins the cluster (whether a joining node becomes a worker or a manager is determined by the token it uses). The swarm cluster now looks like the picture below: the boxes are ready and the goods are loaded inside.
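Spelled out as commands, this is roughly the following sketch; the actual join token is printed by docker swarm init (or by docker swarm join-token worker) and replaces the placeholder:

# on node1: initialize the cluster and advertise its address
$ docker swarm init --advertise-addr 192.168.133.129

# on node2 and node3: join with the token shown by the init command
$ docker swarm join --token <worker-token> 192.168.133.129:2377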

Running $ docker node ls shows the status of all swarm nodes:

  P.S. The creation of a swarm cluster consists of the following three steps:

    1. Discover the Docker nodes in the cluster, collect node state and role information, and monitor node state changes

    2. Initialize the internal scheduler module

    3. Create and start the API listener service module

Once the cluster is created, you can use the docker service command to operate on containers in the cluster in bulk. Building the cluster takes only two steps; isn't that cool?

  2. Make a demo image

The image contains a simple HTTP web service written in Python, env.py, whose purpose is to display the container ID of the container handling the request:

  from flask import Flask
  import os

  app = Flask(__name__)

  # return the container's hostname, which Docker sets to the container ID
  @app.route("/")
  def env():
      return os.environ["HOSTNAME"]

  # Flask listens on port 5000 by default, matching the -p 5000:5000 mapping used later
  app.run(host="0.0.0.0")
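The article does not show the Dockerfile for the demo image, so here is a minimal sketch of how such an image could be built; the base image and the demo tag are assumptions, and the image must be available on every node (build it on each node or push it to a registry):

$ cat > Dockerfile <<'EOF'
FROM python:2.7
RUN pip install flask
WORKDIR /app
COPY env.py .
CMD ["python", "env.py"]
EOF

$ docker build -t demo .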

  3. Create a service task with swarm mode

With this image in place, create a service named test with the docker service create command:

$ docker service create --name test -p 5000:5000 demo python env.py

Take a peek with docker ps:

Hmm, why isn't it up yet? Check the status of the service with docker service ls:

  

Note the REPLICAS value: 0/1 means that docker service create has defined one replica but it is not running yet. Wait a moment and run the command again:

  A side note:

In some cases the container is already running, but running docker ps on the local machine still shows nothing. Why is that?

In fact, Docker places the task on whichever node is least loaded, based on the current load of each swarm node; docker service ps <service-name> shows which node each task is running on.
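For the test service created above, that check is simply the following (a sketch):

$ docker service ps test
# the NODE column shows where each replica was scheduled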

Okay, the container is up and running on node1.

Open the address in your browser to see the ID of the container:

  4. Add service nodes

With a single container instance running, next try expanding the number of instances dynamically:

$ docker service scale test=6

node1:

node2:

node3:

With one command, each of the three nodes in the swarm cluster now runs two test replica instances.

At this point you may have noticed that a natural HA cluster has emerged: Docker distributes the HTTP requests arriving at each host evenly across the task replicas using a round-robin algorithm.
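You can see the round-robin behaviour by hitting the published port a few times; a small sketch, assuming node1's address and the test service above (each response should show a different container ID):

$ for i in $(seq 1 6); do curl -s http://192.168.133.129:5000/; echo; done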

  5. Simulate a swarm cluster node going offline

Normally, you take a node out of a swarm cluster by running docker swarm leave on that node, but to make the experiment a bit wilder, I stopped the Docker daemon on node3 directly.

Then check the task status from either of the two remaining nodes:

The two test tasks that were originally running on node3, test3 and test4, have moved to node1 and node2. The whole replica migration required no manual intervention, and the original load balancing of the cluster still works after the migration!

---second article---


In the previous test, requests to node1, node2, and node3 were made by binding the hostnames in the hosts file; let's pick up from that point.

Load Balancing and service discovery

In that test, load balancing was only implemented among the containers on the host nodes; during a rolling_update in a production environment, you must make sure that at least one container can keep serving requests at any moment.

So the question is: is there a way to detect the running state of the application on each node, so that if one of the services stops working properly, the reverse-proxy HTTP server in front of it is notified immediately and removes the faulty node automatically, and when the node is repaired, its information is re-registered with the load balancer, all without any human intervention?

The answer is yes. Here are two ways to implement service registration and discovery:

1. Docker 1.12's built-in service registration and discovery mechanism

Before talking about Docker's service discovery mechanism, we have to mention the overlay network, which first appeared as a feature of the Docker 1.9 release and which allows containers on different hosts to share a network.

Before that, there were several ways for containers on different hosts to communicate:

Use port mapping: map the container's service port directly to the host, and let hosts communicate through the mapped ports

Place the containers on the same network segment as the host

Use third-party tools such as Flannel, Weave, or Pipework, which generally build an SDN overlay network to enable container communication

Docker 1.12 still follows the overlay network model, which provides a strong network foundation for its service registration and discovery.

Docker's registration and discovery mechanism actually uses a distributed key-value store as its storage abstraction layer. Docker 1.12 provides a built-in discovery service, so the cluster no longer needs to rely on an external discovery service such as Consul or etcd. (You can still use those discovery services with Swarm mode, as described in detail in the next section.) Currently, Swarm mode offers six discovery mechanisms: Token (the default), Node, File, Consul, Etcd, and ZooKeeper. Two of them, node and file discovery, require maintaining a configuration file or other related content; for the others, discovery is handled entirely through the join command line.

Okay, let's continue the experiment. First create a custom overlay network:

$ docker network create -d overlay test

Then place the application containers and the HTTP service containers together on that same network:

$ docker service create --name test -p 5000:5000 --replicas 6 --network test demo python env.py

$ docker service create --name nginx --replicas 3 --network test -p 80:80 nginx-2

The default.conf of the Nginx container is shown below. The test:5000 in it refers to the test service created earlier with docker service create; the Docker engine maps the service name to its VIP through internal DNS resolution (a quick way to check this is sketched right after the config).

server {
    listen 80;
    server_name localhost;

    access_log /var/log/nginx/log/host.access.log main;

    location / {
        # root /usr/share/nginx/html;
        # index index.html index.htm;
        proxy_pass http://test:5000;
    }

    error_page 502 503 504 /50x.html;

    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
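To confirm that the service name resolves inside the overlay network, you can look it up from one of the containers; a sketch, assuming the nginx image provides getent and that an nginx replica runs on the node you are logged into:

$ docker exec -it $(docker ps -q -f name=nginx | head -n 1) getent hosts test
# prints the VIP that the swarm DNS assigns to the test service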

Now, when a browser accesses http://node2, the HTTP request is distributed evenly, according to the VIP load-balancing algorithm, to the 6 Python containers on the 3 swarm cluster nodes. No matter which backend container goes down, as long as the three Docker swarm nodes are not all down at the same time, the web service is not affected.

A note on the VIP load-balancing algorithm mentioned above: Docker 1.12 uses IPVS, which comes with Linux, as its load balancer. IPVS is actually a kernel load-balancing module called ip_vs. Unlike DNS load balancing, which polls an IP list sequentially, IPVS distributes the load evenly across the containers. IPVS is a layer-4 forwarder that can forward TCP, UDP, and DNS, and it supports eight load-balancing algorithms.

2. Docker combined with an external configuration storage service

There are many options for this kind of service, such as Consul, etcd, and ZooKeeper; here we take Consul as an example. Consul is service registration and discovery software and is itself a key/value store. Before Docker 1.12 was released, many people chose to combine it with Docker to provide highly scalable web services.

Before starting the experiment, modify Docker's main configuration file and replace Docker's default key/value store with Consul:

ExecStart=/usr/bin/dockerd --registry-mirror=http://057aa18c.m.daocloud.io -H unix:///var/run/docker.sock --cluster-store=consul://192.168.133.137:8500

1) From the machines used in the demonstration above, pick one, node2, to act as the Consul server. (The Consul server should ideally also be configured as a cluster so that Consul itself is highly available; to introduce the feature quickly, this article uses only one node.)

It is also worth noting that this article runs the configuration storage service on a business node, but it is generally recommended to separate base services from the nodes that run business containers and use dedicated service nodes, so that all nodes running business containers are stateless and can be scheduled and assigned workloads equally.

2) $ docker run -d --restart=always -h node -p 8500:8500 -p 8600:53/udp progrium/consul -server -bootstrap -advertise 192.168.133.137 -log-level debug

Consul comes with a UI: open 192.168.133.137:8500 and you can see it. Consul opens two ports on startup, one on 53/udp and another on 8500/tcp, and their state is visible from the dashboard.


3) Start the Registrator container to register Docker container information with the Consul cluster:

$ docker run -d --restart=always -v /var/run/docker.sock:/tmp/docker.sock -h 192.168.133.137 gliderlabs/registrator consul://192.168.133.137:8500

4) Start the simplest possible HTTP server to verify that it registers its own information with Consul, demonstrating the automatic discovery feature:

$ docker run -d -p 7070:80 --name httpd httpd
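You can check the registration through Consul's HTTP API as well as through the UI; a small sketch against the catalog endpoint:

$ curl http://192.168.133.137:8500/v1/catalog/service/httpd
# returns the address/port entries that Registrator registered for the httpd container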

5) Finally, install consul-template on the test machine to fetch data from Consul and render a local template into a configuration file.

Install consul-template:

$ curl https://releases.hashicorp.com/consul-template/0.15.0/consul-template_0.15.0_linux_amd64.zip -o consul-template.zip && unzip consul-template.zip && mv ./consul-template /usr/bin/

Generate a template file:

$ echo -e '{{range service "httpd"}}\nserver {{.Address}}:{{.Port}}{{end}}' > /tmp/consul.ctmpl

Fill in the template:

$ consul-template -consul 192.168.133.137:8500 -template "/tmp/consul.ctmpl:/tmp/consul.result" --once

Now stop the httpd container and run the template-filling command again.

You can see that the container information registered in Consul has been filled into the template! If the template were an Nginx configuration file, the Nginx configuration could be updated dynamically according to whether Consul sees the containers as up, for example (a sketch of wiring this together follows the snippet):

upstream consul_nodes {
    server 192.168.133.137:7070;
    server 192.168.133.139:7070;
}

location / {
    root html;
    index index.html index.htm;
    proxy_pass http://consul_nodes;
}
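A minimal sketch of how such a template could be kept in sync and Nginx reloaded automatically; the template and output paths here are assumptions, not the article's actual files:

$ consul-template -consul 192.168.133.137:8500 \
    -template "/etc/nginx/conf.d/upstream.ctmpl:/etc/nginx/conf.d/upstream.conf:nginx -s reload"

Run without --once, consul-template keeps watching Consul and re-renders the file (and reloads Nginx) whenever the set of registered containers changes.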

Both of the approaches above implement service registration and discovery; they are listed here for comparison. In terms of ease of configuration, Docker 1.12 clearly has the more obvious advantage.

Rolling deployment

In earlier versions of Docker, containers had to be blue-green deployed by hand, or rolling upgrades had to be done with hand-written scripts. With rolling updates in 1.12, we no longer need to encode the update rules in a script to achieve a transparent deployment. In Swarm mode, a service can be updated a batch of nodes at a time, with a configurable delay between deployments to different sets of nodes. If anything goes wrong, you can immediately roll the tasks back to the previous version of the service.

When you want to update the image referenced by the test service, you can now simply do this:

$ docker service update --update-parallelism 2 --image demo:2.0 --update-delay 10s test

The --update-parallelism parameter specifies the maximum number of tasks updated at the same time. This means we can update the container replicas safely and transparently. For it to be truly transparent, of course, you need to make sure your containers are backward compatible; otherwise it is better to destroy the old containers first and then update them all.

The containers are then updated two at a time, every 10 seconds, and the whole update operation completes within about 30 seconds.
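To follow the progress, you can inspect the service's update settings and watch its task states while the update runs; a sketch:

$ docker service inspect --pretty test
# shows the configured update parallelism and delay

$ watch -n 2 docker service ps test
# old tasks shut down and tasks using the demo:2.0 image come up in batches of two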

Finally, the Swarm cluster feature in Docker 1.12 is optional to enable, not mandatory; the original single-host mode of operation is still preserved. But having seen these cool new features, are you still willing to leave the option turned off?

Docker's future still looks bright as it evolves from a purely image-centric ecosystem to the management of container clusters.

