Docker Swarm Cluster Practice: Management


In the previous article we deployed a Docker Swarm cluster environment; here we briefly introduce how to manage the Swarm cluster.


Cluster scheduling strategy

Since this is a cluster, there must be a scheduling strategy: the cluster contains many nodes, and we need a strategy that decides which node a new container is allocated to.


The official Docker documentation describes three scheduling strategies for Swarm:

To choose a ranking strategy, pass the --strategy flag and a strategy value to the swarm manage command. Swarm currently supports these values: spread, binpack, random.

The spread and binpack strategies compute rank according to a node's available CPUs, its RAM, and the number of containers it has. The random strategy uses no computation.

It selects a node at random and is primarily intended for debugging.

Your goal in choosing a strategy is to best optimize your cluster according to your company's needs. Under the spread strategy, Swarm optimizes for the node with the least number of containers. The binpack strategy causes Swarm to optimize for the node that is most packed. Note that a container occupies resources during its whole life cycle, including the exited state, and users should be aware of this when scheduling containers. For example, the spread strategy only checks the number of containers, disregarding their states: a node with no active containers but a high number of stopped containers may not be selected, defeating the purpose of load sharing. The user can either remove stopped containers or start them again to achieve load spreading. The random strategy, as its name suggests, chooses nodes at random regardless of their available CPU or RAM. Using the spread strategy results in containers being spread thinly over many machines.

The advantage of this strategy is that if a node goes down, you only lose a few containers. The binpack strategy avoids fragmentation because it leaves room for bigger containers on unused machines. The strategic advantage of binpack is that you use fewer machines, since Swarm tries to pack as many containers as it can onto a node.

To summarize briefly:

- Random strategy: selects a node at random. Generally only used during development and testing.
- Spread strategy: the default strategy. Swarm prefers the node using the fewest resources (CPU, memory, number of containers, and so on), which keeps resource usage even across all nodes in the cluster.
- Binpack strategy: the opposite of spread. It is designed to fill each node as fully as possible, so that as many nodes as possible remain free.

To set a strategy, add the --strategy parameter when starting the Swarm manager:

root@controller:~# docker run -p 2376:2375 -d swarm manage --strategy random token://88b70a0603a97f3e51be1d83f471a1df
b1f075e99d5a3fab795a2d3c33f8229e3d06a142dd421651e05af08279fb7488
Looking at the cluster information again, you can see that the strategy has been changed to random.
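One way to check this (a minimal sketch; the output is abridged and the exact fields depend on your Swarm version) is to run docker info against the manager endpoint:

root@controller:~# docker -H 192.168.12.132:2376 info
...
Strategy: random
...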


Next, when we create container instances without any extra parameters, they are dispatched to random nodes by default, for example:
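A minimal sketch, assuming the same manager address as above; running this command several times scatters the containers across the nodes:

# each invocation may land on any node in the cluster under --strategy random
root@controller:~# docker -H 192.168.12.132:2376 run -id ubuntu:14.04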

Swarm filters

In a Swarm cluster, scheduling can be customized not only with the strategies described above but also with filters. Filters are divided into two categories: node filters and container filters.

Node filters: filter on the configuration of the Docker daemon or the characteristics of the Docker host. They include:
- Constraint
- Health

Container filters: filter on the configuration of the Docker container. They include:
- Affinity
- Dependency
- Port

When a container filter is used, it applies to all containers, including containers in the stopped state. Of the container filters, dependency and port automatically select a node that meets the criteria from the ordinary parameters passed to docker run, so no extra options need to be specified.

Using constraint filtering

Using constraint filtering requires tagging the servers in the cluster, that is, adding a label to each Docker daemon, for example in /etc/default/docker:

DOCKER_OPTS="-H 0.0.0.0:2375 -H unix:///var/run/docker.sock --label node=controller --insecure-registry 192.168.12.132:5000"

On the other machines, change the label to that machine's own name: --label node=<machine name>.

Restart the Docker service
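For example (a sketch, assuming the worker node is named docker1 as later in this article, and that it runs Ubuntu with the service command available):

# /etc/default/docker on docker1
DOCKER_OPTS="-H 0.0.0.0:2375 -H unix:///var/run/docker.sock --label node=docker1 --insecure-registry 192.168.12.132:5000"

# restart the Docker daemon so the new label is picked up
sudo service docker restart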

Looking at the Swarm manager's node information, you can now see the label on each node.
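A quick check (a sketch; the output is abridged and its layout varies by Swarm version): each node's details reported by docker info should now include the node label:

root@controller:~# docker -H 192.168.12.132:2376 info
...
 controller: 192.168.12.132:2375
  └ Labels: ..., node=controller, ...
...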

1. First, run the following command several times to create a number of container instances with the default settings:

docker -H 192.168.12.132:2376 run -id ubuntu:14.04

Checking docker ps -a on each node, you can see that the instances were placed randomly across the machines.

2. Next, create an instance only on a designated node, for example only on the controller node (192.168.12.132):

root@controller:~# docker -H 192.168.12.132:2376 run -id -e constraint:node==controller ubuntu:14.04

Constraints are not limited to the node name; you can also filter on standard node attributes such as the storage driver, kernel version, and operating system, listed below (see the example after the list):

- storagedriver
- executiondriver
- kernelversion
- operatingsystem
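For instance (a sketch; the value must match what docker info reports for the node), to run a container only on nodes whose storage driver is aufs:

root@controller:~# docker -H 192.168.12.132:2376 run -id -e constraint:storagedriver==aufs ubuntu:14.04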

Checking the controller machine shows that all of the containers created with constraint:node==controller above ended up there:

root@controller:~# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
6bc9f97d5c95        ubuntu:14.04        "/bin/bash"              seconds ago         Up seconds                                   stoic_golick
9fad12f8c073        ubuntu:14.04        "/bin/bash"              seconds ago         Up seconds                                   silly_murdock
eb6a03627e7a        ubuntu:14.04        "/bin/bash"              2 minutes ago       Up 2 minutes                                 tiny_wing
b1f075e99d5a        swarm               "/swarm manage token:"   minutes ago         Up minutes          0.0.0.0:2376->2375/tcp   angry_mirzakhani
4bc94ec9f4a2        swarm               "/swarm join --addr=19"  minutes ago         Up minutes          2375/tcp                 jovial_mcnulty


3. We can view the container information of the whole cluster through docker -H <swarm-manager-ip>:<port> <command>, where <command> can be any of the usual Docker commands, for example:

root@controller:~# docker -H 192.168.12.132:2376 ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                           NAMES
6bc9f97d5c95        ubuntu:14.04        "/bin/bash"              5 minutes ago       Up 5 minutes                                        controller/stoic_golick
9fad12f8c073        ubuntu:14.04        "/bin/bash"              5 minutes ago       Up 5 minutes                                        controller/silly_murdock
eb6a03627e7a        ubuntu:14.04        "/bin/bash"              7 minutes ago       Up 7 minutes                                        controller/tiny_wing
6f24519e302a        ubuntu:14.04        "/bin/bash"              7 minutes ago       Up 7 minutes                                        docker2/condescending_mcclintock
2ae50d7963ca        ubuntu:14.04        "/bin/bash"              7 minutes ago       Up 7 minutes                                        docker1/goofy_cray
b1f075e99d5a        swarm               "/swarm manage token:"   minutes ago         Up minutes          192.168.12.132:2376->2375/tcp   controller/angry_mirzakhani
9065ee5166cd        swarm               "/swarm join --addr=19"  minutes ago         Up minutes          2375/tcp                        docker2/small_davinci
0bd66f118d09        swarm               "/swarm join --addr=19"  minutes ago         Up minutes          2375/tcp                        docker1/adoring_boyd
4bc94ec9f4a2        swarm               "/swarm join --addr=19"  minutes ago         Up minutes          2375/tcp                        controller/jovial_mcnulty


Using the health filter

When a node is down or unable to communicate with the cluster, it is in the unhealthy state.
The health filter is enabled with --filter=health when the Swarm manager is started; it forces the cluster to use only healthy nodes when selecting nodes for subsequent operations.
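A minimal sketch of enabling it, reusing the discovery token from earlier in this article (adjust the ports and token to your setup):

root@controller:~# docker run -p 2376:2375 -d swarm manage --filter=health token://88b70a0603a97f3e51be1d83f471a1df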

Using the affinity filter

The affinity filter selects nodes that satisfy (have affinity with) one of the following:
- container (container name or ID)
- image (image name)
- label (a custom label on the container)

Using -e affinity:container==container_name/container_id --name container_1 keeps container_1 next to the container container_name/container_id, which means the two containers run on the same node.

There are many other usage scenarios for the affinity filter; for example, -e affinity:image==image_name specifies that the container may only run on machines that have already pulled image_name.
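For example (a sketch with hypothetical container names; redis is used only for illustration):

# start a redis container named db somewhere in the cluster
root@controller:~# docker -H 192.168.12.132:2376 run -d --name db redis

# schedule web on the same node as db
root@controller:~# docker -H 192.168.12.132:2376 run -id --name web -e affinity:container==db ubuntu:14.04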


Using dependency filtering

When you run a container, you can specify any of the following three dependencies:
- --volumes-from=dependency
- --link=dependency:alias
- --net=container:dependency

When you create a container with Swarm, it is automatically scheduled onto a node whose host environment satisfies these three dependencies, that is, the node already running the container being depended on.
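For example (a sketch with hypothetical container names):

# a container that owns some volumes
root@controller:~# docker -H 192.168.12.132:2376 run -id --name datastore ubuntu:14.04

# --volumes-from is a dependency, so app is placed on the same node as datastore
root@controller:~# docker -H 192.168.12.132:2376 run -id --name app --volumes-from=datastore ubuntu:14.04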

Using port filtering

When you run a container and map a host port with -p, Swarm automatically selects a host on which that port is still available (not occupied), avoiding port conflicts.
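For instance (a sketch; nginx is used only as an image that publishes a port): if one node already publishes host port 80, the second container is scheduled onto a different node:

root@controller:~# docker -H 192.168.12.132:2376 run -d -p 80:80 nginx
root@controller:~# docker -H 192.168.12.132:2376 run -d -p 80:80 nginx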

Filter expressions

Of the filters above, constraint and affinity can additionally take a filter expression, passed with the -e parameter:

<filter-type>:<key><operator><value>

    filter-type: either constraint or affinity
    key: container (name or ID), image, a default tag of the node (such as node or storagedriver), or a custom label on the node or container
    operator: == or !=, optionally followed by ~ for a soft match, in which case the filter condition is ignored when it cannot be satisfied
    value: a string expression (letters, digits, dots, hyphens, underscores); it can be
        a glob pattern, e.g. abc*
        a regular expression, e.g. /node\d/

Examples:

- constraint:node==node1 matches node node1.
- constraint:node!=node1 matches all nodes except node1.
- constraint:region!=us* matches all nodes outside any region tag prefixed with us.
- constraint:node==/node[12]/ matches nodes node1 and node2.
- constraint:node==/node\d/ matches all nodes named node plus one digit.
- constraint:node!=/node-[01]/ matches all nodes except node-0 and node-1.
- constraint:node!=/foo\[bar\]/ matches all nodes except foo[bar]. Note the use of escape characters here.
- constraint:node==/(?i)node1/ matches node node1 case-insensitively, so NoDe1 or NODE1 also match.
- affinity:image==~redis tries to match nodes running a container with a redis image.
- constraint:region==~us* searches for nodes in the cluster belonging to the us region.
- affinity:container!=~redis* schedules a new redis5 container on a node that does not run a container matching redis*.


For more information: https://docs.docker.com/swarm/scheduler/filter/
