We mentioned an example earlier: a microservice application consisting of a front end and a number of back-end services. The front end is the Traefik HTTP proxy, which routes each request to a back-end service. The back end is very simple: a set of Go-based HTTP web servers, each of which returns the ID of the container it is running in.
The new Docker Swarm no longer requires a separate HTTP proxy in front of the application containers. The previous architecture, shown in the earlier illustration, is now reduced to the form shown in the following illustration:
Fewer moving parts. Hooray!
In addition, we get a built-in load-balancing mechanism for the back-end services, and we can reach those services from any node in the cluster. Docker Swarm also integrates a routing mesh that forwards each request to an appropriate back-end container.
Given these new features, some readers may assume that setting up a Docker Swarm cluster is now more complex than it used to be. In fact, the whole process is simpler.
Still doubtful? Let's look at it together.
Yes, we are using the Raspberry Pi cluster again. I made a build of Docker 1.12 and installed it on the Raspberry Pis. When the official Docker 1.12 release ships, we will update this content accordingly.
Here's a look at the current configuration:
root@pi6 $ docker version
Client:
 Version:      1.12.0-rc1
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   1f136c1-unsupported
 Built:        Wed Jun 15 15:35:51 2016
 OS/Arch:      linux/arm

Server:
 Version:      1.12.0-rc1
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   1f136c1-unsupported
 Built:        Wed Jun 15 15:35:51 2016
 OS/Arch:      linux/arm
Very well, Docker 1.12 RC1 is ready. Now let's start the necessary services. First, let's see whether we can spot the new features hidden in the Docker CLI.
root@pi6 $ docker
Usage: docker [OPTIONS] COMMAND [arg...]
       docker [ --help | -v | --version ]

A self-sufficient runtime for containers.
...
    service   Manage Docker services
...
    stats     Display a live stream of container(s) resource usage statistics
...
    swarm     Manage Docker Swarm
...
    update    Update configuration of one or more containers

Run 'docker COMMAND --help' for more information on a command.
I removed the parts of the output that are identical to previous versions, leaving only the differences. Now we have a docker swarm command at our disposal.
Let's query what it does:
root@pi6 $ docker swarm
Usage: docker swarm COMMAND

Manage Docker Swarm

Options:
      --help   Print usage

Commands:
  init      Initialize a Swarm.
  join      Join a Swarm as a node and/or manager.
  update    Update the Swarm.
  leave     Leave a Swarm.
  inspect   Inspect the Swarm

Run 'docker swarm COMMAND --help' for more information on a command.
So "init" is used to initialize a Swarm. That looks like exactly what we need. Let's start with that command.
root@pi6 $ docker swarm init
Swarm initialized: current node (1njlvzi9rk2syv3xojw217o0g) is now a manager.
Our Swarm manager node is now up and running; next, let's add more nodes to the cluster.
Go to another node in the cluster and execute:
root@pi1 $ docker swarm join pi6:2377
This node joined a Swarm as a worker.
With the above command, we declared that the new node should join, via its manager node, the Swarm cluster we just created. Docker Swarm performs the related operations in the background.
For example, it sets up encrypted communication channels between the cluster nodes. We no longer need to manage TLS certificates ourselves.
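In the final 1.12 release this joining step is tightened up a bit: a worker authenticates with a join token that you first retrieve from a manager. A rough sketch of what that looks like (the token value is a placeholder, and the RC shown here still accepts the plain join above):

```shell
# On the manager: print the full join command, including the secret
# token, that a new worker should run.
docker swarm join-token worker

# On the new node: join using the token printed above.
# <worker-token> is a placeholder for the real value.
docker swarm join --token <worker-token> pi6:2377
```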
Anyone who has ever set up a Docker Swarm cluster before will appreciate how simple the new process is. But we're not done yet.
Running "docker info" on the Swarm manager node reveals some interesting hints. Again, I've removed the irrelevant parts:
root@pi6 $ docker info
...
Swarm: active
 NodeID: 1njlvzi9rk2syv3xojw217o0g
 IsManager: Yes
 Managers: 1
 Nodes: 2
 CACertHash: sha256:de4e2bff3b63700aad01df97bbe0397f131aabed5fabb7732283f044472323fc
...
Kernel Version: 4.4.10-hypriotos-v7+
Operating System: Raspbian GNU/Linux 8 (jessie)
OSType: linux
Architecture: armv7l
CPUs: 4
Total Memory: 925.4 MiB
Name: pi6
...
As you can see, the "docker info" output now has a "Swarm" section, which tells us that the current node is a Swarm manager and that the cluster consists of two nodes.
On the second node, the output differs slightly from the manager node:
Swarm: active
 NodeID: 3fmwt4taurwxczr2icboojz8g
 IsManager: No
At this point we have an interesting, but still empty, Swarm cluster.
We also need to understand the new service abstraction in Docker 1.12. You may have noticed the docker service command in the output above. A Docker service is a piece of software, running in one or more containers, that provides a "service" to the outside world from within the Swarm cluster.
Such a service can consist of a single container or of multiple containers. In the latter case, we gain high availability and/or load balancing for the service.
Let's use the "whoami" image created earlier to establish such a service.
root@pi6 $ docker service create --name whoami -p 80:8000 hypriot/rpi-whoami
buy0q65lw7nshm76kvy5imxk3
With the help of the "docker service ls" command, we can check the status of this new service.
root@pi6 $ docker service ls
ID            NAME    SCALE  IMAGE               COMMAND
buy0q65lw7ns  whoami  1      hypriot/rpi-whoami
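Beyond this one-line summary, the service definition itself can also be inspected. A small sketch, assuming the RC behaves like the final 1.12 CLI here:

```shell
# Human-readable summary of the service: name, mode, replica count,
# image, and published ports. Without --pretty, raw JSON is returned.
docker service inspect --pretty whoami
```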
Next, let's check whether we can request the index page by sending an HTTP request to the eth0 network interface via curl.
root@pi6 $ curl http://192.168.178.24
I'm 1b6df814c654
Everything is going well. Applause! Some readers may have noticed the "SCALE" column in the header row of the "docker service ls" output, which seems to imply that we can scale the service.
root@pi6 $ docker service scale whoami=5
whoami scaled to 5
Let's actually verify it:
root@pi6 $ docker service ls
ID            NAME    SCALE  IMAGE               COMMAND
buy0q65lw7ns  whoami  5      hypriot/rpi-whoami

root@pi6 $ for i in {1..5}; do curl http://192.168.178.24; done
I'm 8db1657e8517
I'm e1863a2be88d
I'm 1b6df814c654
I'm 8db1657e8517
I'm e1863a2be88d
Very simple.
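A quick way to verify that requests really are being spread across containers is to count the distinct responses. A sketch using the sample answers from the curl loop above (in practice you would pipe the loop's output straight into sort):

```shell
#!/bin/sh
# Sample responses captured from the five-replica whoami service above.
responses="I'm 8db1657e8517
I'm e1863a2be88d
I'm 1b6df814c654
I'm 8db1657e8517
I'm e1863a2be88d"

# Count how many distinct containers answered the requests.
distinct=$(printf '%s\n' "$responses" | sort -u | wc -l | tr -d ' ')
echo "distinct containers: $distinct"
```

Here three distinct containers answered five requests, which is what round-robin load balancing across a subset of the replicas looks like.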
This approach is similar to the original Swarm, but it is more convenient and quicker to use. Note that we are running on Raspberry Pis rather than powerful servers, so our performance expectations are rather conservative.
Here is the current state from the point of view of a single Docker Engine:
root@pi6 $ docker ps
CONTAINER ID  IMAGE                      COMMAND  CREATED        STATUS        PORTS     NAMES
e1863a2be88d  hypriot/rpi-whoami:latest  "/http"  2 minutes ago  Up 2 minutes  8000/tcp  whoami.4.0lg12zndbal72exqe08r9wvpg
8db1657e8517  hypriot/rpi-whoami:latest  "/http"  2 minutes ago  Up 2 minutes  8000/tcp  whoami.5.5z6mvsrdy73m5w24icgsqc8i2
1b6df814c654  hypriot/rpi-whoami:latest  "/http"  8 minutes ago  Up 8 minutes  8000/tcp  whoami.1.bg4qlpiye6h6uxyf8cmkwuh52
As you can see, five containers have been started, three of which are running on "pi6". Let's see whether we can find the other containers:
root@pi1 $ docker ps
CONTAINER ID  IMAGE                      COMMAND  CREATED        STATUS        PORTS     NAMES
db411a119c0a  hypriot/rpi-whoami:latest  "/http"  6 minutes ago  Up 6 minutes  8000/tcp  whoami.2.2tf7yhmx9haol7e2b7xib2emj
0a4bf32fa9c4  hypriot/rpi-whoami:latest  "/http"  6 minutes ago  Up 6 minutes  8000/tcp  whoami.3.2r6mm091c2ybr0f9jz4qaxw9k
So what happens if we take "pi1" out of the Swarm cluster?
root@pi1 $ docker swarm leave
Node left the default swarm.
Let's look at what happens on the other node:
root@pi6 $ docker ps
CONTAINER ID  IMAGE                      COMMAND  CREATED        STATUS        PORTS     NAMES
58620e3d533c  hypriot/rpi-whoami:latest  "/http"  seconds ago    Up seconds    8000/tcp  whoami.2.cgc4e2ixulc2f3ehr4laoursg
acc9b523f434  hypriot/rpi-whoami:latest  "/http"  seconds ago    Up seconds    8000/tcp  whoami.3.67bhlo3nwgehthi3bg5bfdzue
e1863a2be88d  hypriot/rpi-whoami:latest  "/http"  8 minutes ago  Up 8 minutes  8000/tcp  whoami.4.0lg12zndbal72exqe08r9wvpg
8db1657e8517  hypriot/rpi-whoami:latest  "/http"  8 minutes ago  Up 8 minutes  8000/tcp  whoami.5.5z6mvsrdy73m5w24icgsqc8i2
1b6df814c654  hypriot/rpi-whoami:latest  "/http"  minutes ago    Up minutes    8000/tcp  whoami.1.bg4qlpiye6h6uxyf8cmkwuh52
This is equivalent to a failure of the "pi1" node: all containers that were running on "pi1" are automatically migrated to another cluster node. This mechanism is obviously very important in real production environments.
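The manager's view of cluster membership can be checked at any time with the node subcommand. A sketch (node IDs and hostnames will of course differ):

```shell
# List all nodes known to the Swarm, with their availability and
# manager status; a node that has run "docker swarm leave" is
# reported as Down until it rejoins or is removed.
docker node ls
```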
Let's recap what we have learned so far:
We created a small, dynamic microservice application composed entirely with Docker. Docker Swarm is now integrated into the Docker Engine and no longer exists as a separate piece of software. In most cases, this eliminates the need for a separate proxy mechanism in front of the back-end services; we no longer need nginx, HAProxy, or Traefik.
Although the number of moving parts has decreased, we now have built-in high availability and load balancing. I am looking forward to what else the new Docker Swarm will bring, and to how it will work together with Docker Compose.