Docker Swarm Getting Started


Before Docker 1.12, Swarm was an independent project. With the release of Docker 1.12, the project was merged into Docker and became a Docker subcommand. Swarm is currently the only cluster management tool provided by the Docker community itself. It turns a group of Docker hosts into a single virtual Docker host, so that containers can form a subnet network that spans hosts.

1. Understanding Swarm

Swarm is currently the only cluster management tool bundled with Docker itself: Docker 1.12 ships with the built-in swarm mode for cluster management.

To demonstrate cross-host networking conveniently, we need one more tool: Docker Machine. Together with Docker Compose and Docker Swarm, it makes up what is often called the Docker "three musketeers". Let's take a look at how to install Docker Machine:

$ curl -L https://github.com/docker/machine/releases/download/v0.9.0-rc2/docker-machine-`uname -s`-`uname -m` > /tmp/docker-machine && \
    chmod +x /tmp/docker-machine && \
    sudo cp /tmp/docker-machine /usr/local/bin/docker-machine
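As a quick sanity check (not part of the original steps), you can verify that the binary is on the PATH:

$ docker-machine version    # should print the installed version, e.g. 0.9.0-rc2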

The installation process is very similar to Docker Compose's. Now all three of the Docker "musketeers" are in place.
Before starting, we need to understand some basic concepts. The Docker commands for clusters are as follows:

docker swarm: cluster management. Subcommands include init, join, join-token, leave, and update.
docker node: node management. Subcommands include demote, inspect, ls, promote, rm, ps, and update.
docker service: service management. Subcommands include create, inspect, ps, ls, rm, scale, and update.
docker stack / docker deploy: experimental features for multi-application deployment.

2. Create a cluster

First, use Docker Machine to create a virtual machine as a manager node.

$ docker-machine create --driver virtualbox manager1
Running pre-create checks...
(manager1) Unable to get the latest Boot2Docker ISO release version:
  Get https://api.github.com/repos/boot2docker/boot2docker/releases/latest: dial tcp: lookup api.github.com on [::1]:53: server misbehaving
Creating machine...
(manager1) Unable to get the latest Boot2Docker ISO release version:
  Get https://api.github.com/repos/boot2docker/boot2docker/releases/latest: dial tcp: lookup api.github.com on [::1]:53: server misbehaving
(manager1) Copying /home/zuolan/.docker/machine/cache/boot2docker.iso to /home/zuolan/.docker/machine/machines/manager1/boot2docker.iso...
(manager1) Creating VirtualBox VM...
(manager1) Creating SSH key...
(manager1) Starting the VM...
(manager1) Check network to re-create if needed...
(manager1) Found a new host-only adapter: "vboxnet0"
(manager1) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env manager1

View the virtual machine's environment variables and other information, including its IP address:

$ docker-machine env manager1
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/home/zuolan/.docker/machine/machines/manager1"
export DOCKER_MACHINE_NAME="manager1"
# Run this command to configure your shell:
# eval $(docker-machine env manager1)
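Running the eval suggested in that output points your local Docker client at the VM's daemon; for example:

$ eval $(docker-machine env manager1)
$ docker info    # now reports the Docker Engine running inside manager1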

Then create another virtual machine as a worker node.

$ docker-machine create --driver virtualbox worker1

Now we have two virtual hosts. You can view them using the docker-machine ls command:

$ docker-machine ls
NAME       ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
manager1   -        virtualbox   Running   tcp://192.168.99.100:2376           v1.12.3
worker1    -        virtualbox   Running   tcp://192.168.99.101:2376           v1.12.3

However, the two virtual hosts are not yet connected to each other. To connect them, we bring up Swarm.
Because our hosts are virtual machines created with Docker Machine, we can operate on them with the docker-machine ssh command. In a real production environment this indirection is unnecessary; you would simply run the docker swarm commands on each host directly.
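For reference, docker-machine ssh runs an arbitrary command on a machine; for example:

$ docker-machine ssh manager1 docker version    # runs "docker version" inside the manager1 VM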

Initialize the cluster on manager1:

$ docker-machine ssh manager1 docker swarm init --listen-addr 192.168.99.100:2377 --advertise-addr 192.168.99.100
Swarm initialized: current node (23lkbq7uovqsg550qfzup59t6) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-3z5rzoey0u6onkvvm58f7vgkser5d7z8sfshlu7s4oz2gztlvj-c036gwrakjejql06klrfc585r \
    192.168.99.100:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Use --listen-addr to specify the listening IP address and port. The general form of the command is shown below; in this example we run it on the virtual machine through Docker Machine:

$ docker swarm init --listen-addr <MANAGER-IP>:<PORT>

Next, add worker1 to the cluster:

$ docker-machine ssh worker1 docker swarm join --token \
    SWMTKN-1-3z5rzoey0u6onkvvm58f7vgkser5d7z8sfshlu7s4oz2gztlvj-c036gwrakjejql06klrfc585r \
    192.168.99.100:2377
This node joined a swarm as a worker.

In the preceding join command, you can also add --listen-addr $WORKER1_IP:2377 as the listen address, in case the worker node is later promoted to a manager node; we do not add this parameter in this example.
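For reference, a sketch of what that join would look like (assuming WORKER1_IP is set in the local shell to the worker's IP; the token is the one printed by swarm init):

$ docker-machine ssh worker1 docker swarm join \
    --listen-addr $WORKER1_IP:2377 \
    --token SWMTKN-1-3z5rzoey0u6onkvvm58f7vgkser5d7z8sfshlu7s4oz2gztlvj-c036gwrakjejql06klrfc585r \
    192.168.99.100:2377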

Note: if a host has two NICs, you can specify which IP address the cluster should use. For example, the init command above may fail with the following error.

$ docker-machine ssh manager1 docker swarm init --listen-addr $MANAGER1_IP:2377
Error response from daemon: could not choose an IP address to advertise since this system has multiple addresses on different interfaces (10.0.2.15 on eth0 and 192.168.99.100 on eth1) - specify one with --advertise-addr
exit status 1

The error occurs because the host has two IP addresses and Swarm does not know which one to advertise, so you must specify it explicitly with --advertise-addr:

$ docker-machine ssh manager1 docker swarm init --advertise-addr 192.168.99.100 --listen-addr 192.168.99.100:2377
Swarm initialized: current node (ahvwxicunjd0z8g0eeosjztjx) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-3z5rzoey0u6onkvvm58f7vgkser5d7z8sfshlu7s4oz2gztlvj-c036gwrakjejql06klrfc585r \
    192.168.99.100:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

The cluster is initialized successfully.
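If you ever lose the worker join command, a manager can reprint it at any time:

$ docker-machine ssh manager1 docker swarm join-token worker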

We have now created a "cluster" with two nodes. We can use the docker node command on a management node to view node information:

$ docker-machine ssh manager1 docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
23lkbq7uovqsg550qfzup59t6 *   manager1   Ready    Active         Leader
dqb3fim8zvcob8sycri3hy98a     worker1    Ready    Active

Each node now belongs to the Swarm and is Ready and Active. manager1 is the Leader and worker1 is a worker.
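To dig into a single node's details, the docker node inspect subcommand listed in section 1 works as well, here with the human-readable flag:

$ docker-machine ssh manager1 docker node inspect --pretty worker1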

Now, we continue and create the new virtual machines manager2, worker2, and worker3, for five virtual machines in total. Use docker-machine ls to view them:

NAME       ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
manager1   -        virtualbox   Running   tcp://192.168.99.100:2376           v1.12.3
manager2   -        virtualbox   Running   tcp://192.168.99.105:2376           v1.12.3
worker1    -        virtualbox   Running   tcp://192.168.99.102:2376           v1.12.3
worker2    -        virtualbox   Running   tcp://192.168.99.103:2376           v1.12.3
worker3    -        virtualbox   Running   tcp://192.168.99.104:2376           v1.12.3
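The three new machines are created exactly as before; a one-line loop sketch (assuming the virtualbox driver is available):

$ for node in manager2 worker2 worker3; do docker-machine create --driver virtualbox $node; done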

Then we add the remaining VMs to the cluster.

Add worker2 to the cluster:
$ docker-machine ssh worker2 docker swarm join \
    --token SWMTKN-1-3z5rzoey0u6onkvvm58f7vgkser5d7z8sfshlu7s4oz2gztlvj-c036gwrakjejql06klrfc585r \
    192.168.99.100:2377
This node joined a swarm as a worker.
Add worker3 to the cluster:
$ docker-machine ssh worker3 docker swarm join \
    --token SWMTKN-1-3z5rzoey0u6onkvvm58f7vgkser5d7z8sfshlu7s4oz2gztlvj-c036gwrakjejql06klrfc585r \
    192.168.99.100:2377
This node joined a swarm as a worker.
Add manager2 to the cluster:
First, obtain the manager token from manager1:
$ docker-machine ssh manager1 docker swarm join-token manager
To add a manager to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-3z5rzoey0u6onkvvm58f7vgkser5d7z8sfshlu7s4oz2gztlvj-8tn855hkjdb6usrblo9iu700o \
    192.168.99.100:2377

Then add manager2 to the cluster:

$ docker-machine ssh manager2 docker swarm join \
    --token SWMTKN-1-3z5rzoey0u6onkvvm58f7vgkser5d7z8sfshlu7s4oz2gztlvj-8tn855hkjdb6usrblo9iu700o \
    192.168.99.100:2377
This node joined a swarm as a manager.

Now let's check the cluster information:

$ docker-machine ssh manager2 docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
16w80jnqy2k30yez4wbbaz1l8     worker1    Ready    Active
2gkwhzakejj72n5xoxruet71z     worker2    Ready    Active
35kutfyn1ratch55fn7j3fs4x     worker3    Ready    Active
a9r21g5iq1u6h31myprfwl8ln *   manager2   Ready    Active         Reachable
dpo7snxbz2a0dxvx6mf19p35z     manager1   Ready    Active         Leader
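As a side note, the docker node subcommands from section 1 can also change a node's role after it has joined; for example, a sketch of promoting worker1 to a manager (not needed in this walkthrough):

$ docker-machine ssh manager1 docker node promote worker1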
3. Establish a cross-host network

To make the demonstration clearer, we will add the host machine itself to the cluster, so that we can run Docker commands locally without going through docker-machine ssh.
Run the join command locally:

$ docker swarm join \
    --token SWMTKN-1-3z5rzoey0u6onkvvm58f7vgkser5d7z8sfshlu7s4oz2gztlvj-8tn855hkjdb6usrblo9iu700o \
    192.168.99.100:2377
This node joined a swarm as a manager.

Now we have three managers and three workers: the host machine plus the five virtual machines.

$ docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
6z2rpk1t4xucffzlr2rpqb8u3     worker3    Ready    Active
7qbr0xd747qena4awx8bx101s *   user-pc    Ready    Active         Reachable
9v93sav79jqrg0c7051rcxxev     manager2   Ready    Active         Reachable
a1ner3zxj3ubsiw4l3p28wrkj     worker1    Ready    Active
a5w7h8j83i11qqi4vlu948mad     worker2    Ready    Active
d4h7vuekklpd6189fcudpfy18     manager1   Ready    Active         Leader

View the network status:

$ docker network ls
NETWORK ID          NAME      DRIVER    SCOPE
764ff31881e5        bridge    bridge    local
fbd9a977aa03        host      host      local
6p6xlousvsy2        ingress   overlay   swarm
e81af24d643d        none      null      local

We can see that the swarm comes with a default overlay network named ingress, which swarm services use by default. For this example, we create a new overlay network instead:

$ docker network create --driver overlay swarm_test
4dm8cy9y5delvs5vd0ghdd89s
$ docker network ls
NETWORK ID          NAME         DRIVER    SCOPE
764ff31881e5        bridge       bridge    local
fbd9a977aa03        host         host      local
6p6xlousvsy2        ingress      overlay   swarm
e81af24d643d        none         null      local
4dm8cy9y5del        swarm_test   overlay   swarm
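To confirm the new network's scope, driver, and subnet, you can inspect it (the exact output will vary):

$ docker network inspect swarm_test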

The cross-host network is now ready, though at this point it sits idle. We will deploy applications on it in the next section.

4. Deploy applications on a cross-host network

First, the nodes we created above have no images yet, so we need to pull the image onto each node. Here we use the private registry built earlier.

$ docker-machine ssh manager1 docker pull reg.example.com/library/nginx:alpine
alpine: Pulling from library/nginx
e110a4a17941: Pulling fs layer
... ...
7648f5d87006: Pull complete
Digest: sha256:65063cb82bf508fd5a731318e795b2abbfb0c22222f02ff5c6b30df7f23292fe
Status: Downloaded newer image for reg.example.com/library/nginx:alpine
$ docker-machine ssh manager2 docker pull reg.example.com/library/nginx:alpine
alpine: Pulling from library/nginx
e110a4a17941: Pulling fs layer
... ...
7648f5d87006: Pull complete
Digest: sha256:65063cb82bf508fd5a731318e795b2abbfb0c22222f02ff5c6b30df7f23292fe
Status: Downloaded newer image for reg.example.com/library/nginx:alpine
$ docker-machine ssh worker1 docker pull reg.example.com/library/nginx:alpine
alpine: Pulling from library/nginx
e110a4a17941: Pulling fs layer
... ...
7648f5d87006: Pull complete
Digest: sha256:65063cb82bf508fd5a731318e795b2abbfb0c22222f02ff5c6b30df7f23292fe
Status: Downloaded newer image for reg.example.com/library/nginx:alpine
$ docker-machine ssh worker2 docker pull reg.example.com/library/nginx:alpine
alpine: Pulling from library/nginx
e110a4a17941: Pulling fs layer
... ...
7648f5d87006: Pull complete
Digest: sha256:65063cb82bf508fd5a731318e795b2abbfb0c22222f02ff5c6b30df7f23292fe
Status: Downloaded newer image for reg.example.com/library/nginx:alpine
$ docker-machine ssh worker3 docker pull reg.example.com/library/nginx:alpine
alpine: Pulling from library/nginx
e110a4a17941: Pulling fs layer
... ...
7648f5d87006: Pull complete
Digest: sha256:65063cb82bf508fd5a731318e795b2abbfb0c22222f02ff5c6b30df7f23292fe
Status: Downloaded newer image for reg.example.com/library/nginx:alpine

We used docker pull to fetch the nginx:alpine image on each of the five virtual machine nodes. Next, we deploy a group of Nginx services across the five nodes.
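The five pulls above can also be scripted; a one-line sketch, assuming the same registry path:

$ for node in manager1 manager2 worker1 worker2 worker3; do docker-machine ssh $node docker pull reg.example.com/library/nginx:alpine; done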

The deployed service uses the swarm_test cross-host network.

$ docker service create --replicas 2 --name helloworld --network=swarm_test nginx:alpine
5gz0h2s5agh2d2libvzq6bhgs

View service status:

$ docker service ls
ID            NAME        REPLICAS  IMAGE         COMMAND
5gz0h2s5agh2  helloworld  0/2       nginx:alpine

View the helloworld service details (the output has been adjusted for readability):

$ docker service ps helloworld
ID           NAME          IMAGE         NODE      DESIRED STATE   CURRENT STATE            ERROR
ay081uome3   helloworld.1  nginx:alpine  manager1  Running         Preparing 2 seconds ago
16cvore0c96  helloworld.2  nginx:alpine  worker2   Running         Preparing 2 seconds ago

The two instances are running on two different nodes.

Go to the two nodes and check the container status (the output has been adjusted for readability):

$ docker-machine ssh manager1 docker ps -a
CONTAINER ID   IMAGE          COMMAND          CREATED         STATUS         PORTS             NAMES
119f787622c2   nginx:alpine   "nginx -g ..."   4 minutes ago   Up 4 minutes   80/tcp, 443/tcp   hello ...
$ docker-machine ssh worker2 docker ps -a
CONTAINER ID   IMAGE          COMMAND          CREATED         STATUS         PORTS             NAMES
5db707401a06   nginx:alpine   "nginx -g ..."   4 minutes ago   Up 4 minutes   80/tcp, 443/tcp   hello ...

The NAMES column above is truncated; the actual names are:

helloworld.1.ay081uome3eejeg4mspa8pdlx
helloworld.2.16cvore0c96rby1vp0sny3mvt

Remember the names of these two instances. Now let's see whether the two cross-host containers can communicate.
First, use Machine to enter the manager1 node, then use docker exec -i to run a command in the helloworld.1 container that pings the helloworld.2 container on the worker2 node:

$ docker-machine ssh manager1 docker exec -i helloworld.1.ay081uome3eejeg4mspa8pdlx \
    ping helloworld.2.16cvore0c96rby1vp0sny3mvt
PING helloworld.2.16cvore0c96rby1vp0sny3mvt (10.0.0.4): 56 data bytes
64 bytes from 10.0.0.4: seq=0 ttl=64 time=0.591 ms
64 bytes from 10.0.0.4: seq=1 ttl=64 time=0.594 ms
64 bytes from 10.0.0.4: seq=2 ttl=64 time=0.624 ms
64 bytes from 10.0.0.4: seq=3 ttl=64 time=0.612 ms
^C

Then enter the worker2 node and use docker exec -i to ping the helloworld.1 container on the manager1 node from inside the helloworld.2 container:

$ docker-machine ssh worker2 docker exec -i helloworld.2.16cvore0c96rby1vp0sny3mvt \
    ping helloworld.1.ay081uome3eejeg4mspa8pdlx
PING helloworld.1.ay081uome3eejeg4mspa8pdlx (10.0.0.3): 56 data bytes
64 bytes from 10.0.0.3: seq=0 ttl=64 time=0.466 ms
64 bytes from 10.0.0.3: seq=1 ttl=64 time=0.465 ms
64 bytes from 10.0.0.3: seq=2 ttl=64 time=0.548 ms
64 bytes from 10.0.0.3: seq=3 ttl=64 time=0.689 ms
^C

The containers of the two cross-host service instances can reach each other.

To see what the Swarm overlay buys us, we can also ping the same container names from the virtual machines themselves rather than from inside the containers:

$ docker-machine ssh worker2 ping helloworld.1.ay081uome3eejeg4mspa8pdlx
PING helloworld.1.ay081uome3eejeg4mspa8pdlx (221.179.46.190): 56 data bytes
64 bytes from 221.179.46.190: seq=0 ttl=63 time=48.651 ms
64 bytes from 221.179.46.190: seq=1 ttl=63 time=63.239 ms
64 bytes from 221.179.46.190: seq=2 ttl=63 time=47.686 ms
64 bytes from 221.179.46.190: seq=3 ttl=63 time=61.232 ms
^C
$ docker-machine ssh manager1 ping helloworld.2.16cvore0c96rby1vp0sny3mvt
PING helloworld.2.16cvore0c96rby1vp0sny3mvt (221.179.46.194): 56 data bytes
64 bytes from 221.179.46.194: seq=0 ttl=63 time=30.150 ms
64 bytes from 221.179.46.194: seq=1 ttl=63 time=54.455 ms
64 bytes from 221.179.46.194: seq=2 ttl=63 time=73.862 ms
64 bytes from 221.179.46.194: seq=3 ttl=63 time=53.171 ms
^C

The pings above, issued from inside the VMs, resolve to external addresses, and their latency is significantly higher than the ping values inside the cluster's overlay network.

5. Swarm cluster load balancing

Now that we have learned how to deploy a Swarm cluster, we can build an externally accessible Nginx cluster and experience the automatic service discovery and load balancing provided by the latest version of Swarm.
First, delete the helloworld service we started in the previous section:

$ docker service rm helloworld
helloworld

Then, create a new service with a port mapping parameter so that the outside world can access these Nginx services:

$ docker service create --replicas 2 --name helloworld -p 7080:80 --network=swarm_test nginx:alpine
9gfziifbii7a6zdqt56kocyun

View the service running status:

$ docker service ls
ID            NAME        REPLICAS  IMAGE         COMMAND
9gfziifbii7a  helloworld  2/2       nginx:alpine

Have you noticed that although the --replicas value is the same, REPLICAS showed 0/2 when we checked the service in the previous section, while now it shows 2/2?
Likewise, when you view the detailed service status with docker service ps (the output below has been manually adjusted for readability), the current state of the instances is Running, whereas in the previous section it stayed Preparing:

$ docker service ps helloworld
ID            NAME          IMAGE         NODE     DESIRED STATE   CURRENT STATE            ERROR
9ikr3agyi...  helloworld.1  nginx:alpine  user-pc  Running         Running 13 seconds ago
7acmhj0u...   helloworld.2  nginx:alpine  worker2  Running         Running 6 seconds ago

This involves Swarm's built-in discovery mechanism. As of Docker 1.12, Swarm ships with built-in service discovery, so we no longer need to set it up with external tools such as Etcd or Consul. A container that is running but has no external communication is reported as Preparing by the service discovery tool; in this example, the instances show Running because of the port mapping.
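Because the published port is handled by Swarm's routing mesh, port 7080 should answer on any node's IP, with requests balanced across the replicas. A quick check from the host, assuming the manager1 address from earlier:

$ curl -I http://192.168.99.100:7080    # expect an HTTP 200 from one of the Nginx replicas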
Now let's look at another interesting Swarm feature: what happens when we kill one of the instances?
First, kill the instance running on worker2:

$ docker-machine ssh worker2 docker kill helloworld.2.7acmhj0udzusv1d7lu2tbuhu4
helloworld.2.7acmhj0udzusv1d7lu2tbuhu4

Wait a few seconds and check the service status again:

$ docker service ps helloworld
ID            NAME             IMAGE         NODE      DESIRED STATE   CURRENT STATE            ERROR
9ikr3agyi...  helloworld.1     nginx:alpine  user-pc   Running         Running 19 minutes ago
8f866igpl...  helloworld.2     nginx:alpine  manager1  Running         Running 4 seconds ago
7acmhj0u...   \_ helloworld.2  nginx:alpine  worker2   Shutdown        Failed 11 seconds ago    ...exit...
$ docker service ls
ID            NAME        REPLICAS  IMAGE         COMMAND
9gfziifbii7a  helloworld  2/2       nginx:alpine

We can see that even though we killed one of the instances, Swarm quickly dropped the failed container and started a new instance on another node, so the service keeps running on two instances.
To add more instances, run the scale subcommand:

$ docker service scale helloworld=3
helloworld scaled to 3

View the service details; three instances are now running:

$ docker service ps helloworld
ID            NAME             IMAGE         NODE      DESIRED STATE   CURRENT STATE            ERROR
9ikr3agyi...  helloworld.1     nginx:alpine  user-pc   Running         Running 30 minutes ago
8f866igpl...  helloworld.2     nginx:alpine  manager1  Running         Running 11 minutes ago
7acmhj0u...   \_ helloworld.2  nginx:alpine  worker2   Shutdown        Failed 11 minutes ago    ...exit 137
1vexr1jm...   helloworld.3     nginx:alpine  worker2   Running         Running 4 seconds ago

To reduce the number of instances, run the scale command again:

$ docker service scale helloworld=2
helloworld scaled to 2
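As before, you can confirm the change:

$ docker service ls    # REPLICAS should read 2/2 again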

So far, we have covered the main usage of Swarm: creating a Swarm cluster, configuring a cross-host container network, deploying applications on it, and common Swarm features such as service discovery, load balancing, and scaling.

More practical examples of Swarm will be written later.
