Preface
When you have more than one physical machine, it is worth considering cluster mode. So how does Docker manage a cluster? The main tool for this is Docker swarm mode, which handles the management and orchestration of a Docker cluster. Orchestration here refers to managing multiple hosts: host configuration, container scheduling, and so on.
Swarm mode is built into the Docker engine and is easy to use; no additional software needs to be installed.
Use of swarm mode
To use swarm mode, Docker is installed on several hosts, arranged as in the following architecture:
The machine with hostname docker-ce acts as the manager node, that is, the management node, while docker1 and docker2 act as worker nodes.
1. Create a swarm cluster
[root@docker-ce swarm]# docker swarm init --advertise-addr 192.168.1.222 (initialize the cluster; nodes communicate with each other via the address 192.168.1.222, default port 2377)
Swarm initialized: current node (pk4p936t4e03cpse3izuws07s) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-60h71geyd7z297jfy2icektmq3ha3n5nego2znytgrzqix768e-f36psbhrnrdn9h0bop6np22xm 192.168.1.222:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
[root@docker-ce swarm]# docker info | grep -i swarm (swarm mode is active)
Swarm: active
[root@docker-ce swarm]# netstat -tnlp | grep docker (by default two ports are listening: tcp 2377 is the cluster management port, tcp 7946 is the port for communication between nodes)
tcp6   0   0 :::2377   :::*   LISTEN   66488/dockerd
tcp6   0   0 :::7946   :::*   LISTEN   66488/dockerd
[root@docker-ce swarm]# docker network ls (by default an overlay network named ingress is created, as well as a bridge network docker_gwbridge)
NETWORK ID     NAME              DRIVER    SCOPE
641eeb86f6a4   bridge            bridge    local
c23afa61afaa   docker_gwbridge   bridge    local
65f6eed9f144   host              host      local
n8i6cpizzlww   ingress           overlay   swarm
b4d6492a85d5   none              null      local
[root@docker-ce swarm]# docker node ls (view the nodes in the cluster; when there are multiple manager nodes, the primary node, i.e. the leader, is elected via the Raft protocol)
ID                            HOSTNAME    STATUS   AVAILABILITY   MANAGER STATUS
pk4p936t4e03cpse3izuws07s *   docker-ce   Ready    Active         Leader
[root@docker-ce swarm]# ls -l (the swarm configuration files live in the /var/lib/docker/swarm directory, which contains the relevant certificates and the manager configuration, managed via the Raft protocol)
total 8
drwxr-xr-x. 2 root root     Jan 10:13 certificates (TLS certificates used for secure communication)
-rw-------. 1 root root 151 Jan 10:13 docker-state.json (records the address and port used for communication; the local address and port are also recorded)
drwx------. 4 root root     Jan 10:13 raft (Raft protocol data)
-rw-------. 1 root root     Jan 10:13 state.json (the manager's IP and port)
drwxr-xr-x. 2 root root     Jan 10:13 worker (records task information dispatched to the worker)
[root@docker2 ~]# docker swarm join --token SWMTKN-1-60h71geyd7z297jfy2icektmq3ha3n5nego2znytgrzqix768e-f36psbhrnrdn9h0bop6np22xm 192.168.1.222:2377 (other machines join the swarm cluster)
This node joined a swarm as a worker.
If you have forgotten the join token, you can retrieve it with the following command and then run the printed join command directly on the node that should be added, either as a worker node or as a manager node.
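For example, run the following on a manager node; each command prints the full join command including the token:
[root@docker-ce ~]# docker swarm join-token worker (print the join command and token for adding a worker node)
[root@docker-ce ~]# docker swarm join-token manager (print the join command and token for adding a manager node)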
The cluster can now be viewed as follows:
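Running docker node ls on the manager now lists all three nodes, for example:
[root@docker-ce ~]# docker node ls
ID                            HOSTNAME    STATUS   AVAILABILITY   MANAGER STATUS
6xum2o1iqmyaun2khb4b5z57h     docker2     Ready    Active
pk4p936t4e03cpse3izuws07s *   docker-ce   Ready    Active         Leader
xvkxa7z22v757jnptndvtcc4t     docker1     Ready    Active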
Node roles can be changed at any time (a worker can be promoted to a manager and a manager demoted to a worker), as in the example below:
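For example, either of the following commands, run on a manager, promotes docker1 to a manager; docker node demote does the reverse:
[root@docker-ce ~]# docker node promote docker1 (promote the worker docker1 to a manager)
[root@docker-ce ~]# docker node update --role manager docker1 (equivalent form using docker node update)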
2. Open the firewall
For the nodes to communicate with each other, the relevant firewall ports must be open: TCP port 2377 for cluster management, port 7946 over both TCP and UDP for communication between nodes, and UDP port 4789 for the overlay network.
[root@docker-ce ~]# firewall-cmd --add-port=2377/tcp --permanent
[root@docker-ce ~]# firewall-cmd --add-port=7946/tcp --permanent
[root@docker-ce ~]# firewall-cmd --add-port=7946/udp --permanent
[root@docker-ce ~]# firewall-cmd --add-port=4789/udp --permanent
[root@docker-ce ~]# systemctl restart firewalld
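The same rules need to be applied on every node. To confirm they took effect, you can list the ports firewalld has opened; the four ports added above should appear:
[root@docker-ce ~]# firewall-cmd --list-ports
2377/tcp 7946/tcp 7946/udp 4789/udp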
3. Running a service
A service is a set of tasks, and each task corresponds to a container. From the point of view of running a service, one service may consist of several tasks; for example, running several nginx instances means the service is broken down into several nginx containers running on the various nodes.
[root@docker-ce ~]# docker service create --name web nginx (create a service named web using the nginx image)
oy2y8sb31c2jpn9owk6gdt7nk
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged
[root@docker-ce ~]# docker service create --name frontweb --mode global nginx (create a service named frontweb in global mode, using the nginx image)
ld835zsd9x1x4rdaj6u1i1rfy
overall progress: 3 out of 3 tasks
pk4p936t4e03: running   [==================================================>]
xvkxa7z22v75: running   [==================================================>]
6xum2o1iqmya: running   [==================================================>]
verify: Service converged
[root@docker-ce ~]# docker service ls (view the running services)
ID             NAME       MODE         REPLICAS   IMAGE          PORTS
ld835zsd9x1x   frontweb   global       3/3        nginx:latest
oy2y8sb31c2j   web        replicated   1/1        nginx:latest
[root@docker-ce ~]# docker service ps web (view running details; by default the manager node can also run containers)
ID             NAME    IMAGE          NODE        DESIRED STATE   CURRENT STATE           ERROR   PORTS
li2bfdt1dfjs   web.1   nginx:latest   docker-ce   Running         Running 2 minutes ago
[root@docker-ce ~]# docker service ps frontweb (view running details)
ID             NAME                                 IMAGE          NODE        DESIRED STATE   CURRENT STATE            ERROR   PORTS
s96twac1s4av   frontweb.6xum2o1iqmyaun2khb4b5z57h   nginx:latest   docker2     Running         Running 36 seconds ago
qtr35ehwuu26   frontweb.xvkxa7z22v757jnptndvtcc4t   nginx:latest   docker1     Running         Running 37 seconds ago
jujtu01q49o2   frontweb.pk4p936t4e03cpse3izuws07s   nginx:latest   docker-ce   Running         Running 38 seconds ago
When a service is created, its tasks go through several states: first preparing, which mainly involves pulling the image from the registry; then starting, when the container is started; and finally, after the container state has been verified, the task reaches the running state.
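To watch a task move through these states, you can simply re-run docker service ps. As a small sketch, assuming a Docker version whose docker service ps supports --format, the desired and current state can be printed on their own:
[root@docker-ce ~]# docker service ps web --format "{{.Name}}: {{.DesiredState}} / {{.CurrentState}}"
web.1: Running / Running 2 minutes ago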
When viewing a service there is a MODE field, i.e. the type of the service, and it can be one of two values. One is replicated, a copy-based mode: this is the default, only one replica is created unless more are requested, and it is mainly used for high availability. The other is global, which means one task, i.e. one container, must run on every node; as you can see, three containers were created when global mode was used.
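For a replicated service you can also ask for more than one replica at creation time instead of scaling afterwards; a minimal example (the service name web2 is only for illustration):
[root@docker-ce ~]# docker service create --name web2 --replicas 2 nginx (create a replicated service that starts with 2 replicas)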
4. Scaling a service
When using services in a cluster, high availability inevitably comes into play, and with it the need to scale services out and in; in swarm this is very easy.
[root@docker-ce ~]# docker service scale web=3 (scale out to 3, i.e. three containers running)
web scaled to 3
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
[root@docker-ce ~]# docker service ls (view the service; you can see that REPLICAS is now 3)
ID             NAME   MODE         REPLICAS   IMAGE          PORTS
oy2y8sb31c2j   web    replicated   3/3        nginx:latest
[root@docker-ce ~]# docker service ps web (you can see that each of the three nodes is running one container task)
ID             NAME    IMAGE          NODE        DESIRED STATE   CURRENT STATE            ERROR   PORTS
li2bfdt1dfjs   web.1   nginx:latest   docker-ce   Running         Running 5 minutes ago
8dsrshssyd6t   web.2   nginx:latest   docker2     Running         Running 30 seconds ago
4i7vgzspdpts   web.3   nginx:latest   docker1     Running         Running 30 seconds ago
By default the manager node can also run containers, so one of the containers is running on the manager node as well.
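If you want a service's tasks to land only on worker nodes right from creation, a placement constraint is another option; a minimal sketch (the service name backweb is only for illustration):
[root@docker-ce ~]# docker service create --name backweb --constraint node.role==worker --replicas 2 nginx (tasks are only scheduled on worker nodes)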
[root@docker-ce ~]# docker service scale web=2 (scale the web service back down to 2)
web scaled to 2
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
[root@docker-ce ~]# docker service ps web (view the running containers)
ID             NAME    IMAGE          NODE        DESIRED STATE   CURRENT STATE                ERROR   PORTS
4i7vgzspdpts   web.3   nginx:latest   docker1     Running         Running 9 minutes ago
56s441jtydq4   web.5   nginx:latest   docker-ce   Running         Running about a minute ago
If you do not want the swarm manager node to run containers, simply change the node's availability from active to drain. If containers run on the manager and the manager goes down, and there is no multi-manager setup, the service can no longer be scheduled.
[root@docker-ce ~]# docker node update --availability drain docker-ce (change the manager node's availability to drain so that it no longer runs tasks)
docker-ce
[root@docker-ce ~]# docker node ls (view the node status; availability has changed from Active to Drain)
ID                            HOSTNAME    STATUS   AVAILABILITY   MANAGER STATUS
xvkxa7z22v757jnptndvtcc4t     docker1     Ready    Active
6xum2o1iqmyaun2khb4b5z57h     docker2     Ready    Active
pk4p936t4e03cpse3izuws07s *   docker-ce   Ready    Drain          Leader
[root@docker-ce ~]# docker service ps web (the container that was running on docker-ce is shut down and automatically migrated to another worker node)
ID             NAME        IMAGE          NODE        DESIRED STATE   CURRENT STATE                 ERROR   PORTS
4i7vgzspdpts   web.3       nginx:latest   docker1     Running         Running 12 minutes ago
x2w8qdxuv2y5   web.5       nginx:latest   docker2     Running         Running about a minute ago
56s441jtydq4    \_ web.5   nginx:latest   docker-ce   Shutdown        Shutdown about a minute ago
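When the manager should accept tasks again, its availability can be switched back; note that existing tasks are not automatically rebalanced onto it until the service is next updated or scaled:
[root@docker-ce ~]# docker node update --availability active docker-ce (allow the node to receive new tasks again)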
5. Automatic failover
In the cluster,