The members of a MongoDB replica set must be able to communicate with each other, even after rescheduling. Every node in a replica set must know the addresses of all the other members, but when a container is rescheduled, its IP address changes. For example, all containers within a Kubernetes pod share a single IP address, which changes when the pod is rescheduled. In Kubernetes, this problem is solved by associating a Kubernetes service with each MongoDB node: the Kubernetes DNS service provides a hostname for the service that remains stable across rescheduling.
Once each of the individual MongoDB nodes is started (each node in its own container), the replica set must be initialized and each node added to it. This requires some extra logic beyond what the orchestration tool provides. Specifically, rs.initiate must be executed on one MongoDB node, and rs.add must be executed for each additional member.
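As a hedged sketch of that step, assuming the nodes are reachable through the load-balanced services described later in this article (mongo-svc-b and mongo-svc-c are illustrative names for the second and third services), the commands run from a mongo shell connected to the first node would look like this:

```javascript
rs.initiate()                 // turn the first node into a one-member replica set
rs.add("mongo-svc-b:27017")   // add the second member by its stable service address
rs.add("mongo-svc-c:27017")   // add the third member
rs.status()                   // verify that all three members are listed
```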
If the orchestration framework provides automated rescheduling of containers (as Kubernetes does), this improves MongoDB's fault tolerance: a failed node is automatically re-created, restoring the replica set to full redundancy without human intervention.
While the orchestration framework manages the state of all containers, it does not manage the applications inside them or back up their data. This means it is important to adopt a strong management and backup solution, such as MongoDB Cloud Manager, which is included with MongoDB Enterprise Advanced and MongoDB Professional. You can optionally create your own image containing your preferred MongoDB version together with the MongoDB Automation Agent.
Using Docker and Kubernetes for a MongoDB Replica Set

As described in the previous section, distributed databases such as MongoDB should be deployed with the help of an orchestration framework such as Kubernetes. This section analyzes the details and describes how to implement such a deployment.
First, we create the entire MongoDB replica set in a single Kubernetes cluster (within one data center, so there is no geographic redundancy). Deploying across multiple data centers requires slightly different steps, which are described later.
Each member of the replica set runs in its own pod, with a service exposing its IP address and port. This fixed address is important because both external applications and the other replica set members rely on it remaining constant when a pod is redeployed.
Figure 1 shows the relationship between one of these pods and its associated replication controller and service.
The resources described in these configurations are as follows:
The core node, mongo-node1, runs an instance of the mongo image from Docker Hub and exposes port 27017.
Kubernetes' volume feature is used to map the /data/db directory to the persistent volume mongo-persistent-storage1, which in turn maps to the disk mongodb-disk1 created on Google Cloud for persisting MongoDB's data.
The container is managed by a pod, which is labeled mongo-node1 and given the (arbitrary) instance name rod.
The replication controller, named mongo-rc1, ensures that an instance of the mongo-node1 pod is always running.
The load balancer service, named mongo-svc-a, exposes port 27017. The service is matched to the correct pod through the pod's label, and it exposes an IP address and port that both applications and the other replica set members use for communication. Each container also has an internal IP address, but that address changes whenever a container is restarted or moved, so it cannot be used for communication within the replica set.
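A minimal sketch of these resources follows (Kubernetes API version v1; the replica set name rs0 and the shared role: mongo label are illustrative assumptions, not taken from the article):

```yaml
# First replica set member: replication controller plus load-balanced service.
apiVersion: v1
kind: ReplicationController
metadata:
  name: mongo-rc1
spec:
  replicas: 1                      # keep exactly one mongo-node1 pod running
  selector:
    name: mongo-node1
  template:
    metadata:
      labels:
        name: mongo-node1
        instance: rod
        role: mongo                # illustrative shared label, used later by the headless service
    spec:
      containers:
      - name: mongo-node1
        image: mongo               # official mongo image from Docker Hub
        args: ["--replSet", "rs0"] # rs0 is an illustrative replica set name
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage1
          mountPath: /data/db      # MongoDB's data directory
      volumes:
      - name: mongo-persistent-storage1
        gcePersistentDisk:
          pdName: mongodb-disk1    # disk created beforehand on Google Cloud
          fsType: ext4
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-svc-a
spec:
  type: LoadBalancer               # fixed external IP address and port
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    name: mongo-node1              # route to the pod via its label
```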
The configuration for the second member of the replica set is shown next:
90% of the configuration is the same, with only a few differences:
The disk and volume names must be unique, so mongodb-disk2 and mongo-persistent-storage2 are used.
The pod is given the instance name jane and labeled mongo-node2, to distinguish the new service and pod from those in Figure 1.
The replication controller is named mongo-rc2.
The service is named mongo-svc-b and receives a different external IP address (in this example, Kubernetes assigned 104.1.4.5).
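The corresponding sketch for the second member has the same shape, with only the names changed as listed above (again illustrative):

```yaml
# Second replica set member: identical structure, unique names.
apiVersion: v1
kind: ReplicationController
metadata:
  name: mongo-rc2
spec:
  replicas: 1
  selector:
    name: mongo-node2
  template:
    metadata:
      labels:
        name: mongo-node2
        instance: jane             # distinguishes this pod from the first one
        role: mongo
    spec:
      containers:
      - name: mongo-node2
        image: mongo
        args: ["--replSet", "rs0"]
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage2
          mountPath: /data/db
      volumes:
      - name: mongo-persistent-storage2
        gcePersistentDisk:
          pdName: mongodb-disk2    # unique disk for the second member
          fsType: ext4
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-svc-b                # gets its own external IP address
spec:
  type: LoadBalancer
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    name: mongo-node2
```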
The third replica set member is configured following the same pattern; the figure below shows the complete replica set:
Note: even with a configuration like the one above, on a Kubernetes cluster with three or more nodes, Kubernetes may still schedule two or more MongoDB replica set members onto the same host machine, because it treats the three pods as three independent services.
To increase redundancy, an additional headless service can be created. It exposes nothing externally (it does not even have an external IP address); its purpose is to tell Kubernetes that the three MongoDB pods belong to the same service, so that Kubernetes schedules them onto different nodes.
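A sketch of such a headless service, selecting pods by the shared role: mongo label assumed in the earlier sketches:

```yaml
# Headless service: clusterIP is None, so no virtual IP is allocated. Its only
# job is to tell Kubernetes that pods carrying the shared label form a single
# service, so the scheduler spreads them across different nodes.
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None
  selector:
    role: mongo                    # illustrative shared label on all three pods
  ports:
  - port: 27017
```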
The specific configuration files and the related commands can be found in the "Enabling Microservices: Containers & Orchestration Explained" white paper, including the three special steps needed to combine the three MongoDB instances into the single functioning replica set described in this article.
MongoDB Replica Sets across Multiple Availability Zones

So far, all replica set members run in the same GCE cluster, which is risky: if a major incident takes the availability zone offline, the MongoDB replica set becomes unavailable. If geographic redundancy is needed, the three pods must run in different zones.
Such a replica set can be created with few changes. Each cluster needs its own Kubernetes YAML file defining the pod, replication controller, and service for one replica set member. Then, for each zone, you create the cluster, the persistent storage, and the MongoDB node.
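A hedged sketch of the per-zone steps, assuming the Google Cloud SDK; the cluster name, zone, disk size, and file name are illustrative:

```bash
# Repeat for each availability zone with its own names and YAML file.
gcloud container clusters create mongo-cluster-a --zone=europe-west1-b
gcloud compute disks create mongodb-disk1 --size=200GB --zone=europe-west1-b
kubectl create -f mongo-node1.yaml   # pod, replication controller, and service
```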
The figure below shows the replica set running across different zones:
From: http://os.51cto.com/art/201607/515108.htm
Address: http://www.linuxprobe.com/docker-kubernetes-with-mangdb.html