This article introduces how to use Docker and Kubernetes to build a MongoDB service backed by a replica set, from containers through CI- and CD-driven changes. It discusses the challenges and opportunities that container technology brings to MongoDB, and then shows how to deploy a stable MongoDB service. It is full of practical detail.
Introduction
Want to try running MongoDB on your laptop? Want to execute a single command and get a lightweight, self-contained sandbox? And remove every trace of it with one more command?
Need to run the same application stack in multiple environments? Build your own container images so that the development, testing, operations, and support teams can all launch an identical environment.
Containers are changing the entire software lifecycle, from the initial technical trials and proof of concept through development, testing, deployment, and support.
Read the MicroServices: Containers and Orchestration Whitepaper
Orchestration tools manage how multiple containers are created, upgraded, and kept highly available. Orchestration also manages how containers are connected, so that multiple microservice containers can be combined into a stable application service.
Rich features, simple tools, and powerful APIs make containers and orchestration a favorite of DevOps teams, who integrate them into continuous integration (CI) and continuous delivery (CD) workflows.
This article explores the issues you encounter when trying to run and orchestrate a MongoDB container, and describes how to overcome these problems.
Thinking about MongoDB
Running MongoDB with containers and orchestration introduces some new considerations:
MongoDB database nodes are stateful. If a container fails and is rescheduled, losing its data is unacceptable (the data could be recovered from other members of the replica set, but that takes time). To solve this problem, the Volume abstraction in Kubernetes is used to map the MongoDB data directory to a persistent location, so the data survives container failure or rescheduling.
The members of a MongoDB replica set must be able to communicate with each other, even after rescheduling. All members of a replica set must know the addresses of all the other members, but when a container is rescheduled its IP address changes. For example, all containers within a Kubernetes pod share one IP address, which changes when the pod is rescheduled. In Kubernetes, this is solved by associating each MongoDB node with a Kubernetes service, which uses the Kubernetes DNS service to provide a hostname that remains stable across rescheduling.
Once each individual MongoDB node (each running in its own container) has started, the replica set must be initialized and each node added to it. This requires additional logic beyond what the orchestration tool provides. In particular, when only one MongoDB node is up, the rs.initiate and rs.add commands must be executed.
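As a rough illustration, the commands might look like the following when run from a mongo shell connected to the first member. The reconfiguration step and the service names are assumptions based on the configuration described later in this article (mongo-svc-c simply follows the same naming pattern as the first two services):

```javascript
// Connect a mongo shell to the first (and initially only) member, then
// initialize the replica set with that single member.
rs.initiate()

// Make the first member advertise its stable service address instead of the
// container's internal hostname (service name taken from later in the article).
cfg = rs.conf()
cfg.members[0].host = "mongo-svc-a:27017"
rs.reconfig(cfg)

// Add the remaining members through their own service addresses.
rs.add("mongo-svc-b:27017")
rs.add("mongo-svc-c:27017")
```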
If the orchestration framework provides automated rescheduling of containers (as Kubernetes does), it can improve MongoDB's fault tolerance: a node that goes down is automatically recreated, restoring full redundancy without human intervention.
Although the orchestration framework controls the state of all the containers, it does not manage the application inside a container or back up its data. This makes it important to adopt an effective management and backup solution, such as MongoDB Cloud Manager, which is included with both MongoDB Enterprise Advanced and MongoDB Professional. Consider building your own image containing your preferred MongoDB version and the MongoDB Automation Agent.
Building a MongoDB replica set with Docker and Kubernetes
As described in the previous section, a distributed database such as MongoDB requires extra consideration when deployed with an orchestration framework such as Kubernetes. This section examines that detail and describes how to implement it.
First, we create the entire MongoDB replica set in a single Kubernetes cluster (within one data center, so there is no physical redundancy). Creating it across multiple data centers requires only small changes to the steps, as described later.
Each member of the replica set runs in its own pod, with a service exposing its IP address and port. A fixed address is important both for external applications and for the other replica set members, and it must remain stable no matter which pods are redeployed.
Figure: the relationship between a pod and its associated replication controller and service.
Drilling down into the resources described in this configuration (a YAML sketch of these resources follows this list):
The main node, mongo-node1, is started. It uses an image named mongo, pulled from Docker Hub, and exposes port 27017.
The Kubernetes volume feature is used to map the /data/db directory to a persistent volume named mongo-persistent-storage1, which in turn maps to a disk named mongodb-disk1 created on Google Cloud. This is where MongoDB stores its persistent data.
The container is managed by a pod that is labeled mongo-node and given the instance name rod, while the pod itself receives a randomly generated name.
A replication controller named mongo-rc1 ensures that the mongo-node1 instance is always running.
A LoadBalancer service named mongo-svc-a exposes port 27017. The service uses the pod's labels to match itself to the correct pod, exposing a stable external IP address and port both to applications and to the other replica set members for their communication. Although each container has an internal IP address, it changes whenever the container is restarted or moved, and therefore cannot be used for communication within the replica set.
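For illustration only, here is a minimal sketch of what these two resources might look like. It uses the names given in the article (mongo-rc1, mongo-node1, mongo-persistent-storage1, mongodb-disk1, mongo-svc-a), but the exact field layout and labels are assumptions and may differ from the whitepaper's actual configuration files:

```yaml
# Replication controller that keeps exactly one mongo-node1 pod running.
apiVersion: v1
kind: ReplicationController
metadata:
  name: mongo-rc1
spec:
  replicas: 1
  selector:
    name: mongo-node1
  template:
    metadata:
      labels:
        name: mongo-node1            # matched by the service's selector below
        instance: rod                # instance label mentioned in the article
    spec:
      containers:
      - name: mongo-node1
        image: mongo                 # official MongoDB image from Docker Hub
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage1
          mountPath: /data/db        # MongoDB data directory
      volumes:
      - name: mongo-persistent-storage1
        gcePersistentDisk:
          pdName: mongodb-disk1      # persistent disk pre-created on Google Cloud
          fsType: ext4
---
# LoadBalancer service giving the pod a stable, externally reachable address.
apiVersion: v1
kind: Service
metadata:
  name: mongo-svc-a
spec:
  type: LoadBalancer
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    name: mongo-node1                # ties the service to the pod above
```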
Figure: the configuration of the second replica set member.
Ninety percent of the configuration is the same, with only a few differences:
The disk and volume names must be unique, so mongodb-disk2 and mongo-persistent-storage2 are used.
The pod is assigned the instance name jane and the node is named mongo-node2, so the new service and pod can be distinguished from those in the first figure.
The replication controller is named mongo-rc2.
The service is named mongo-svc-b and gets a different external IP address (in this example, Kubernetes assigned 104.1.4.5).
The configuration of the third replica set member follows the same pattern, completing the replica set.
Note that even with three such configurations, on a Kubernetes cluster of three or more nodes, Kubernetes may still schedule two or more MongoDB replica set members onto the same host. This is because Kubernetes treats the three pods as belonging to three independent services.
To increase redundancy, an additional headless service can be created. It has no external-facing capability (it does not even get an external IP address), but it tells Kubernetes that the three MongoDB pods belong to the same service, so the scheduler will place them on different nodes.
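A minimal sketch of such a headless service follows; the service name and the shared label (role: mongo-node) are assumptions chosen for illustration, and the whitepaper's actual configuration may use different names:

```yaml
# Headless service: no cluster IP and no external address. Its only purpose is
# to tell Kubernetes that the three MongoDB pods form one logical service, so
# the scheduler spreads them across different nodes.
apiVersion: v1
kind: Service
metadata:
  name: mongo-headless       # name assumed for illustration
spec:
  clusterIP: None            # "None" is what makes the service headless
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    role: mongo-node         # a label assumed to be shared by all three pods
```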
The full configuration files and the related commands can be found in the Microservices: Containers and Orchestration whitepaper. It also describes the three special steps required to combine the three MongoDB instances into the single, functioning replica set described in this article.
A MongoDB replica set across multiple availability zones
Running every replica set member on the same GCE cluster carries risk, because the whole cluster sits in a single availability zone. If a major incident takes that zone offline, the MongoDB replica set becomes unavailable. If geographic redundancy is required, the three pods need to run in different zones.
Such a replica set can be created with only minimal changes. Each cluster requires its own Kubernetes YAML file defining its pod, replication controller, and service. The cluster creation, persistent storage, and MongoDB node can then be set up zone by zone.
Figure: the combined replica set running across different zones.