Run the MongoDB microservice on Docker and Kubernetes


Docker is an open-source application container engine that lets developers package an application and its dependencies into a portable container and publish it to any popular Linux machine; it can also be used to achieve virtualization. Containers use a full sandbox mechanism and have no interfaces to one another.

This article describes how to use Docker and Kubernetes to build a MongoDB service backed by a replica set. Starting from the changes that containers bring to CI and CD, we discuss the challenges and opportunities that container technology presents for MongoDB, and how to deploy a stable MongoDB service.

Introduction

Want to run MongoDB on a laptop? Would you like to execute a single command and get a lightweight, self-contained sandbox, and then remove all traces of it with another command?

Need to run the same application stack in multiple environments? Build your own container image so that your development, testing, operations, and support teams can all start an identical environment.
Containers are changing the entire software lifecycle, from initial technical experimentation and proof of concept through development, testing, deployment, and support.
Read the Microservices: Containers and Orchestration white paper (https://www.mongodb.com/collateral/microservices-containers-and-orchestration-explained).

Orchestration tools manage how multiple containers are created, upgraded, and kept available. Orchestration also manages how containers are connected, so that an application built from multiple microservice containers behaves as a single, stable service.
Rich functionality, simple tooling, and powerful APIs make containers and orchestration popular with DevOps teams, who integrate them into their CI and CD workflows.

This article explores the problems encountered when trying to run and orchestrate MongoDB in containers, and describes how to overcome them.

Thoughts on MongoDB

Running MongoDB with containers and orchestration introduces some new considerations:
MongoDB database nodes are stateful. If a container fails and is rescheduled, it is undesirable to lose its data (the data could be recovered from other members of the replica set, but that takes time). To solve this problem, the Volume abstraction in Kubernetes can map the MongoDB data directory onto a persistent location, so the data survives container failure or rescheduling, as sketched below.
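As a rough, hypothetical illustration of this mapping (not the exact configuration from the white paper), a pod spec might mount a pre-created Google Cloud persistent disk at MongoDB's data directory; the resource names follow the example given later in this article:

```yaml
# Sketch of a pod that maps MongoDB's /data/db onto a pre-created GCE
# persistent disk, so the data survives container failure or rescheduling.
# (Replica set options are omitted here; see the replication controller
# sketch later in the article.)
apiVersion: v1
kind: Pod
metadata:
  name: mongo-node1
  labels:
    name: mongo-node
    instance: rod
spec:
  containers:
    - name: mongo-node1
      image: mongo                       # official mongo image from Docker Hub
      ports:
        - containerPort: 27017
      volumeMounts:
        - name: mongo-persistent-storage1
          mountPath: /data/db            # MongoDB's default data directory
  volumes:
    - name: mongo-persistent-storage1
      gcePersistentDisk:
        pdName: mongodb-disk1            # disk created beforehand on Google Cloud
        fsType: ext4
```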

Members of the same MongoDB replica set must be able to communicate with each other, even after rescheduling. Every node in a replica set must know the addresses of all of its peers, but when a container is rescheduled it is likely to restart with a different IP address. For example, all containers within a Kubernetes pod share a single IP address, which changes when the pod is rescheduled. With Kubernetes, this is solved by associating a Kubernetes service with each MongoDB node: the Kubernetes DNS service provides a hostname for the service that remains constant through rescheduling, as sketched below.
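As an illustrative sketch (the service name and pod label follow the example later in the article), each MongoDB node could get its own load-balanced service, giving it an address that stays constant across rescheduling:

```yaml
# One service per MongoDB node: the selector matches only that node's pod,
# so the service name and IP remain stable even if the pod is rescheduled.
apiVersion: v1
kind: Service
metadata:
  name: mongo-svc-a
  labels:
    name: mongo-svc-a
spec:
  type: LoadBalancer
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    instance: rod        # label carried by the first member's pod only
```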

Once each individual MongoDB node is running (each in its own container), the replica set must be initialized and each node added to it. This requires some extra logic on top of what the orchestration tool provides; specifically, a mongo shell must connect to one MongoDB node and run the rs.initiate and rs.add commands (a sketch follows below).
If the orchestration framework also provides automated rescheduling of containers (as Kubernetes does), this improves MongoDB's fault tolerance: a failed node is recreated automatically, restoring full redundancy without manual intervention.
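The white paper referenced above provides its own scripts for this step. Purely as a hypothetical sketch, the same commands could be issued from a one-off Kubernetes Job that connects to the first member through its service; the replica set name rs0 and the third service name mongo-svc-c are assumptions made for this example:

```yaml
# Hypothetical one-off Job: initiate the replica set on the first member,
# then add the other two members via their service hostnames.
apiVersion: batch/v1
kind: Job
metadata:
  name: mongo-rs-init
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: mongo-rs-init
          image: mongo              # the mongo shell ships in this image
          command:
            - mongo
            - mongo-svc-a:27017
            - --eval
            - >-
              rs.initiate({_id: "rs0", members: [{_id: 0, host: "mongo-svc-a:27017"}]});
              rs.add("mongo-svc-b:27017");
              rs.add("mongo-svc-c:27017")
```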

While the orchestration framework controls the state of all containers, it does not manage the applications inside them or back up their data. This means it is important to adopt an effective management and backup solution, such as MongoDB Cloud Manager, which is included with MongoDB Enterprise Advanced and MongoDB Professional. Consider creating your own image containing your preferred MongoDB version and the MongoDB Automation Agent.

Using Docker and Kubernetes for a MongoDB Replica Set

As described in the previous section, distributed databases such as MongoDB require extra consideration when deployed with an orchestration framework such as Kubernetes. This section digs into the details and describes how to implement them.

First, we create the entire MongoDB replica set in a single Kubernetes cluster (so there is no geographic redundancy beyond a single data center). If you deploy across multiple data centers, the steps are slightly different and are described later.
Each member of the replica set runs in its own pod, with a service exposing its IP address and port. A fixed address is important because both external applications and the other replica set members rely on it remaining constant if a pod is redeployed.

Figure 1 shows the relationship between one pod and its associated replication controller and service.

The resources described in this configuration are as follows (an illustrative YAML sketch follows the list):

The instance mongo-node1 is started, using the mongo image from Docker Hub, and exposes port 27017.
The Kubernetes volume feature is used to map the /data/db directory to the persistent volume mongo-persistent-storage1, which in turn maps to the disk mongodb-disk1 created on Google Cloud to persist MongoDB's data.
The container is managed by a pod, which is labeled with the name mongo-node and given the (arbitrary) instance name rod.
A replication controller named mongo-rc1 ensures that a single instance of the mongo-node1 pod is always running.
A load balancer service named mongo-svc-a exposes port 27017. The service is matched to the correct pod through the pod's label, and the exposed IP address and port are used both by applications and for communication between replica set members. Although each container has an internal IP address, that address changes whenever the container is restarted or moved, so it cannot be used for communication within the replica set.
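As an illustrative sketch of how these pieces fit together (the replica set name rs0 is an assumption, and the matching mongo-svc-a service was sketched earlier; the actual files are in the white paper referenced below), the replication controller for the first member might look like this:

```yaml
# mongo-rc1 keeps exactly one mongo-node1 pod running at all times.
apiVersion: v1
kind: ReplicationController
metadata:
  name: mongo-rc1
  labels:
    name: mongo-rc
spec:
  replicas: 1
  selector:
    instance: rod                    # manage only the first member's pod
  template:
    metadata:
      labels:
        name: mongo-node
        instance: rod
    spec:
      containers:
        - name: mongo-node1
          image: mongo               # official mongo image from Docker Hub
          command:
            - mongod
            - "--replSet"
            - rs0                    # assumed replica set name
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage1
              mountPath: /data/db
      volumes:
        - name: mongo-persistent-storage1
          gcePersistentDisk:
            pdName: mongodb-disk1    # disk created beforehand on Google Cloud
            fsType: ext4
```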

A second member of the replica set is configured in the same way:

90% of the configuration is identical, with only a few differences:

The disk and volume names must be unique, so mongodb-disk2 and mongo-persistent-storage2 are used
The pod is given the instance label jane and the name mongo-node2, to distinguish the new resources from those of the pod in Figure 1
The replication controller is named mongo-rc2
The service is named mongo-svc-b and is given a different external IP address (in this example, Kubernetes allocates 104.1.4.5); a sketch of this second service follows
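For illustration only, the second member's service differs from the first just in its name and selector (the instance label jane is taken from the description above):

```yaml
# Same pattern as mongo-svc-a, but selecting the second member's pod.
apiVersion: v1
kind: Service
metadata:
  name: mongo-svc-b
spec:
  type: LoadBalancer
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    instance: jane       # label carried by the second member's pod
```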

The third replica set member is configured following the same pattern, which completes the replica set.

Note: even on a Kubernetes cluster with three or more nodes, Kubernetes may schedule two or more MongoDB replica set members onto the same host machine. This is because Kubernetes treats the three pods as belonging to three independent services.

To increase redundancy, an additional headless service can be created. It offers nothing to the outside world and has no external IP address, but it tells Kubernetes that the three MongoDB pods belong to the same service, so Kubernetes will schedule them onto different nodes (a sketch follows below).
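A minimal sketch of such a headless service (the name mongo-rs-spread is an assumption for this example) simply selects all three pods by their shared label and allocates no cluster IP:

```yaml
# Headless service: no external IP and no cluster IP, but it groups the
# three MongoDB pods so Kubernetes spreads them across different nodes.
apiVersion: v1
kind: Service
metadata:
  name: mongo-rs-spread
spec:
  clusterIP: None          # headless: no virtual IP is allocated
  ports:
    - port: 27017
  selector:
    name: mongo-node       # label shared by all three MongoDB pods
```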

The complete configuration files and the commands to run them can be found in the Microservices: Containers and Orchestration white paper referenced earlier. That paper also covers the special steps needed to combine the three MongoDB instances into a single, functioning replica set, as described in this article.

A MongoDB Replica Set Across Multiple Availability Zones

Everything created above runs in the same GCE cluster, and therefore in the same availability zone, which is risky: if a major incident takes the availability zone offline, the MongoDB replica set becomes unavailable. If geographic redundancy is needed, the three pods must run in different availability zones.

Such a replica set can be created with only a few changes. Each cluster needs its own Kubernetes YAML file, defining just the pod, replication controller, and service for one member of the replica set. Cluster creation, persistent storage, and the MongoDB node are then set up separately in each zone.

The resulting replica set runs with one member in each availability zone.

From: http://os.51cto.com/art/201607/515108.htm

Address: http://www.linuxprobe.com/docker-kubernetes-with-mangdb.html

