MongoDB's Replica Sets + Sharding Architecture


Reprinted from http://www.cnblogs.com/spnt/

MongoDB's sharding mechanism solves the problems of massive data storage and dynamic scaling, but sharding alone does not give the high reliability and high availability a production environment needs: it is powerless against a single point of failure. MongoDB's replica sets, however, handle single points of failure easily, so combining replica sets with sharding yields a highly available and robust architecture.

The architecture is as follows:

1. Shard servers: use replica sets so that every data node has a backup and supports automatic failover and automatic recovery.

2. Config servers: use three config servers to ensure metadata integrity.

3. Routing processes: use three routing processes (mongos) to balance load and improve client access performance. The deployment is as follows:

Three mongod processes (shard11, shard12, and shard13) form one replica set and serve as shard1 in the sharded cluster.

Three mongod processes (shard21, shard22, and shard23) form another replica set and serve as shard2 in the sharded cluster.

Three config server processes and three mongos router processes.

--------------------------------------------------------------------------------------------

Now let's build the entire architecture (since I don't have that many machines, I use local directories on a single machine to simulate them).

Host      IP           Service and port

servera   127.0.0.1    mongod shard11: 10000, mongod shard21: 20000, mongod config, mongos: 40000

serverb   127.0.0.1    mongod shard12: 10001, mongod shard22: 20001, mongod config, mongos: 40000

serverc   127.0.0.1    mongod shard13: 10002, mongod shard23: 20002, mongod config: 30002, mongos: 40000

1. Start the shard1 processes and configure the replica set.

Start the mongod shard11 process with replica set name shard1.

Start the mongod shard12 process with replica set name shard1.

Start the mongod shard13 process with replica set name shard1.
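
The original post shows these startup commands only as screenshots. A minimal sketch of what they would look like, assuming hypothetical local data and log directories (on Windows each command runs in its own cmd window):

mongod --shardsvr --replSet shard1 --port 10000 --dbpath c:\mongodb\data\shard11 --logpath c:\mongodb\log\shard11.log --logappend
mongod --shardsvr --replSet shard1 --port 10001 --dbpath c:\mongodb\data\shard12 --logpath c:\mongodb\log\shard12.log --logappend
mongod --shardsvr --replSet shard1 --port 10002 --dbpath c:\mongodb\data\shard13 --logpath c:\mongodb\log\shard13.log --logappend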

Now configure these three processes as a replica set. Open a new cmd window (the windows running mongod cannot accept other commands), connect to any one of the three processes, and initiate the replica set. The operations are as follows:
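
The original post shows this step as a screenshot. A minimal sketch of the replica set initialization, run from a mongo shell connected to any of the three members (for example mongo 127.0.0.1:10000/admin):

// describe the shard1 replica set and initiate it
config = { _id: "shard1", members: [
    { _id: 0, host: "127.0.0.1:10000" },
    { _id: 1, host: "127.0.0.1:10001" },
    { _id: 2, host: "127.0.0.1:10002" }
] }
rs.initiate(config)
rs.status()    // wait until one member shows as PRIMARY and the other two as SECONDARY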

 

2. Start the shard2 processes and configure the replica set.

Start the mongod shard21 process with replica set name shard2.

Start the mongod shard22 process with replica set name shard2.

Start the mongod shard23 process with replica set name shard2.

Configure these three processes as a replica set. The operations are as follows:
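
The shard2 processes are started the same way as the shard1 ones, just with --replSet shard2 and ports 20000, 20001, and 20002 (data and log paths again hypothetical). A sketch of the initialization, from a mongo shell connected to any shard2 member:

config = { _id: "shard2", members: [
    { _id: 0, host: "127.0.0.1:20000" },
    { _id: 1, host: "127.0.0.1:20001" },
    { _id: 2, host: "127.0.0.1:20002" }
] }
rs.initiate(config)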

 

At this point, the two replica sets that make up the shards are configured. What remains is the config servers and the route processes.

3. Configure three config servers
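
A sketch of the three config server startup commands. The table above only gives port 30002 explicitly, so ports 30000 and 30001 (and the directories) are assumptions that follow the same pattern:

mongod --configsvr --port 30000 --dbpath c:\mongodb\data\config1 --logpath c:\mongodb\log\config1.log --logappend
mongod --configsvr --port 30001 --dbpath c:\mongodb\data\config2 --logpath c:\mongodb\log\config2.log --logappend
mongod --configsvr --port 30002 --dbpath c:\mongodb\data\config3 --logpath c:\mongodb\log\config3.log --logappend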

 

4. Configure the route processes.

The chunk size is set to 1 MB so that the sharding effect can be observed quickly during testing.
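
A sketch of the mongos startup command (one per simulated server), pointing at the three config servers and setting the 1 MB chunk size; the config ports are assumed as above:

mongos --configdb 127.0.0.1:30000,127.0.0.1:30001,127.0.0.1:30002 --port 40000 --chunkSize 1 --logpath c:\mongodb\log\mongos.log --logappend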

5. Configure the sharded collection and shard key.

I shard the frienduser collection in the friends database, using _id as the shard key. Because the cmd window is too narrow, the addshard commands are truncated in the screenshots, so I list them manually here.

Add the shards:

DB. runcommand ({addshard: "shard1/127.0.0.1: 10000,127.0 .0.1: 10001,127.0 .0.1: 10002 "})

DB. runcommand ({addshard: "shard2/127.0.0.1: 20000,127.0 .0.1: 20001,127.0 .0.1: 20002 "})

 

 

Now that the entire architecture is configured, let's verify it. I added 10,000 documents to the database through a client.

You can see that sharding has taken effect: the data is distributed across the shards.
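
The original post verifies this with screenshots. A minimal sketch of one way to generate similar test data and inspect the distribution from a mongo shell connected to mongos (port 40000); the document fields are hypothetical, since the original client code is not shown:

use friends
for (var i = 0; i < 10000; i++) {
    db.frienduser.insert({ _id: i, name: "user" + i })    // test documents, sharded on _id
}
db.printShardingStatus()    // shows how the chunks of friends.frienduser are split between shard1 and shard2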

------------------------------------------------------------------------------------------------

Now let's test failover. I will stop shard11 and see what happens.

Switch to the cmd window running shard11 and press Ctrl + C to stop the process.

View the replica set status:
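
A sketch of how this status check can be done, from a mongo shell connected to one of the surviving shard1 members (for example 127.0.0.1:10001):

rs.status()    // shard11 (127.0.0.1:10000) should show as unreachable, and one of the remaining members should have taken over as PRIMARY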

The cluster is still healthy. I inserted another 20,000 documents to see the effect.

You can see that reads and writes still succeed.

One issue came up here: a three-member replica set can survive the loss of only one member. When two of the three servers are down, the remaining server cannot be promoted from secondary to primary.

Pay attention to the replica set election rules: when the primary goes down, the secondary nodes trigger an election, and the first node that receives votes from a majority of the replica set's members becomes the new primary. The key point is that a majority of the set's original members must be reachable for the election to succeed. If your replica set has three members and two or three of them can connect to each other, the set can elect a primary. If two of the three members are offline, the remaining member stays a secondary.
