MongoDB shard deployment


1. MongoDB sharding basics

Replica set:

A replica set keeps an identical copy of the data on every member, each member being a separate mongod instance. Read requests from application servers can therefore be distributed across the members of the set, relieving the request load on any single mongod server. Within a bounded time window, the replica set brings all members' data to eventual consistency. This replication is automatic and transparent to users.
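As a minimal sketch of spreading reads across members (using the legacy mongo shell helper; newer shells use read preferences instead, and the c1 collection here is only illustrative):

// By default the mongo shell sends all reads to the primary.
// Permitting secondary reads lets queries be served by any member.
rs.slaveOk()
db.c1.find({ id: 42 })  // may now be answered by a secondary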

MongoDB serves a large share of read requests from memory to improve throughput. As a consequence, in some extreme situations (such as a data-center power failure in a single-data-center deployment) it cannot guarantee that all data has been durably stored: writes from the last few dozen seconds may be lost.
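One way to narrow that window, as a sketch (assuming a shell recent enough to accept a per-operation write concern; older deployments expressed the same options through getLastError):

// Require acknowledgement from a majority of replica set members and a
// journal flush before the write returns; trades latency for durability.
db.c1.insert({ id: 1, payload: "important" },
             { writeConcern: { w: "majority", j: true } })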

Sharding:

When the data load on a single mongod instance grows too large, the data can be partitioned across several mongod instances according to a set of rules. Under the same rules, access requests are routed to the matching instances, which addresses the query-performance degradation that occurs when one machine holds too much data.

Of course, the cluster must stay highly available under these sharding rules, so multiple copies of each partition are stored on different servers: the mongod instances holding the same data form a shard, and each shard is itself a replica set. This allows the MongoDB cluster to retain complete data even after a small number of server failures.

Config server:

Config servers store the metadata of the sharded cluster, including basic information about each mongod instance and about the chunks. Every config server holds a full copy of the chunk metadata, and a two-phase commit keeps the config servers consistent with one another and with the chunk data.
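For illustration, this metadata can be inspected through mongos in the config database (standard collections of the classic sharded-cluster layout):

use config
db.shards.find()  // one document per shard
db.chunks.find()  // each chunk's key range and owning shard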

Routing process (mongos):

mongos can be seen as the distribution center for data and requests; it ties the individual mongod instances together into one cluster. When it receives a client request, mongos consults the config servers to route the request to the appropriate mongod instance (which may be a replica set), has the request processed there, and returns the result. The mongos process keeps no persistent state: on startup it connects to the config servers and loads the cluster state, and when the config servers change, the change is propagated to every mongos process.

(If your boss asks you to write up a design, you can simply redraw a diagram based on someone else's. That is pure plagiarism, and any resemblance is certainly no coincidence!)

2. Start the mongod instances on three machines

Deploy mongod according to the replica-set and sharding plan: two shards across three servers, with each shard being a replica set of three members.
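For reference, the commands below produce the following layout (IPs as used later in this article):

Server          shard1 member    shard2 member    config server    mongos
                (port 27017)     (port 27018)     (port 20000)     (port 30000)
172.17.0.121    shard11          shard21          yes              yes
172.17.0.122    shard12          shard22          yes              yes
172.17.0.123    shard13          shard23          yes              yes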

# Server1:
mkdir -p /data2/MongoDB/shard11
mkdir -p /data2/MongoDB/shard21
/MongoDB/bin/mongod --shardsvr --replSet shard1 --port 27017 --dbpath /data2/MongoDB/shard11 --oplogSize 100 --logpath /data2/MongoDB/shard11.log --logappend --fork --rest
/MongoDB/bin/mongod --shardsvr --replSet shard2 --port 27018 --dbpath /data2/MongoDB/shard21 --oplogSize 100 --logpath /data2/MongoDB/shard21.log --logappend --fork --rest

# Server2:
mkdir -p /data2/MongoDB/shard12
mkdir -p /data2/MongoDB/shard22
/MongoDB/bin/mongod --shardsvr --replSet shard1 --port 27017 --dbpath /data2/MongoDB/shard12 --oplogSize 100 --logpath /data2/MongoDB/shard12.log --logappend --fork --rest
/MongoDB/bin/mongod --shardsvr --replSet shard2 --port 27018 --dbpath /data2/MongoDB/shard22 --oplogSize 100 --logpath /data2/MongoDB/shard22.log --logappend --fork --rest

# Server3:
mkdir -p /data2/MongoDB/shard13
mkdir -p /data2/MongoDB/shard23
/MongoDB/bin/mongod --shardsvr --replSet shard1 --port 27017 --dbpath /data2/MongoDB/shard13 --oplogSize 100 --logpath /data2/MongoDB/shard13.log --logappend --fork --rest
/MongoDB/bin/mongod --shardsvr --replSet shard2 --port 27018 --dbpath /data2/MongoDB/shard23 --oplogSize 100 --logpath /data2/MongoDB/shard23.log --logappend --fork --rest

3. Initialize the replica sets

Initialize the two replica sets from the command line by connecting to a mongod with the mongo shell:

/MongoDB/bin/mongo 172.17.0.121:27017

config = { _id: 'shard1', members: [
    { _id: 0, host: '172.17.0.121:27017' },
    { _id: 1, host: '172.17.0.122:27017' },
    { _id: 2, host: '172.17.0.123:27017' } ] };

rs.initiate(config);

/MongoDB/bin/mongo 172.17.0.121:27018

config = { _id: 'shard2', members: [
    { _id: 0, host: '172.17.0.121:27018' },
    { _id: 1, host: '172.17.0.122:27018' },
    { _id: 2, host: '172.17.0.123:27018' } ] };

rs.initiate(config);
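As a quick sanity check in the same shell (an election can take a few seconds, so re-run if needed):

rs.status().members.forEach(function (m) { print(m.name + " " + m.stateStr); })
// Expect one PRIMARY and two SECONDARY entries per replica set.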

4. Start and configure three config servers

# Server1, 2, 3:
mkdir -p /data2/MongoDB/config/
/MongoDB/bin/mongod --configsvr --dbpath /data2/MongoDB/config/ --port 20000 --logpath /data2/MongoDB/config1.log --logappend --fork

5. Deploy and configure three routing servers

Specify the addresses of all config servers. The chunkSize parameter is the chunk size (in MB) used when splitting data.

# Server1, 2, 3:

/MongoDB/bin/mongos --configdb 172.17.0.121:20000,172.17.0.122:20000,172.17.0.123:20000 --port 30000 --chunkSize 100 --logpath /data2/MongoDB/mongos.log --logappend --fork

6. Add shards on the command line.

Connect to a mongos server and switch to the admin database:

/MongoDB/bin/mongo 172.17.0.121:30000/admin

db.runCommand({
    addshard: "shard1/172.17.0.121:27017,172.17.0.122:27017,172.17.0.123:27017",
    name: "shard1",
    maxSize: 20480,
    allowLocal: true });

db.runCommand({
    addshard: "shard2/172.17.0.121:27018,172.17.0.122:27018,172.17.0.123:27018",
    name: "shard2",
    maxSize: 20480,
    allowLocal: true });

db.runCommand({ listshards: 1 });

If the two shards you added are listed, sharding has been configured successfully.

7. Enable database sharding

Enabling sharding on a database makes horizontal partitioning available to the collections in that database:

db.runCommand({ enablesharding: "test" });

View sharding status

use admin

db.printShardingStatus();

Shard a collection to split its data horizontally.

To enable sharded storage for a single collection, you must specify a shard key for it:

a. The system automatically creates an index on the shard key of a sharded collection (you can also create the index in advance).

b. A sharded collection may have only one unique index, and it must be on the shard key; other unique indexes are not allowed.

db.runCommand({ shardcollection: "test.c1", key: { id: 1 } });

You can view the collection's shard status with db.c1.stats().
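As a smoke test, one can insert enough documents to trigger chunk splits and then inspect the distribution (a sketch; the value field and the document count are arbitrary):

use test
// With the chunk size configured above, enough data is eventually
// split into multiple chunks and balanced across shard1 and shard2.
for (var i = 0; i < 200000; i++) {
    db.c1.insert({ id: i, value: "test-" + i });
}
db.c1.stats();            // per-shard document counts and sizes
db.printShardingStatus(); // chunk ranges and their owning shards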

We recommend using a configuration file to start mongod in formal and production environments.

Example: /MongoDB/bin/mongod --config /data2/MongoDB/shard1/shard1.properties --rest
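As a sketch, a shard1.properties equivalent to the step 2 command line might look like this (legacy key = value config format; the paths are the ones used above):

# shardsvr mongod, first member of replica set shard1
shardsvr = true
replSet = shard1
port = 27017
dbpath = /data2/MongoDB/shard11
oplogSize = 100
logpath = /data2/MongoDB/shard11.log
logappend = true
fork = true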
