MongoDB Learning Materials 10: Sharding (2)


MongoDB Auto-Sharding solves the problems of massive storage and dynamic resizing, but on its own it still falls short of the high reliability and high availability required by a real production environment.

Solution:
  • Shard: Use Replica Sets so that each data node has a backup, automatic failover, and automatic recovery.
  • Config: Use three configuration servers to ensure metadata integrity (two-phase commit).
  • Route: Works with LVS to achieve load balancing and improve access performance.
Below, we configure a Replica Sets + Sharding test environment.
We recommend using IP addresses during configuration to avoid errors.

(1) First, create all the database directories.

$ sudo mkdir -p /var/mongodb/10001
$ sudo mkdir -p /var/mongodb/10002
$ sudo mkdir -p /var/mongodb/10003
$ sudo mkdir -p /var/mongodb/10011
$ sudo mkdir -p /var/mongodb/10012
$ sudo mkdir -p /var/mongodb/10013
$ sudo mkdir -p /var/mongodb/config1
$ sudo mkdir -p /var/mongodb/config2
$ sudo mkdir -p /var/mongodb/config3
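The same directory layout can also be scripted. Here is a minimal Python sketch that builds the identical tree; it uses a temporary directory as a stand-in for /var/mongodb (which requires root), so the base path here is an assumption for illustration only:

```python
import os
import tempfile

# Stand-in for /var/mongodb (the real path needs sudo); assumption for this sketch.
base = tempfile.mkdtemp()

# Six shard data directories (two replica sets of three) plus three config-server directories.
dirs = ["10001", "10002", "10003", "10011", "10012", "10013",
        "config1", "config2", "config3"]
for d in dirs:
    os.makedirs(os.path.join(base, d), exist_ok=True)

print(sorted(os.listdir(base)))
```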

(2) Configure Shard Replica Sets.

$ sudo ./mongod --shardsvr --fork --logpath /dev/null --dbpath /var/mongodb/10001 --port 10001 --nohttpinterface --replSet set1
forked process: 4974
all output going to: /dev/null
$ sudo ./mongod --shardsvr --fork --logpath /dev/null --dbpath /var/mongodb/10002 --port 10002 --nohttpinterface --replSet set1
forked process: 4988
all output going to: /dev/null
$ sudo ./mongod --shardsvr --fork --logpath /dev/null --dbpath /var/mongodb/10003 --port 10003 --nohttpinterface --replSet set1
forked process: 5000
all output going to: /dev/null

$ ./mongo --port 10001
MongoDB shell version: 1.6.2
connecting to: 127.0.0.1:10001/test
> cfg = { _id:'set1', members:[
... { _id:0, host:'192.168.1.202:10001' },
... { _id:1, host:'192.168.1.202:10002' },
... { _id:2, host:'192.168.1.202:10003' }
... ]};
> rs.initiate(cfg)
{
        "info" : "Config now saved locally.  Should come online in about a minute.",
        "ok" : 1
}
> rs.status()
{
        "set" : "set1",
        "date" : "Tue Sep 07 2010 10:25:28 GMT+0800 (CST)",
        "myState" : 5,
        "members" : [
                {
                        "_id" : 0,
                        "name" : "yuhen-server64:10001",
                        "health" : 1,
                        "state" : 5,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "192.168.1.202:10002",
                        "health" : -1,
                        "state" : 6,
                        "uptime" : 0,
                        "lastHeartbeat" : "Thu Jan 01 1970 08:00:00 GMT+0800 (CST)"
                },
                {
                        "_id" : 2,
                        "name" : "192.168.1.202:10003",
                        "health" : -1,
                        "state" : 6,
                        "uptime" : 0,
                        "lastHeartbeat" : "Thu Jan 01 1970 08:00:00 GMT+0800 (CST)"
                }
        ],
        "ok" : 1
}

Configure the second group of Shard Replica Sets.

$ sudo ./mongod --shardsvr --fork --logpath /dev/null --dbpath /var/mongodb/10011 --port 10011 --nohttpinterface --replSet set2
forked process: 5086
all output going to: /dev/null
$ sudo ./mongod --shardsvr --fork --logpath /dev/null --dbpath /var/mongodb/10012 --port 10012 --nohttpinterface --replSet set2
forked process: 5098
all output going to: /dev/null
$ sudo ./mongod --shardsvr --fork --logpath /dev/null --dbpath /var/mongodb/10013 --port 10013 --nohttpinterface --replSet set2
forked process: 5112
all output going to: /dev/null

$ ./mongo --port 10011
MongoDB shell version: 1.6.2
connecting to: 127.0.0.1:10011/test
> cfg = { _id:'set2', members:[
... { _id:0, host:'192.168.1.202:10011' },
... { _id:1, host:'192.168.1.202:10012' },
... { _id:2, host:'192.168.1.202:10013' }
... ]}
> rs.initiate(cfg)
{
        "info" : "Config now saved locally.  Should come online in about a minute.",
        "ok" : 1
}
> rs.status()
{
        "set" : "set2",
        "date" : "Tue Sep 07 2010 10:28:37 GMT+0800 (CST)",
        "myState" : 1,
        "members" : [
                {
                        "_id" : 0,
                        "name" : "yuhen-server64:10011",
                        "health" : 1,
                        "state" : 1,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "192.168.1.202:10012",
                        "health" : 0,
                        "state" : 6,
                        "uptime" : 0,
                        "lastHeartbeat" : "Tue Sep 07 2010 10:28:36 GMT+0800 (CST)",
                        "errmsg" : "still initializing"
                },
                {
                        "_id" : 2,
                        "name" : "192.168.1.202:10013",
                        "health" : 1,
                        "state" : 5,
                        "uptime" : 1,
                        "lastHeartbeat" : "Tue Sep 07 2010 10:28:36 GMT+0800 (CST)",
                        "errmsg" : "."
                }
        ],
        "ok" : 1
}
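The numeric myState / state values in the rs.status() transcripts are replica-set member states. A small lookup table (state names per MongoDB's replica-set state documentation) makes the outputs easier to read: in the transcripts above, 1 means PRIMARY, 5 means the member is still initializing (STARTUP2), and 6 means its state is not yet known to the queried member.

```python
# Replica-set member states as numbered by mongod (names per MongoDB's
# replica-set state documentation; subset relevant to the transcripts above).
REPL_STATES = {
    0: "STARTUP",
    1: "PRIMARY",
    2: "SECONDARY",
    3: "RECOVERING",
    5: "STARTUP2",
    6: "UNKNOWN",
    7: "ARBITER",
    8: "DOWN",
}

def describe(member):
    """Render one rs.status() member entry as 'name: STATE'."""
    return f"{member['name']}: {REPL_STATES.get(member['state'], 'OTHER')}"

# Example: the set2 members right after rs.initiate() in the transcript above.
members = [
    {"name": "yuhen-server64:10011", "state": 1},
    {"name": "192.168.1.202:10012", "state": 6},
    {"name": "192.168.1.202:10013", "state": 5},
]
print([describe(m) for m in members])
```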

(3) Start Config Server.

A single Config Server would work, but three provide better fault tolerance.

Chunk information is the main data stored by the config servers. Each config server has a complete copy of all chunk information. A two-phase commit is used to ensure the consistency of the configuration data among the config servers. If any of the config servers is down, the cluster's meta-data goes read only. However, even in such a failure state, the MongoDB cluster can still be read from and written to.
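The quoted behavior can be illustrated with a toy simulation (this is not MongoDB code; the classes and method names are invented for this sketch): a metadata change commits only if every config server acknowledges the prepare phase, so when any server is down, metadata writes are refused while metadata reads still succeed.

```python
class ConfigServer:
    """Toy stand-in for a config server holding one copy of the chunk metadata."""
    def __init__(self):
        self.up = True
        self.metadata = {}

    def prepare(self, key, value):
        return self.up  # vote yes only if reachable

    def commit(self, key, value):
        self.metadata[key] = value


class ConfigCluster:
    """Toy two-phase commit over three metadata copies."""
    def __init__(self, n=3):
        self.servers = [ConfigServer() for _ in range(n)]

    def write(self, key, value):
        # Phase 1: every server must vote yes, otherwise metadata stays read-only.
        if not all(s.prepare(key, value) for s in self.servers):
            return False
        # Phase 2: apply the change on every copy.
        for s in self.servers:
            s.commit(key, value)
        return True

    def read(self, key):
        # Reads work as long as any copy is reachable.
        for s in self.servers:
            if s.up:
                return s.metadata.get(key)
        return None


cluster = ConfigCluster()
cluster.write("test.data:chunk0", "set1")       # succeeds: all three are up
cluster.servers[1].up = False                   # one config server goes down
ok = cluster.write("test.data:chunk0", "set2")  # refused: metadata is read-only
print(ok, cluster.read("test.data:chunk0"))
```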

Note! This is not a Replica Set, so the --replSet parameter is not required.

$ sudo ./mongod --configsvr --fork --logpath /dev/null --dbpath /var/mongodb/config1 --port 20000 --nohttpinterface
forked process: 5177
all output going to: /dev/null
$ sudo ./mongod --configsvr --fork --logpath /dev/null --dbpath /var/mongodb/config2 --port 20001 --nohttpinterface
forked process: 5186
all output going to: /dev/null
$ sudo ./mongod --configsvr --fork --logpath /dev/null --dbpath /var/mongodb/config3 --port 20002 --nohttpinterface
forked process: 5195
all output going to: /dev/null

$ ps aux | grep configsvr | grep -v grep
root  ./mongod --configsvr --fork --logpath /dev/null --dbpath /var/mongodb/config1 --port 20000 --nohttpinterface
root  ./mongod --configsvr --fork --logpath /dev/null --dbpath /var/mongodb/config2 --port 20001 --nohttpinterface
root  ./mongod --configsvr --fork --logpath /dev/null --dbpath /var/mongodb/config3 --port 20002 --nohttpinterface


(4) Start the Route Server.

Note the --configdb parameter.

$ sudo ./mongos --fork --logpath /dev/null --configdb "192.168.1.202:20000,192.168.1.202:20001,192.168.1.202:20002"
forked process: 5209
all output going to: /dev/null

$ ps aux | grep mongos | grep -v grep
root ./mongos --fork --logpath /dev/null --configdb 192.168.1.202:20000,192.168.1.202:20001,192.168.1.202:20002

(5) Configure Sharding.

Note the addshard format when the shard is a Replica Set: the set name, a slash, then the comma-separated member addresses.

$ ./mongo
MongoDB shell version: 1.6.2
connecting to: test
> use admin
switched to db admin
> db.runCommand({ addshard:'set1/192.168.1.202:10001,192.168.1.202:10002,192.168.1.202:10003' })
{ "shardAdded" : "set1", "ok" : 1 }
> db.runCommand({ addshard:'set2/192.168.1.202:10011,192.168.1.202:10012,192.168.1.202:10013' })
{ "shardAdded" : "set2", "ok" : 1 }
> db.runCommand({ enablesharding:'test' })
{ "ok" : 1 }
> db.runCommand({ shardcollection:'test.data', key:{_id:1} })
{ "collectionsharded" : "test.data", "ok" : 1 }
> db.runCommand({ listshards:1 })
{
        "shards" : [
                {
                        "_id" : "set1",
                        "host" : "set1/192.168.1.202:10001,192.168.1.202:10002,192.168.1.202:10003"
                },
                {
                        "_id" : "set2",
                        "host" : "set2/192.168.1.202:10011,192.168.1.202:10012,192.168.1.202:10013"
                }
        ],
        "ok" : 1
}
> printShardingStatus()
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      {
        "_id" : "set1",
        "host" : "set1/192.168.1.202:10001,192.168.1.202:10002,192.168.1.202:10003"
      }
      {
        "_id" : "set2",
        "host" : "set2/192.168.1.202:10011,192.168.1.202:10012,192.168.1.202:10013"
      }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "test", "partitioned" : true, "primary" : "set1" }
                test.data chunks:
                        { "_id" : { $minKey : 1 } } -->> { "_id" : { $maxKey : 1 } } on : set1 { "t" : 1000, "i" : 0 }
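printShardingStatus() shows a single chunk covering the whole shard-key range, ($minKey, $maxKey), on set1; as data grows, chunks split and migrate between shards. The routing idea can be sketched in Python (a hypothetical chunk map, not mongos internals): the router finds the chunk whose range contains the shard-key value and forwards the operation to the owning shard.

```python
import bisect

# Hypothetical chunk map after some splits/migrations: each entry is
# (lower bound inclusive, owning shard); a chunk's upper bound is the
# next entry's lower bound, and the last chunk extends to $maxKey.
MIN_KEY = float("-inf")
chunks = [(MIN_KEY, "set1"), (100, "set2"), (500, "set1")]
bounds = [lo for lo, _ in chunks]

def route(shard_key_value):
    """Return the shard owning the chunk that contains shard_key_value."""
    i = bisect.bisect_right(bounds, shard_key_value) - 1
    return chunks[i][1]

print(route(42), route(100), route(9999))  # set1 set2 set1
```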


---- End of configuration ------

OK! You can test it.

> use test
switched to db test
> db.data.insert({name:1})
> db.data.insert({name:2})
> db.data.insert({name:3})
> db.data.find()
{ "_id" : ObjectId("4c85a6d9ce93b9b1b302ebe7"), "name" : 1 }
{ "_id" : ObjectId("4c85a6dbce93b9b1b302ebe8"), "name" : 2 }
{ "_id" : ObjectId("4c85a6ddce93b9b1b302ebe9"), "name" : 3 }
