MongoDB autosharding + replica set stability test


Design with a single replica set:

On the 228 server (10.7.3.228 in the scripts below), only the mongos and config services are started.

^_^[root@:/usr/local/mongodb/bin]# cat runServerConfig.sh
./mongod --configsvr --dbpath=../data/config --logpath=../data/config.log --fork

^_^[root@:/usr/local/mongodb/bin]# cat runServerMongos.sh
./mongos --configdb 10.7.3.228:27019 --logpath=../data/mongos.log --logappend --fork

Note: the IP address and port passed to mongos via --configdb are those of the config service.
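To confirm that mongos is up and can talk to the config service, one can connect to it with the mongo shell and list the (still empty) shard configuration. This is a minimal sketch, not part of the original transcript, and it assumes mongos listens on its default port 27017:

> use admin
> db.runCommand({listshards:1})     // expected to return an empty "shards" array until shards are added
> db.printShardingStatus()          // dumps the sharding metadata stored on the config server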

Next, configure autosharding.

The shardsvr on the 163 server is already running, so only the shard service on the 165 server needs to be started:

[root@localhost bin]# cat runServerShard.sh
./mongod --shardsvr --dbpath=../data/mongodb --logpath=../data/shardsvr_logs.txt --fork

Configure replication between 163 and 164.

[root@localhost bin]# ./mongo 10.10.21.163:27018
MongoDB shell version: 1.8.2
connecting to: 10.10.21.163:27018/test
> cfg={_id:"set163164",members:[
... {_id:0,host:"10.10.21.163:27018"},
... {_id:1,host:"10.10.21.164:27017"}
... ]}
{
        "_id" : "set163164",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "10.10.21.163:27018"
                },
                {
                        "_id" : 1,
                        "host" : "10.10.21.164:27017"
                }
        ]
}
> rs.initiate(cfg)
{
        "info" : "Config now saved locally.  Should come online in about a minute.",
        "ok" : 1
}
> rs.conf()
{
        "_id" : "set163164",
        "version" : 1,
        "members" : [
                {
                        "_id" : 0,
                        "host" : "10.10.21.163:27018"
                },
                {
                        "_id" : 1,
                        "host" : "10.10.21.164:27017"
                }
        ]
}
set163164:PRIMARY>
set163164:PRIMARY> show dbs
admin   (empty)
local   14.1962890625GB
set163164:PRIMARY> use local
switched to db local
set163164:PRIMARY> show collections
oplog.rs
system.replset
set163164:PRIMARY> db.system.replset.find()
{ "_id" : "set163164", "version" : 1, "members" : [
        {
                "_id" : 0,
                "host" : "10.10.21.163:27018"
        },
        {
                "_id" : 1,
                "host" : "10.10.21.164:27017"
        }
] }
set163164:PRIMARY> rs.isMaster()
{
        "setName" : "set163164",
        "ismaster" : true,
        "secondary" : false,
        "hosts" : [
                "10.10.21.163:27018",
                "10.10.21.164:27017"
        ],
        "maxBsonObjectSize" : 16777216,
        "ok" : 1
}

At this point, the replica set is configured successfully!
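As a sanity check (a minimal sketch, not from the original transcript, assuming the secondary is reachable at 10.10.21.164:27017 as configured above), one can connect to the 164 secondary, allow reads on it, and inspect the replication state:

[root@localhost bin]# ./mongo 10.10.21.164:27017
set163164:SECONDARY> db.getMongo().setSlaveOk()    // allow reads on the secondary
set163164:SECONDARY> rs.status()                   // shows both members and which one is PRIMARY
set163164:SECONDARY> db.isMaster()                 // "ismaster" : false, "secondary" : true on this node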

Configure the sharding on the 228 server.

> use admin

> db.runCommand({addshard:"set163164/10.10.21.163:27018,10.10.21.165:27018"});
{ "shardAdded" : "set163164", "ok" : 1 }
> db.runCommand({enableSharding:"test"})
{ "ok" : 1 }
> db.runCommand({shardcollection:"test.users",key:{_id:1}})
{ "collectionsharded" : "test.users", "ok" : 1 }
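To verify what was just registered, the sharding metadata can be dumped through mongos (a minimal sketch; the exact output depends on the deployment and is not reproduced here):

> db.printShardingStatus()     // lists the shards, databases with sharding enabled, and the chunks of test.users
> use config
> db.shards.find()             // the raw shard documents stored on the config server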

Then start the replica set service on the 163 and 164 servers; only 163 also runs the shard service (--shardsvr), as the scripts show.

163:

[root@localhost bin]# cat runServerShard.sh
./mongod --shardsvr --dbpath=../data/mongodb --logpath=../data/shardsvr_logs.txt --fork --replSet set163164

164:

[root@localhost bin]# cat runServerShard.sh
./mongod --dbpath=../data --logpath=../data/shardsvr_logs.txt --fork --replSet set163164

At this point, autosharding + replication is configured successfully. Now for the stability test.
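The test data was inserted through mongos; something like the following loop in the mongo shell would do (a sketch under my own assumptions: the document shape and the count of 100000 are arbitrary, not the author's actual data set):

> use test
> for (var i = 0; i < 100000; i++) { db.users.insert({_id: i, name: "user" + i}); }   // _id is the shard key declared with shardcollection above
> db.users.count()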

First look at the result:

We can see how the inserted data is distributed: 163 and 164 hold the same amount of data (164 replicates 163), while 165 holds the other part of the data as the second shard.
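A quick way to see this distribution without a screenshot (a minimal sketch, not the author's original check) is to ask mongos for the collection statistics, which include a per-shard breakdown:

> use test
> db.users.stats()     // through mongos: per-shard document counts and sizes for test.users
> db.users.count()     // total count across all shards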

I am now conducting a stability test:

Disconnect the 163 server.

Then query through mongos:

> db.users.find()
error: { "$err" : "error querying server: 10.10.21.163:27018", "code" : 13633 }
> db.users.find()
error: {
        "$err" : "DBClientBase::findOne: transport error: 10.10.21.163:27018 query: { setShardVersion: \"test.users\", configdb: \"10.7.3.228:27019\", version: Timestamp 11000|1, serverID: ObjectId('4e2f64af98dd90fed26585a4'), shard: \"shard0000\", shardHost: \"10.10.21.163:27018\" }",
        "code" : 10276
}
> db.users.find()
error: { "$err" : "socket exception", "code" : 11002 }

An error occurred!

Try adding the 164 server as a shard manually:

> db.runCommand({addshard:"10.10.21.164:27017"});
{
        "ok" : 0,
        "errmsg" : "host is part of set: set163164 use replica set url format <setname>/<server1>,<server2>,...."
}

An error occurred!

We can see that this configuration is not reliable!

After some thought and repeated testing, I found that the problem was with voting.

The official documentation has the following passage:

Consensus vote

For a node to be elected primary, it must receive a majority of votes. This is a majority of all votes in the set: if you have a 5-member set and 4 members are down, a majority of the set is still 3 members (floor(5/2) + 1). Each member of the set receives a single vote and knows the total number of available votes.

If no node can reach a majority, then no primary can be elected and no data can be written to that replica set (although reads to secondaries are still possible).

So this is the problem with only two voting servers: a majority of 2 votes is 2, so once one member goes down the survivor's single vote is not enough and no primary can be elected. What if we add another server?
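The arithmetic is easy to check in the shell (a small illustrative snippet, not part of the original test; the majority() helper is defined here just for the example):

> function majority(n) { return Math.floor(n / 2) + 1; }   // votes needed to elect a primary
> majority(2)    // with one of two members down, the survivor's single vote is not enough
2
> majority(3)    // with a third member (even a data-less arbiter), any two survivors can elect a primary
2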

Here a third server (162) is brought in as a data node, and 164 is used as an arbiter:

use admin
var cfg={_id:"set162163164", members:[
    {_id:0,host:"10.10.21.162:27018"},
    {_id:1,host:"10.10.21.163:27017"},
    {_id:2,host:"10.10.21.164:27017",arbiterOnly:true}
]}
rs.initiate(cfg)
rs.conf()
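Once the set comes online, it is worth confirming that the arbiter is recognised as such (a minimal sketch, run from any member's shell; not part of the original transcript):

set162163164:PRIMARY> rs.status()      // member 164 should report the ARBITER state
set162163164:PRIMARY> rs.isMaster()    // lists "arbiters" : [ "10.10.21.164:27017" ] alongside "hosts"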

228:

use admin
db.runCommand({addshard:"set162163164/10.10.21.162:27018,10.10.21.163:27017,10.10.21.164:27017"})   // add all three members
// or seed with only the two data nodes, leaving out the arbiter:
db.runCommand({addshard:"set162163164/10.10.21.162:27018,10.10.21.163:27017"})
db.runCommand({addshard:"10.10.21.165:27018"})
db.runCommand({enablesharding:"test"})
db.runCommand({shardcollection:"test.users",key:{_id:1}})

After testing again:

Stability is much improved: if any one of 162, 163, or 164 is disconnected, the remaining voting members elect a new primary and mongos automatically reconnects to it.
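To reproduce the failover check (a minimal sketch, assuming the current primary has just been stopped):

// through mongos, after stopping the current primary:
> use test
> db.users.count()      // succeeds again once the surviving members elect a new primary
// on a surviving replica set member:
> rs.status()           // shows which node has taken over as PRIMARY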

Final design drawing:
