MongoDB AutoSharding + Replica Sets Stability Test
Design with a single replica set:
On server 10.9.3.228, only the mongos and config services are started.
[root@:/usr/local/mongodb/bin]# cat runServerConfig.sh
./mongod --configsvr --dbpath=../data/config --logpath=../data/config.log --fork
[root@:/usr/local/mongodb/bin]# cat runServerMongos.sh
./mongos --configdb 10.7.3.228:27019 --logpath=../data/mongos.log --logappend --fork
Note: the IP address and port passed to mongos via --configdb are those of the config service.
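To confirm that mongos can actually reach the config server, a quick check is to connect to it with the mongo shell and print the sharding status. This is a minimal sketch; it assumes mongos is listening on its default port 27017 on the 228 server:

./mongo 10.9.3.228:27017
> db.printShardingStatus()
// at this stage the "shards" section should simply be empty; if the command fails,
// mongos could not reach the config server given in --configdb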
Next, configure AutoSharding.
The shardsvr on the 163 server has already been started, so we only need to start the shardsvr service on the 165 server:
[root@localhost bin]# cat runServerShard.sh
./mongod --shardsvr --dbpath=../data/mongodb --logpath=../data/shardsvr_logs.txt --fork
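Before registering the new shard it can be worth checking that it is reachable. A trivial sketch, assuming --shardsvr puts mongod on its default port 27018:

./mongo 10.10.21.165:27018
> db.runCommand({ping: 1})
// { "ok" : 1 } means the shard server is up and accepting connections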
Configure replication between the 163 and 164 servers:
[root@localhost bin]# ./mongo 10.10.21.163:27018
MongoDB shell version: 1.8.2
connecting to: 10.10.21.163:27018/test
> cfg = {_id: "set163164", members: [
... {_id: 0, host: "10.10.21.163:27018"},
... {_id: 1, host: "10.10.21.164:27017"}
... ]}
{
    "_id" : "set163164",
    "members" : [
        {
            "_id" : 0,
            "host" : "10.10.21.163:27018"
        },
        {
            "_id" : 1,
            "host" : "10.10.21.164:27017"
        }
    ]
}
> rs.initiate(cfg)
{
    "info" : "Config now saved locally.  Should come online in about a minute.",
    "ok" : 1
}
> rs.conf()
{
    "_id" : "set163164",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "10.10.21.163:27018"
        },
        {
            "_id" : 1,
            "host" : "10.10.21.164:27017"
        }
    ]
}
set163164:PRIMARY>
set163164:PRIMARY>
set163164:PRIMARY> show dbs
admin   (empty)
local   14.1962890625GB
set163164:PRIMARY> use local
switched to db local
set163164:PRIMARY> show collections
oplog.rs
system.replset
set163164:PRIMARY> db.system.replset.find()
{ "_id" : "set163164", "version" : 1, "members" : [
    {
        "_id" : 0,
        "host" : "10.10.21.163:27018"
    },
    {
        "_id" : 1,
        "host" : "10.10.21.164:27017"
    }
] }
set163164:PRIMARY> rs.isMaster()
{
    "setName" : "set163164",
    "ismaster" : true,
    "secondary" : false,
    "hosts" : [
        "10.10.21.163:27018",
        "10.10.21.164:27017"
    ],
    "maxBsonObjectSize" : 16777216,
    "ok" : 1
}
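One more check worth running before moving on is rs.status(), which reports each member's state and health. A brief sketch (output omitted):

set163164:PRIMARY> rs.status()
// every member should report "health" : 1; one member should be in the PRIMARY
// state and the other in SECONDARY once the initial sync has finished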
At this point, the replica set is configured successfully!
Now configure sharding on the 228 server:
use admin
> db.runCommand({addshard: "set163164/10.10.21.163:27018,10.10.21.165:27018"});
{ "shardAdded" : "set163164", "ok" : 1 }
> db.runCommand({enablesharding: "test"})
{ "ok" : 1 }
> db.runCommand({shardcollection: "test.users", key: {_id: 1}})
{ "collectionsharded" : "test.users", "ok" : 1 }
Then start the replica set service on the 163 and 164 servers respectively; on the 163 server, the mongod must also be started as a shard server.
163:
[root@localhost bin]# cat runServerShard.sh
./mongod --shardsvr --dbpath=../data/mongodb --logpath=../data/shardsvr_logs.txt --fork --replSet set163164
164:
[root@localhost bin]# cat runServerShard.sh
./mongod --dbpath=../data --logpath=../data/shardsvr_logs.txt --fork --replSet set163164
At this point, AutoSharding + Replica Sets is configured successfully. Now for the stability test.
First look at the result:
We can see that the inserted data is on the 163 server, that the 164 server holds the same data, and that the 165 server holds part of the data.
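One way to confirm this distribution is to bypass mongos and count the collection on each mongod directly. A small sketch, assuming the test data went into test.users as sharded above:

> var m163 = new Mongo("10.10.21.163:27018");
> var m164 = new Mongo("10.10.21.164:27017");
> var m165 = new Mongo("10.10.21.165:27018");
> m164.setSlaveOk();                       // 164 is a secondary, allow reads on it
> m163.getDB("test").users.count()
> m164.getDB("test").users.count()
> m165.getDB("test").users.count()
// 163 and 164 should report the same count (they form one replica set);
// 165 reports only the chunks that have been migrated to it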
Now I run the stability test:
Disconnect the 163 server.
Then query through mongos:
> db.users.find()
error: { "$err" : "error querying server: 10.10.21.163:27018", "code" : 13633 }
> db.users.find()
error: {
    "$err" : "DBClientBase::findOne: transport error: 10.10.21.163:27018 query: { setShardVersion: \"test.users\", configdb: \"10.7.3.228:27019\", version: Timestamp 11000|1, serverID: ObjectId('4e2f64af98dd90fed26585a4'), shard: \"shard0000\", shardHost: \"10.10.21.163:27018\" }",
    "code" : 10276
}
> db.users.find()
error: { "$err" : "socket exception", "code" : 11002 }
An error occurred!
Try adding the 164 server manually:
- > DB. runcommand ({addshard: "10.10.21.164: 27017 "});
- {
- "OK": 0,
- "Errmsg": "host is part of set: set16.04 use replica set url format <setname>/<server1>, <server2> ,...."
- }
An error occurred!
We can see that this configuration is incorrect!
After some time spent thinking about it and repeating the tests, I began to suspect that the problem was with voting.
The official website has the following message:
Consensus Vote
For a node to be elected primary, it must receive a majority of votes. This is a majority of all votes in the set: if you have a 5-member set and 4 members are down, a majority of the set is still 3 members (floor(5/2) + 1). Each member of the set receives a single vote and knows the total number of available votes.
If no node can reach a majority, then no primary can be elected and no data can be written to that replica set (although reads to secondaries are still possible).
So will there be a problem when only two servers vote? And what if I add another server?
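The arithmetic makes the issue clear. A quick check of the majority formula from the quote, written as plain shell JavaScript just to illustrate the numbers:

> function majority(n) { return Math.floor(n / 2) + 1; }
> majority(2)   // 2: a 2-member set needs both members up, so losing one blocks the election
> majority(3)   // 2: a 3-member set (two data nodes plus an arbiter) survives one failure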
Here we can also use 164 as an arbiter:
use admin
var cfg = {_id: "set162163164", members: [{_id: 0, host: "10.10.21.162:27018"}, {_id: 1, host: "10.10.21.163:27017"}, {_id: 2, host: "10.10.21.164:27017", arbiterOnly: true}]}
rs.initiate(cfg)
rs.conf()
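After initiation it is worth confirming that member 2 really joined as an arbiter rather than a data-bearing node. A short check (output omitted):

> rs.status()
// the member 10.10.21.164:27017 should show the ARBITER state
> rs.isMaster()
// the reply lists 162 and 163 under "hosts" and 164 under "arbiters"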
228:
use admin
// db.runCommand({addshard: "set162163164/10.10.21.162:27018,10.10.21.163:27017,10.10.21.164:27017"})  // to add all 3 servers normally
db.runCommand({addshard: "set162163164/10.10.21.162:27018,10.10.21.163:27017"})  // 164 is only an arbiter, so it is not listed
db.runCommand({addshard: "10.10.21.165:27018"})
db.runCommand({enableSharding: "test"})
db.runCommand({shardcollection: "test.users", key: {_id: 1}})
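To double-check that both shards are registered, the listshards command can be run against the admin database on mongos. A minimal sketch of what to look for:

> db.runCommand({listshards: 1})
// the "shards" array should contain the replica-set shard
// set162163164/10.10.21.162:27018,10.10.21.163:27017 and the standalone shard 10.10.21.165:27018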
Test result:
Stability has improved: if any one of the 162, 163, or 164 servers is disconnected, the remaining voting members elect a new primary and mongos automatically reconnects to it.
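For reference, the kind of loop I would run against mongos while unplugging servers looks roughly like this. It is a sketch only, with a made-up document shape; it just keeps writing and reports any errors so the failover window becomes visible:

> for (var i = 0; i < 100000; i++) {
...     try {
...         db.users.insert({_id: "stab" + i, ts: new Date()});
...         db.getLastErrorObj();              // force the write to be acknowledged
...     } catch (e) {
...         print("error at i=" + i + ": " + e);
...     }
...     sleep(100);                            // pause 100 ms between writes
... }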
Final design drawing: