MongoDB AutoSharding + Replica Sets Stability Test


Single replica set design:

On 10.9.3.228, only the mongos and config services are started.


    [root@:/usr/local/mongodb/bin]# cat runServerConfig.sh
    ./mongod --configsvr --dbpath=../data/config --logpath=../data/config.log --fork
    [root@:/usr/local/mongodb/bin]# cat runServerMongos.sh
    ./mongos --configdb 10.7.3.228:27019 --logpath=../data/mongos.log --logappend --fork

Note: The IP address and port passed to mongos (--configdb) are those of the config service.
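To double-check that mongos can see the config server, you can connect to it with the shell and list the shards (a minimal sketch; I assume mongos is listening on its default port 27017 on 10.9.3.228, and the shard list is simply empty at this stage):

    ./mongo 10.9.3.228:27017/admin
    > db.runCommand({listshards: 1})   // returns an empty "shards" array until shards are added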

Next, configure AutoSharding.

The shardsvr on 163 has already been started, so only the shard service on the 165 server still needs to be started.


    [root@localhost bin]# cat runServerShard.sh
    ./mongod --shardsvr --dbpath=../data/mongodb --logpath=../data/shardsvr_logs.txt --fork

Configure replication between 163 and 164:


    [root@localhost bin]# ./mongo 10.10.21.163:27018
    MongoDB shell version: 1.8.2
    connecting to: 10.10.21.163:27018/test
    > cfg = {_id: "set163164", members: [
    ... {_id: 0, host: "10.10.21.163:27018"},
    ... {_id: 1, host: "10.10.21.164:27017"}
    ... ]}
    {
        "_id" : "set163164",
        "members" : [
            {
                "_id" : 0,
                "host" : "10.10.21.163:27018"
            },
            {
                "_id" : 1,
                "host" : "10.10.21.164:27017"
            }
        ]
    }
    > rs.initiate(cfg)
    {
        "info" : "Config now saved locally.  Should come online in about a minute.",
        "ok" : 1
    }
    > rs.conf()
    {
        "_id" : "set163164",
        "version" : 1,
        "members" : [
            {
                "_id" : 0,
                "host" : "10.10.21.163:27018"
            },
            {
                "_id" : 1,
                "host" : "10.10.21.164:27017"
            }
        ]
    }
    set163164:PRIMARY>
    set163164:PRIMARY>
    set163164:PRIMARY> show dbs
    admin   (empty)
    local   14.1962890625GB
    set163164:PRIMARY> use local
    switched to db local
    set163164:PRIMARY> show collections
    oplog.rs
    system.replset
    set163164:PRIMARY> db.system.replset.find()
    { "_id" : "set163164", "version" : 1, "members" : [
        {
            "_id" : 0,
            "host" : "10.10.21.163:27018"
        },
        {
            "_id" : 1,
            "host" : "10.10.21.164:27017"
        }
    ] }
    set163164:PRIMARY> rs.isMaster()
    {
        "setName" : "set163164",
        "ismaster" : true,
        "secondary" : false,
        "hosts" : [
            "10.10.21.163:27018",
            "10.10.21.164:27017"
        ],
        "maxBsonObjectSize" : 16777216,
        "ok" : 1
    }

At this point, the replica set is configured successfully!
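To confirm the set from the other side, you can also connect to the 164 member and look at its state (a sketch of the check, not output captured during this test):

    ./mongo 10.10.21.164:27017
    set163164:SECONDARY> rs.status()                 // both members listed, one PRIMARY and one SECONDARY
    set163164:SECONDARY> db.getMongo().setSlaveOk()  // needed before reading data on the secondary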

 

Configure the Sharding on the 228 server.

    use admin


    > db.runCommand({addshard: "set163164/10.10.21.163:27018,10.10.21.165:27018"});
    { "shardAdded" : "set163164", "ok" : 1 }
    > db.runCommand({enablesharding: "test"})
    { "ok" : 1 }


    > db.runCommand({shardcollection: "test.users", key: {_id: 1}})
    { "collectionsharded" : "test.users", "ok" : 1 }

Then start the replica-set mongod on the 163 and 164 servers; on 163 the same mongod also provides the shard service (--shardsvr), as the scripts below show.

163:


    [root@localhost bin]# cat runServerShard.sh
    ./mongod --shardsvr --dbpath=../data/mongodb --logpath=../data/shardsvr_logs.txt --fork --replSet set163164

164:

    [root@localhost bin]# cat runServerShard.sh
    ./mongod --dbpath=../data --logpath=../data/shardsvr_logs.txt --fork --replSet set163164

At this point, AutoSharding + Replication is configured successfully. Next, the stability test.
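The test data was presumably inserted through mongos so that chunks can spread across the two shards; a loop like the following would do it (a minimal sketch: the mongos address, document shape and count are my own choices for illustration, and chunks only split and migrate once enough data has been written):

    ./mongo 10.9.3.228:27017/test
    > for (var i = 0; i < 100000; i++) {
    ...     db.users.insert({_id: i, name: "user" + i});
    ... }
    > db.users.count()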

First look at the result:

 

We can see that the inserted data ends up on 163, that 164 holds data of the same size (it replicates 163), and that 165 holds the other part of the data.

Now for the stability test:

The 163 server is disconnected.

Then query through mongos:


    > db.users.find()
    error: { "$err" : "error querying server: 10.10.21.163:27018", "code" : 13633 }
    > db.users.find()
    error: {
            "$err" : "DBClientBase::findOne: transport error: 10.10.21.163:27018 query: { setShardVersion: \"test.users\", configdb: \"10.7.3.228:27019\", version: Timestamp 11000|1, serverID: ObjectId('4e2f64af98dd90fed26585a4'), shard: \"shard0000\", shardHost: \"Maid:27018\" }",
            "code" : 10276
    }
    > db.users.find()
    error: { "$err" : "socket exception", "code" : 11002 }

An error occurred!

Add the 164 server manually!


    > db.runCommand({addshard: "10.10.21.164:27017"});
    {
            "ok" : 0,
            "errmsg" : "host is part of set: set163164 use replica set url format <setname>/<server1>,<server2>,...."
    }

An error occurred!

We can see that this configuration is incorrect!
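The error message asks for the replica set URL format, so the command would have to look something like this (a sketch based only on the error text; as it turns out below, even the corrected command does not solve the real problem):

    > db.runCommand({addshard: "set163164/10.10.21.164:27017"})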

After some time thinking and repeated tests, I found that the problem was with the voting.

The official documentation says the following:

Consensus Vote

For a node to be elected primary, it must receive a majority of votes. This is a majority of all votes in the set: if you have a 5-member set and 4 members are down, a majority of the set is still 3 members (floor(5/2) + 1). Each member of the set receives a single vote and knows the total number of available votes.

If no node can reach a majority, then no primary can be elected and no data can be written to that replica set (although reads to secondaries are still possible).
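Applying the formula to the two-member set used here (my own arithmetic, not part of the quoted documentation):

    majority = floor(2/2) + 1 = 2
    members still up after one failure = 1 < 2  =>  no primary can be elected

So as soon as either 163 or 164 goes down, the surviving member can never win an election on its own, which is exactly why the shard stopped answering.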

So is the problem that only two servers are voting? What should I do then, add another server?

Here we can also use 164 as an arbiter:


    use admin
    var cfg = {_id: "set162163164", members: [
        {_id: 0, host: "10.10.21.162:27018"},
        {_id: 1, host: "10.10.21.163:27017"},
        {_id: 2, host: "10.10.21.164:27017", arbiterOnly: true}
    ]}
    rs.initiate(cfg)
    rs.conf()
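After rs.initiate(), the third member should report itself as an arbiter (a sketch of the check; this output was not captured during the test):

    set162163164:PRIMARY> rs.status()   // 10.10.21.164:27017 should show "stateStr" : "ARBITER"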

228:


    use admin
    // db.runCommand({addshard: "set162163164/10.10.21.162:27018,10.10.21.163:27017,10.10.21.164:27017"})   // add all 3 servers normally
    db.runCommand({addshard: "set162163164/10.10.21.162:27018,10.10.21.163:27017"})   // 164 is an arbiter, so it is not listed
    db.runCommand({addshard: "10.10.21.165:27018"})
    db.runCommand({enableSharding: "test"})
    db.runCommand({shardcollection: "test.users", key: {_id: 1}})

After testing:

Stability is improved: if any one of the 162, 163, or 164 servers is disconnected, the remaining voting members elect a new primary and mongos automatically reconnects to it.
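A quick way to verify this after killing one member (a minimal sketch; I again assume mongos is on 10.9.3.228:27017):

    ./mongo 10.9.3.228:27017/test
    > db.users.count()            // still answered, now served by the newly elected primary
    > db.printShardingStatus()    // shard set162163164 is still listed and serving its chunks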

Final design drawing:
