MongoDB AutoSharding + Replica Sets Stability Test


Single replica set design:

10.9.3.228 runs only two services: mongos and the config server.

^_^[root@:/usr/local/mongodb/bin]# cat runServerConfig.sh
./mongod --configsvr --dbpath=../data/config --logpath=../data/config.log --fork
^_^[root@:/usr/local/mongodb/bin]# cat runServerMongos.sh
./mongos --configdb 10.7.3.228:27019 --logpath=../data/mongos.log --logappend --fork

Note: the IP and port passed to mongos (--configdb) are those of the config server.

First, configure AutoSharding.

The shard server on 163 is already running, so we only need to start the sharding service on server 165:

[root@localhost bin]# cat runServerShard.sh
./mongod --shardsvr --dbpath=../data/mongodb --logpath=../data/shardsvr_logs.txt --fork

Configure replication between 163 and 164:

[root@localhost bin]# ./mongo 10.10.21.163:27018
MongoDB shell version: 1.8.2
connecting to: 10.10.21.163:27018/test
> cfg = {_id:"set163164", members:[
... {_id:0, host:"10.10.21.163:27018"},
... {_id:1, host:"10.10.21.164:27017"}
... ]}
{
        "_id" : "set163164",
        "members" : [
                { "_id" : 0, "host" : "10.10.21.163:27018" },
                { "_id" : 1, "host" : "10.10.21.164:27017" }
        ]
}
> rs.initiate(cfg)
{
        "info" : "Config now saved locally.  Should come online in about a minute.",
        "ok" : 1
}
> rs.conf()
{
        "_id" : "set163164",
        "version" : 1,
        "members" : [
                { "_id" : 0, "host" : "10.10.21.163:27018" },
                { "_id" : 1, "host" : "10.10.21.164:27017" }
        ]
}
set163164:PRIMARY> show dbs
admin   (empty)
local   14.1962890625GB
set163164:PRIMARY> use local
switched to db local
set163164:PRIMARY> show collections
oplog.rs
system.replset
set163164:PRIMARY> db.system.replset.find()
{ "_id" : "set163164", "version" : 1, "members" : [ { "_id" : 0, "host" : "10.10.21.163:27018" }, { "_id" : 1, "host" : "10.10.21.164:27017" } ] }
set163164:PRIMARY> rs.isMaster()
{
        "setName" : "set163164",
        "ismaster" : true,
        "secondary" : false,
        "hosts" : [
                "10.10.21.163:27018",
                "10.10.21.164:27017"
        ],
        "maxBsonObjectSize" : 16777216,
        "ok" : 1
}

At this point the replica set is configured successfully!

Next, perform the corresponding sharding configuration on server 228:

use admin

> db.runCommand({addshard:"set163164/10.10.21.163:27018,10.10.21.165:27018"});
{ "shardAdded" : "set163164", "ok" : 1 }
> db.runCommand({enableSharding:"test"})
{ "ok" : 1 }
> db.runCommand({shardcollection:"test.users", key:{_id:1}})
{ "collectionsharded" : "test.users", "ok" : 1 }
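To confirm the sharding configuration took effect, the cluster state can be inspected from the mongos shell. This is a sketch against a live cluster, not captured output from this test:

```javascript
// Run from the mongos shell; requires a running cluster.
use admin
db.printShardingStatus()   // lists registered shards, sharded databases, and chunk distribution
```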

Then start the replication service on 163 and 164 respectively; 163 also needs to run the shard service:

163:

[root@localhost bin]# cat runServerShard.sh
./mongod --shardsvr --dbpath=../data/mongodb --logpath=../data/shardsvr_logs.txt --fork --replSet set163164

164:

[root@localhost bin]# cat runServerShard.sh
./mongod --dbpath=../data --logpath=../data/shardsvr_logs.txt --fork --replSet set163164

AutoSharding + replication is now configured. Next comes the stability-testing phase.
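The article does not show how the 20 million test documents were loaded; a minimal mongo-shell sketch might look like the following (the field names are my assumption, not taken from the original test):

```javascript
// Hypothetical data loader, run against mongos. In practice a batched
// or driver-side loader would be much faster than this naive loop.
use test
for (var i = 0; i < 20000000; i++) {
    db.users.insert({name: "user" + i, createdAt: new Date()});
}
db.users.count()   // should report 20000000 once the loop completes
```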

First, the results:

As shown, 20 million documents were inserted in total; 163 and 164 hold identical data, while 165 holds its own shard's portion of the data.

Now for the stability test:

Shut down server 163.

Then query through mongos:

> db.users.find()
error: { "$err" : "error querying server: 10.10.21.163:27018", "code" : 13633 }
> db.users.find()
error: {
        "$err" : "DBClientBase::findOne: transport error: 10.10.21.163:27018 query: { setShardVersion: \"test.users\", configdb: \"10.7.3.228:27019\", version: Timestamp 11000|1, serverID: ObjectId('4e2f64af98dd90fed26585a4'), shard: \"shard0000\", shardHost: \"10.10.21.163:27018\" }",
        "code" : 10276
}
> db.users.find()
error: { "$err" : "socket exception", "code" : 11002 }

The queries fail outright!

Next, try manually adding server 164:

> db.runCommand({addshard:"10.10.21.164:27017"});
{
        "ok" : 0,
        "errmsg" : "host is part of set: set163164 use replica set url format <setname>/<server1>,<server2>,...."
}

Still an error!

Clearly this configuration is flawed!

After some thought and repeated testing, I began to suspect the problem lay in the voting mechanism.

The official documentation has this passage:

Consensus Vote

For a node to be elected primary, it must receive a majority of votes. This is a majority of all votes in the set: if you have a 5-member set and 4 members are down, a majority of the set is still 3 members (floor(5/2)+1). Each member of the set receives a single vote and knows the total number of available votes.

If no node can reach a majority, then no primary can be elected and no data can be written to that replica set (although reads to secondaries are still possible).
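The majority rule quoted above can be sketched numerically. `majority` is my own helper name, not a MongoDB API; it shows why a two-member set loses its primary as soon as either member dies:

```javascript
// Votes needed to elect a primary: floor(totalVotes / 2) + 1.
function majority(totalVotes) {
    return Math.floor(totalVotes / 2) + 1;
}

// With 2 members, a majority is 2 votes, so losing either member leaves
// the survivor one vote short: no primary can be elected.
// Adding a third voter (even a data-less arbiter) drops the bar to 2 of 3,
// which the two surviving members can still reach after one failure.
console.log(majority(2)); // 2
console.log(majority(3)); // 2
console.log(majority(5)); // 3
```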

So could voting with only two servers be the problem? What happens if we add a third?

Here, 164 can also serve as an arbiter:

use admin
var cfg = {_id:"set162163164", members:[
    {_id:0, host:"10.10.21.162:27018"},
    {_id:1, host:"10.10.21.163:27017"},
    {_id:2, host:"10.10.21.164:27017", arbiterOnly:true}
]}
rs.initiate(cfg)
rs.conf()

228:

use admin
// db.runCommand({addshard:"set162163164/10.10.21.162:27018,10.10.21.163:27017,10.10.21.164:27017"})   // normal add of all 3 members
db.runCommand({addshard:"set162163164/10.10.21.162:27018,10.10.21.163:27017"})   // with 164 as arbiter
db.runCommand({addshard:"10.10.21.165:27018"})
db.runCommand({enableSharding:"test"})
db.runCommand({shardcollection:"test.users", key:{_id:1}})

Experimental result:

Stability has improved: after shutting down any one of servers 162, 163, or 164, mongos automatically reconnects to whichever voting member the remaining nodes elect as primary.
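After a failover, which member won the election can be checked from any surviving member. These are standard mongo-shell helpers; the output depends on the live cluster, so none is shown here:

```javascript
// Run in the mongo shell against any surviving replica-set member.
rs.status()     // "stateStr" shows PRIMARY / SECONDARY / ARBITER for each member
rs.isMaster()   // "ismaster" and "primary" identify the current primary
```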

Final design diagram:
