Single Replication set design:
On the 10.7.3.228 server, only the mongos and config services are started.
- ^_^[root@:/usr/local/mongodb/bin]#cat runServerConfig.sh
- ./mongod --configsvr --dbpath=../data/config --logpath=../data/config.log --fork
- ^_^[root@:/usr/local/mongodb/bin]#cat runServerMongos.sh
- ./mongos --configdb 10.7.3.228:27019 --logpath=../data/mongos.log --logappend --fork
Note: the IP address and port given to mongos via --configdb are the IP address and port of the config service.
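As a quick sanity check (an assumed step, not shown in the article), you can confirm that both forked processes actually came up before continuing, and scan the log tails for startup errors:

```shell
# Assumed check: verify the forked config server and mongos are running,
# then look at the end of their logs for any startup errors.
ps -ef | grep -E 'mongod|mongos' | grep -v grep
tail -n 20 ../data/config.log
tail -n 20 ../data/mongos.log
```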
Advanced Configuration AutoSharding
The shardsvr service on the 163 server has already been started, so only the autoSharding (shardsvr) service on the 165 server still needs to be started:
- [root@localhost bin]# cat runServerShard.sh
- ./mongod --shardsvr --dbpath=../data/mongodb --logpath=../data/shardsvr_logs.txt --fork
Configure sharding on the 228 server:
- > use admin
- > db.runCommand({addshard:"10.10.21.163:27018"});
- { "shardAdded" : "shard0000", "ok" : 1 }
- > db.runCommand({addshard:"10.10.21.165:27018"});
- { "shardAdded" : "shard0001", "ok" : 1 }
- > db.runCommand({enableSharding:"test"})
- { "ok" : 1 }
- > db.runCommand({shardcollection:"test.users",key:{_id:1}})
- { "collectionsharded" : "test.users", "ok" : 1 }
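Before moving on, it is worth verifying the sharding configuration from the same mongos shell. Both commands below are standard MongoDB helpers; this verification step is an addition, not part of the original article:

```javascript
// Confirm that both shards (shard0000, shard0001) were registered
// and that test.users is sharded on _id.
db.runCommand({listshards: 1})
db.printShardingStatus()
```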
Then start the replication service on the 163 and 164 servers. Note that on the 163 server, the same mongod that provides the shard service is started with the replica-set option as well:
163:
- [root@localhost bin]# cat runServerShard.sh
- ./mongod --shardsvr --dbpath=../data/mongodb --logpath=../data/shardsvr_logs.txt --fork --replSet set163164
164:
- [root@localhost bin]# cat runServerShard.sh
- ./mongod --dbpath=../data --logpath=../data/shardsvr_logs.txt --fork --replSet set163164
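Before initiating the set, it can help to confirm that both mongod instances are reachable (an assumed check, not shown in the article; ping is a standard server command):

```shell
# Assumed sanity check: ping both replica-set members before rs.initiate().
# An { "ok" : 1 } reply means the mongod is up and reachable.
./mongo 10.10.21.163:27018 --eval 'printjson(db.runCommand({ping: 1}))'
./mongo 10.10.21.164:27017 --eval 'printjson(db.runCommand({ping: 1}))'
```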
Continue by configuring Replication for 163 and 164:
- [root@localhost bin]# ./mongo 10.10.21.163:27018
- MongoDB shell version: 1.8.2
- connecting to: 10.10.21.163:27018/test
- > cfg={_id:"set163164",members:[
- ... {_id:0,host:"10.10.21.163:27018"},
- ... {_id:1,host:"10.10.21.164:27017"}
- ... ]}
- {
- "_id" : "set163164",
- "members" : [
- {
- "_id" : 0,
- "host" : "10.10.21.163:27018"
- },
- {
- "_id" : 1,
- "host" : "10.10.21.164:27017"
- }
- ]
- }
- > rs.initiate(cfg)
- {
- "info" : "Config now saved locally. Should come online in about a minute.",
- "ok" : 1
- }
- > rs.conf()
- {
- "_id" : "set163164",
- "version" : 1,
- "members" : [
- {
- "_id" : 0,
- "host" : "10.10.21.163:27018"
- },
- {
- "_id" : 1,
- "host" : "10.10.21.164:27017"
- }
- ]
- }
- set163164:PRIMARY>
- set163164:PRIMARY>
- set163164:PRIMARY> show dbs
- admin (empty)
- local 14.1962890625GB
- set163164:PRIMARY> use local
- switched to db local
- set163164:PRIMARY> show collections
- oplog.rs
- system.replset
- set163164:PRIMARY> db.system.replset.find()
- { "_id" : "set163164", "version" : 1, "members" : [
- {
- "_id" : 0,
- "host" : "10.10.21.163:27018"
- },
- {
- "_id" : 1,
- "host" : "10.10.21.164:27017"
- }
- ] }
- set163164:PRIMARY> rs.isMaster()
- {
- "setName" : "set163164",
- "ismaster" : true,
- "secondary" : false,
- "hosts" : [
- "10.10.21.163:27018",
- "10.10.21.164:27017"
- ],
- "maxBsonObjectSize" : 16777216,
- "ok" : 1
- }
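Note that rs.isMaster() only shows the cluster as seen from the current member. The standard rs.status() helper (not used in the article, but available in this shell) reports the health of every member:

```javascript
// Check the state of every member of set163164.
// Expect stateStr "PRIMARY" for 10.10.21.163:27018 and "SECONDARY"
// for 10.10.21.164:27017, with "health" : 1 for both.
rs.status()
```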
So far, the replica set has been configured successfully!
With that, AutoSharding + Replication is fully configured. Next comes a stability test. (Note: sharding should be configured before replication.)
First look at the result:
We can see that the 163 server received the inserted data, that the 164 server (its replica) holds data of the same size, and that the 165 server holds part of the data.
Now for the stability test:
Disconnect the 163 server.
Then query through mongos:
- > db.users.find()
- error: { "$err" : "error querying server: 10.10.21.163:27018", "code" : 13633 }
- > db.users.find()
- error: {
- "$err" : "DBClientBase::findOne: transport error: 10.10.21.163:27018 query: { setShardVersion: \"test.users\", configdb: \"10.7.3.228:27019\", version: Timestamp 11000|1, serverID: ObjectId('4e2f64af98dd90fed26585a4'), shard: \"shard0000\", shardHost: \"10.10.21.163:27018\" }",
- "code" : 10276
- }
- > db.users.find()
- error: { "$err" : "socket exception", "code" : 11002 }
An error occurred!
Manually add the 164 server as a shard:
- > db.runCommand({addshard:"10.10.21.164:27017"});
- {
- "ok" : 0,
- "errmsg" : "host is part of set: set163164 use replica set url format <setname>/<server1>,<server2>,...."
- }
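The error message itself points at the fix: a host that belongs to a replica set must be added in the replica-set URL format <setname>/<server1>,<server2>,.... A sketch of the corrected command, using the set name and members configured above:

```javascript
// Add the whole replica set as one shard instead of a single host.
db.runCommand({addshard: "set163164/10.10.21.163:27018,10.10.21.164:27017"})
```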
An error occurred!
We can see that this way of configuring things is incorrect!
This article is not yet complete; updates will follow.