1: Start the three config server instances
mongod -f /home/mongodb/db27017/mongodb27017.conf
mongod -f /home/mongodb/db27018/mongodb27018.conf
mongod -f /home/mongodb/db27019/mongodb27019.conf
The configuration file is as follows:
verbose = true        # verbose logging
vvvv = true           # log verbosity level
logpath = /home/mongodb/db27019/log/mongodb.log   # log file
logappend = true      # true appends to the log file, false overwrites it
port = 27019          # port number
# maxConns: maximum connections; defaults to the per-system limit, up to 20000
pidfilepath = /home/mongodb/db27019/tmp/mongo.pid # process ID file; none is created if not specified
nounixsocket = false  # when true, no socket file is generated
unixSocketPrefix = /home/mongodb/db27019/tmp      # socket file path; defaults to /tmp
fork = true           # run as a background daemon
dbpath = /home/mongodb/db27019/data               # data directory
noprealloc = false    # preallocate files to ensure write performance; true degrades performance
# nssize: namespace file size; defaults to 16 MB, max 2 GB
# slow query log
profile = 1           # 0 = off, 1 = log slow operations only, 2 = log all operations
slowms = 200          # operations slower than 200 ms are logged
replSet = sharingmxq
configsvr = true
2: Connect to one of the instances and initialize the replica set:
mongo --port 27017
> use admin
> rs.initiate({_id: "sharingmxq", configsvr: true, members: [
    {_id: 0, host: "localhost:27017"},
    {_id: 1, host: "localhost:27018"},
    {_id: 2, host: "localhost:27019"}
  ]})
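The document handed to rs.initiate() can be written out as a plain object first and inspected before running it in the shell. This is a sketch following the article's names (replica set sharingmxq, ports 27017-27019); the configsvr flag mirrors configsvr = true in the instance config files.

```javascript
// Sketch: the replica-set configuration document for rs.initiate().
// Names and ports follow the article; configsvr: true marks this as a
// config-server replica set (required for MongoDB 3.4 config servers).
const rsConfig = {
  _id: 'sharingmxq',
  configsvr: true,
  members: [
    { _id: 0, host: 'localhost:27017' },
    { _id: 1, host: 'localhost:27018' },
    { _id: 2, host: 'localhost:27019' },
  ],
};

// In the mongo shell you would now run: rs.initiate(rsConfig)
console.log(rsConfig.members.map(m => m.host).join(','));
// localhost:27017,localhost:27018,localhost:27019
```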
3: Create the shard replica set
mkdir -p /home/mongodb/db37017/data
mkdir -p /home/mongodb/db37017/tmp
mkdir -p /home/mongodb/db37017/log
mkdir -p /home/mongodb/db37017/key
mkdir -p /home/mongodb/db37018/data
mkdir -p /home/mongodb/db37018/tmp
mkdir -p /home/mongodb/db37018/log
mkdir -p /home/mongodb/db37018/key
mkdir -p /home/mongodb/db37019/data
mkdir -p /home/mongodb/db37019/tmp
mkdir -p /home/mongodb/db37019/log
mkdir -p /home/mongodb/db37019/key
Start:
mongod -f /home/mongodb/db37017/mongodb37017.conf
mongod -f /home/mongodb/db37018/mongodb37018.conf
mongod -f /home/mongodb/db37019/mongodb37019.conf
Note: if you add shards as replica sets, assign each shard a different replSet name and initialize each one first. You can think of a replica set as a single node, but you must specify the primary when adding the shard.
4: Start a mongos instance and connect it to the config database
mongos --configdb sharingmxq/localhost:27017,localhost:27018,localhost:27019 --port=30000
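The `setName/host:port,host:port,...` string passed to --configdb (and, below, to sh.addShard() for a replica-set shard) is a seed list: a replica-set name followed by member addresses. A tiny hypothetical parser makes the format explicit; parseSeedList is not a MongoDB API, just an illustration.

```javascript
// Sketch: split a "replSetName/host:port,host:port,..." seed list into
// its parts. parseSeedList is a hypothetical helper for illustration.
function parseSeedList(s) {
  const [setName, hostPart] = s.split('/');
  return { setName, hosts: hostPart.split(',') };
}

const cfg = parseSeedList('sharingmxq/localhost:27017,localhost:27018,localhost:27019');
console.log(cfg.setName, cfg.hosts.length); // sharingmxq 3
```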
Then add the shard nodes:
mongos> use admin
mongos> sh.addShard("localhost:37017")
{ "shardAdded" : "shard0000", "ok" : 1 }
mongos> sh.addShard("localhost:37018")
{ "shardAdded" : "shard0001", "ok" : 1 }
mongos> sh.addShard("localhost:37019")
{ "shardAdded" : "shard0002", "ok" : 1 }
If the three shard nodes form a replica set, add them as follows:
mongos> sh.addShard("sharingmxq/localhost:37017")
{ "shardAdded" : "sharingmxq", "ok" : 1 }
mongos> sh.addShard("sharingmxq/localhost:37018")
{ "shardAdded" : "sharingmxq", "ok" : 1 }
mongos> sh.addShard("sharingmxq/localhost:37019")
{ "shardAdded" : "sharingmxq", "ok" : 1 }
5: Configure sharding
Connect to mongos (mongo --port=30000) and enable sharding for the database:

mongos> sh.enableSharding("maxiangqian")

Then shard the collections:

mongos> sh.shardCollection("maxiangqian.maxiangqian", {"id": 1})
mongos> sh.shardCollection("aedata.ac01_test", {"id": 1, "idcard": 1})
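Under a compound shard key such as {id: 1, idcard: 1}, documents are ordered by id first and idcard second, and a range-sharded collection is split into chunks along that order. The comparator below is a toy illustration of that ordering, not MongoDB internals:

```javascript
// Sketch: ordering under the compound shard key {id: 1, idcard: 1}.
// compareShardKey is a toy comparator for illustration only: compare on
// id first, then on idcard. Chunk boundaries follow this sort order.
function compareShardKey(a, b) {
  if (a.id !== b.id) return a.id < b.id ? -1 : 1;
  if (a.idcard !== b.idcard) return a.idcard < b.idcard ? -1 : 1;
  return 0;
}

const docs = [
  { id: 2, idcard: 'A' },
  { id: 1, idcard: 'B' },
  { id: 1, idcard: 'A' },
];
docs.sort(compareShardKey);
console.log(docs.map(d => `${d.id}${d.idcard}`).join(' ')); // 1A 1B 2A
```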
6: Verify that sharding works
mongo --port=30000
mongos> use maxiangqian
mongos> for (var i = 1; i <= 100000; i++) db.maxiangqian.save({id: i, "test1": "testval1"});
Log on to each shard server to verify:
The counts on each shard show that sharding has completed. The distribution is uneven because we are using hashed partitioning (as opposed to range partitioning: hashed partitioning scales well, but you cannot control which server a given document lands on). Insert another 900,000 documents:

for (var i = 100000; i <= 1000000; i++) db.maxiangqian.save({id: i, "test1": "testval1"});

Check again on each shard:

--port=37019: 328487
--port=37018: 335552
--port=37017: 335962
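The point about hashed partitioning can be illustrated offline: hashing sequential ids into three buckets gives a near-even spread, while placement of any given id is out of your control. toyHash below is a generic 32-bit integer mixer used only for illustration; MongoDB's hashed shard keys use a different (md5-based) hash.

```javascript
// Sketch: bucket sequential ids 1..1000000 into 3 "shards" by hash.
// toyHash is an assumed generic integer mixer, not MongoDB's real hash;
// the counts come out roughly even, mirroring the article's observation.
function toyHash(id) {
  let h = id | 0;
  h = Math.imul((h >>> 16) ^ h, 0x45d9f3b);
  h = Math.imul((h >>> 16) ^ h, 0x45d9f3b);
  h = (h >>> 16) ^ h;
  return h >>> 0;
}

const counts = [0, 0, 0];
for (let id = 1; id <= 1000000; id++) counts[toyHash(id) % 3] += 1;
console.log(counts); // three counts near 333333, but not controllable per id
```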
MongoDB 3.4 Shard Replica Set configuration