Architecture diagram:
1. Prepare three machines; their IPs are 192.168.1.201, 192.168.1.202, and 192.168.1.203
2. Download MongoDB on each machine
3. Create five folders on each machine: mongos, config, shard1, shard2, shard3
mongos: entry point for all requests to the cluster (a production environment should run several)
config: config server; stores all of the cluster's metadata (routing and sharding configuration) used by mongos
shard1: shard 1
shard2: shard 2
shard3: shard 3
(3 mongos, 3 config servers, and the data split into 3 shards on 3 shard servers; each shard additionally has one replica and one arbiter, i.e. 3 * 2 = 6 more instances, so 15 instances need to be deployed in total)
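For orientation, this is the layout the steps imply: every machine runs the same five processes, on the ports assigned in step 4 below.

machine          mongos   config   shard1   shard2   shard3
192.168.1.201    20000    21000    22001    22002    22003
192.168.1.202    20000    21000    22001    22002    22003
192.168.1.203    20000    21000    22001    22002    22003

Which shard instance acts as primary, secondary, or arbiter on which machine is determined by the replica set configuration in step 8.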
Create the log and data directories (the mongos process stores no data, so it only needs a log directory):
# create the mongos log directory
mkdir -p mongos/log
# create the config server data and log directories
mkdir -p config/data
mkdir -p config/log
# create the shard1 data and log directories
mkdir -p shard1/data
mkdir -p shard1/log
# create the shard2 data and log directories
mkdir -p shard2/data
mkdir -p shard2/log
# create the shard3 data and log directories
mkdir -p shard3/data
mkdir -p shard3/log
4. Port settings (open the corresponding ports in each machine's firewall, for example: iptables -I INPUT -p tcp --dport 20000 -j ACCEPT; see the sketch after the port list below)
mongos: 20000
config: 21000
shard1: 22001
shard2: 22002
shard3: 22003
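A minimal sketch of the matching firewall rules, assuming iptables is used as in the example above and that all five ports must be reachable from the other machines:

# run on each machine (assumes iptables; adapt if firewalld/ufw is used instead)
iptables -I INPUT -p tcp --dport 20000 -j ACCEPT   # mongos
iptables -I INPUT -p tcp --dport 21000 -j ACCEPT   # config server
iptables -I INPUT -p tcp --dport 22001 -j ACCEPT   # shard1
iptables -I INPUT -p tcp --dport 22002 -j ACCEPT   # shard2
iptables -I INPUT -p tcp --dport 22003 -j ACCEPT   # shard3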
5. Start the config server on each machine
bin/mongod --configsvr --dbpath config/data --port 21000 --logpath config/log/mongod.log --fork
6. Start the mongos server on each machine
bin/mongos --configdb 192.168.1.201:21000,192.168.1.202:21000,192.168.1.203:21000 --port 20000 --logpath mongos/log/mongos.log --fork
7. Start the shard servers (replica set members) on each machine
# on each machine, start the shard1 server (replica set shard1)
bin/mongod --shardsvr --replSet shard1 --port 22001 --dbpath shard1/data --logpath shard1/log/shard1.log --fork --nojournal --oplogSize 10
# on each machine, start the shard2 server (replica set shard2)
bin/mongod --shardsvr --replSet shard2 --port 22002 --dbpath shard2/data --logpath shard2/log/shard2.log --fork --nojournal --oplogSize 10
# on each machine, start the shard3 server (replica set shard3)
bin/mongod --shardsvr --replSet shard3 --port 22003 --dbpath shard3/data --logpath shard3/log/shard3.log --fork --nojournal --oplogSize 10
8. Configure the replica set for each shard separately
# on any one machine
# configure the first shard's replica set: connect to shard1
bin/mongo 192.168.1.201:22001
# switch to the admin database
use admin
# define the replica set configuration
config = { _id: "shard1", members: [
    { _id: 0, host: "192.168.1.201:22001" },
    { _id: 1, host: "192.168.1.202:22001" },
    { _id: 2, host: "192.168.1.203:22001", arbiterOnly: true }
] }
# initialize the replica set
rs.initiate(config);

# configure the second shard's replica set: connect to shard2
bin/mongo 192.168.1.201:22002
use admin
config = { _id: "shard2", members: [
    { _id: 0, host: "192.168.1.201:22002" },
    { _id: 1, host: "192.168.1.202:22002" },
    { _id: 2, host: "192.168.1.203:22002", arbiterOnly: true }
] }
rs.initiate(config);

# configure the third shard's replica set: connect to shard3
bin/mongo 192.168.1.201:22003
use admin
config = { _id: "shard3", members: [
    { _id: 0, host: "192.168.1.201:22003" },
    { _id: 1, host: "192.168.1.202:22003" },
    { _id: 2, host: "192.168.1.203:22003", arbiterOnly: true }
] }
rs.initiate(config);
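An optional sanity check (a sketch; rs.status() is the standard replica set status command in the mongo shell): after rs.initiate() has run, each shard's members should settle into one PRIMARY, one SECONDARY, and one ARBITER.

# while still connected to a shard member, e.g. bin/mongo 192.168.1.201:22001
rs.status()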
9. Configure the shards so that sharding takes effect
# connect to mongos
bin/mongo 192.168.1.201:20000
# switch to the admin database
use admin
# register each shard replica set with the router
db.runCommand({ addshard: "shard1/192.168.1.201:22001,192.168.1.202:22001,192.168.1.203:22001" })
db.runCommand({ addshard: "shard2/192.168.1.201:22002,192.168.1.202:22002,192.168.1.203:22002" })
db.runCommand({ addshard: "shard3/192.168.1.201:22003,192.168.1.202:22003,192.168.1.203:22003" })
# check the shard server configuration
db.runCommand({ listshards: 1 });
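The same information can also be printed with the mongo shell's built-in sharding helper (a sketch, run from the same mongos connection):

sh.status()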
10. Connect to mongos and enable sharding for the specified database and collection
# enable sharding for the testdb database
db.runCommand({ enablesharding: "testdb" });
# specify the collection to shard and its shard key
db.runCommand({ shardcollection: "testdb.person", key: { x: 1 } })
11. Connect a Java program to the cluster
import com.mongodb.MongoClient;
import com.mongodb.ServerAddress;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;
import java.util.ArrayList;
import java.util.List;

// connect through the three mongos routers
List<ServerAddress> addresses = new ArrayList<ServerAddress>();
addresses.add(new ServerAddress("192.168.1.201", 20000));
addresses.add(new ServerAddress("192.168.1.202", 20000));
addresses.add(new ServerAddress("192.168.1.203", 20000));
MongoClient client = new MongoClient(addresses);
MongoDatabase db = client.getDatabase("testdb");
MongoCollection<Document> coll = db.getCollection("person");

long start = System.currentTimeMillis();
System.out.println("start: " + start);

// bulk-insert test documents; x is the shard key field
List<Document> add = new ArrayList<Document>();
for (int i = 1; i < 400000; i++) {
    add.add(new Document().append("x", i).append("name", "daxiong" + i));
}
coll.insertMany(add);

/*
FindIterable<Document> findIterable = coll.find();
for (Document d : findIterable) {
    System.out.println(d.toJson());
}
*/

System.out.println("time consuming: " + (System.currentTimeMillis() - start) / 1000);
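Listing all three mongos routers in the seed list is deliberate: the driver can keep routing requests if one mongos becomes unavailable, in line with the earlier note that a production environment should run several mongos instances.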
# commands to check the sharding status
db.person.stats();
db.printShardingStatus()
12. Commands to start the MongoDB cluster again after the servers have been rebooted
# 1. start each config server
bin/mongod --configsvr --dbpath config/data --port 21000 --logpath config/log/mongod.log --fork
# 2. start each mongos (pointing at all config servers)
bin/mongos --configdb 192.168.1.201:21000,192.168.1.202:21000,192.168.1.203:21000 --port 20000 --logpath mongos/log/mongos.log --fork
# 3. on each machine, start every shard (replica set member or arbiter)
bin/mongod --shardsvr --replSet shard1 --port 22001 --dbpath shard1/data --logpath shard1/log/shard1.log --fork --nojournal --oplogSize 10
bin/mongod --shardsvr --replSet shard2 --port 22002 --dbpath shard2/data --logpath shard2/log/shard2.log --fork --nojournal --oplogSize 10
bin/mongod --shardsvr --replSet shard3 --port 22003 --dbpath shard3/data --logpath shard3/log/shard3.log --fork --nojournal --oplogSize 10
# 4. connect to any mongos, e.g. bin/mongo 192.168.1.201:20000
Reference: MongoDB sharded cluster hands-on setup, http://www.lanceyan.com/tech/arch/mongodb_shard1.html