A MongoDB replica set solves fault tolerance and the single point of failure, but the storage and load capacity of a single machine are limited; sharding exists for mass storage and dynamic scaling. Combining the two gives the replica set + sharding high-availability architecture.
A sharding cluster consists of three main components:
Shards: each shard is a replica set, with automatic replication, fault tolerance, and recovery; in a development environment a single mongod is enough.
Config servers: store the cluster metadata, including basic information and chunk information for each shard; a production environment should run at least 3 config servers.
mongos: the entry point of the cluster; clients interact with it, and it is responsible for routing each request to the appropriate shard.
Building the MongoDB replica set + sharding environment:
Topology map:
Note: because this is a local test, everything is deployed on one machine, and both mongos and the config server run as a single instance.
Configuration:
Replica set rep1 configuration:
#master.conf configuration
dbpath=/data/mongo_cluster/master
logpath=/data/mongo_cluster/master/logs/master.log
pidfilepath=/data/mongo_cluster/master/master.pid
logappend=true
replSet=rep1
bind_ip=192.168.15.130
port=10000
fork=true
journal=true
shardsvr=true
#slave.conf configuration
dbpath=/data/mongo_cluster/slave
logpath=/data/mongo_cluster/slave/logs/slave.log
pidfilepath=/data/mongo_cluster/slave/slave.pid
logappend=true
replSet=rep1
bind_ip=192.168.15.130
port=10001
fork=true
journal=true
shardsvr=true
#arbiter.conf configuration
dbpath=/data/mongo_cluster/arbiter
logpath=/data/mongo_cluster/arbiter/logs/arbiter.log
pidfilepath=/data/mongo_cluster/arbiter/arbiter.pid
logappend=true
replSet=rep1
bind_ip=192.168.15.130
port=10002
fork=true
journal=true
shardsvr=true
Replica set rep2 configuration:
#master.conf configuration
dbpath=/data/mongo_cluster2/master
logpath=/data/mongo_cluster2/master/logs/master.log
pidfilepath=/data/mongo_cluster2/master/master.pid
logappend=true
replSet=rep2
bind_ip=192.168.15.130
port=10004
fork=true
journal=true
shardsvr=true
#slave.conf configuration
dbpath=/data/mongo_cluster2/slave
logpath=/data/mongo_cluster2/slave/logs/slave.log
pidfilepath=/data/mongo_cluster2/slave/slave.pid
logappend=true
replSet=rep2
bind_ip=192.168.15.130
port=10005
fork=true
journal=true
shardsvr=true
#arbiter.conf configuration
dbpath=/data/mongo_cluster2/arbiter
logpath=/data/mongo_cluster2/arbiter/logs/arbiter.log
pidfilepath=/data/mongo_cluster2/arbiter/arbiter.pid
logappend=true
replSet=rep2
bind_ip=192.168.15.130
port=10006
fork=true
journal=true
shardsvr=true
Then initialize each replica set's master, standby, and arbiter (quorum) nodes as in a normal replica set deployment.
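A minimal sketch of that initialization step, assuming the hosts and ports configured above (the priority values are an assumption to keep the intended master primary; run this once per replica set in a mongo shell connected to that set's master):

```
// Connect first: /usr/local/mongodb3.0.5/bin/mongo 192.168.15.130:10000
// Initialize rep1; arbiterOnly marks the quorum node.
rs.initiate({
  _id: "rep1",
  members: [
    { _id: 0, host: "192.168.15.130:10000", priority: 2 },
    { _id: 1, host: "192.168.15.130:10001", priority: 1 },
    { _id: 2, host: "192.168.15.130:10002", arbiterOnly: true }
  ]
});
// Repeat for rep2 with _id: "rep2" and ports 10004/10005/10006.
rs.status();  // verify the PRIMARY/SECONDARY/ARBITER roles
```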
Config server configuration:
#configsvr.conf configuration
dbpath=/data/mongo_config_server
logpath=/data/mongo_config_server/logs/configsvr.log
pidfilepath=/data/mongo_config_server/configsvr.pid
logappend=true
bind_ip=192.168.15.130
port=10007
fork=true
journal=true
configsvr=true
mongos configuration:
#mongos.conf configuration
logpath=/data/mongos/logs/mongos.log
pidfilepath=/data/mongos/mongos.pid
logappend=true
bind_ip=192.168.15.130
port=10008
fork=true
configdb=192.168.15.130:10007
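mongod and mongos will not create their dbpath and log directories, so they must exist before anything is started. A small sketch covering the /data layout used throughout this post (MONGO_BASE is a hypothetical override, not part of the original setup):

```shell
# Create every dbpath and log directory referenced by the config files above.
BASE="${MONGO_BASE:-/data}"
for rs in mongo_cluster mongo_cluster2; do
  for node in master slave arbiter; do
    mkdir -p "$BASE/$rs/$node/logs"
  done
done
mkdir -p "$BASE/mongo_config_server/logs" "$BASE/mongos/logs"
echo "created directories under $BASE"
```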
Start:
#Start the shards
/usr/local/mongodb3.0.5/bin/mongod -f /data/mongo_cluster/master/master.conf
/usr/local/mongodb3.0.5/bin/mongod -f /data/mongo_cluster/slave/slave.conf
/usr/local/mongodb3.0.5/bin/mongod -f /data/mongo_cluster/arbiter/arbiter.conf
/usr/local/mongodb3.0.5/bin/mongod -f /data/mongo_cluster2/master/master.conf
/usr/local/mongodb3.0.5/bin/mongod -f /data/mongo_cluster2/slave/slave.conf
/usr/local/mongodb3.0.5/bin/mongod -f /data/mongo_cluster2/arbiter/arbiter.conf
#Start the config server
/usr/local/mongodb3.0.5/bin/mongod -f /data/mongo_config_server/configsvr.conf
#Start mongos
/usr/local/mongodb3.0.5/bin/mongos -f /data/mongos/mongos.conf
Configure sharding rules through mongos:
(1) Connect mongos to the replica sets
#Enter the mongos shell
/usr/local/mongodb3.0.5/bin/mongo 192.168.15.130:10008
#Add the replica sets as shards
mongos> db.runCommand({ addShard: "rep1/192.168.15.130:10001" });
{ "shardAdded" : "rep1", "ok" : 1 }
mongos> db.runCommand({ addShard: "rep2/192.168.15.130:10004" });
{ "shardAdded" : "rep2", "ok" : 1 }
#View the shard configuration
mongos> db.runCommand({ listShards: 1 });
(2) Set the sharding rules and test
#Enable sharding on the testdb database
mongos> db.runCommand({ enableSharding: "testdb" });
#Specify the collection to shard and its shard key
mongos> db.runCommand({ shardCollection: "testdb.table1", key: { id: 1 } });
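The mongo shell also provides helper wrappers for these two commands; a sketch equivalent to the runCommand calls above:

```
sh.enableSharding("testdb");                     // same as { enableSharding: ... }
sh.shardCollection("testdb.table1", { id: 1 });  // same as { shardCollection: ..., key: ... }
```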
#Test
mongos> use testdb
#Insert test data
mongos> for (var i = 1; i <= 1000; i++) db.table1.save({ id: i, "test1": "testval1" });
#View the sharding status
mongos> db.table1.stats();
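Besides stats(), two shell helpers summarize how the 1000 test documents ended up distributed between rep1 and rep2; a quick sketch:

```
// Chunk ranges and their owning shards for every sharded collection:
sh.status();
// Per-shard document count and size breakdown for this collection:
db.table1.getShardDistribution();
```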