First, the three components of the cluster:
mongos (query router): routes queries, handles client connections, dispatches tasks to the shards, and then collects the results.
Config server: stores the cluster's metadata; the query router uses the config servers' information to decide which shards to send a task to.
Shard server: the shards themselves, which store the data and perform the computation.
Second, the cluster architecture diagram:
Third, IP and port planning of the cluster:

| Service | 192.168.141.201 | 192.168.141.202 | 192.168.141.203 |
| ------- | --------------- | --------------- | --------------- |
| Router | mongos (17017) | mongos (17017) | |
| Config | Config Server1 (27017) | Config Server2 (27017) | Config Server3 (27017) |
| Shard | shard1 primary (37017) | shard2 primary (47017) | shard3 primary (57017) |
| | shard2 secondary (47017) | shard1 secondary (37017) | shard1 secondary (37017) |
| | shard3 secondary (57017) | shard3 secondary (57017) | shard2 secondary (47017) |
Fourth, directory planning for the cluster on Linux:
Fifth, building the cluster:
1. Download the software: https://www.mongodb.com/download-center#community
The version used here is mongodb-linux-x86_64-rhel62-3.2.10.tgz
2. Create the directories:
mkdir -p /home/mongo/{config,router,shard}
mkdir -p /home/mongo/config/{data,logs}
mkdir -p /home/mongo/router/logs
mkdir -p /home/mongo/shard/{data,logs}
mkdir -p /home/mongo/shard/data/{shard1,shard2,shard3}
3. Unzip and copy:
Unzip the archive and copy its contents into each of the three directories: config, router, and shard.
4. Configure the config servers
1) Create the configuration file the config instance needs and start the instance. Perform the following steps on each server in turn.
[root@mini1 ~]# cd /home/mongo/config/
[root@mini1 config]# vi mongo.config
dbpath=/home/mongo/config/data
logpath=/home/mongo/config/logs/mongo.log
port=27017
logappend=true
fork=true
quiet=true
journal=true
configsvr=true
replSet=configrs/192.168.141.201:27017,192.168.141.202:27017,192.168.141.203:27017
## Start the instance (on each server)
[root@mini1 bin]# cd /home/mongo/config/bin/
[root@mini1 bin]# ./mongod -f /home/mongo/config/mongo.config
2) Initialize the config servers. Log in to any one of the servers and configure the config server replica set.
[root@mini1 bin]# ./mongo --port 27017
rs.initiate({_id: "configrs", configsvr: true, members: [{_id: 1, host: "192.168.141.201:27017", priority: 2}, {_id: 2, host: "192.168.141.202:27017"}, {_id: 3, host: "192.168.141.203:27017"}]})
{ "ok" : 1 }
Note: Use rs.status() to view the status of the replica set.
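As a quick scripted health check, the member states can also be printed from the shell; a minimal sketch:

rs.status().members.forEach(function (m) {
    // expect one PRIMARY and two SECONDARY members
    print(m.name + " -> " + m.stateStr);
});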
5. Start the mongos server (routing service)
[root@mini1 /]# cd /home/mongo/router/bin/
[root@mini1 bin]# ./mongos --configdb configrs/192.168.141.201:27017,192.168.141.202:27017,192.168.141.203:27017 --port 17017 --fork --logpath=/home/mongo/router/logs/mongos.log
6. Configure the shard servers (shard service)
1) Create the configuration file each shard instance needs and start the instances. Perform the following steps on each server in turn.
[root@mini1 ~]# cd /home/mongo/shard/
[root@mini1 shard]# vi shard1.config
dbpath=/home/mongo/shard/data/shard1
logpath=/home/mongo/shard/logs/shard1.log
port=37017
logappend=true
fork=true
quiet=true
journal=true
shardsvr=true
replSet=shard1rs/192.168.141.201:37017,192.168.141.202:37017,192.168.141.203:37017
[root@mini1 shard]# vi shard2.config
dbpath=/home/mongo/shard/data/shard2
logpath=/home/mongo/shard/logs/shard2.log
port=47017
logappend=true
fork=true
quiet=true
journal=true
shardsvr=true
replSet=shard2rs/192.168.141.201:47017,192.168.141.202:47017,192.168.141.203:47017
[root@mini1 shard]# vi shard3.config
dbpath=/home/mongo/shard/data/shard3
logpath=/home/mongo/shard/logs/shard3.log
port=57017
logappend=true
fork=true
quiet=true
journal=true
shardsvr=true
replSet=shard3rs/192.168.141.201:57017,192.168.141.202:57017,192.168.141.203:57017
## Start the instances (on each server). Start shard1 on the first server, shard2 on the second server, and shard3 on the third server; once those are up, start the remaining two instances on each server.
[root@mini1 /]# cd /home/mongo/shard/bin/
[root@mini1 bin]# ./mongod -f /home/mongo/shard/shard1.config
[root@mini1 bin]# ./mongod -f /home/mongo/shard/shard2.config
[root@mini1 bin]# ./mongod -f /home/mongo/shard/shard3.config
2) Initialize the shard servers. Log in to any one of the servers and configure each shard's replica set.
[root@mini1 bin]# ./mongo 192.168.141.201:37017
> rs.initiate({_id: "shard1rs", members: [{_id: 1, host: "192.168.141.201:37017", priority: 2}, {_id: 2, host: "192.168.141.202:37017"}, {_id: 3, host: "192.168.141.203:37017"}]})
{ "ok" : 1 }
[root@mini1 bin]# ./mongo 192.168.141.201:47017
> rs.initiate({_id: "shard2rs", members: [{_id: 1, host: "192.168.141.202:47017", priority: 2}, {_id: 2, host: "192.168.141.201:47017"}, {_id: 3, host: "192.168.141.203:47017"}]})
{ "ok" : 1 }
[root@mini1 bin]# ./mongo 192.168.141.201:57017
> rs.initiate({_id: "shard3rs", members: [{_id: 1, host: "192.168.141.203:57017", priority: 2}, {_id: 2, host: "192.168.141.201:57017"}, {_id: 3, host: "192.168.141.202:57017"}]})
{ "ok" : 1 }
7. Configure sharding
[root@mini1 /]# cd /home/mongo/router/bin/
[root@mini1 bin]# ./mongo --port 17017
> use admin
> db.runCommand({"addShard": "shard1rs/192.168.141.201:37017", "maxSize": 1024})
> db.runCommand({"addShard": "shard2rs/192.168.141.202:47017", "maxSize": 1024})
> db.runCommand({"addShard": "shard3rs/192.168.141.203:57017", "maxSize": 1024})
Note: Use the command db.runCommand({listshards: 1}) to view the status information of the shards.
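The same check is easy to script; a minimal sketch that prints each shard's name and hosts:

// run from the mongos shell (./mongo --port 17017)
var res = db.getSiblingDB("admin").runCommand({listshards: 1});
res.shards.forEach(function (s) {
    print(s._id + " -> " + s.host);  // e.g. shard1rs -> shard1rs/192.168.141.201:37017,...
});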
To determine whether the current connection is to a sharded cluster:
db.runCommand({isdbgrid: 1});
If it is:
{ "isdbgrid" : 1, "hostname" : "xxxhost", "ok" : 1 }
If it is not:
{
  "ok" : 0,
  "errmsg" : "no such cmd: isdbgrid",
  "code" : 59,
  "bad cmd" : {
    "isdbgrid" : 1
  }
}
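This makes a handy guard for scripts that must only run against a mongos; a minimal sketch:

var res = db.getSiblingDB("admin").runCommand({isdbgrid: 1});
if (res.ok === 1) {
    print("connected to a mongos on " + res.hostname);
} else {
    // on a plain mongod the command does not exist and ok is 0
    print("not a sharded cluster: " + res.errmsg);
}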
To delete a shard:
use admin
db.runCommand({removeShard: "mongodb0"})
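Removing a shard is not immediate: the balancer first drains its chunks to the remaining shards, and removeShard must be re-issued to check progress until the state reaches "completed". A minimal polling sketch, using the shard name from the example above:

// run from the mongos shell, against the admin database
var res = db.getSiblingDB("admin").runCommand({removeShard: "mongodb0"});
while (res.state !== "completed") {
    printjson(res);  // state is "started" on the first call, then "ongoing"
    sleep(5000);     // wait five seconds between checks
    res = db.getSiblingDB("admin").runCommand({removeShard: "mongodb0"});
}
print("shard removed");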
To remove a member from a replica set:
1. Log in to the shard's primary with the mongo shell and remove the specified member, for example:
rs.remove("15.62.32.123:27021")
# To add a new node to one of the cluster's shards, the steps are as follows:
1. Configure a static IP on the new node machine.
2. Download the MongoDB package to the new node and create the log directory, data directory, and keyfile directory.
3. Open the relevant firewall ports (such as 27017, 27018, etc.).
4. Start the mongod instance that will join the shard (replica set), for example:
mongod -f /usr/local/mongodb/bin/rs1_member.conf
5. Log in to the shard's primary with the mongo shell and add the new node to the replica set, for example:
rs.add({host: "15.62.32.111:27018", priority: 1})
rs.add({host: "15.62.32.109:30001", arbiterOnly: true})  // add an arbiter node
rs.add({host: "15.62.32.109:30003", arbiterOnly: true})  // add an arbiter node
rs.add({host: "15.62.32.123:27021", priority: 0, hidden: true})  // add a hidden node
rs.add({host: "15.62.32.123:27021", priority: 0, hidden: true, slaveDelay: 259200})  // add a delayed node
# To add a new user to the cluster, for example:
First log in to the cluster with a user that has the userAdmin role, then execute the following commands:
use admin
db.createUser(
{
  "user": "backupuser",
  "pwd": "123",
  roles: [{role: "backup", db: "admin"}]
}
)
db.auth("backupuser", "123")  // make the new user take effect
At this point, a new user that can back up the entire cluster has been created.
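To verify the user and its roles, query it back from the admin database; a minimal sketch:

// run from the mongos shell after authenticating
printjson(db.getSiblingDB("admin").getUser("backupuser"));  // should show the backup role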
# To grant additional privileges to a cluster user, for example:
use admin
db.grantRolesToUser(
  "pubuser",
  [{role: "readWrite", db: "Philippines"},
   {role: "readWrite", db: "Italy"},
   {role: "readWrite", db: "India"},
   {role: "readWrite", db: "Japan"}]
)
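The inverse operation uses the same shape; a minimal sketch revoking one of the roles granted above:

use admin
db.revokeRolesFromUser(
  "pubuser",
  [{role: "readWrite", db: "Japan"}]
)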
(1) db.printShardingStatus(): shard storage information for every database, including the number of chunks and the shard key.
(2) db.collection.getShardDistribution(): how a collection's data is distributed across the shards.
(3) sh.status(): displays information about all databases in the mongos cluster, including shard key information.
(4) Show only the shards:
use config; db.shards.find()
{ "_id" : "shard0000", "host" : "xxhost:10001" }
{ "_id" : "shard0001", "host" : "yyhost:10002" }
(5) Enable sharding for a database:
./bin/mongo --port 20000
mongos> use admin
switched to db admin
mongos> db.runCommand({"enablesharding": "test"})
{ "ok" : 1 }
To turn on sharding for the user collection:
mongos> db.runCommand({"shardcollection": "test.user", "key": {"_id": 1}})
{ "collectionsharded" : "test.user", "ok" : 1 }
MongoDB enables the autobalancer by default.
The balancer is the load-balancing tool for a sharded cluster. It is turned on by default when a new cluster is created, unless it has been disabled in the config database:
config> db.settings.find()
{ "_id" : "chunksize", "value" : 64 }
{ "_id" : "balancer", "activeWindow" : { "start" : "", "stop" : "19:30" }, "stopped" : false }
activeWindow specifies the time window in which the autobalancer performs balancing.
stopped indicates whether the autobalancer is disabled.
To start the balancer manually: sh.startBalancer()
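Other common balancer controls from the mongos shell, for reference (the window times below are illustrative):

sh.getBalancerState()        // true if the balancer is enabled
sh.setBalancerState(false)   // disable automatic balancing
sh.startBalancer()           // enable it again
// restrict balancing to a nightly window
use config
db.settings.update(
    {_id: "balancer"},
    {$set: {activeWindow: {start: "23:00", stop: "06:00"}}},
    true  // upsert in case the document does not exist yet
)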