MongoDB: Highly Available Sharded Cluster Configuration

First, plan the ports and IP addresses
As the layout below shows, taking any one data-bearing member (non-arbiter node) from each replica set together yields a complete copy of the data.

1. First replica set rs1

share1   127.0.0.1:30011   c:/data/share_rs1/share1/data/
share2   127.0.0.1:40011   c:/data/share_rs1/share2/data/
share3   127.0.0.1:50011   c:/data/share_rs1/share3/data/

2. Second replica set rs2

share1   127.0.0.1:30012   c:/data/share_rs2/share1/data/
share2   127.0.0.1:40012   c:/data/share_rs2/share2/data/
share3   127.0.0.1:50012   c:/data/share_rs2/share3/data/

3. Third replica set rs3

share1   127.0.0.1:30013   c:/data/share_rs3/share1/data/
share2   127.0.0.1:40013   c:/data/share_rs3/share2/data/
share3   127.0.0.1:50013   c:/data/share_rs3/share3/data/
4. Config servers

config1  127.0.0.1:30002   c:/data/config/config1/data/
config2  127.0.0.1:40002   c:/data/config/config2/data/
config3  127.0.0.1:50002   c:/data/config/config3/data/
5. mongos routers

mongos1  127.0.0.1:30001   c:/data/mongos/mongos1/data/
mongos2  127.0.0.1:40001   c:/data/mongos/mongos2/data/
mongos3  127.0.0.1:50001   c:/data/mongos/mongos3/data/
Second, create the corresponding directories
mkdir -p c:/data/{share_rs1,share_rs2,share_rs3}/{share1,share2,share3}/{data,log}
mkdir -p c:/data/mongos/{mongos1,mongos2,mongos3}/{data,log}
mkdir -p c:/data/config/{config1,config2,config3}/{data,log}
Third, write the configuration files for the config servers and mongos (for the other instances, change the port, IP, and paths by analogy)
[mongo@mongo config1]$ cat mongo.conf
dbpath=c:/data/config/config1/data/
logpath=c:/data/config/config1/log/mongo.log
logappend=true
port=30002
rest=true
httpinterface=true
configsvr=true

[mongo@mongo mongos1]$ cat mongo.conf
logpath=c:/data/mongos/mongos1/log/mongo.log
logappend=true
port=30001
configdb=127.0.0.1:30002,127.0.0.1:40002,127.0.0.1:50002
chunkSize=1

--> chunkSize is in MB; 1 MB is far below the default of 64 MB and is used here only so that chunk splitting and migration can be observed quickly during testing.
Fourth, start the config servers on the three nodes in turn, then the mongos servers

mongod -f c:/data/config/config1/mongo.conf
mongod -f c:/data/config/config2/mongo.conf
mongod -f c:/data/config/config3/mongo.conf

mongos -f c:/data/mongos/mongos1/mongo.conf
mongos -f c:/data/mongos/mongos2/mongo.conf
mongos -f c:/data/mongos/mongos3/mongo.conf
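Before going further, it is worth confirming that a mongos is actually reachable. A minimal check, assuming mongos1 is listening on 127.0.0.1:30001 as configured above:

mongo 127.0.0.1:30001/admin
// a ping should return { "ok" : 1 } once mongos is up and can reach the config servers
db.runCommand({ ping: 1 })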

Fifth, write the configuration files for the shard mongod instances (for the other members, change the port, IP, and paths by analogy). All members of the same shard's replica set must use the same replica set name, i.e. the same replSet value.
One member of the first replica set:
[mongo@mongo share_rs1]$ cat share1/mongo.conf
dbpath=c:/data/share_rs1/share1/data
logpath=c:/data/share_rs1/share1/log/mongo.log
logappend=true
port=30011
rest=true
httpinterface=true
replSet=rs1
shardsvr=true

One member of the second replica set:
[mongo@mongo share_rs2]$ cat share1/mongo.conf
dbpath=c:/data/share_rs2/share1/data
logpath=c:/data/share_rs2/share1/log/mongo.log
logappend=true
port=30012
rest=true
httpinterface=true
replSet=rs2
shardsvr=true

One member of the third replica set:
[mongo@mongo share_rs3]$ cat share1/mongo.conf
dbpath=c:/data/share_rs3/share1/data
logpath=c:/data/share_rs3/share1/log/mongo.log
logappend=true
port=30013
rest=true
httpinterface=true
replSet=rs3
shardsvr=true
Sixth, start each shard member and its replicas
mongod -f c:/data/share_rs1/share1/mongo.conf
mongod -f c:/data/share_rs1/share2/mongo.conf
mongod -f c:/data/share_rs1/share3/mongo.conf
mongod -f c:/data/share_rs2/share1/mongo.conf
mongod -f c:/data/share_rs2/share2/mongo.conf
mongod -f c:/data/share_rs2/share3/mongo.conf
mongod -f c:/data/share_rs3/share1/mongo.conf
mongod -f c:/data/share_rs3/share2/mongo.conf
mongod -f c:/data/share_rs3/share3/mongo.conf

[mongo@mongo share_rs]$ ps -ef | grep mongo | grep -v grep
mongo 2480 1 0 12:50 ? 00:00:03 mongod -f c:/data/share_rs1/share1/mongo.conf
mongo 2506 1 0 12:50 ? 00:00:03 mongod -f c:/data/share_rs1/share2/mongo.conf
mongo 2532 1 0 12:50 ? 00:00:02 mongod -f c:/data/share_rs1/share3/mongo.conf
mongo 2558 1 0 12:50 ? 00:00:03 mongod -f c:/data/share_rs2/share1/mongo.conf
mongo 2584 1 0 12:50 ? 00:00:03 mongod -f c:/data/share_rs2/share2/mongo.conf
mongo 2610 1 0 12:50 ? 00:00:02 mongod -f c:/data/share_rs2/share3/mongo.conf
mongo 2636 1 0 12:50 ? 00:00:01 mongod -f c:/data/share_rs3/share1/mongo.conf
mongo 2662 1 0 12:50 ? 00:00:01 mongod -f c:/data/share_rs3/share2/mongo.conf
mongo 2688 1 0 12:50 ? 00:00:01 mongod -f c:/data/share_rs3/share3/mongo.conf
mongo 3469 1 0 13:17 ? 00:00:00 mongod -f c:/data/config/config1/mongo.conf
mongo 3485 1 0 13:17 ? 00:00:00 mongod -f c:/data/config/config2/mongo.conf
mongo 3513 1 0 13:17 ? 00:00:00 mongod -f c:/data/config/config3/mongo.conf
mongo 3535 1 0 13:18 ? 00:00:00 mongos -f c:/data/mongos/mongos1/mongo.conf
mongo 3629 1 0 13:22 ? 00:00:00 mongos -f c:/data/mongos/mongos2/mongo.conf
mongo 3678 1 0 13:22 ? 00:00:00 mongos -f c:/data/mongos/mongos3/mongo.conf
Seventh, set up the replica sets
1. Log in to one member of the first replica set and configure the set:
mongo 127.0.0.1:30011/admin
config = { _id: "rs1", members: [
    { _id: 0, host: "127.0.0.1:30011" },
    { _id: 1, host: "127.0.0.1:40011" },
    { _id: 2, host: "127.0.0.1:50011", arbiterOnly: true }
  ]
}

--> Note: the _id "rs1" must be the same as the replica set name, i.e. the replSet value in the member configuration files
rs.initiate(config)
{ "ok" : 1 } --> this output means the initialization succeeded

2. Log in to one member of the second replica set and configure the set:

mongo 127.0.0.1:30012/admin
config = { _id: "rs2", members: [
    { _id: 0, host: "127.0.0.1:30012" },
    { _id: 1, host: "127.0.0.1:40012" },
    { _id: 2, host: "127.0.0.1:50012", arbiterOnly: true }
  ]
}
rs.initiate(config)
{ "ok" : 1 } --> this output means the initialization succeeded

3. Log in to one member of the third replica set and configure the set:

mongo 127.0.0.1:30013/admin
config = { _id: "rs3", members: [
    { _id: 0, host: "127.0.0.1:30013" },
    { _id: 1, host: "127.0.0.1:40013" },
    { _id: 2, host: "127.0.0.1:50013", arbiterOnly: true }
  ]
}
rs.initiate(config)
{ "ok" : 1 } --> this output means the initialization succeeded
Eighth, the config servers, mongos routers, and shard servers are now all running, but an application connecting through mongos will not use sharding yet; the shards still have to be registered with the cluster before sharding takes effect.
Connect to the first mongos
mongo 127.0.0.1:30001/admin
db.runCommand({ addshard: "rs1/127.0.0.1:30011,127.0.0.1:40011,127.0.0.1:50011", allowlocal: true });
db.runCommand({ addshard: "rs2/127.0.0.1:30012,127.0.0.1:40012,127.0.0.1:50012" });
db.runCommand({ addshard: "rs3/127.0.0.1:30013,127.0.0.1:40013,127.0.0.1:50013" });

--> This adds all members of each replica-set shard to the cluster
--> If a shard is a single server, add it with a command of the form db.runCommand({ addshard: "<serverhostname>[:<port>]" })
--> If a shard is a replica set, use the format db.runCommand({ addshard: "replicaSetName/<serverhostname>[:<port>][,<serverhostname2>[:<port>],...]" });
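As a quick sanity check before looking at the fuller sh.status() output below, the registered shards can be listed from the same mongos session:

// returns the shards recorded in the cluster's config database
db.adminCommand({ listShards: 1 })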

mongos> sh.status()
--- Sharding Status ---
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("57f33f4d35d9c494714adfa7")
  }
  shards:
      { "_id" : "rs1", "host" : "rs1/127.0.0.1:30011,127.0.0.1:40011" }
      { "_id" : "rs2", "host" : "rs2/127.0.0.1:30012,127.0.0.1:40012" }
      { "_id" : "rs3", "host" : "rs3/127.0.0.1:30013,127.0.0.1:40013" }
  active mongoses:
      "3.2.7" : 3
  balancer:
      Currently enabled: yes
      Currently running: no
      Failed balancer rounds in last 5 attempts: 0
      Migration Results for the last 24 hours:
          No recent migrations
  databases:
Ninth, shard a collection
db.runCommand({ enablesharding: "testcol" });
--> Enable sharding on the testcol database

db.runCommand({ shardcollection: "testcol.testdoc", key: { id: 1 } })
--> Specify the collection to shard and its shard key
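The same two steps can also be performed with the mongo shell's sh helpers, which wrap the commands above (a minimal sketch using the same database and collection names):

sh.enableSharding("testcol")                      // equivalent to the enablesharding command
sh.shardCollection("testcol.testdoc", { id: 1 })  // equivalent to the shardcollection command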

--> Insert test data (switch to the testcol database first)
use testcol
for (var i = 1; i <= 100000; i++) { db.testdoc.save({ id: i, "name": "Harvey" }) }
--> View the status of the collection
db.testdoc.stats();
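To confirm that the test documents actually spread across the shards (assuming the inserts above completed and the balancer has had time to migrate chunks, which the 1 MB chunk size should make quick), the shell's distribution helper can be used:

// prints per-shard data size, document count, and chunk count for the collection
db.testdoc.getShardDistribution()

// the chunk metadata can also be inspected directly in the config database
use config
db.chunks.find({ ns: "testcol.testdoc" }).count()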
