1. Environment:
Run four MongoDB instances on a single server to build a cluster in MongoDB's replica set mode.
2. Configuration files:
Master instance configuration file:
[email protected] ~]# cat /usr/local/mongodb/mongod.cnf
logpath=/data/mongodb-master/logs/mongodb.log
logappend=true
# fork and run in background
fork=true
port=27017
dbpath=/data/mongodb-master/data
# location of pidfile
pidfilepath=/data/mongodb-master/mongod.pid
auth=true
keyFile=/tmp/mongo-keyfile
nohttpinterface=true
replSet=shard1
Slave1 instance configuration file:
[email protected] ~]# cat /usr/local/mongodb/mongod1.cnf
logpath=/data/mongodb-slave/logs/mongodb.log
logappend=true
# fork and run in background
fork=true
port=27018
dbpath=/data/mongodb-slave/data
# location of pidfile
pidfilepath=/data/mongodb-slave/mongod.pid
auth=true
keyFile=/tmp/mongo-keyfile
nohttpinterface=true
replSet=shard1
Slave2 instance configuration file:
[email protected] ~]# cat /usr/local/mongodb/mongod2.cnf
logpath=/data/mongodb-slave1/logs/mongodb.log
logappend=true
# fork and run in background
fork=true
port=27019
dbpath=/data/mongodb-slave1/data
# location of pidfile
pidfilepath=/data/mongodb-slave1/mongod.pid
auth=true
keyFile=/tmp/mongo-keyfile
nohttpinterface=true
replSet=shard1
Arbiter node instance configuration file:
[email protected] ~]# cat /usr/local/mongodb/arbiter.cnf
logpath=/data/mongodb-arbiter/logs/mongodb.log
logappend=true
# fork and run in background
fork=true
port=27020
dbpath=/data/mongodb-arbiter/data
# location of pidfile
pidfilepath=/data/mongodb-arbiter/mongod.pid
keyFile=/tmp/mongo-keyfile
nohttpinterface=true
replSet=shard1
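All four configuration files enable authentication against the same keyFile. A minimal sketch of generating it (the path matches the configs above; this assumes openssl is available on the host):

```shell
# Generate the shared keyFile used by all replica set members for internal
# authentication; it must be identical on every node and not world-readable,
# or mongod will refuse to start.
openssl rand -base64 756 > /tmp/mongo-keyfile
chmod 600 /tmp/mongo-keyfile
```

Copy the same file to every node (here all instances share one server, so one copy suffices).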
3. Single-instance MongoDB startup script:
[email protected] data]# cat /etc/init.d/mongod
#!/bin/sh
# chkconfig: 2345 66 40
source /etc/profile
CONFIG=/usr/local/mongodb/mongod.cnf
PROGRAM=/usr/local/mongodb/bin/mongod
MONGOPID=`ps -ef | grep 'mongod --config' | grep -v grep | awk '{print $2}'`
test -x $PROGRAM || exit 0
case "$1" in
  start)
     echo "Starting MongoDB server..."
     $PROGRAM --config $CONFIG &
     ;;
  stop)
     echo "Stopping MongoDB server..."
     if [ ! -z "$MONGOPID" ]; then
        kill -15 $MONGOPID
     fi
     ;;
  status)
     if [ -z "$MONGOPID" ]; then
        echo "MongoDB is not running!"
     else
        echo "MongoDB is running! ($MONGOPID)"
     fi
     ;;
  restart)
     echo "Shutting down MongoDB server..."
     if [ ! -z "$MONGOPID" ]; then
        kill -15 $MONGOPID
     fi
     echo "Starting MongoDB..."
     $PROGRAM --config $CONFIG &
     ;;
  *)
     echo "Usage: /etc/init.d/mongod {start|stop|status|restart}"
     exit 1
     ;;
esac
exit 0
4. Start the MongoDB instances:
[email protected] data]# ps -ef | grep mongo
root      7226     1  0 13:12 ?  00:00:41 /usr/local/mongodb/bin/mongod --config /usr/local/mongodb/arbiter.cnf
root     11262     1  0 13:54 ?  00:00:29 /usr/local/mongodb/bin/mongod --config /usr/local/mongodb/mongod.cnf
root     13097     1  0 14:22 ?  00:00:19 /usr/local/mongodb/bin/mongod --config /usr/local/mongodb/mongod1.cnf
root     13214     1  0 14:22 ?  00:00:19 /usr/local/mongodb/bin/mongod --config /usr/local/mongodb/mongod2.cnf
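The four processes above were each started by pointing mongod at its config file. The start-up commands can be sketched in one loop (echoed here for illustration; drop the `echo` to actually launch, paths assumed from section 2):

```shell
# Print the start command for each of the four instances.
for cnf in mongod.cnf mongod1.cnf mongod2.cnf arbiter.cnf; do
    echo /usr/local/mongodb/bin/mongod --config "/usr/local/mongodb/$cnf"
done
```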
5. Configure the primary, secondary, and arbiter nodes:
You can connect with any MongoDB client, or connect directly on any of the three data-bearing nodes.
Log in to the master instance to configure the replica set:
[email protected] ~]# mongo 127.0.0.1:27017
MongoDB shell version: 3.0.5
connecting to: 127.0.0.1:27017/test
> use admin
switched to db admin
> cfg={_id:"shard1", members:[{_id:0,host:'127.0.0.1:27017',priority:3}, {_id:1,host:'127.0.0.1:27018',priority:2},
... {_id:2,host:'127.0.0.1:27019',priority:1}, {_id:3,host:'127.0.0.1:27020',arbiterOnly:true}]};
{
        "_id" : "shard1",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "127.0.0.1:27017",
                        "priority" : 3
                },
                {
                        "_id" : 1,
                        "host" : "127.0.0.1:27018",
                        "priority" : 2
                },
                {
                        "_id" : 2,
                        "host" : "127.0.0.1:27019",
                        "priority" : 1
                },
                {
                        "_id" : 3,
                        "host" : "127.0.0.1:27020",
                        "arbiterOnly" : true
                }
        ]
}
> rs.initiate(cfg);    // initialize the configuration
{ "ok" : 1 }
shard1:PRIMARY> rs.conf();    // view the configuration
{
        "_id" : "shard1",
        "version" : 1,
        "members" : [
                {
                        "_id" : 0,
                        "host" : "127.0.0.1:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 3,
                        "tags" : {
                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                },
                {
                        "_id" : 1,
                        "host" : "127.0.0.1:27018",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 2,
                        "tags" : {
                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "127.0.0.1:27019",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {
                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                },
                {
                        "_id" : 3,
                        "host" : "127.0.0.1:27020",
                        "arbiterOnly" : true,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {
                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatTimeoutSecs" : 10,
                "getLastErrorModes" : {
                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                }
        }
}
shard1:OTHER> rs.status()    // view replica set status
{
        "set" : "shard1",
        "date" : ISODate("2017-09-14T05:23:16.893Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 0,
                        "name" : "127.0.0.1:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 1117,
                        "optime" : Timestamp(1505366494, 1),
                        "optimeDate" : ISODate("2017-09-14T05:21:34Z"),
                        "electionTime" : Timestamp(1505366495, 1),
                        "electionDate" : ISODate("2017-09-14T05:21:35Z"),
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "127.0.0.1:27018",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 102,
                        "optime" : Timestamp(1505366494, 1),
                        "optimeDate" : ISODate("2017-09-14T05:21:34Z"),
                        "lastHeartbeat" : ISODate("2017-09-14T05:23:16.113Z"),
                        "lastHeartbeatRecv" : ISODate("2017-09-14T05:23:16.663Z"),
                        "pingMs" : 0,
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "127.0.0.1:27019",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 102,
                        "optime" : Timestamp(1505366494, 1),
                        "optimeDate" : ISODate("2017-09-14T05:21:34Z"),
                        "lastHeartbeat" : ISODate("2017-09-14T05:23:16.127Z"),
                        "lastHeartbeatRecv" : ISODate("2017-09-14T05:23:16.668Z"),
                        "pingMs" : 0,
                        "configVersion" : 1
                },
                {
                        "_id" : 3,
                        "name" : "127.0.0.1:27020",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 102,
                        "lastHeartbeat" : ISODate("2017-09-14T05:23:16.055Z"),
                        "lastHeartbeatRecv" : ISODate("2017-09-14T05:23:16.674Z"),
                        "pingMs" : 1,
                        "configVersion" : 1
                }
        ],
        "ok" : 1
}
shard1:PRIMARY>
The replica set configuration is now complete.
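For a quick summary of member health, the status document can also be walked in the mongo shell from any member (a sketch; the output format is illustrative):

```javascript
shard1:PRIMARY> rs.status().members.forEach(function(m){
...     print(m.name + "  " + m.stateStr + "  health=" + m.health);
... })
```

Each line should show `health=1` and one member in the `PRIMARY` state.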
6. Test and verify:
Reference document for test validation: http://blog.csdn.net/zhang_yanan/article/details/25972693
Create a user and a database on the master to verify that the cluster works:
db.createUser(
  {
    user: "root",
    pwd: "[email protected]",
    roles: [{role: "root", db: "admin"}]
  }
)
use admin
db.createUser(
... {
...   user: "dba3",
...   pwd: "[email protected]",
...   roles: [{role: "readWrite", db: "dbtest001"}]
... }
... )
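To confirm that replication actually works, a typical check (a sketch in the mongo shell; the collection name and the `<password>` placeholder are illustrative) is to write on the primary and read the same document back on a secondary:

```javascript
// On the primary (27017): authenticate and insert a test document.
use admin
db.auth("dba3", "<password>")      // the user was created in the admin database
use dbtest001
db.testcol.insert({msg: "replication test"})

// On a secondary (27018): reads must be enabled explicitly.
rs.slaveOk()
use dbtest001
db.testcol.find()
```

If the document appears on the secondary, oplog replication is functioning.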
You can also install a MongoDB client tool to verify that the cluster was built successfully.
7. Deployment reference documentation:
http://blog.csdn.net/luonanqin/article/details/8497860
http://suifu.blog.51cto.com/9167728/1853478
http://blog.csdn.net/zhang_yanan/article/details/25972693
Several approaches to MongoDB high-availability cluster configuration: https://yq.aliyun.com/articles/61516