MongoDB Replica Set Build

Source: Internet
Author: User
Tags: mongodb, version, iptables

What to note: MongoDB replica set setup
Note Date: 2018-01-09



    • 21.33 MongoDB Replica Set Introduction
    • 21.34 MongoDB Replica Set Build
    • 21.35 MongoDB Replica Set Test


21.33 MongoDB Replica Set Introduction

A replica set is a cluster of MongoDB instances consisting of one primary server and multiple secondary (backup) servers. With replication, updates to the data are pushed from the primary to the other instances and, after a short delay, every MongoDB instance holds an identical copy of the dataset. By maintaining redundant database replicas, you can achieve offsite backup of data, read/write separation, and automatic failover.

This means that if the primary server crashes, the replica set automatically promotes one of the secondaries to be the new primary. With replication enabled, you can still access data from the other servers in the replica set if one server goes down, and if the data on a server is corrupted or inaccessible, you can restore a fresh copy from another member of the replica set.

Early versions of MongoDB used master-slave replication, with one master and one slave, similar to MySQL. But in that architecture the slave was read-only, and when the master went down the slave could not automatically take over as master. Master-slave mode has since been phased out in favor of the replica set, which has one primary and multiple read-only secondaries. You can assign each member a weight (priority): when the primary goes down, the secondary with the highest weight switches to primary. The architecture can also include an arbiter role, which only participates in elections and stores no data. All writes go to the primary, so for read load balancing you have to explicitly direct read operations at the secondaries.
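For example, reads are only served by a secondary if the client explicitly opts in. A minimal mongo shell sketch, using the replica set built later in this article (the host chosen here is illustrative):

[root@localhost ~]# mongo 192.168.77.130:27017   # connect directly to a secondary
zero:SECONDARY> rs.slaveOk()                     # allow reads on this secondary for this session
zero:SECONDARY> db.getMongo().setReadPref("secondaryPreferred")   # or set an explicit read preference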

In short, a MongoDB replica set is a master-slave cluster with automatic failover, consisting of one primary node and one or more secondary nodes, similar to MySQL's MMM architecture. For more information about replica sets, see the official documentation:

Official Document Address:

https://docs.mongodb.com/manual/replication/



Replica set architecture diagram: (image omitted)

21.34 MongoDB Replica Set Build

I used three machines to build a replica set here:

192.168.77.128 (primary)
192.168.77.130 (secondary)
192.168.77.134 (secondary)

MongoDB has been installed on all three machines.

Start building:
1. Edit the configuration file on each of the three machines, changing or adding the following:

[root@localhost ~]# vim /etc/mongod.conf
replication:         # uncomment this line
  oplogSizeMB: 20    # add this line to define the size of the oplog; note the two leading spaces
  replSetName: zero  # define the name of the replica set; this line also needs two leading spaces

Note: make sure that the bindIp setting in each machine's configuration file listens on that machine's own LAN IP.
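For example, on 192.168.77.128 the net section of /etc/mongod.conf might look like this (a sketch; 127.0.0.1 is kept for local access and each machine adds its own LAN IP):

net:
  port: 27017
  bindIp: 127.0.0.1,192.168.77.128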

2. Once the edits are complete, restart the MongoDB service for each of the three machines:

[root@localhost ~]# systemctl restart mongod.service
[root@localhost ~]# ps aux |grep mongod
mongod     2578  0.7  8.9 1034696 43592 ?  Sl   18:21   0:00 /usr/bin/mongod -f /etc/mongod.conf
root       2605  0.0  0.1 112660   964 pts/0  S+   18:21   0:00 grep --color=auto mongod
[root@localhost ~]# netstat -lntp |grep mongod
tcp        0      0 192.168.77.134:27017    0.0.0.0:*    LISTEN      2578/mongod
tcp        0      0 127.0.0.1:27017         0.0.0.0:*    LISTEN      2578/mongod
[root@localhost ~]#

3. Stop the firewall on all three machines, or flush their iptables rules, for example:
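On CentOS 7 either of the following works (a sketch; the first stops the firewalld service entirely, the second flushes any existing iptables rules):

[root@localhost ~]# systemctl stop firewalld   # stop the firewalld service
[root@localhost ~]# iptables -F                # or flush all current iptables rules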

4. Connect to MongoDB on the primary machine by running the mongo command there, then configure the replica set:

[root@localhost ~]# mongo
> use admin
switched to db admin
> config={_id: "zero", members: [{_id: 0, host: "192.168.77.128:27017"}, {_id: 1, host: "192.168.77.130:27017"}, {_id: 2, host: "192.168.77.134:27017"}]}   # list the three machines; _id is the name of the replica set
{
        "_id" : "zero",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.77.128:27017"
                },
                {
                        "_id" : 1,
                        "host" : "192.168.77.130:27017"
                },
                {
                        "_id" : 2,
                        "host" : "192.168.77.134:27017"
                }
        ]
}
> rs.initiate(config)   # initialize the replica set
{
        "ok" : 1,
        "operationTime" : Timestamp(1515465317, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1515465317, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
zero:PRIMARY> rs.status()   # view the status
{
        "set" : "zero",
        "date" : ISODate("2018-01-09T02:37:13.713Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1515465429, 1),
                        "t" : NumberLong(1)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1515465429, 1),
                        "t" : NumberLong(1)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1515465429, 1),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1515465429, 1),
                        "t" : NumberLong(1)
                }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.77.128:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 527,
                        "optime" : {
                                "ts" : Timestamp(1515465429, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2018-01-09T02:37:09Z"),
                        "infoMessage" : "could not find member to sync from",
                        "electionTime" : Timestamp(1515465327, 1),
                        "electionDate" : ISODate("2018-01-09T02:35:27Z"),
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "192.168.77.130:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "optime" : {
                                "ts" : Timestamp(1515465429, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1515465429, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2018-01-09T02:37:09Z"),
                        "optimeDurableDate" : ISODate("2018-01-09T02:37:09Z"),
                        "lastHeartbeat" : ISODate("2018-01-09T02:37:13.695Z"),
                        "lastHeartbeatRecv" : ISODate("2018-01-09T02:37:13.661Z"),
                        "pingMs" : NumberLong(0),
                        "syncingTo" : "192.168.77.128:27017",
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "192.168.77.134:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "optime" : {
                                "ts" : Timestamp(1515465429, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1515465429, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2018-01-09T02:37:09Z"),
                        "optimeDurableDate" : ISODate("2018-01-09T02:37:09Z"),
                        "lastHeartbeat" : ISODate("2018-01-09T02:37:13.561Z"),
                        "lastHeartbeatRecv" : ISODate("2018-01-09T02:37:13.660Z"),
                        "pingMs" : NumberLong(0),
                        "syncingTo" : "192.168.77.128:27017",
                        "configVersion" : 1
                }
        ],
        "ok" : 1,
        "operationTime" : Timestamp(1515465429, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1515465429, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
zero:PRIMARY>

Pay attention to the stateStr of the three machines above: the primary machine's stateStr should be PRIMARY, and the two secondaries' stateStr should be SECONDARY; that is the normal state.
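The full rs.status() output is long; to check just each member's state you can loop over its members array. A small sketch, run from any member, which for a healthy set should print something like:

zero:PRIMARY> rs.status().members.forEach(function (m) { print(m.name + " -> " + m.stateStr) })
192.168.77.128:27017 -> PRIMARY
192.168.77.130:27017 -> SECONDARY
192.168.77.134:27017 -> SECONDARY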

If the two secondaries show "stateStr" : "STARTUP", you need to do the following:

> config={_id:"zero",members:[{_id:0,host:"192.168.77.128:27017"},{_id:1,host:"192.168.77.130:27017"},{_id:2,host:"192.168.77.134:27017"}]}
> rs.reconfig(config)

Then check the status again with rs.status() to make sure the secondaries' state has changed to SECONDARY.



21.35 MongoDB Replica Set Test

1. Create a database on the primary machine and create a collection:

zero:PRIMARY> use testdb   # create the database
switched to db testdb
zero:PRIMARY> db.test.insert({AccountID:1,UserName:"zero",password:"123456"})   # create the collection and insert one document
WriteResult({ "nInserted" : 1 })
zero:PRIMARY> show dbs   # list all databases
admin   0.000GB
config  0.000GB
local   0.000GB
testdb  0.000GB
zero:PRIMARY> show tables   # list the collections in the current database
test
zero:PRIMARY>

2. Then go to a secondary machine to check whether the data has been synced from the primary:

[root@localhost ~]# mongo
zero:SECONDARY> show dbs
2018-01-09T18:46:09.959+0800 E QUERY    [thread1] Error: listDatabases failed:{
        "operationTime" : Timestamp(1515466399, 1),
        "ok" : 0,
        "errmsg" : "not master and slaveOk=false",
        "code" : 13435,
        "codeName" : "NotMasterNoSlaveOk",
        "$clusterTime" : {
                "clusterTime" : Timestamp(1515466399, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:65:1
shellHelper.show@src/mongo/shell/utils.js:813:19
shellHelper@src/mongo/shell/utils.js:703:15
@(shellhelp2):1:1
zero:SECONDARY> rs.slaveOk()   # if the above error occurs, run this command
zero:SECONDARY> show dbs       # now it no longer errors
admin   0.000GB
config  0.000GB
local   0.000GB
testdb  0.000GB
zero:SECONDARY> use testdb
switched to db testdb
zero:SECONDARY> show tables
test
zero:SECONDARY>

As you can see, the data has been successfully synced to the slave machine.
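If you want a write to be acknowledged by a secondary before it returns, instead of checking afterwards, you can pass a write concern with the insert. A sketch (the document here is illustrative; w: 2 waits for the primary plus one secondary, and wtimeout caps the wait in milliseconds):

zero:PRIMARY> db.test.insert({AccountID:2,UserName:"two",password:"654321"}, {writeConcern: {w: 2, wtimeout: 5000}})
WriteResult({ "nInserted" : 1 })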



Changing Replica Set Weights and Simulating a Primary Outage

Use the rs.config() command to see the weight of each machine:

zero:PRIMARY> rs.config()
{
        "_id" : "zero",
        "version" : 1,
        "protocolVersion" : NumberLong(1),
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.77.128:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {},
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 1,
                        "host" : "192.168.77.130:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {},
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "192.168.77.134:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {},
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatIntervalMillis" : 2000,
                "heartbeatTimeoutSecs" : 10,
                "electionTimeoutMillis" : 10000,
                "catchUpTimeoutMillis" : -1,
                "catchUpTakeoverDelayMillis" : 30000,
                "getLastErrorModes" : {},
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                },
                "replicaSetId" : ObjectId("5a542a65e491a43160eb92f0")
        }
}
zero:PRIMARY>

The priority value is the machine's weight, which is 1 by default.
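To list only the hosts and their weights without the rest of the configuration, you can loop over rs.conf().members (a small sketch):

zero:PRIMARY> rs.conf().members.forEach(function (m) { print(m.host + "  priority=" + m.priority) })
192.168.77.128:27017  priority=1
192.168.77.130:27017  priority=1
192.168.77.134:27017  priority=1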

Now add a firewall rule on the primary to block communication, simulating a primary outage:

# Note: run this on the primary machine
[root@localhost ~]# iptables -I INPUT -p tcp --dport 27017 -j DROP
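To confirm the rule is in place, list the INPUT chain; since -I inserts at the top, the DROP rule should show up as rule 1 (a quick check):

[root@localhost ~]# iptables -L INPUT -n --line-numbers | head -3
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination
1    DROP       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:27017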

Then check the status from one of the secondaries:

zero:SECONDARY> rs.status()
{
        "set" : "zero",
        "date" : ISODate("2018-01-09T14:06:24.127Z"),
        "myState" : 1,
        "term" : NumberLong(4),
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1515506782, 1),
                        "t" : NumberLong(4)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1515506782, 1),
                        "t" : NumberLong(4)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1515506782, 1),
                        "t" : NumberLong(4)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1515506782, 1),
                        "t" : NumberLong(4)
                }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.77.128:27017",
                        "health" : 0,
                        "state" : 8,
                        "stateStr" : "(not reachable/healthy)",
                        "uptime" : 0,
                        "optime" : {
                                "ts" : Timestamp(0, 0),
                                "t" : NumberLong(-1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(0, 0),
                                "t" : NumberLong(-1)
                        },
                        "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                        "optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
                        "lastHeartbeat" : ISODate("2018-01-09T14:06:20.243Z"),
                        "lastHeartbeatRecv" : ISODate("2018-01-09T14:06:23.491Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "Couldn't get a connection within the time limit",
                        "configVersion" : -1
                },
                {
                        "_id" : 1,
                        "name" : "192.168.77.130:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 1010,
                        "optime" : {
                                "ts" : Timestamp(1515506782, 1),
                                "t" : NumberLong(4)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1515506782, 1),
                                "t" : NumberLong(4)
                        },
                        "optimeDate" : ISODate("2018-01-09T14:06:22Z"),
                        "optimeDurableDate" : ISODate("2018-01-09T14:06:22Z"),
                        "lastHeartbeat" : ISODate("2018-01-09T14:06:23.481Z"),
                        "lastHeartbeatRecv" : ISODate("2018-01-09T14:06:23.178Z"),
                        "pingMs" : NumberLong(0),
                        "syncingTo" : "192.168.77.134:27017",
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "192.168.77.134:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 1250,
                        "optime" : {
                                "ts" : Timestamp(1515506782, 1),
                                "t" : NumberLong(4)
                        },
                        "optimeDate" : ISODate("2018-01-09T14:06:22Z"),
                        "electionTime" : Timestamp(1515506731, 1),
                        "electionDate" : ISODate("2018-01-09T14:05:31Z"),
                        "configVersion" : 1,
                        "self" : true
                }
        ],
        "ok" : 1,
        "operationTime" : Timestamp(1515506782, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1515506782, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
zero:PRIMARY>

As shown above, the stateStr of 192.168.77.128 changed to "(not reachable/healthy)", and 192.168.77.134 automatically switched to primary; you can see its stateStr changed to PRIMARY. Because the weights are all the same, there is some randomness in which secondary gets elected.

Next we assign each machine a specific weight, so that the machine with the highest weight automatically switches to primary.
1. Remove the firewall rule on 192.168.77.128 first:

[root@localhost ~]# iptables -D INPUT -p tcp --dport 27017 -j DROP

2. Go back to the 192.168.77.134 machine (the current primary) and set the weight of each machine:

zero:PRIMARY> cfg = rs.conf()
zero:PRIMARY> cfg.members[0].priority = 3
3
zero:PRIMARY> cfg.members[1].priority = 2
2
zero:PRIMARY> cfg.members[2].priority = 1
1
zero:PRIMARY> rs.reconfig(cfg)
{
        "ok" : 1,
        "operationTime" : Timestamp(1515507322, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1515507322, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
zero:PRIMARY>

3. At this point 192.168.77.128 should switch back to primary. Run rs.config() on 192.168.77.128 to verify:

zero:PRIMARY> rs.config()
{
        "_id" : "zero",
        "version" : 2,
        "protocolVersion" : NumberLong(1),
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.77.128:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 3,
                        "tags" : {},
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 1,
                        "host" : "192.168.77.130:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 2,
                        "tags" : {},
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "192.168.77.134:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {},
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatIntervalMillis" : 2000,
                "heartbeatTimeoutSecs" : 10,
                "electionTimeoutMillis" : 10000,
                "catchUpTimeoutMillis" : -1,
                "catchUpTakeoverDelayMillis" : 30000,
                "getLastErrorModes" : {},
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                },
                "replicaSetId" : ObjectId("5a542a65e491a43160eb92f0")
        }
}
zero:PRIMARY>

As you can see above, the weight of each machine has changed, and 192.168.77.128 has automatically switched back to the primary role. If 192.168.77.128 goes down again, 192.168.77.130 will be the candidate for primary, because it has the highest weight after 192.168.77.128.
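A quick way to confirm which node is currently the primary, without reading the full rs.status() output, is rs.isMaster() (a sketch, run from any member):

zero:PRIMARY> rs.isMaster().primary
192.168.77.128:27017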
