Deploying and maintaining MongoDB replica sets on CentOS 7


An overview of MongoDB replica sets

A replica set is an additional copy of the data, maintained by synchronizing data across multiple servers. It provides redundancy and increases data availability, allowing recovery from hardware failures and service interruptions.

How replica sets work
    • A MongoDB replica set requires at least two nodes. One is the primary node (primary), which handles client requests; the rest are secondary nodes (secondary), responsible for replicating the data from the primary.
    • Common MongoDB topologies are one primary with one secondary, or one primary with many secondaries. The primary node records all of its operations in the oplog; the secondaries periodically poll the primary for these operations and apply them to their own copies of the data, keeping each secondary consistent with the primary (a quick way to peek at the oplog is sketched after this list).
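The oplog itself is an ordinary capped collection in the local database, so the most recent replicated operation can be inspected from any member's shell; a minimal sketch (oplog.rs is the standard collection name on replica set members):

> use local
> db.oplog.rs.find().sort({$natural: -1}).limit(1).pretty()    //show the most recent oplog entry//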
Features of replica sets
    • A cluster of n nodes
    • Any node can become the primary node
    • All write operations are handled by the primary node
    • Automatic failover
    • Automatic recovery
MongoDB Replica set deployment

1. Configuring the replica set

(1) Create the data and log file storage paths

[root@localhost ~]# mkdir -p /data/mongodb/mongodb{2,3,4}
[root@localhost ~]# cd /data/mongodb/
[root@localhost mongodb]# mkdir logs
[root@localhost mongodb]# touch logs/mongodb{2,3,4}.log
[root@localhost mongodb]# cd logs/
[root@localhost logs]# ls
mongodb2.log  mongodb3.log  mongodb4.log
[root@localhost logs]# chmod 777 *.log

(2) Edit the configuration files for the 4 MongoDB instances

First edit the main MongoDB configuration file, setting the replSetName parameter to kgcrs, then make 3 copies of the file (mongod2.conf, mongod3.conf, mongod4.conf), as follows:

[root@localhost etc]# vim mongod.conf
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0  # listen on all interfaces

#security:

#operationProfiling:

replication:
  replSetName: kgcrs

#sharding:

## Enterprise-Only Options

#auditLog:

#snmp:

Then set the port parameter to 27018 in mongod2.conf, to 27019 in mongod3.conf, and to 27020 in mongod4.conf. Also change the dbPath and log path parameters in each file to the corresponding paths created in step (1).
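For example, assuming the directories created in step (1), mongod2.conf would differ from mongod.conf only in these values:

net:
  port: 27018
storage:
  dbPath: /data/mongodb/mongodb2
systemLog:
  path: /data/mongodb/logs/mongodb2.log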

(3) Start the 4 mongod instances and view the process information

[root@localhost etc]# mongod -f /etc/mongod.conf --shutdown    //shut it down first//
[root@localhost etc]# mongod -f /etc/mongod.conf    //then start it again//
[root@localhost etc]# mongod -f /etc/mongod2.conf
[root@localhost etc]# mongod -f /etc/mongod3.conf
[root@localhost etc]# mongod -f /etc/mongod4.conf
[root@localhost etc]# netstat -ntap | grep mongod
tcp        0      0 0.0.0.0:27019           0.0.0.0:*               LISTEN      17868/mongod
tcp        0      0 0.0.0.0:27020           0.0.0.0:*               LISTEN      17896/mongod
tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN      17116/mongod
tcp        0      0 0.0.0.0:27018           0.0.0.0:*               LISTEN      17413/mongod

(4) Configure a three-node replica set

[root@localhost etc]# mongo
> rs.status()    //view the replica set//
{
	"info" : "run rs.initiate(...) if not yet done for the set",
	"ok" : 0,
	"errmsg" : "no replset config has been received",
	"code" : 94,
	"codeName" : "NotYetInitialized",
	"$clusterTime" : {
		"clusterTime" : Timestamp(0, 0),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
> cfg={"_id":"kgcrs","members":[{"_id":0,"host":"192.168.126.132:27017"},{"_id":1,"host":"192.168.126.132:27018"},{"_id":2,"host":"192.168.126.132:27019"}]}    //define the replica set//
{
	"_id" : "kgcrs",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.126.132:27017"
		},
		{
			"_id" : 1,
			"host" : "192.168.126.132:27018"
		},
		{
			"_id" : 2,
			"host" : "192.168.126.132:27019"
		}
	]
}
> rs.initiate(cfg)    //initialize the configuration; make sure the secondaries hold no data//
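As a side note (not part of the original walkthrough), the same set can be built incrementally instead of passing a full config document: run rs.initiate() with no arguments on one member, then add the others with rs.add(). A minimal sketch:

> rs.initiate()    //initialize a one-member set on the current instance//
kgcrs:PRIMARY> rs.add("192.168.126.132:27018")    //add a second member//
kgcrs:PRIMARY> rs.add("192.168.126.132:27019")    //add a third member//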

(5) View replica set status

After the replica set starts, view its full state information again with the rs.status() command:

kgcrs:SECONDARY> rs.status()
{
	"set" : "kgcrs",
	"date" : ISODate("2018-07-17T07:18:52.047Z"),
	"myState" : 1,
	"term" : NumberLong(1),
	"syncingTo" : "",
	"syncSourceHost" : "",
	"syncSourceId" : -1,
	"heartbeatIntervalMillis" : NumberLong(2000),
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1531811928, 1),
			"t" : NumberLong(1)
		},
		"readConcernMajorityOpTime" : {
			"ts" : Timestamp(1531811928, 1),
			"t" : NumberLong(1)
		},
		"appliedOpTime" : {
			"ts" : Timestamp(1531811928, 1),
			"t" : NumberLong(1)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1531811928, 1),
			"t" : NumberLong(1)
		}
	},
	"members" : [
		{
			"_id" : 0,
			"name" : "192.168.126.132:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",    //primary node//
			"uptime" : 2855,
			"optime" : {
				"ts" : Timestamp(1531811928, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2018-07-17T07:18:48Z"),
			"syncingTo" : "",
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "could not find member to sync from",
			"electionTime" : Timestamp(1531811847, 1),
			"electionDate" : ISODate("2018-07-17T07:17:27Z"),
			"configVersion" : 1,
			"self" : true,
			"lastHeartbeatMessage" : ""
		},
		{
			"_id" : 1,
			"name" : "192.168.126.132:27018",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",    //secondary node//
			"uptime" : ...,
			"optime" : {
				"ts" : Timestamp(1531811928, 1),
				"t" : NumberLong(1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1531811928, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2018-07-17T07:18:48Z"),
			"optimeDurableDate" : ISODate("2018-07-17T07:18:48Z"),
			"lastHeartbeat" : ISODate("2018-07-17T07:18:51.208Z"),
			"lastHeartbeatRecv" : ISODate("2018-07-17T07:18:51.720Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "192.168.126.132:27017",
			"syncSourceHost" : "192.168.126.132:27017",
			"syncSourceId" : 0,
			"infoMessage" : "",
			"configVersion" : 1
		},
		{
			"_id" : 2,
			"name" : "192.168.126.132:27019",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",    //secondary node//
			"uptime" : ...,
			"optime" : {
				"ts" : Timestamp(1531811928, 1),
				"t" : NumberLong(1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1531811928, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2018-07-17T07:18:48Z"),
			"optimeDurableDate" : ISODate("2018-07-17T07:18:48Z"),
			"lastHeartbeat" : ISODate("2018-07-17T07:18:51.208Z"),
			"lastHeartbeatRecv" : ISODate("2018-07-17T07:18:51.822Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "192.168.126.132:27017",
			"syncSourceHost" : "192.168.126.132:27017",
			"syncSourceId" : 0,
			"infoMessage" : "",
			"configVersion" : 1
		}
	],
	"ok" : 1,
	"operationTime" : Timestamp(1531811928, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1531811928, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}

In this output, a health value of 1 means the member is up and 0 means it is down; a state of 1 marks the primary node and 2 a secondary node.
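To pull just these fields out of the verbose output, the members array can be iterated directly in the shell. A minimal sketch against the set above:

kgcrs:PRIMARY> rs.status().members.forEach(function (m) {
...     print(m.name + "  health=" + m.health + "  state=" + m.stateStr)
... })
192.168.126.132:27017  health=1  state=PRIMARY
192.168.126.132:27018  health=1  state=SECONDARY
192.168.126.132:27019  health=1  state=SECONDARY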

Make sure the secondary nodes hold no data when the replica set configuration is initialized.

MongoDB Replica set Switchover

The MongoDB replica set enables high availability of the cluster and automatically switches to other nodes when the primary node fails. You can also manually perform a master-slave switchover of a replica set.

1. Failover switching

[root@localhost etc]# ps aux | grep mongod    //view the processes//
root      17116  1.2  5.8 1546916 58140 ?   Sl  14:31  0:51 mongod -f /etc/mongod.conf
root      17413  1.0  5.7 1445624 57444 ?   Sl  14:34  0:39 mongod -f /etc/mongod2.conf
root      17868  1.2  5.5 1446752 55032 ?   Sl  15:05  0:23 mongod -f /etc/mongod3.conf
root      17896  0.8  4.7 1037208 47552 ?   Sl  15:05  0:16 mongod -f /etc/mongod4.conf
root      18836  0.0  0.0  112676   980 pts/1  S+  15:38  0:00 grep --color=auto mongod
[root@localhost etc]# kill -9 17116    //kill the 27017 process//
[root@localhost etc]# ps aux | grep mongod
root      17413  1.0  5.7 1453820 57456 ?   Sl  14:34  0:40 mongod -f /etc/mongod2.conf
root      17868  1.2  5.5 1454948 55056 ?   Sl  15:05  0:24 mongod -f /etc/mongod3.conf
root      17896  0.8  4.7 1037208 47552 ?   Sl  15:05  0:16 mongod -f /etc/mongod4.conf
root      18843  0.0  0.0  112676   976 pts/1  R+  15:38  0:00 grep --color=auto mongod
[root@localhost etc]# mongo --port 27019
kgcrs:PRIMARY> rs.status()
	"members" : [
		{
			"_id" : 0,
			"name" : "192.168.126.132:27017",
			"health" : 0,    //down//
			"state" : 8,
			"stateStr" : "(not reachable/healthy)",
			"uptime" : 0,
			"optime" : {
				"ts" : Timestamp(0, 0),
				"t" : NumberLong(-1)
			}
		},
		{
			"_id" : 1,
			"name" : "192.168.126.132:27018",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",    //secondary node//
			"uptime" : 1467,
			"optime" : {
				"ts" : Timestamp(1531813296, 1),
				"t" : NumberLong(2)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1531813296, 1),
				"t" : NumberLong(2)
			}
		},
		{
			"_id" : 2,
			"name" : "192.168.126.132:27019",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",    //the new primary node//
			"uptime" : 2178,
			"optime" : {
				"ts" : Timestamp(1531813296, 1),
				"t" : NumberLong(2)
			}
		}
	]

2. Manual master-slave switchover

kgcrs:PRIMARY> rs.freeze(30)    //pause for 30s and do not take part in an election//
kgcrs:PRIMARY> rs.stepDown(60, 30)    //give up the primary role, stay a secondary for at least 60s, and wait up to 30s for a secondary to catch up with the primary's oplog//
2018-07-17T15:46:19.079+0800 E QUERY    [thread1] Error: error doing query: failed: network error while attempting to run command 'replSetStepDown' on host '127.0.0.1:27019' :
...@src/mongo/shell/db.js:168:1
...@src/mongo/shell/db.js:186:16
...@src/mongo/shell/utils.js:1341:12
@(shell):1:1
2018-07-17T15:46:19.082+0800 I NETWORK  [thread1] trying reconnect to 127.0.0.1:27019 (127.0.0.1) failed
2018-07-17T15:46:19.085+0800 I NETWORK  [thread1] reconnect 127.0.0.1:27019 (127.0.0.1) ok
kgcrs:SECONDARY>     //the former primary immediately becomes a secondary//
kgcrs:SECONDARY> rs.status()
			"_id" : 0,
			"name" : "192.168.126.132:27017",
			"health" : 0,    //down//
			"state" : 8,
			"stateStr" : "(not reachable/healthy)",
			"uptime" : 0,
			"optime" : {
				"ts" : Timestamp(0, 0),
				"t" : NumberLong(-1)
			}

			"_id" : 1,
			"name" : "192.168.126.132:27018",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",    //the new primary node//
			"uptime" : 1851,
			"optime" : {
				"ts" : Timestamp(1531813679, 1),
				"t" : NumberLong(3)
			}

			"_id" : 2,
			"name" : "192.168.126.132:27019",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",    //back to secondary//
			"uptime" : 2563,
			"optime" : {
				"ts" : Timestamp(1531813689, 1),
				"t" : NumberLong(3)
			}
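To confirm which member now holds the primary role, db.isMaster() can be run from any member; its primary field names the current primary. A minimal check against the set above:

kgcrs:SECONDARY> db.isMaster().primary
192.168.126.132:27018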
How MongoDB replica set elections work

The node types are divided into standard nodes, passive nodes, and arbiter nodes.

    • Only standard nodes can be elected as the active (primary) node, and they have voting rights. A passive node holds a complete copy of the data but cannot become the active node; it still has voting rights. An arbiter node does not replicate data and cannot become the active node; it only votes.
    • Standard and passive nodes are distinguished by priority: members with a higher priority are standard nodes, while members with the lowest priority (0 in the example below) are passive nodes.
    • The election rule is that the highest vote count wins. Priority is a value from 0 to 1000 and is equivalent to an extra 0 to 1000 votes. If the vote counts are equal, the member with the newest data wins. (A sketch for changing priorities at runtime follows this list.)
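Priorities can also be adjusted on a running set without re-initializing it, using rs.reconfig(); a minimal sketch (the priority value 2 is purely illustrative):

kgcrs:PRIMARY> cfg = rs.conf()    //fetch the current configuration//
kgcrs:PRIMARY> cfg.members[1].priority = 2    //illustrative value//
kgcrs:PRIMARY> rs.reconfig(cfg)    //apply the modified configuration//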

1. Configure replica set priorities

1) Reconfigure the 4-node MongoDB replica set with two standard nodes, one passive node, and one arbiter node.

[root@localhost etc]# mongo
> cfg={"_id":"kgcrs","members":[{"_id":0,"host":"192.168.126.132:27017","priority":100},{"_id":1,"host":"192.168.126.132:27018","priority":100},{"_id":2,"host":"192.168.126.132:27019","priority":0},{"_id":3,"host":"192.168.126.132:27020","arbiterOnly":true}]}
> rs.initiate(cfg)     //reconfigure//
kgcrs:SECONDARY> rs.isMaster()
{
    "hosts" : [                //standard nodes//
        "192.168.126.132:27017",
        "192.168.126.132:27018"
    ],
    "passives" : [             //passive node//
        "192.168.126.132:27019"
    ],
    "arbiters" : [             //arbiter node//
        "192.168.126.132:27020"

2) Simulate a primary node failure

If the primary node fails, the other standard node will be elected as the new primary:

[root@localhost etc]# mongod -f /etc/mongod.conf --shutdown    //shut down standard node 27017//
[root@localhost etc]# mongo --port 27018    //the second standard node is elected as the new primary//
kgcrs:PRIMARY> rs.status()
			"_id" : 0,
			"name" : "192.168.126.132:27017",
			"health" : 0,    //down//
			"state" : 8,
			"stateStr" : "(not reachable/healthy)",
			"uptime" : 0,
			"optime" : {
				"ts" : Timestamp(0, 0),
				"t" : NumberLong(-1)
			}

			"_id" : 1,
			"name" : "192.168.126.132:27018",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",    //standard node//
			"uptime" : 879,
			"optime" : {
				"ts" : Timestamp(1531817473, 1),
				"t" : NumberLong(2)
			}

			"_id" : 2,
			"name" : "192.168.126.132:27019",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",    //passive node//
			"uptime" : 569,
			"optime" : {
				"ts" : Timestamp(1531817473, 1),
				"t" : NumberLong(2)
			}

			"_id" : 3,
			"name" : "192.168.126.132:27020",
			"health" : 1,
			"state" : 7,
			"stateStr" : "ARBITER",    //arbiter node//
			"uptime" : 569,

3) Simulate failure of all standard nodes

When all standard nodes have failed, the passive node still cannot become the primary, because priority-0 members are never eligible for election:

[root@localhost etc]# mongod -f /etc/mongod2.conf --shutdown    //shut down standard node 27018//
[root@localhost etc]# mongo --port 27019
kgcrs:SECONDARY> rs.status()
            "_id" : 0,
            "name" : "192.168.126.132:27017",
            "health" : 0,     //down//
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,

            "_id" : 1,
            "name" : "192.168.126.132:27018",
            "health" : 0,      //down//
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,

            "_id" : 2,
            "name" : "192.168.126.132:27019",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",   //passive node//
            "uptime" : 1403,

            "_id" : 3,
            "name" : "192.168.126.132:27020",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",    //arbiter node//
MongoDB Replica Set Management

1. Allow data to be read from secondary nodes

By default, the secondary nodes of a MongoDB replica set cannot serve reads; the rs.slaveOk() command allows the current session to read data from a secondary.

[root@localhost etc]# mongo --port 27017
kgcrs:SECONDARY> show dbs    //database information cannot be read yet//
2018-07-17T17:11:31.570+0800 E QUERY    [thread1] Error: listDatabases failed:{
    "operationTime" : Timestamp(1531818690, 1),
    "ok" : 0,
    "errmsg" : "not master and slaveOk=false",
    "code" : 13435,
    "codeName" : "NotMasterNoSlaveOk"
}
kgcrs:SECONDARY> rs.slaveOk()
kgcrs:SECONDARY> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
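Note that rs.slaveOk() only applies to the current shell session and must be re-issued on each new connection; it is a thin wrapper around the connection-level setting, so the underlying call works as well:

kgcrs:SECONDARY> db.getMongo().setSlaveOk()    //equivalent to rs.slaveOk()//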

2. View replication status information

You can use the rs.printReplicationInfo() and rs.printSlaveReplicationInfo() commands to view the replication status of the replica set:

kgcrs:SECONDARY> rs.printReplicationInfo()
configured oplog size:   990MB
log length start to end: 2092secs (0.58hrs)
oplog first event time:  Tue Jul 17 2018 16:41:48 GMT+0800 (CST)
oplog last event time:   Tue Jul 17 2018 17:16:40 GMT+0800 (CST)
now:                     Tue Jul 17 2018 17:16:46 GMT+0800 (CST)
kgcrs:SECONDARY> rs.printSlaveReplicationInfo()
source: 192.168.126.132:27017
    syncedTo: Tue Jul 17 2018 17:16:50 GMT+0800 (CST)
    0 secs (0 hrs) behind the primary 
source: 192.168.126.132:27019
    syncedTo: Tue Jul 17 2018 17:16:50 GMT+0800 (CST)
    0 secs (0 hrs) behind the primary
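The same oplog statistics are also available as a plain object through db.getReplicationInfo(), which is more convenient for scripting; a minimal sketch using the numbers shown above:

kgcrs:SECONDARY> var info = db.getReplicationInfo()
kgcrs:SECONDARY> print(info.logSizeMB + " MB oplog, spanning " + info.timeDiff + " secs")
990 MB oplog, spanning 2092 secs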

3. Deploy replication with authentication

kgcrs:PRIMARY> use admin
kgcrs:PRIMARY> db.createUser({"user":"root","pwd":"123","roles":["root"]})
[root@localhost ~]# vim /etc/mongod.conf    //edit each of the four configuration files//
....
security:
  keyFile: /usr/bin/kgcrskey1       //key file path//
  clusterAuthMode: keyFile          //authentication type//
[root@localhost ~]# vim /etc/mongod2.conf
[root@localhost ~]# vim /etc/mongod3.conf
[root@localhost ~]# vim /etc/mongod4.conf
[root@localhost bin]# echo "kgcrs key" > kgcrskey1    //generate a key file for each of the 4 instances//
[root@localhost bin]# echo "kgcrs key" > kgcrskey2
[root@localhost bin]# echo "kgcrs key" > kgcrskey3
[root@localhost bin]# echo "kgcrs key" > kgcrskey4
[root@localhost bin]# chmod 600 kgcrskey{1..4}
[root@localhost bin]# mongod -f /etc/mongod.conf     //restart the 4 instances//
[root@localhost bin]# mongod -f /etc/mongod2.conf
[root@localhost bin]# mongod -f /etc/mongod3.conf
[root@localhost bin]# mongod -f /etc/mongod4.conf
[root@localhost bin]# mongo --port 27017    //log in to the standard node//
kgcrs:PRIMARY> show dbs        //the databases cannot be viewed//
kgcrs:PRIMARY> rs.status()     //the replica set cannot be viewed//
kgcrs:PRIMARY> use admin       //authenticate//
kgcrs:PRIMARY> db.auth("root","123")
kgcrs:PRIMARY> show dbs        //now the databases can be viewed//
admin   0.000GB
config  0.000GB
local   0.000GB
kgcrs:PRIMARY> rs.status()     //now the replica set can be viewed//
			"_id" : 0,
			"name" : "192.168.126.132:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 411,

			"_id" : 1,
			"name" : "192.168.126.132:27018",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 324,

			"_id" : 2,
			"name" : "192.168.126.132:27019",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 305,

			"_id" : 3,
			"name" : "192.168.126.132:27020",
			"health" : 1,
			"state" : 7,
			"stateStr" : "ARBITER",
			"uptime" : 280,
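All four key files must contain identical content, or the members will refuse each other's authentication. For anything beyond a lab setup, a random key is preferable to a fixed string; a minimal sketch (generating one file and copying it to the other three paths used above):

[root@localhost bin]# openssl rand -base64 756 > kgcrskey1    //random key instead of a fixed string//
[root@localhost bin]# for i in 2 3 4; do cp kgcrskey1 kgcrskey$i; done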
