One article a day: learn more about MongoDB replica sets


Replica Set Concepts:

A replica set keeps additional copies of data by synchronizing it across multiple servers. This redundancy increases data availability and makes it possible to recover from hardware failures and service interruptions.

How replica sets work:

A MongoDB replica set requires at least two nodes. The primary node handles client requests; secondary nodes replicate the data held on the primary. Typical topologies are one primary with one secondary, or one primary with several secondaries. Note: clients write data on the primary and read data from the secondaries; the primary and secondaries exchange data with each other to keep it consistent.
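To make the write/read split concrete, here is a minimal mongo shell sketch (not part of the original walkthrough). The database name school, the collection name info, and the inserted document are placeholders; it assumes a replica set like the one deployed below:

root123:PRIMARY> use school                        // writes always go to the primary
root123:PRIMARY> db.info.insert({ "name": "test" })
WriteResult({ "nInserted" : 1 })
// connect to a secondary instead, e.g. mongo --port 27018:
root123:SECONDARY> rs.slaveOk()                    // allow reads on this secondary (legacy shell)
root123:SECONDARY> use school
root123:SECONDARY> db.info.find()                  // the document replicated from the primary
{ "_id" : ObjectId("..."), "name" : "test" }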
MongoDB Replica Set Deployment

(1) Configuring the replica set instances
[root@localhost ~]# mkdir -p /data/mongodb/mongodb{2,3,4}      // create data directories for the additional instances
[root@localhost ~]# cd /data/mongodb/
[root@localhost mongodb]# ls
mongo  mongod2.log  mongodb2  mongodb3  mongodb4
[root@localhost mongodb]# mkdir logs
[root@localhost mongodb]# ls
logs  mongo  mongodb2  mongodb3  mongodb4
[root@localhost mongodb]# touch logs/mongodb{2,3,4}.log
[root@localhost mongodb]# cd logs/
[root@localhost logs]# ls
mongodb2.log  mongodb3.log  mongodb4.log
[root@localhost logs]# chmod 777 *.log      // grant full permissions
[root@localhost logs]# ls
mongodb2.log  mongodb3.log  mongodb4.log
[root@localhost logs]# ll
total 0
-rwxrwxrwx. 1 root root 0 Jul 17 08:59 mongodb2.log
-rwxrwxrwx. 1 root root 0 Jul 17 08:59 mongodb3.log
-rwxrwxrwx. 1 root root 0 Jul 17 08:59 mongodb4.log
(2) Edit the configuration files for the four MongoDB instances
[root@localhost etc]# vim mongod.conf
replication:                                   // uncomment this section
  replSetName: root123                         // add the replica set name
[root@localhost etc]# mongod -f /etc/mongod.conf --shutdown    // stop the service
killing process with pid: 1084
[root@localhost etc]# mongod -f /etc/mongod.conf               // start the service again
about to fork child process, waiting until server is ready for connections.
forked process: 11329
child process started successfully, parent exiting
Note: the service must be restarted after the configuration file is modified for the changes to take effect.
[root@localhost etc]# cp -p mongod.conf mongod2.conf
cp: overwrite 'mongod2.conf'? y
[root@localhost etc]# vim mongod2.conf
  path: /data/mongodb/logs/mongodb2.log        // log file location
  dbPath: /data/mongodb/mongodb2               // data storage location
  port: 27018                                  // change the port; the instances must not share a port
[root@localhost etc]# cp -p mongod2.conf mongod3.conf
[root@localhost etc]# cp -p mongod2.conf mongod4.conf
[root@localhost etc]# vim mongod3.conf         // change the instance number and port; everything else matches mongod2.conf
[root@localhost etc]# vim mongod4.conf
[root@localhost etc]# mongod -f /etc/mongod2.conf
[root@localhost etc]# mongod -f /etc/mongod3.conf
[root@localhost etc]# mongod -f /etc/mongod4.conf
[root@localhost etc]# netstat -antp | grep mongod      // all four instances are listening
tcp   0   0 0.0.0.0:27019   0.0.0.0:*   LISTEN   11599/mongod
tcp   0   0 0.0.0.0:27020   0.0.0.0:*   LISTEN   11627/mongod
tcp   0   0 0.0.0.0:27017   0.0.0.0:*   LISTEN   11459/mongod
tcp   0   0 0.0.0.0:27018   0.0.0.0:*   LISTEN   10252/mongod
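Before initializing the set, it can help to confirm that each instance answers. The loop below is an added sketch, not part of the original walkthrough, assuming the four ports configured above:

[root@localhost etc]# for port in 27017 27018 27019 27020; do
>     mongo --port $port --quiet --eval 'printjson(db.runCommand({ ping: 1 }))'
> done
{ "ok" : 1 }
{ "ok" : 1 }
{ "ok" : 1 }
{ "ok" : 1 }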
(3) Configuring a three-node replica set
[root@localhost etc]# systemctl stop firewalld.service      // stop the firewall
[root@localhost etc]# setenforce 0
[root@localhost etc]# mongo
> show dbs
> cfg={"_id":"root123","members":[{"_id":0,"host":"192.168.200.184:27017"},{"_id":1,"host":"192.168.200.184:27018"},{"_id":2,"host":"192.168.200.184:27019"}]}      // define the node configuration for the replica set
{
        "_id" : "root123",
        "members" : [
                { "_id" : 0, "host" : "192.168.200.184:27017" },
                { "_id" : 1, "host" : "192.168.200.184:27018" },
                { "_id" : 2, "host" : "192.168.200.184:27019" }
        ]
}
> rs.initiate(cfg)      // initialize the replica set with this configuration
> db.stats()
{
        "db" : "test",
        "collections" : 0,
        "views" : 0,
        "objects" : 0,
        "avgObjSize" : 0,
        "dataSize" : 0,
        "storageSize" : 0,
        "numExtents" : 0,
        "indexes" : 0,
        "indexSize" : 0,
        "fileSize" : 0,
        "fsUsedSize" : 0,
        "fsTotalSize" : 0,
        "ok" : 1,                       // ok: 1 means the operation succeeded
        "$clusterTime" : {
                "clusterTime" : Timestamp(0, 0),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
root123:SECONDARY> rs.status()      // view the replica set status
{
        "set" : "root123",
        "date" : ISODate("2018-07-17T03:24:03.253Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "syncingTo" : "",
        "syncSourceHost" : "",
        "syncSourceId" : -1,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : { "ts" : Timestamp(1531797840, 1), "t" : NumberLong(1) },
                "readConcernMajorityOpTime" : { "ts" : Timestamp(1531797840, 1), "t" : NumberLong(1) },
                "appliedOpTime" : { "ts" : Timestamp(1531797840, 1), "t" : NumberLong(1) },
                "durableOpTime" : { "ts" : Timestamp(1531797840, 1), "t" : NumberLong(1) }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.200.184:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",      // state 1: this node is the primary
                        "uptime" : 980,
                        "optime" : { "ts" : Timestamp(1531797840, 1), "t" : NumberLong(1) },
                        "optimeDate" : ISODate("2018-07-17T03:24:00Z"),
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "could not find member to sync from",
                        "electionTime" : Timestamp(1531797808, 1),
                        "electionDate" : ISODate("2018-07-17T03:23:28Z"),
                        "configVersion" : 1,
                        "self" : true,
                        "lastHeartbeatMessage" : ""
                },
                {
                        "_id" : 1,
                        "name" : "192.168.200.184:27018",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",      // state 2: this node is a secondary
                        "optime" : { "ts" : Timestamp(1531797840, 1), "t" : NumberLong(1) },
                        "optimeDurable" : { "ts" : Timestamp(1531797840, 1), "t" : NumberLong(1) },
                        "optimeDate" : ISODate("2018-07-17T03:24:00Z"),
                        "optimeDurableDate" : ISODate("2018-07-17T03:24:00Z"),
                        "lastHeartbeat" : ISODate("2018-07-17T03:24:02.633Z"),
                        "lastHeartbeatRecv" : ISODate("2018-07-17T03:24:02.920Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "192.168.200.184:27017",
                        "syncSourceHost" : "192.168.200.184:27017",
                        "syncSourceId" : 0,
                        "infoMessage" : "",
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "192.168.200.184:27019",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",      // state 2: this node is a secondary
                        "optime" : { "ts" : Timestamp(1531797840, 1), "t" : NumberLong(1) },
                        "optimeDurable" : { "ts" : Timestamp(1531797840, 1), "t" : NumberLong(1) },
                        "optimeDate" : ISODate("2018-07-17T03:24:00Z"),
                        "optimeDurableDate" : ISODate("2018-07-17T03:24:00Z"),
                        "lastHeartbeat" : ISODate("2018-07-17T03:24:02.633Z"),
                        "lastHeartbeatRecv" : ISODate("2018-07-17T03:24:02.896Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "192.168.200.184:27017",
                        "syncSourceHost" : "192.168.200.184:27017",
                        "syncSourceId" : 0,
                        "infoMessage" : "",
                        "configVersion" : 1
                }
        ],
        "ok" : 1,
        "operationTime" : Timestamp(1531797840, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1531797840, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
root123:PRIMARY>      // the prompt eventually changes to PRIMARY
Note: the secondary nodes must contain no data when the configuration is initialized.
(4) Adding and removing nodes
root123:PRIMARY> rs.add("192.168.200.184:27020")      // add a node
{
        "ok" : 1,
        "operationTime" : Timestamp(1531799035, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1531799035, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
root123:PRIMARY> rs.status()
...
                {
                        "_id" : 3,
                        "name" : "192.168.200.184:27020",      // added successfully
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",              // it joins as a secondary
                        "optime" : { "ts" : Timestamp(1531799060, 1), "t" : NumberLong(1) },
...
root123:PRIMARY> rs.remove("192.168.200.184:27020")      // remove the node
{
        "ok" : 1,                                        // removed successfully
        "operationTime" : Timestamp(1531799257, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1531799257, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
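rs.add() also accepts a full member-configuration document, which is useful when a new node should never become primary. The options below are standard replica set member settings, shown as an illustrative sketch rather than part of the original steps:

root123:PRIMARY> rs.add({ _id: 3, host: "192.168.200.184:27020", priority: 0, hidden: true })
// priority: 0 makes the member ineligible for election as primary;
// hidden: true hides it from clients, e.g. for a dedicated backup node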
(5) Simulating a failure to test automatic failover
[root@localhost mongodb]# ps aux | grep mongod
root  12342  1.3  5.8 1465664 58768 ?      Sl   11:07   0:38 mongod -f /etc/mongod3.conf
root  12387  1.0  5.9 1442988 59124 ?      Sl   11:07   0:29 mongod -f /etc/mongod4.conf
root  12428  1.4  6.4 1582772 64516 ?      Sl   11:07   0:40 mongod -f /etc/mongod.conf
root  12667  1.5  6.2 1459800 62268 ?      Sl   11:17   0:35 mongod -f /etc/mongod2.conf
root  13655  0.0  0.0  112676   984 pts/0  S+   11:55   0:00 grep --color=auto mongod
[root@localhost mongodb]# kill -9 12428      // kill the primary (the instance started from mongod.conf)
[root@localhost mongodb]# mongo --port 27018      // connect to a surviving member
root123:SECONDARY> rs.status()
...
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.200.184:27017",
                        "health" : 0,      // health is now 0: the node is gone
                        "state" : 8,
                        "stateStr" : "(not reachable/healthy)",
                        "uptime" : 0,
                        "optime" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) },
                        "optimeDurable" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) },
...
                {
                        "_id" : 2,
                        "name" : "192.168.200.184:27019",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",      // 27019 has taken over as the new primary
                        "uptime" : 2039,
                        "optime" : { "ts" : Timestamp(1531799828, 1), "t" : NumberLong(2) },
                        "optimeDurable" : { "ts" : Timestamp(1531799828, 1), "t" : NumberLong(2) },
...
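Killing the process is the bluntest way to trigger an election. For a controlled handover, the current primary can be asked to step down instead; rs.stepDown() is a standard replica set command, added here as a sketch and not part of the original test:

root123:PRIMARY> rs.stepDown(60)      // step down and stay ineligible for re-election for 60 seconds
// the shell's connection is briefly reset; on reconnect the prompt reads root123:SECONDARY>,
// and rs.status() shows that another member has been elected primary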

Learn more, practice more, and have unlimited fun!
