MongoDB Replica Set

Source: Internet
Author: User
Tags: chmod, failover, mongodb, create database

MongoDB replica set skill targets
    • Understand the MongoDB replica set concept
    • Learn to deploy a MongoDB replica set
    • Understand the MongoDB election process
    • Learn MongoDB replica set management and maintenance

Replica set overview

The benefits of replica sets are as follows:
    • Makes your data more secure
    • Improves data availability (24/7)
    • Disaster recovery
    • Maintenance without downtime (e.g. index rebuilds, backups, failover)
    • Read scaling (extra copies to read from)
    • The replica set is transparent to the application
How replication sets work

A MongoDB replica set requires at least two nodes. One of them is the primary, which handles client requests; the rest are secondaries, which replicate the data from the primary.

MongoDB nodes can be arranged as one primary with one secondary, or one primary with multiple secondaries. The primary records all operations performed on it in its oplog; each secondary periodically polls the primary for new operations and then applies them to its own copy of the data, keeping the secondary's data consistent with the primary's.
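You can watch this mechanism at work by querying the oplog directly from the mongo shell. A minimal sketch, assuming you are connected to any data-bearing member of a running replica set (local.oplog.rs is the standard location of the oplog):

kgcrs:PRIMARY> use local
switched to db local
kgcrs:PRIMARY> db.oplog.rs.find().sort({$natural: -1}).limit(1).pretty()    # the newest operation recorded on this node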

A MongoDB replica set has the following features:
    • A cluster of n nodes
    • Any node can become the primary
    • All write operations go to the primary (verified in the sketch below)
    • Automatic failover
    • Automatic recovery
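The write-only-on-the-primary rule is easy to verify: a write sent to a secondary is rejected. A minimal sketch (hypothetical collection name; the exact error text varies by MongoDB version):

kgcrs:SECONDARY> db.demo.insert({x: 1})
WriteResult({ "writeError" : { "code" : 10107, "errmsg" : "not master" } })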
MongoDB replica set deployment: first create the data and log folders
# Data folders
[root@localhost ~]# cd /data/mongodb/
[root@localhost mongodb]# ls
mongodb1  mongodb2  mongodb3  mongodb4
# Log folders
[root@localhost ~]# cd /data/logs/
[root@localhost logs]# touch mongodb{2,3,4}.log
[root@localhost logs]# ls
mongodb1.log  mongodb2.log  mongodb3.log  mongodb4.log
[root@localhost logs]# chmod 777 ./*.log    # grant full permissions on the .log files in this folder
[root@localhost logs]# ls
mongodb1.log  mongodb2.log  mongodb3.log  mongodb4.log
Copy the /etc/mongod.conf configuration file and enable the replica set: under replication:, remove the # from the following line and add replSetName: kgcrs (the replica set name is kgcrs).
[root@localhost logs]# cp -p /etc/mongod.conf /etc/mongod1.conf
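For reference, a minimal sketch of what /etc/mongod1.conf might look like after editing; the port, dbPath, and log path are assumptions matching the folders created above (mongod2.conf through mongod4.conf would use ports 27019-27021 and their own directories):

systemLog:
  destination: file
  path: /data/logs/mongodb1.log
  logAppend: true
storage:
  dbPath: /data/mongodb/mongodb1
net:
  port: 27018
  bindIp: 127.0.0.1
processManagement:
  fork: true
replication:
  replSetName: kgcrs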
Start the nodes
# Start the nodes
[root@localhost logs]# mongod -f /etc/mongod1.conf
about to fork child process, waiting until server is ready for connections.
forked process: 25849
child process started successfully, parent exiting
[root@localhost logs]# mongod -f /etc/mongod2.conf
about to fork child process, waiting until server is ready for connections.
forked process: 25781
child process started successfully, parent exiting
[root@localhost logs]# mongod -f /etc/mongod3.conf
about to fork child process, waiting until server is ready for connections.
forked process: 25431
child process started successfully, parent exiting
[root@localhost logs]# mongod -f /etc/mongod4.conf
about to fork child process, waiting until server is ready for connections.
forked process: 25851
child process started successfully, parent exiting
Configure the replica set (five members)
> cfg={"_id":"kgcrs","members":[{"_id":0,"host":"127.0.0.1:27017"},{"_id":1,"host":"127.0.0.1:27018"},{"_id":2,"host":"127.0.0.1:27019"},{"_id":3,"host":"127.0.0.1:27020"},{"_id":4,"host":"127.0.0.1:27021"}]}
{
    "_id" : "kgcrs",
    "members" : [
        { "_id" : 0, "host" : "127.0.0.1:27017" },
        { "_id" : 1, "host" : "127.0.0.1:27018" },
        { "_id" : 2, "host" : "127.0.0.1:27019" },
        { "_id" : 3, "host" : "127.0.0.1:27020" },
        { "_id" : 4, "host" : "127.0.0.1:27021" }
    ]
}
# Make sure the secondaries hold no data before initializing
> rs.initiate(cfg)
# Add a node
kgcrs:PRIMARY> rs.add("IP:port")
# Remove a node
kgcrs:PRIMARY> rs.remove("IP:port")
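Once rs.initiate(cfg) returns { "ok" : 1 }, the shell prompt changes to kgcrs:SECONDARY> and then kgcrs:PRIMARY> on the elected node; you can also confirm the member list that was applied. A minimal sketch:

kgcrs:PRIMARY> rs.conf().members.forEach(function (m) { print(m._id + "  " + m.host) })
0  127.0.0.1:27017
1  127.0.0.1:27018
2  127.0.0.1:27019
3  127.0.0.1:27020
4  127.0.0.1:27021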
View the replica set status (rs is the shell helper for replica set operations)
kgcrs:PRIMARY> rs.status()
{
    "set" : "kgcrs",
    "date" : ISODate("2018-07-17T06:03:15.378Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    ...
    "members" : [
        {
            "_id" : 0,
            "name" : "127.0.0.1:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 1603,
            "optime" : { "ts" : Timestamp(1531807392, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2018-07-17T06:03:12Z"),
            "electionTime" : Timestamp(1531807251, 1),
            "electionDate" : ISODate("2018-07-17T06:00:51Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "127.0.0.1:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 154,
            "optime" : { "ts" : Timestamp(1531807392, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2018-07-17T06:03:12Z"),
            "lastHeartbeat" : ISODate("2018-07-17T06:03:13.741Z"),
            "lastHeartbeatRecv" : ISODate("2018-07-17T06:03:14.290Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "127.0.0.1:27017",
            "configVersion" : 1
        },
        ...    # members 2, 3 and 4 (127.0.0.1:27019-27021) report the same SECONDARY state, all syncing from 127.0.0.1:27017
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1531807392, 1)
}
Failover: kill a mongod process to bring down the primary node
# Bring down a mongod
[root@localhost logs]# ps aux | grep mongo
root  27647  0.3  2.8  1634180 53980 ?      Sl  13:36  0:06  mongod -f /etc/mongod.conf
root  27683  0.3  2.7  1482744 52596 ?      Sl  13:36  0:06  mongod -f /etc/mongod1.conf
root  27715  0.2  2.8  1466104 52796 ?      Sl  13:36  0:04  mongod -f /etc/mongod2.conf
root  27747  0.2  2.7  1474360 52364 ?      Sl  13:36  0:04  mongod -f /etc/mongod3.conf
root  27779  0.2  2.8  1465280 52936 ?      Sl  13:36  0:04  mongod -f /etc/mongod4.conf
root  28523  0.0  0.0   112676   984 pts/1  R+  14:05  0:00  grep --color=auto mongo
[root@localhost logs]# kill -9 27647
[root@localhost logs]# ps aux | grep mongo
root  27683  0.3  2.8  1490940 53304 ?      Sl  13:36  0:06  mongod -f /etc/mongod1.conf
root  27715  0.2  2.8  1490692 53420 ?      Sl  13:36  0:05  mongod -f /etc/mongod2.conf
root  27747  0.2  2.8  1618796 53596 ?      Sl  13:36  0:05  mongod -f /etc/mongod3.conf
root  27779  0.2  2.8  1489868 53252 ?      Sl  13:36  0:05  mongod -f /etc/mongod4.conf
root  28566  0.0  0.0   112676   980 pts/1  R+  14:06  0:00  grep --color=auto mongo
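After the kill, rs.status() run from any surviving member should report the dead node as unhealthy. A minimal sketch of the relevant fields (exact messages vary by MongoDB version):

kgcrs:SECONDARY> rs.status().members[0]
{
    "_id" : 0,
    "name" : "127.0.0.1:27017",
    "health" : 0,
    "state" : 8,
    "stateStr" : "(not reachable/healthy)",
    ...
}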

# Automatic switchover (27017 was originally the primary; after it went down, the set automatically promoted 27020)

kgcrs:SECONDARY> rs.isMaster()    # check which node is primary
{
    "hosts" : [
        "127.0.0.1:27017",
        "127.0.0.1:27018",
        "127.0.0.1:27019",
        "127.0.0.1:27020",
        "127.0.0.1:27021"
    ],
    "setName" : "kgcrs",
    "setVersion" : 1,
    "ismaster" : false,
    "secondary" : true,
    "primary" : "127.0.0.1:27020",
    "me" : "127.0.0.1:27017",
    ...
Manual switchover
kgcrs:PRIMARY> rs.freeze(30)    # pause for 30s and stay out of elections; this fails on a primary
{
    "ok" : 0,
    "errmsg" : "cannot freeze node when primary or running for election. state: PRIMARY",
    "codeName" : "NotSecondary",
    "operationTime" : Timestamp(1531808896, 1),
    ...
}
# Give up the primary role: remain a secondary for at least 60 seconds,
# waiting up to 30 seconds for a secondary to catch up with the primary's oplog
kgcrs:PRIMARY> rs.stepDown(60, 30)
2018-07-17T14:28:43.302+0800 E QUERY    [thread1] Error: error doing query: failed: network error while attempting to run command 'replSetStepDown' on host '127.0.0.1:27020'
2018-07-17T14:28:43.305+0800 I NETWORK  [thread1] trying reconnect to 127.0.0.1:27020 (127.0.0.1) failed
2018-07-17T14:28:43.306+0800 I NETWORK  [thread1] reconnect 127.0.0.1:27020 (127.0.0.1) ok
# The primary role switched to 27018
kgcrs:SECONDARY> rs.isMaster()
{
    "hosts" : [
        "127.0.0.1:27017",
        "127.0.0.1:27018",
        "127.0.0.1:27019",
        "127.0.0.1:27020",
        "127.0.0.1:27021"
    ],
    "setName" : "kgcrs",
    "setVersion" : 1,
    "ismaster" : false,
    "secondary" : true,
    "primary" : "127.0.0.1:27018",
    "me" : "127.0.0.1:27020",
    ...
Try creating a database and writing data (MongoDB CRUD operations)
# Insert
kgcrs:PRIMARY> use kgc
switched to db kgc
kgcrs:PRIMARY> db.t1.insert({"id":1,"name":"zhangsan"})
WriteResult({ "nInserted" : 1 })
kgcrs:PRIMARY> db.t2.insert({"id":2,"name":"zhangsan"})
WriteResult({ "nInserted" : 1 })
kgcrs:PRIMARY> show collections
t1
t2
kgcrs:PRIMARY> db.t1.insert({"id":2,"name":"lisi"})
WriteResult({ "nInserted" : 1 })
kgcrs:PRIMARY> db.t1.find()
{ "_id" : ObjectId("5b4da41868504a94462710e1"), "id" : 1, "name" : "zhangsan" }
{ "_id" : ObjectId("5b4da5a468504a94462710e3"), "id" : 2, "name" : "lisi" }
# Update
kgcrs:PRIMARY> db.t1.update({"id":1}, {$set: {"name":"Tom"}})
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
kgcrs:PRIMARY> db.t1.find()
{ "_id" : ObjectId("5b4da41868504a94462710e1"), "id" : 1, "name" : "Tom" }
{ "_id" : ObjectId("5b4da5a468504a94462710e3"), "id" : 2, "name" : "lisi" }
# Delete
kgcrs:PRIMARY> db.t1.remove({"id":2})
WriteResult({ "nRemoved" : 1 })
kgcrs:PRIMARY> db.t1.find()
{ "_id" : ObjectId("5b4da41868504a94462710e1"), "id" : 1, "name" : "Tom" }
All of the above CRUD operations are recorded in the oplog in the local database; let's take a look.
kgcrs:PRIMARY> show dbs
admin    0.000GB
config   0.000GB
kgc      0.000GB
local    0.000GB
school   0.000GB
school2  0.000GB
school8  0.000GB
kgcrs:PRIMARY> use local
switched to db local
kgcrs:PRIMARY> show tables
me
oplog.rs          # records every operation after the replica set was created
replset.election
replset.minvalid
replset.oplogTruncateAfterPoint
startup_log
system.replset
system.rollback.id
kgcrs:PRIMARY> db.oplog.rs.find()
{ "ts" : Timestamp(1531814965, 1), "t" : NumberLong(3), "h" : NumberLong("8639784432240761376"), "v" : 2, "op" : "n", "ns" : "", "wall" : ISODate("2018-07-17T08:09:25.013Z"), "o" : { "msg" : "periodic noop" } }
{ "ts" : Timestamp(1531814975, 1), "t" : NumberLong(3), "h" : NumberLong("6221196488842671080"), "v" : 2, "op" : "n", "ns" : "", "wall" : ISODate("2018-07-17T08:09:35.014Z"), "o" : { "msg" : "periodic noop" } }
...    # further periodic noop entries omitted
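The periodic no-ops above drown out the interesting entries; you can filter the oplog by operation type and namespace. A minimal sketch (op "i" marks inserts; the kgc.t1 namespace comes from the CRUD example above):

kgcrs:PRIMARY> db.oplog.rs.find({ "op" : "i", "ns" : "kgc.t1" }).pretty()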
Election: a replica set distinguishes three member roles.
    • Standard node: holds data and can be elected; the primary is elected from among the standard nodes
    • Arbiter (quorum) node: votes on which node becomes primary but cannot be elected itself, and holds no data
    • Passive node: holds data but will never be elected primary

In the member configuration, passives marks the passive nodes and arbiters marks the arbiter nodes.
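A minimal sketch of how these roles can be expressed in the member configuration; priority: 0 and arbiterOnly: true are standard replica set member options, and the hosts reuse this article's layout as an assumption:

> cfg={"_id":"kgcrs","members":[
      {"_id":0,"host":"127.0.0.1:27017","priority":100},      // standard node
      {"_id":1,"host":"127.0.0.1:27018","priority":100},      // standard node
      {"_id":2,"host":"127.0.0.1:27019","priority":0},        // passive node: holds data, never becomes primary
      {"_id":3,"host":"127.0.0.1:27020","arbiterOnly":true}   // arbiter: votes only, holds no data
  ]}
> rs.initiate(cfg)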

-------------Allow data to be read from a secondary-----------
[root@localhost logs]# mongo --port 27018
kgcrs:SECONDARY> show dbs    # errors until reads are enabled on this secondary
kgcrs:SECONDARY> rs.slaveOk()    # allow this secondary to serve read operations
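With rs.slaveOk() set, the same secondary can serve queries; a minimal sketch reusing the kgc.t1 data from the CRUD example above:

kgcrs:SECONDARY> use kgc
switched to db kgc
kgcrs:SECONDARY> db.t1.find()
{ "_id" : ObjectId("5b4da41868504a94462710e1"), "id" : 1, "name" : "Tom" }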
-------------View replication status information------------
kgcrs:SECONDARY> rs.help()
kgcrs:SECONDARY> rs.printReplicationInfo()
configured oplog size:   990MB
log length start to end: 1544secs (0.43hrs)
oplog first event time:  Mon Jul 16 2018 05:49:12 GMT+0800 (CST)
oplog last event time:   Mon Jul 16 2018 06:14:56 GMT+0800 (CST)
now:                     Mon Jul 16 2018 06:14:59 GMT+0800 (CST)
kgcrs:SECONDARY> rs.printSlaveReplicationInfo()
source: 192.168.235.200:27018
    syncedTo: Mon Jul 16 2018 06:16:16 GMT+0800 (CST)
    0 secs (0 hrs) behind the primary
source: 192.168.235.200:27019
    syncedTo: Mon Jul 16 2018 06:16:16 GMT+0800 (CST)
    0 secs (0 hrs) behind the primary
# Note that the arbiter node does not replicate data, so it is absent from this list
--------------Change the Oplog size---------------
kgcrs:SECONDARY> use local
kgcrs:SECONDARY> db.oplog.rs.stats()
"ns" : "local.oplog.rs",
"size" : 20292,
"count" : 178,
"avgObjSize" : ...,
"storageSize" : 45056,
...
kgcrs:SECONDARY> rs.printReplicationInfo()
configured oplog size:   990MB
log length start to end: 2024secs (0.56hrs)
oplog first event time:  Mon Jul 16 2018 05:49:12 GMT+0800 (CST)
oplog last event time:   Mon Jul 16 2018 06:22:56 GMT+0800 (CST)
now:                     Mon Jul 16 2018 06:23:05 GMT+0800 (CST)
[root@localhost logs]# mongo --port 27018
kgcrs:SECONDARY> use admin
kgcrs:SECONDARY> db.shutdownServer()
# Comment out the replication-related startup parameters and change the port to 27028
[root@localhost logs]# mongod -f /etc/mongod2.conf    # start the node in single-instance mode
# Back up all oplog records of the current node
[root@localhost logs]# mongodump --port 27028 --db local --collection 'oplog.rs'
[root@localhost logs]# mongo --port 27028
> use local
> db.oplog.rs.drop()
> db.runCommand({ create: "oplog.rs", capped: true, size: (2 * 1024 * 1024 * 1024) })
> use admin
> db.shutdownServer()
# Restore the replication parameters with the new oplog size, then restart:
net:
  port: 27018
replication:
  replSetName: kgcrs
  oplogSizeMB: 2048
[root@localhost logs]# mongod -f /etc/mongod2.conf
[root@localhost logs]# mongo --port 27018
kgcrs:PRIMARY> rs.stepDown()    # trigger an election
-----------------Deploy authenticated replication-------------
kgcrs:PRIMARY> use admin
kgcrs:PRIMARY> db.createUser({"user":"root","pwd":"123","roles":["root"]})
[root@localhost bin]# vim /etc/mongod.conf
security:
   keyFile: /usr/bin/kgcrskey1
   clusterAuthMode: keyFile
[root@localhost bin]# vim /etc/mongod2.conf
[root@localhost bin]# vim /etc/mongod3.conf
[root@localhost bin]# vim /etc/mongod4.conf
[root@localhost ~]# cd /usr/bin/
[root@localhost bin]# echo "kgcrs key"> kgcrskey1
[root@localhost bin]# echo "kgcrs key"> kgcrskey2
[root@localhost bin]# echo "kgcrs key"> kgcrskey3
[root@localhost bin]# echo "kgcrs key"> kgcrskey4
[root@localhost bin]# chmod 600 kgcrskey{1..4}
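Each config file gets the same security stanza, pointed at its own key file; a minimal sketch of the addition to /etc/mongod2.conf (the key file contents must be identical across members so they can authenticate to each other):

security:
   keyFile: /usr/bin/kgcrskey2
   clusterAuthMode: keyFile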
Restart the four instances in sequence
Connect to the primary:
kgcrs:PRIMARY> show dbs       # cannot view databases before authenticating
kgcrs:PRIMARY> rs.status()    # cannot view the replica set before authenticating
kgcrs:PRIMARY> use admin      # authenticate
kgcrs:PRIMARY> db.auth("root","123")
kgcrs:PRIMARY> rs.status()    # the replica set is now visible
kgcrs:PRIMARY> show dbs       # the databases are now visible
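Equivalently, you can authenticate at connection time; a minimal sketch using standard mongo shell flags:

[root@localhost bin]# mongo --port 27017 -u root -p 123 --authenticationDatabase admin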
