Problem description:
In a production environment, when the read pressure on secondary replica members is high, you can relieve it by adding new secondary replica members.
To avoid stopping the primary member and to keep extra load off it, run mongodump against an existing secondary member to back up the data;
To recover the new secondary member quickly, mount the same NFS disk on both the backup secondary and the new machine, so the dump is written to and read from shared storage;
To ensure data consistency, use the --oplog parameter with mongodump and the --oplogReplay parameter with mongorestore;
To allow later per-database space resizing, use the --directoryperdb parameter so that each database is stored in its own directory.
Solution:
Step 1: Mount the NFS disk on the new machine and the backup secondary
For more information, see "Configure the NFS network file system and client usage in CentOS Linux."
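A minimal sketch of the mounts, assuming the NFS server is at 192.168.11.100 and exports /nfspool (both are assumptions; the mount points match the paths used in the commands below):
# on the secondary used for the backup
mount -t nfs 192.168.11.100:/nfspool /mnt/mongo
# on the new machine
mount -t nfs 192.168.11.100:/nfspool /nfspool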
Step 2: Back up the secondary replica member
mongodump skips the local database; all other databases, including admin, are backed up.
mongodump --host=192.168.11.1:27017 --oplog -o /mnt/mongo/mongodata -u xucy -p Passw0rd --authenticationDatabase admin > mongodump.log 2>&1 &
You can view the log data in real time and observe the backup progress.
tail -f mongodump.log
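Once the dump finishes, you can list the output directory as a quick check; with --oplog, mongodump writes an oplog.bson file at the top level alongside one directory per database:
ls -lh /mnt/mongo/mongodata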
Step 3: Restore the database on the new instance
mongorestore with --dbpath must be run while mongod is stopped, because it writes directly to the data files (so no --host is needed).
mongorestore --oplogReplay --dbpath /data/var/lib/mongodb --directoryperdb /nfspool/mongodata > mongorestore.log 2>&1 &
You can view the log data in real time and observe the recovery progress.
tail -f mongorestore.log
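Because --directoryperdb was used, each restored database should now sit in its own subdirectory under the dbpath, which is what makes later per-database space resizing possible:
ls /data/var/lib/mongodb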
Step 4: Rebuild the oplog on the new instance
1. View the oplog size and maintenance window on the primary:
rs_main:PRIMARY> db.printReplicationInfo()
configured oplog size:   23862.404296875MB
log length start to end: 39405secs (10.95hrs)
oplog first event time:  Sun Feb 08 2015 10:34:07 GMT-0600 (CST)
oplog last event time:   Sun Feb 08 2015 21:30:52 GMT-0600 (CST)
now:                     Sun Feb 08 2015 21:30:53 GMT-0600 (CST)
2. Rebuild the oplog on the new machine:
Start the instance in standalone mode and run the following drop and create commands; the size is given in bytes, and 23*1024*1024*1024 ≈ 23 GB matches the roughly 23862 MB oplog on the primary:
> use local
> db.oplog.rs.drop()
> db.createCollection("oplog.rs", {"capped": true, "size": 23*1024*1024*1024})
or
> db.runCommand({create: "oplog.rs", capped: true, size: (23*1024*1024*1024)})
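To confirm the collection was created as a capped collection, a quick sanity check from the same standalone shell (not part of the original procedure):
> use local
> db.oplog.rs.isCapped()
true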
Step 5: Restore oplog on the new instance
This oplog is the oplog.bson file exported by mongodump in Step 2.
mongorestore -d local -c oplog.rs /nfspool/mongodata/oplog.bson
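After the restore, you can inspect the newest entry in the rebuilt oplog from a mongo shell; its timestamp should be close to the dump's last oplog event (a quick check, not part of the original procedure):
> use local
> db.oplog.rs.find().sort({$natural: -1}).limit(1)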
Step 6: Start the new instance as a replica set member
Add the same --replSet and --keyFile parameters used by the source replica set to the new instance's configuration, then start it as a member of the replica set.
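A minimal startup sketch, assuming the replica set is named rs_main (as shown in the printReplicationInfo prompt above); the key file and log paths here are hypothetical:
mongod --dbpath /data/var/lib/mongodb --directoryperdb --replSet rs_main --keyFile /data/mongodb.key --port 27017 --fork --logpath /data/log/mongod.log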
Step 7: Add the node to the replica set (run rs.add on the primary)
> rs.add("192.168.11.2:27017")
{ "ok" : 1 }
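To verify the addition, you can run rs.status() on the primary and watch the new member move through STARTUP2 to SECONDARY as it catches up (a quick sanity check, not part of the original procedure):
> rs.status()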