MongoDB: Adding New Secondary Replica Members Using a Replica Set Backup
Problem Description:
In a production environment, when read pressure on the secondary replica members becomes too heavy, you can relieve it by adding new secondary members.
To keep the primary member online and reduce its load, run the mongodump backup against a secondary member.
To speed up recovery of the new secondary member, mount the new machine's disk via NFS on the secondary member that performs the backup, so the dump is written straight to the new machine.
To ensure data consistency, use the --oplog parameter with mongodump and the --oplogReplay parameter with mongorestore.
To allow for later space expansion, store each database in its own directory with the --directoryperdb parameter.
Solution:
Step One: Mount the new machine as an NFS disk
See also: Configuring NFS Network File system on CentOS Linux and client use
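A minimal sketch of the two sides of the mount, assuming the new machine exports /nfspool (the directory the restore step reads from) and the backup secondary mounts it at /mnt/mongo (the directory the dump step writes to); the export options are illustrative:
# On the new machine (192.168.11.2): export the directory that will receive the dump
echo "/nfspool 192.168.11.1(rw,sync,no_root_squash)" >> /etc/exports
exportfs -ra
# On the backup secondary (192.168.11.1): mount the share where mongodump will write
mkdir -p /mnt/mongo
mount -t nfs 192.168.11.2:/nfspool /mnt/mongo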
Step Two: Back up the secondary replica member
The local database is not backed up; all other databases, including admin, are backed up.
mongodump --host=192.168.11.1:27017 --oplog -o /mnt/mongo/mongodata -u xucy -p passw0rd --authenticationDatabase admin > mongodump.log 2>&1 &
You can watch the backup progress by tailing the log in real time:
tail -f mongodump.log
Step Three: Restore the database on the new instance
If mongod is not started, running mongorestore with --dbpath writes directly to the data files, so no server connection is needed.
mongorestore --oplogReplay --dbpath /data/var/lib/mongodb --directoryperdb /nfspool/mongodata > mongorestore.log 2>&1 &
You can watch the restore progress by tailing the log in real time:
tail -f mongorestore.log
Step Four: Rebuild the oplog on the new instance
1. Check the maintenance window and oplog size on the primary member:
rs_main:PRIMARY> db.printReplicationInfo()
configured oplog size:   23862.404296875MB
log length start to end: 39405secs (10.95hrs)
oplog first event time:  Sun 10:34:07 GMT-0600 (CST)
oplog last event time:   Sun 21:30:52 GMT-0600 (CST)
now:                     Sun Feb 21:30:53 GMT-0600 (CST)
2. Rebuild the oplog on the new machine:
Start the instance in standalone mode (without --replSet) and execute the following drop-and-create script; the size matches the roughly 23 GB oplog reported on the primary:
> use local
> db.oplog.rs.drop()
> db.createCollection("oplog.rs", {"capped": true, "size": 23 * 1024 * 1024 * 1024})
or
> db.runCommand({create: "oplog.rs", capped: true, size: (23 * 1024 * 1024 * 1024)})
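To confirm the capped collection came out as intended, you can check its stats (a hedged check, not in the original article; the exact output fields vary by MongoDB version):
> use local
> db.oplog.rs.stats()   // expect "capped" : true and a maximum size of 23 * 1024 * 1024 * 1024 bytes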
Step Five: Restore the oplog on the new instance
This oplog is the oplog.bson file that mongodump exported because of the --oplog parameter.
mongorestore -d local -c oplog.rs /nfspool/mongodata/oplog.bson
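To verify the oplog was loaded, you can inspect its most recent entry in the standalone shell (an optional check, not in the original article); its ts field should be close to the "oplog last event time" seen in step four:
> use local
> db.oplog.rs.find().sort({$natural: -1}).limit(1)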
Step Six: Start the new instance as a replica set member
Configure the new instance with the same --replSet and --keyFile parameters as the source replica set, then start it as a replica set member, as in the sketch below.
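A minimal startup sketch, assuming the replica set name rs_main from the shell prompt in step four and the dbpath from step three; the key file path /data/keyfile and log path /var/log/mongodb/mongod.log are hypothetical:
mongod --replSet rs_main --keyFile /data/keyfile --dbpath /data/var/lib/mongodb --directoryperdb --port 27017 --fork --logpath /var/log/mongodb/mongod.log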
Step Seven: Add the node to the replica set
rs_main:PRIMARY> rs.add("192.168.11.2:27017")
{ "ok" : 1 }
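To watch the new member come online, you can poll its state from the primary (an optional check, not in the original article); the member should move through STARTUP2 and possibly RECOVERING before reaching SECONDARY:
rs_main:PRIMARY> rs.status().members.forEach(function (m) { print(m.name + " : " + m.stateStr); })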
This article is from the SQL Server Deep Dives blog; please keep this source when reposting: http://ultrasql.blog.51cto.com/9591438/1614361