(The following method can only recover part of the data: the oplog is size-limited, so it does not retain the complete synchronization history.)
Because the oplog collection (out of habit I call a collection a table) has no indexes, I have to filter out just the data belonging to the one table I need to recover.
So the oplog from each shard is exported separately to another server for processing:
1. Back up:
./mongodump --port 28011 -d local -c oplog.rs -o /opt/backup/0706local/
2. Restore to another standalone MongoDB server:
mongorestore --port 28011 -d temp_local -c shard1_oplog --dir /opt/backup/0706local/local/oplog.rs.bson
......
mongorestore --port 28012 -d temp_local -c shard4_oplog --dir /opt/backup/0706local/local/oplog.rs.bson
One new collection is created per shard, so there are as many of these collections as there are shards.
3. Build an index to make the data easier to query:
> db.shard4_oplog.createIndex({ns: 1, op: 1})
{
        "createdCollectionAutomatically" : false,
        "numIndexesBefore" : 0,
        "numIndexesAfter" : 1,
        "ok" : 1
}
4. Originally I wanted to delete the useless entries first, but the oplog is a capped collection, from which documents cannot be removed, so I had to leave them in place and work around it.
> db.shard4_oplog.remove({ns: "ds.tb_monitor", op: {$ne: "i"}})
WriteResult({
        "nRemoved" : 0,
        "writeError" : {
                "code" : 20,
                "errmsg" : "cannot remove from a capped collection: oplog.shard4_oplog"
        }
})
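Error code 20 above is expected: a capped collection keeps a fixed amount of data by evicting the oldest entries on insert, and individual removes are rejected. A toy JavaScript model (illustrative only, not the server's implementation) shows both behaviors:

```javascript
// Toy model of a capped collection: fixed capacity, inserts evict the
// oldest document, removes are rejected outright.
function CappedCollection(maxDocs) {
    this.maxDocs = maxDocs;
    this.docs = [];
}
CappedCollection.prototype.insert = function (doc) {
    this.docs.push(doc);
    if (this.docs.length > this.maxDocs) {
        this.docs.shift(); // the oldest document is silently evicted
    }
};
CappedCollection.prototype.remove = function () {
    throw new Error("cannot remove from a capped collection");
};

var cap = new CappedCollection(2);
cap.insert({ ts: 1 });
cap.insert({ ts: 2 });
cap.insert({ ts: 3 }); // evicts { ts: 1 }
```

This is also why the recovery only gets partial data: anything already evicted from the oplog is gone.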
5. Write a script that iterates over the query results and saves each document into a new collection:
For reference, the forEach() example from the MongoDB manual ("Create backup of 2.6 admin.system.users collection"), which copies all documents in the admin.system.users collection to the admin.system.new_users collection:
db.getSiblingDB("admin").system.users.find().forEach(function (userDoc) {
    status = db.getSiblingDB("admin").system.new_users.save(userDoc);
    if (status.hasWriteError()) {
        print(status.writeError);
    }
});
Using a similar approach, I wrote a snippet that saves the oplog data into another collection (ns is the namespace of the table to be recovered, and op 'i' selects insert operations, so only the inserted documents are extracted and saved into the new collection):
db.shard3_oplog.find({ns: "ds.tb_monitor", op: "i"}).forEach(function (res_data) {
    obj_doc = res_data["o"];              // extract the inserted JSON document from the oplog entry
    status = db.tb_monitor.save(obj_doc); // save it into the new collection
    if (status.hasWriteError()) {
        print(status.writeError);
    }
});
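The filtering logic in step 5 can be modeled as a stand-alone function for clarity. This is a sketch: the entry shape { ns, op, o } mirrors real oplog entries, but the sample data below is made up.

```javascript
// Keep only inserts ("i") into the target namespace and return the
// embedded documents (the "o" field), mirroring the step-5 query.
function extractInserts(oplogEntries, targetNs) {
    return oplogEntries
        .filter(function (e) { return e.ns === targetNs && e.op === "i"; })
        .map(function (e) { return e.o; });
}

// Hypothetical entries:
var entries = [
    { ns: "ds.tb_monitor", op: "i", o: { _id: 1, v: 10 } },   // wanted insert
    { ns: "ds.tb_monitor", op: "u", o: { $set: { v: 11 } } }, // update, skipped
    { ns: "ds.other",      op: "i", o: { _id: 2 } }           // other table, skipped
];
var docs = extractInserts(entries, "ds.tb_monitor");
// docs contains only the document inserted into ds.tb_monitor
```

Note that only inserts are replayed; updates and deletes that happened after the insert are lost, which is another reason the recovery is partial.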
6. Then export the newly created collection, restore it into the cluster as a new collection, and merge it into the source table using the same method as in step 5.
./mongodump -d local -c shard1_oplog -o /opt/backup/0706local/
mongorestore --port 28000 -d temp_local -c new_collection --dir /opt/backup/0706local/local/shard1_oplog.bson
db.new_collection.find().forEach(function (res_data) {
    status = db.tb_monitor.save(res_data);
    if (status.hasWriteError()) {
        print(status.writeError);
    }
});
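One property worth noting about this merge: in the mongo shell, save() upserts on _id, so re-running step 6 overwrites existing documents instead of duplicating them. A minimal model of that behavior (the target collection is represented as a plain object keyed by _id; the names here are made up):

```javascript
// Model save()-by-_id as assignment into a map keyed by _id:
// saving a document with an existing _id replaces it, never duplicates it.
function saveAll(target, docs) {
    docs.forEach(function (d) { target[d._id] = d; });
    return target;
}

var tb_monitor = {};
saveAll(tb_monitor, [{ _id: 1, v: 1 }]);
// Re-running with overlapping data overwrites _id 1 instead of duplicating it:
saveAll(tb_monitor, [{ _id: 1, v: 1 }, { _id: 2, v: 2 }]);
```

This makes the merge safe to repeat per shard, as long as the recovered documents carry their original _id values.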