MongoDB Basics (5): Backup, Restore, Export, and Import
Backup and restore via operating-system filesystem snapshots was also tried; the backup succeeded but the restore failed (see "Backup and Restore with Filesystem Snapshots" in the MongoDB manual), so that method is not recorded here.
The backup and restore methods that have actually been tested (suitable for beginners) are:
1. Copy and replace the database files
2. Use mongodump and mongorestore
3. Use mongoexport and mongoimport
1. Copy and replace database files for backup and restore (dangerous and error-prone)
1.1 Backup
A. Run db.fsyncLock() in the mongo shell to flush pending writes to disk and lock the entire instance against new writes:
>db.fsyncLock()
B. Package and compress the entire database directory (you can also package some databases) as a backup:
[root@localhost ~]# tar -cvzf /root/mongodb_20150505.tar.gz /var/lib/mongo
C. Run db.fsyncUnlock() in the mongo shell to unlock the instance. That completes the backup!~
>db.fsyncUnlock()
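The three steps above can be combined into one script. The sketch below is a minimal outline under these assumptions: the `mongo` shell is on the PATH, the server listens on the default port, and the data directory is /var/lib/mongo; the function name `cold_backup` is made up for illustration.

```shell
#!/bin/sh
# Sketch of the lock -> tar -> unlock backup sequence (assumptions: mongo
# shell on PATH, default port, data directory /var/lib/mongo).
cold_backup() {
    backup_file="/root/mongodb_$(date +%Y%m%d).tar.gz"

    # Step A: flush pending writes to disk and block new writes.
    mongo admin --quiet --eval "db.fsyncLock()" || return 1

    # Step B: archive the data directory while the instance is locked.
    tar -czf "$backup_file" /var/lib/mongo

    # Step C: always unlock afterwards, even if tar failed.
    mongo admin --quiet --eval "db.fsyncUnlock()"
}
```

Unlocking unconditionally in step C matters: a locked instance rejects all writes until db.fsyncUnlock() runs.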
1.2 Restore
A. Shut down the mongodb service (from the mongo shell, or stop the mongod service at the operating-system level):
> use admin
> db.shutdownServer()
B. Delete the mongo data files. Make sure the backup exists and is intact first; otherwise there is no way to recover the data!~
[root@localhost ~]# rm -rf /var/lib/mongo/*
C. Extract the backup archive to the root directory, which restores the data files:
[root@localhost ~]# tar -xvzf /root/mongodb_20150505.tar.gz -C /
D. If the service fails to start, delete the mongod.lock file first:
[root@localhost ~]# rm -f /var/lib/mongo/mongod.lock
E. Start the service and verify it can be accessed normally.
[root@localhost ~]# service mongod start
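Because step B deletes the live data files, it is worth rehearsing the whole delete-and-extract cycle on throwaway files first. The self-contained sketch below runs the same tar round trip against a temporary directory instead of /var/lib/mongo; all paths here are illustrative only.

```shell
# Rehearse the restore flow on a scratch directory instead of the real
# /var/lib/mongo, so a mistake costs nothing.
workdir="$(mktemp -d)"
mkdir -p "$workdir/mongo"
echo "fake collection data" > "$workdir/mongo/test.0"

# "Backup": archive the data directory (-C strips the leading path).
tar -czf "$workdir/backup.tar.gz" -C "$workdir" mongo

# "Disaster": wipe the data directory, as in step B above.
rm -rf "$workdir/mongo"

# "Restore": extract back into the same parent directory, as in step C.
tar -xzf "$workdir/backup.tar.gz" -C "$workdir"

# The original contents are back:
cat "$workdir/mongo/test.0"
```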
2. Use mongodump and mongorestore
A. Simple full backup and full restore of the local instance:
mongodump
mongorestore /root/dump
After mongodump completes, a folder named "dump" is created in the current directory.
B. To restore a single database, specify the database name and its backup directory:
mongorestore --db test /root/dump/test
You can also specify the host name and an output path:
mongodump --host localhost.localdomain --port 27017 --out /root/mongodump-2015-05-05
mongorestore --port 27017 --db test /root/mongodump-2015-05-05/test
Specify the user name and password to back up a remote server (not tested here; adapted from the official example):
mongodump --host mongodb1.example.net --port 3017 --username user --password pass --out /opt/backup/mongodump-2013-10-24
mongorestore --host mongodb1.example.net --port 3017 --username user --password pass /opt/backup/mongodump-2013-10-24
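A small wrapper can produce the dated output directories used in the examples above. This is a sketch under stated assumptions: mongodump and mongorestore are on the PATH and a server listens on localhost:27017; the function name `dump_with_date` is made up for illustration.

```shell
#!/bin/sh
# Sketch: run mongodump into a directory tagged with today's date
# (assumes mongodump on PATH and a server on localhost:27017).
dump_with_date() {
    out_dir="/root/mongodump-$(date +%Y-%m-%d)"
    mongodump --host localhost --port 27017 --out "$out_dir"
    # Print the directory so a later restore can reuse it, e.g.:
    #   mongorestore --port 27017 --db test "$out_dir/test"
    echo "$out_dir"
}
```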
C. For more options, see mongorestore --help:
# mongorestore --help
general options:
      --help                               print usage
      --version                            print the tool version and exit
verbosity options:
  -v, --verbose                            more detailed log output (include multiple times for more verbosity, e.g. -vvvvv)
      --quiet                              hide all log output
connection options:
  -h, --host=                              mongodb host to connect to (setname/host1,host2 for replica sets)
      --port=                              server port (can also use --host hostname:port)
ssl options:
      --ssl                                connect to a mongod or mongos that has ssl enabled
      --sslCAFile=                         the .pem file containing the root certificate chain from the certificate authority
      --sslPEMKeyFile=                     the .pem file containing the certificate and key
      --sslPEMKeyPassword=                 the password to decrypt the sslPEMKeyFile, if necessary
      --sslCRLFile=                        the .pem file containing the certificate revocation list
      --sslAllowInvalidCertificates        bypass the validation for server certificates
      --sslAllowInvalidHostnames           bypass the validation for server name
      --sslFIPSMode                        use FIPS mode of the installed openssl library
authentication options:
  -u, --username=                          username for authentication
  -p, --password=                          password for authentication
      --authenticationDatabase=            database that holds the user's credentials
      --authenticationMechanism=           authentication mechanism to use
namespace options:
  -d, --db=                                database to use
  -c, --collection=                        collection to use
input options:
      --objcheck                           validate all objects before inserting
      --oplogReplay                        replay oplog for point-in-time restore
      --oplogLimit=                        only include oplog entries before the provided Timestamp (seconds[:ordinal])
      --restoreDbUsersAndRoles             restore user and role definitions for the given database
      --dir=                               input directory, use '-' for stdin
restore options:
      --drop                               drop each collection before import
      --writeConcern=                      write concern options e.g. --writeConcern majority, --writeConcern '{w: 3, wtimeout: 500, fsync: true, j: true}' (defaults to 'majority')
      --noIndexRestore                     don't restore indexes
      --noOptionsRestore                   don't restore collection options
      --keepIndexVersion                   don't update index version
      --maintainInsertionOrder             preserve order of documents during restoration
  -j, --numParallelCollections=            number of collections to restore in parallel (4 by default)
      --numInsertionWorkersPerCollection=  number of insert operations to run concurrently per collection (1 by default)
      --stopOnError                        stop restoring if an error is encountered on insert (off by default)
3. Use mongoexport and mongoimport
3.1 Export with mongoexport
# --type: json or csv; --fields: the columns to export

# Export the columns {_id, id, size} in csv format:
mongoexport --db test --collection tab --type=csv --fields _id,id,size --out /root/test_tab.csv

# Export in json format:
mongoexport --db test --collection tab --type=json --out /root/test_tab.json

# Output to the shell, filtering on id = 2 and sorting:
mongoexport --db test --collection tab --query '{"id": 2}' --sort '{"name": 1}'

# Filter and export to csv:
mongoexport --db test --collection tab --type=csv --query '{"id": 2}' --fields _id,id --out /root/test_tab.csv

# Shorthand options [-d] for --db and [-c] for --collection, with skip and limit:
mongoexport -d test -c tab --sort '{"name": -1}' --limit 2 --skip 2 --out /root/test_tab.json

# For a remote server, add the host, port, username and password parameters:
#   --host servername_or_ip --port 37017 --username user --password pass
View the help: mongoexport --help
general options:
      --help                               print usage
      --version                            print the tool version and exit
verbosity options:
  -v, --verbose                            more detailed log output (include multiple times for more verbosity, e.g. -vvvvv)
      --quiet                              hide all log output
connection options:
  -h, --host=                              mongodb host to connect to (setname/host1,host2 for replica sets)
      --port=                              server port (can also use --host hostname:port)
ssl options:
      --ssl                                connect to a mongod or mongos that has ssl enabled
      --sslCAFile=                         the .pem file containing the root certificate chain from the certificate authority
      --sslPEMKeyFile=                     the .pem file containing the certificate and key
      --sslPEMKeyPassword=                 the password to decrypt the sslPEMKeyFile, if necessary
      --sslCRLFile=                        the .pem file containing the certificate revocation list
      --sslAllowInvalidCertificates        bypass the validation for server certificates
      --sslAllowInvalidHostnames           bypass the validation for server name
      --sslFIPSMode                        use FIPS mode of the installed openssl library
authentication options:
  -u, --username=                          username for authentication
  -p, --password=                          password for authentication
      --authenticationDatabase=            database that holds the user's credentials
      --authenticationMechanism=           authentication mechanism to use
namespace options:
  -d, --db=                                database to use
  -c, --collection=                        collection to use
output options:
  -f, --fields=                            comma separated list of field names (required for exporting CSV) e.g. -f "name,age"
      --fieldFile=                         file with field names - 1 per line
      --type=                              the output format, either json or csv (defaults to 'json')
  -o, --out=                               output file; if not specified, stdout is used
      --jsonArray                          output to a JSON array rather than one object per line
      --pretty                             output JSON formatted to be human-readable
querying options:
  -q, --query=                             query filter, as a JSON string, e.g., '{x:{$gt:1}}'
  -k, --slaveOk                            allow secondary reads if available (default true)
      --forceTableScan                     force a table scan (do not use $snapshot)
      --skip=                              number of documents to skip
      --limit=                             limit the number of documents to export
      --sort=                              sort order, as a JSON string, e.g. '{x:1}'
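The --query and --sort values must each reach mongoexport as a single JSON argument, which makes shell quoting the usual stumbling block. Below is a small sketch of the quoting convention (single quotes around the JSON, double quotes inside it); the database and collection names are the ones from the examples above, and the command is only printed here, not run.

```shell
# Keep the JSON in single quotes so the shell passes it through untouched;
# use double quotes for the field names inside.
query='{"id": 2}'
sort_spec='{"name": 1}'

# Assembled command (printed only; running it needs a live server):
echo mongoexport --db test --collection tab \
    --query "$query" --sort "$sort_spec" \
    --type=csv --fields _id,id --out /root/test_tab.csv
```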
3.2 Import with mongoimport
# The "_id" column is imported as well; watch out for duplicate key errors.
mongoimport --db mydb --collection tab --file /root/test_tab.json
mongoimport --db mydb --collection tab --type csv --headerline --file /root/test_tab.csv

# For a remote server, add the host, port, username and password parameters:
#   --host servername_or_ip --port 37017 --username user --password pass
View the help: mongoimport --help
general options:
      --help                               print usage
      --version                            print the tool version and exit
verbosity options:
  -v, --verbose                            more detailed log output (include multiple times for more verbosity, e.g. -vvvvv)
      --quiet                              hide all log output
connection options:
  -h, --host=                              mongodb host to connect to (setname/host1,host2 for replica sets)
      --port=                              server port (can also use --host hostname:port)
ssl options:
      --ssl                                connect to a mongod or mongos that has ssl enabled
      --sslCAFile=                         the .pem file containing the root certificate chain from the certificate authority
      --sslPEMKeyFile=                     the .pem file containing the certificate and key
      --sslPEMKeyPassword=                 the password to decrypt the sslPEMKeyFile, if necessary
      --sslCRLFile=                        the .pem file containing the certificate revocation list
      --sslAllowInvalidCertificates        bypass the validation for server certificates
      --sslAllowInvalidHostnames           bypass the validation for server name
      --sslFIPSMode                        use FIPS mode of the installed openssl library
authentication options:
  -u, --username=                          username for authentication
  -p, --password=                          password for authentication
      --authenticationDatabase=            database that holds the user's credentials
      --authenticationMechanism=           authentication mechanism to use
namespace options:
  -d, --db=                                database to use
  -c, --collection=                        collection to use
input options:
  -f, --fields=                            comma separated list of field names, e.g. -f name,age
      --fieldFile=                         file with field names - 1 per line
      --file=                              file to import from; if not specified, stdin is used
      --headerline                         use first line in input source as the field list (CSV and TSV only)
      --jsonArray                          treat input source as a JSON array
      --type=                              input format to import: json, csv, or tsv (defaults to 'json')
ingest options:
      --drop                               drop collection before inserting documents
      --ignoreBlanks                       ignore fields with empty values in CSV and TSV
      --maintainInsertionOrder             insert documents in the order of their appearance in the input source
  -j, --numInsertionWorkers=               number of insert operations to run concurrently (defaults to 1)
      --stopOnError                        stop importing at first insert/upsert error
      --upsert                             insert or update objects that already exist
      --upsertFields=                      comma-separated fields for the query part of the upsert
      --writeConcern=                      write concern options e.g. --writeConcern majority, --writeConcern '{w: 3, wtimeout: 500, fsync: true, j: true}' (defaults to 'majority')
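With --headerline, mongoimport reads the field names from the first line of the CSV file. The sketch below builds such a file; the mongoimport call itself needs a running server, so it is shown only as a comment, and the file contents are made up for illustration.

```shell
# Build a CSV whose first line names the fields, as --headerline expects.
csv="$(mktemp)"
cat > "$csv" <<'EOF'
_id,id,size
1,101,small
2,102,large
EOF

# With a running server this would load it (not executed here):
#   mongoimport --db mydb --collection tab --type csv --headerline --file "$csv"

# First line is the field list:
head -n 1 "$csv"
```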
These methods were tested on a standalone instance without replication; after every backup, copy it aside and test-restore it to verify the records!~