In the previous article, we used our MySQL knowledge to learn about MongoDB authorization and permissions. In this article, we will continue the journey and look at import and export in the two databases.
1. MySQL Import and Export
(1) mysqlimport
This tool lives in the mysql/bin directory and is a very efficient way to load (or import) data into MySQL. It is a command-line tool that takes two arguments plus a large number of options, and it imports a text file into the database and table you specify. For example, to import data from the student.txt file into the student table of the class database:
mysqlimport class student.txt
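mysqlimport derives the target table name from the text file's name (student.txt goes into the student table). In practice you usually also need credentials and a delimiter; here is a minimal sketch, assuming a tab-separated file and a root account (both are assumptions, not part of the original example):
mysqlimport -u root -p --fields-terminated-by='\t' class student.txt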
(2) LOAD DATA INFILE
This command is very similar to mysqlimport, but it runs inside the mysql client rather than from the shell. Like mysqlimport, it also accepts a number of optional parameters. For example, to import data from your local machine into a remote database server, run the following command:
LOAD DATA LOCAL INFILE "d:/student.txt" INTO TABLE student;
The LOCAL keyword above indicates that the file lives on the client machine rather than on the server you are logged in to, so you do not need to ftp the file up to the server first; MySQL completes the transfer for you.
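If the file already sits on the server itself, you drop LOCAL and typically spell out the delimiters as well; a sketch, where the path and separators are assumptions for illustration:
LOAD DATA INFILE "/var/lib/mysql-files/student.txt" INTO TABLE student FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';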
(3) mysqldump
mysqldump is in many respects mysqlimport's opposite number, and the two share many of the same options, but mysqldump can do much more: it can dump an entire database into a single text file containing all the SQL commands needed to recreate that database. The command reads all the schemas and converts them into DDL syntax, then reads all the data and turns it into INSERT statements. In effect, it reverse-engineers your entire database design. Because everything is contained in one text file, the file can be fed straight back into MySQL as a simple batch of SQL statements. The tool is incredibly simple, fast, and painless. For example, to dump the contents of the entire mydb database to a file, use the following command:
bin/mysqldump -p mydb > mydb.txt
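The "simple batch processing" mentioned above is just feeding that file back to the mysql client; concretely (the target database must already exist on the receiving server):
bin/mysql -p mydb < mydb.txt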
2. MongoDB Import and Export
(1) Export Tool
MongoDB provides the mongoexport tool, which can export a collection to a file in json or csv format. You can specify which fields to export, or export only the data matching a given condition. The tool's help output is as follows:
- [root@localhost bin]# ./mongoexport --help
- options:
- --help produce help message
- -v [ --verbose ] be more verbose (include multiple times for more
- verbosity e.g. -vvvvv)
- -h [ --host ] arg mongo host to connect to ( /s1,s2 for sets)
- --port arg server port. Can also use --host hostname:port
- --ipv6 enable IPv6 support (disabled by default)
- -u [ --username ] arg username
- -p [ --password ] arg password
- --dbpath arg directly access mongod database files in the given
- path, instead of connecting to a mongod server -
- needs to lock the data directory, so cannot be used
- if a mongod is currently accessing the same path
- --directoryperdb if dbpath specified, each db is in a separate
- directory
- -d [ --db ] arg database to use
- -c [ --collection ] arg collection to use (some commands)
- -f [ --fields ] arg comma separated list of field names e.g. -f name,age
- --fieldFile arg file with fields names - 1 per line
- -q [ --query ] arg query filter, as a JSON string
- --csv export to csv instead of json
- -o [ --out ] arg output file; if not specified, stdout is used
- --jsonArray output to a json array rather than one object per
- line
- [root@localhost bin]#
The following practical examples illustrate the usage of this tool.
First, export collection t1 in the foo database in json format:
- [root@localhost bin]# ./mongoexport -d foo -c t1 -o /data/t1.json
- connected to: 127.0.0.1
- exported 1 records
- [root@localhost bin]#
After the export succeeds, check whether the contents of the /data/t1.json file are what we expect:
- [root@localhost data]# more t1.json
- { "_id" : { "$oid" : "4f927e2385b7a6814a0540a0" }, "age" : 2 }
- [root@localhost data]#
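The -q option from the help output above lets us export only the documents matching a filter; a minimal sketch, where the filter value and output path are illustrative:
- [root@localhost bin]# ./mongoexport -d foo -c t1 -q '{"age": 2}' -o /data/t1_age2.json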
The export succeeded, but there is still a problem: what about migration between heterogeneous databases? For example, what if we want to import MongoDB data into MySQL? For this, MongoDB provides export in csv format. The following exports the age and name fields of collection t2 in the foo database:
- [root@localhost bin]# ./mongoexport -d foo -c t2 --csv -f age,name -o /data/t2.csv
- connected to: 127.0.0.1
- exported 1 records
- [root@localhost bin]#
View the export result in /data/t2.csv:
- [root@localhost data]# more t2.csv
- age,name
- 1,"wwl"
- [root@localhost data]#
We can see that MongoDB provides us with a powerful data export tool.
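To finish the heterogeneous migration described above, the csv file can be loaded straight into MySQL; here is a minimal sketch, where the table definition and column types are assumptions (IGNORE 1 LINES skips the header row the export produced):
CREATE TABLE t2 (age INT, name VARCHAR(64));
LOAD DATA LOCAL INFILE "/data/t2.csv" INTO TABLE t2 FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' IGNORE 1 LINES (age, name);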
(2) Import Tool
MongoDB provides the mongoimport tool, which can import the contents of a file in a specific format into a collection. The tool's help output is as follows:
- [root@localhost bin]# ./mongoimport --help
- options:
- --help produce help message
- -v [ --verbose ] be more verbose (include multiple times for more
- verbosity e.g. -vvvvv)
- -h [ --host ] arg mongo host to connect to ( /s1,s2 for sets)
- --port arg server port. Can also use --host hostname:port
- --ipv6 enable IPv6 support (disabled by default)
- -u [ --username ] arg username
- -p [ --password ] arg password
- --dbpath arg directly access mongod database files in the given
- path, instead of connecting to a mongod server -
- needs to lock the data directory, so cannot be used
- if a mongod is currently accessing the same path
- --directoryperdb if dbpath specified, each db is in a separate
- directory
- -d [ --db ] arg database to use
- -c [ --collection ] arg collection to use (some commands)
- -f [ --fields ] arg comma separated list of field names e.g. -f name,age
- --fieldFile arg file with fields names - 1 per line
- --ignoreBlanks if given, empty fields in csv and tsv will be ignored
- --type arg type of file to import. default: json (json,csv,tsv)
- --file arg file to import from; if not specified stdin is used
- --drop drop collection first
- --headerline CSV,TSV only - use first line as headers
- --upsert insert or update objects that already exist
- --upsertFields arg comma-separated fields for the query part of the
- upsert. You should make sure this is indexed
- --stopOnError stop importing at first error rather than continuing
- --jsonArray load a json array, not one item per line. Currently
- limited to 4MB.
The following example shows how to use this tool.
First, let's take a look at collection t1 in the foo database:
- > db.t1.find();
- { "_id" : ObjectId("4f937a56450beadc560feaa9"), "age" : 5 }
- >
t1 currently contains one record with age = 5. Now let's look at the data in the json file:
- [root@localhost data]# more t1.json
- { "_id" : { "$oid" : "4f937a56450beadc560feaa7" }, "age" : 8 }
- [root@localhost data]#
We can see that the t1.json file contains one record with age = 8. Next, we use the mongoimport tool to import the record from the json file into collection t1:
- [root@localhost bin]# ./mongoimport -d foo -c t1 /data/t1.json
- connected to: 127.0.0.1
- imported 1 objects
The tool reports that one object was imported. Let's verify it in the database:
- [root@localhost bin]# ./mongo
- MongoDB shell version: 1.8.1
- connecting to: test
- > use foo
- switched to db foo
- > db.t1.find();
- { "_id" : ObjectId("4f937a56450beadc560feaa9"), "age" : 5 }
- { "_id" : ObjectId("4f937a56450beadc560feaa7"), "age" : 8 }
- >
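mongoimport can also read the csv file we exported earlier: --type csv selects the format and --headerline (both appear in the help output above) turns the first line into the field names. A sketch, assuming the t2.csv produced in the export section:
- [root@localhost bin]# ./mongoimport -d foo -c t2 --type csv --headerline --file /data/t2.csv
If the collection already holds data, --drop can be added to replace it, but keep in mind that this removes the existing documents first.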