http://blog.csdn.net/adparking/article/details/7098221
LOAD DATA [LOW_PRIORITY] [LOCAL] INFILE 'file_name.txt'
    [REPLACE | IGNORE]
    INTO TABLE tbl_name
    [FIELDS
        [TERMINATED BY '\t']
        [[OPTIONALLY] ENCLOSED BY '']
        [ESCAPED BY '\\']
    ]
    [LINES TERMINATED BY '\n']
    [IGNORE number LINES]
    [(col_name,...)]
The LOAD DATA INFILE statement reads rows from a text file into a table at very high speed. If the LOCAL keyword is specified, the file is read from the client host. If LOCAL is not specified, the file must be located on the server. (LOCAL is available in MySQL 3.22.6 and later.)
For security reasons, when reading text files located on the server, the files must either reside in the database directory or be readable by all. Also, to use LOAD DATA INFILE on server files, you must have the FILE privilege on the server host. See section 6.5, Privileges provided by MySQL.
If you specify the keyword LOW_PRIORITY, execution of the LOAD DATA statement is delayed until no other clients are reading the table.
Using LOCAL is somewhat slower than letting the server access the files directly, because the contents of the file must be transferred from the client host to the server host. On the other hand, you do not need the FILE privilege to load local files.
You can also load data files by using the mysqlimport utility; it operates by sending a LOAD DATA INFILE command to the server. The --local option causes mysqlimport to read data files from the client host. You can specify the --compress option to get better performance over slow networks if both the client and server support the compressed protocol.
When searching for files on the server host, the server uses the following rules:
If an absolute path name is provided, the server uses this path name.
If a relative path name with one or more leading components is given, the server looks for the file relative to the server's data directory.
If a file name with no leading components is given, the server looks for the file in the database directory of the current database.
Note that these rules mean that a file given as './myfile.txt' is read from the server's data directory, whereas a file given as 'myfile.txt' is read from the database directory of the current database. For example, the following statements read the file data.txt from the database directory of db1, not db2:
mysql> USE db1;
mysql> LOAD DATA INFILE "./data.txt" INTO TABLE db2.my_table;
The REPLACE and IGNORE keywords control handling of input rows that duplicate existing records on unique key values. If you specify REPLACE, new rows replace existing rows that have the same unique key value. If you specify IGNORE, input rows that duplicate an existing row on a unique key value are skipped. If you don't specify either option, an error occurs when a duplicate key value is found, and the rest of the text file is ignored.
If you load data from a local file using the LOCAL keyword, the server has no way to stop transmission of the file in the middle of the operation, so the default behavior is the same as if IGNORE were specified.
LOAD DATA INFILE is the complement of SELECT ... INTO OUTFILE (see the section on SELECT syntax). To write data from a database into a file, use SELECT ... INTO OUTFILE. To read the file back into the database, use LOAD DATA INFILE. The syntax of the FIELDS and LINES clauses is the same for both commands. Both clauses are optional, but FIELDS must precede LINES if both are specified.
If you specify a FIELDS clause, each of its subclauses (TERMINATED BY, [OPTIONALLY] ENCLOSED BY, and ESCAPED BY) is also optional, except that you must specify at least one of them.
If you do not specify a FIELDS clause, the defaults are the same as if you had written this:
FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\'
If you do not specify a LINES clause, the default is the same as if you had written this:
LINES TERMINATED BY '\n'
In other words, the defaults cause LOAD DATA INFILE to act as follows when reading input:
Look for line boundaries at newlines.
Break lines into fields at tabs.
Do not expect fields to be enclosed within any quoting characters.
Interpret occurrences of tab, newline, or '\' preceded by '\' as literal characters that are part of field values.
Conversely, the defaults cause SELECT ... INTO OUTFILE to act as follows when writing output:
Write tabs between fields.
Do not enclose fields within any quoting characters.
Use '\' to escape instances of tab, newline, or '\' that occur within field values.
Write newlines at the ends of lines.
Note: To write FIELDS ESCAPED BY '\\', you must specify two backslashes for the value to be read as a single backslash.
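The default output escaping just described can be sketched in Python (this is an illustration, not MySQL code; the function name escape_field is made up):

```python
# Sketch of SELECT ... INTO OUTFILE's default escaping: with
# FIELDS ESCAPED BY '\\', the backslash itself, the field
# terminator (tab), the line terminator (newline), and ASCII 0
# are prefixed with '\' on output (ASCII 0 is written as the
# character '0', not a zero-valued byte).
def escape_field(value: str) -> str:
    out = []
    for ch in value:
        if ch == "\\":
            out.append("\\\\")     # escape the escape character itself
        elif ch == "\t":
            out.append("\\\t")     # backslash + literal tab
        elif ch == "\n":
            out.append("\\\n")     # backslash + literal newline
        elif ch == "\0":
            out.append("\\0")      # backslash + the character '0'
        else:
            out.append(ch)
    return "".join(out)
```

Reading input simply reverses this: a '\' followed by any character yields that character literally, with the special cases \0 and \N described later.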
The IGNORE number LINES option can be used to ignore a header of column names at the start of the file:
mysql> LOAD DATA INFILE "/tmp/file_name" INTO TABLE test IGNORE 1 LINES;
When you use SELECT ... INTO OUTFILE in tandem with LOAD DATA INFILE to write data from a database into a file and then read the file back into the database later, the field and line handling options of the two commands must match. Otherwise, LOAD DATA INFILE will not interpret the contents of the file properly. Suppose you use SELECT ... INTO OUTFILE to write a file with fields delimited by commas:

mysql> SELECT * INTO OUTFILE 'data.txt'
         FIELDS TERMINATED BY ','
         FROM table1;
To read the comma-delimited file back in, the correct statement would be:
mysql> LOAD DATA INFILE 'data.txt' INTO TABLE table2
         FIELDS TERMINATED BY ',';
If instead you tried to read the file with the statement shown below, it wouldn't work, because it instructs LOAD DATA INFILE to look for tabs between fields:
mysql> LOAD DATA INFILE 'data.txt' INTO TABLE table2
         FIELDS TERMINATED BY '\t';
The likely result is that each input line would be interpreted as a single field.
LOAD DATA INFILE can be used to read files obtained from external sources, too. For example, a file in dBASE format will have fields separated by commas and enclosed within double quotes. If lines in the file are terminated by newlines, the command shown below illustrates the field and line handling options you would use to load the file:
mysql> LOAD DATA INFILE 'data.txt' INTO TABLE tbl_name
         FIELDS TERMINATED BY ',' ENCLOSED BY '"'
         LINES TERMINATED BY '\n';
Any of the field or line handling options may specify an empty string (''). If not empty, the FIELDS [OPTIONALLY] ENCLOSED BY and FIELDS ESCAPED BY values must be a single character. The FIELDS TERMINATED BY and LINES TERMINATED BY values may be more than one character. For example, to write lines terminated by carriage return/linefeed pairs, or to read a file containing such lines, specify a LINES TERMINATED BY '\r\n' clause.
FIELDS [OPTIONALLY] ENCLOSED BY controls quoting of fields. For output (SELECT ... INTO OUTFILE), if you omit the word OPTIONALLY, all fields are enclosed by the ENCLOSED BY character. An example of such output (using a comma as the field delimiter) is shown below:
"1","a string","100.20"
"2","a string containing a , comma","102.20"
"3","a string containing a \" quote","102.20"
"4","a string containing a \", quote and comma","102.20"
If you specify OPTIONALLY, the ENCLOSED BY character is used only to enclose CHAR and VARCHAR fields:
1,"a string",100.20
2,"a string containing a , comma",102.20
3,"a string containing a \" quote",102.20
4,"a string containing a \", quote and comma",102.20
Note that occurrences of the ENCLOSED BY character within a field value are escaped by prefixing them with the ESCAPED BY character. Also note that if you specify an empty ESCAPED BY value, it is possible to produce output that cannot be read properly by LOAD DATA INFILE. For example, the output just shown would appear as follows if the escape character is empty. Observe that the second field in the fourth line contains a comma following the quote, which (erroneously) appears to terminate the field:
1,"a string",100.20
2,"a string containing a , comma",102.20
3,"a string containing a " quote",102.20
4,"a string containing a ", quote and comma",102.20
For input, the ENCLOSED BY character, if present, is stripped from the ends of field values. (This is true whether or not OPTIONALLY is specified; OPTIONALLY has no effect on input interpretation.) Occurrences of the ENCLOSED BY character preceded by the ESCAPED BY character are interpreted as part of the current field value. In addition, doubled ENCLOSED BY characters occurring within a field are interpreted as single ENCLOSED BY characters if the field itself begins with that character. For example, if ENCLOSED BY '"' is specified, quotes are handled as shown below:
"The ""big"" boss" -> The "big" boss
The "big" boss     -> The "big" boss
The ""big"" boss   -> The ""big"" boss
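The enclosure-stripping rules above can be sketched in Python (illustrative only; the helper name strip_enclosure is made up, and backslash-escape handling is omitted for brevity):

```python
# Sketch of LOAD DATA INFILE's handling of ENCLOSED BY '"' on input:
# enclosing quotes are stripped from the ends of the field, and a
# doubled quote collapses to a single quote only when the field
# itself begins with the quote character.
def strip_enclosure(field: str, quote: str = '"') -> str:
    if len(field) >= 2 and field.startswith(quote) and field.endswith(quote):
        inner = field[1:-1]
        return inner.replace(quote + quote, quote)  # "" -> "
    return field  # not enclosed: quote characters are taken literally
```

Applying this to the three sample fields reproduces the results shown above.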
FIELDS ESCAPED BY controls how to write or read special characters. If the FIELDS ESCAPED BY character is not empty, it is used to prefix the following characters on output:
The FIELDS ESCAPED BY character
The FIELDS [OPTIONALLY] ENCLOSED BY character
The first character of the FIELDS TERMINATED BY and LINES TERMINATED BY values
ASCII 0 (what is actually written following the escape character is ASCII '0', not a zero-valued byte)
If the FIELDS ESCAPED BY character is empty, no characters are escaped. It is probably not a good idea to specify an empty escape character, particularly if field values in your data contain any of the characters in the list just given.
For input, if the FIELDS ESCAPED BY character is not empty, occurrences of that character are stripped and the following character is taken literally as part of the field value. The exceptions are an escaped '0' or 'N' (i.e., \0 or \N if the escape character is '\'). These sequences are interpreted as ASCII 0 (a zero-valued byte) and NULL. See the rules on NULL handling below.
(For more information about the '\' escape syntax, see the MySQL manual.) In certain cases, the field and line handling options interact:
If LINES TERMINATED BY is an empty string and FIELDS TERMINATED BY is not empty, lines are also terminated by the FIELDS TERMINATED BY value.
If the FIELDS TERMINATED BY and FIELDS ENCLOSED BY values are both empty (''), a fixed-row (non-delimited) format is used. With fixed-row format, no delimiters are used between fields. Instead, column values are written and read using the "display" widths of the columns. For example, if a column is declared as INT(7), values for the column are written using 7-character fields. On input, values for the column are obtained by reading 7 characters. Fixed-row format also affects handling of NULL values; see below. Note that fixed-size format will not work if you are using a multi-byte character set.
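Fixed-row input can be sketched as slicing each line by the columns' display widths (a hypothetical illustration, not MySQL code; the widths shown are examples):

```python
# Sketch of fixed-row (non-delimited) input: no delimiters between
# fields; each column value occupies a fixed slice of the line,
# sized by the column's display width.
def read_fixed_row(line: str, widths: list[int]) -> list[str]:
    fields, pos = [], 0
    for w in widths:
        fields.append(line[pos:pos + w].strip())
        pos += w
    return fields

# e.g. an INT(7) column followed by a CHAR(10) column:
# read_fixed_row("     42hello     ", [7, 10]) -> ['42', 'hello']
```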
Handling of NULL values varies, depending on the FIELDS and LINES options you use:
With the default FIELDS and LINES values, NULL is written as \N for output, and \N is read as NULL for input (assuming the ESCAPED BY character is '\').
If FIELDS ENCLOSED BY is not empty, a field containing the literal word NULL as its value is read as a NULL value (this differs from the word NULL enclosed within FIELDS ENCLOSED BY characters, which is read as the string 'NULL').
If FIELDS ESCAPED BY is empty, NULL is written as the word NULL.
With fixed-row format (which is used when FIELDS TERMINATED BY and FIELDS ENCLOSED BY are both empty), NULL is written as an empty string. Note that this causes both NULL values and empty strings in the table to be indistinguishable when written to the file, because they are both written as empty strings. If you need to be able to tell them apart when reading the file back in, you should not use fixed-row format.
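Under the default options, the NULL conventions above can be sketched as a pair of Python helpers (Python's None stands in for SQL NULL; the function names are made up):

```python
# With the default FIELDS/LINES options, NULL is written as \N on
# output, and \N is read back as NULL on input; the plain string
# 'NULL' stays a string.
def write_value(v):
    return "\\N" if v is None else str(v)

def read_value(s):
    return None if s == "\\N" else s
```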
Some cases are not supported by LOAD DATA INFILE:
Fixed-size rows (FIELDS TERMINATED BY and FIELDS ENCLOSED BY both empty) and BLOB or TEXT columns.
If you specify one separator that is the same as, or a prefix of, another, LOAD DATA INFILE won't be able to interpret the input properly. For example, the following FIELDS clause would cause problems:
FIELDS TERMINATED BY '"' ENCLOSED BY '"'
If FIELDS ESCAPED BY is empty, a field value that contains an occurrence of FIELDS ENCLOSED BY or of LINES TERMINATED BY followed by the FIELDS TERMINATED BY value causes LOAD DATA INFILE to stop reading a field or line too early. This happens because LOAD DATA INFILE cannot properly determine where the field or line value ends.
The following example loads all columns of the persondata table:
mysql> LOAD DATA INFILE 'persondata.txt' INTO TABLE persondata;
No field list is specified, so LOAD DATA INFILE expects input rows to contain a field for each table column. The default FIELDS and LINES values are used.
If you wish to load only some of a table's columns, specify a field list:
mysql> LOAD DATA INFILE 'persondata.txt'
         INTO TABLE persondata (col1, col2, ...);
You must also specify a field list if the order of the fields in the input file differs from the order of the columns in the table. Otherwise, MySQL cannot tell how to match up input fields with table columns.
If an input row has too few fields, the columns for which no input field is present are set to their default values.
An empty field value is interpreted differently than a missing field:
For string types, the column is set to the empty string.
For numeric types, the column is set to 0.
For date and time types, the column is set to the appropriate "zero" value for the type.
TIMESTAMP columns are set to the current date and time only if there is a NULL value for the column, or (for the first TIMESTAMP column only) if the TIMESTAMP column is left out of the field list when a field list is specified.
If an input row has too many fields, the extra fields are ignored and the number of warnings is incremented.
LOAD DATA INFILE regards all input as strings, so you cannot use numeric values for ENUM or SET columns the way you can with INSERT statements. All ENUM and SET values must be specified as strings!
If you are using the C API, you can get information about the query by calling the API function mysql_info() when the LOAD DATA INFILE query finishes. The format of the information string is shown below:
Records: 1 Deleted: 0 Skipped: 0 Warnings: 0
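Because the information string has this fixed shape, client code can extract the counters from it. Here is one way in Python (the parse_info helper is made up for illustration; it is not part of the MySQL client API):

```python
import re

# Pick the counters out of a mysql_info()-style string such as
# "Records: 1 Deleted: 0 Skipped: 0 Warnings: 0".
def parse_info(info: str) -> dict:
    return {k.lower(): int(v) for k, v in re.findall(r"(\w+): (\d+)", info)}
```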
Warnings occur under the same circumstances as when values are inserted via the INSERT statement, except that LOAD DATA INFILE also generates warnings when there are too few or too many fields in the input row. The warnings are not stored anywhere; the number of warnings can be used only as an indication of whether everything went well. If you get warnings and want to know exactly why, one way is to use SELECT ... INTO OUTFILE into another file and compare it to your original input file.