Today I downloaded a CSV dataset from Kaggle and wanted to load it into MySQL for some statistical analysis. Unfortunately the CSV file is fairly large, so I used the MySQL LOAD DATA command to import it. Noting the commands down here for future reference:
1. Import:
Basic syntax: load data [low_priority] [local] infile 'file_name.txt' [replace | ignore] into table tbl_name [character set gbk] [fields [terminated by '\t'] [[optionally] enclosed by ''] [escaped by '\\']] [lines terminated by '\n'] [ignore number lines] [(col_name, ...)]
Example import command:
load data infile 'CSV_file_path\\test.csv' replace into table table_name fields terminated by ',' optionally enclosed by '"' lines terminated by '\n' ignore 1 lines (Id,@dummy,DayOfWeek,PdDistrict,Address,X,Y);
Don't underestimate this simple example; it packs in a lot:
The first line specifies the file to import.
In the second line you'll find two keywords: replace and ignore. They control how input rows that duplicate an existing unique key value are handled. If you specify replace, the new row overwrites the existing row with the same unique key value. If you specify ignore, input rows that duplicate an existing row's unique key are skipped. If you specify neither, an error occurs when a duplicate key is found and the remainder of the text file is ignored.
So it's worth thinking about this when designing your table: give it a primary key, ideally on a field whose values won't repeat.
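To make the difference concrete, here is a minimal sketch (the crime table and data.csv file name are hypothetical, not from the Kaggle example):

```sql
-- Hypothetical table with a unique key on Id
CREATE TABLE crime (
    Id INT PRIMARY KEY,
    Category VARCHAR(64)
);

-- replace: an input row whose Id already exists overwrites the old row
LOAD DATA INFILE 'data.csv'
REPLACE INTO TABLE crime
FIELDS TERMINATED BY ',';

-- ignore: an input row whose Id already exists is silently skipped
LOAD DATA INFILE 'data.csv'
IGNORE INTO TABLE crime
FIELDS TERMINATED BY ',';
```

With neither keyword, the first duplicate Id aborts the load with a duplicate-key error.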
Lines three and four are straightforward: they describe how fields are separated, here by commas.
terminated by specifies the field delimiter; the default is the tab character (\t).
enclosed by specifies the character that wraps a field; when a field is wrapped in this character, the wrapper is not treated as part of the field's value (with optionally, only some fields need to be quoted).
The other clause is escaped by, which specifies the escape character; the default is a backslash (\).
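For instance, a CSV line such as 1,"Main St, Apt 2",CA needs the enclosing character declared so the embedded comma survives; a sketch of the matching clauses (file and table names are hypothetical):

```sql
LOAD DATA INFILE 'sample.csv'
INTO TABLE sample_table
FIELDS
    TERMINATED BY ','           -- commas separate fields
    OPTIONALLY ENCLOSED BY '"'  -- quoted fields may contain commas
    ESCAPED BY '\\';            -- backslash escapes special characters (the default)
```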
Line five, lines terminated by, specifies how each line is cut. One thing to watch out for here: if the CSV file was generated under Windows, cut with '\r\n'; under Linux, use '\n'.
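So for a file exported from Windows the statement would end like this (file and table names are hypothetical):

```sql
LOAD DATA INFILE 'windows_export.csv'
INTO TABLE sample_table
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\r\n';  -- CRLF line endings produced on Windows
```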
Ignore 1 lines in line six skips the first line, because the first row is usually the header of field names. In the column list in parentheses there is a very special name, @dummy: if the CSV file has a column you don't want to insert, replace the corresponding field name with @dummy.
PS: if you also want to record the import time, just add set update_time=current_timestamp at the end.
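Putting the header skip, the @dummy column, and the timestamp together, a sketch (table and column names are hypothetical; update_time is assumed to exist in the table):

```sql
LOAD DATA INFILE 'data.csv'
INTO TABLE crime
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES                        -- skip the header row
(Id, @dummy, DayOfWeek)               -- the second CSV column is discarded
SET update_time = CURRENT_TIMESTAMP;  -- record the import time
```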
2. Export:
For completeness, the corresponding export command:
select * from table_name into outfile 'export_path\\test.csv' fields terminated by ',' optionally enclosed by '"' escaped by '"' lines terminated by '\n';
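Note that select ... into outfile does not write a header row; a common workaround is to union the column names in as a literal first row (a sketch; table, column, and path names are hypothetical):

```sql
SELECT 'Id', 'Category'
UNION ALL
SELECT Id, Category FROM crime
INTO OUTFILE 'export_path/with_header.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
```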
Hope this helps; feedback and discussion are welcome!
CSV Bulk Import mysql command