Basic Syntax:
LOAD DATA [LOW_PRIORITY] [LOCAL] INFILE 'file_name.txt' [REPLACE | IGNORE]
INTO TABLE tbl_name
[FIELDS
    [TERMINATED BY '\t']
    [[OPTIONALLY] ENCLOSED BY '']
    [ESCAPED BY '\\']
]
[LINES TERMINATED BY '\n']
[IGNORE number LINES]
[(col_name, ...)]
The LOAD DATA INFILE statement loads rows from a text file into a table at very high speed. Before using this command, the mysqld process (service) must be running. For security reasons, when reading text files located on the server, the files must either be in the database directory or be readable by everyone. In addition, to use LOAD DATA INFILE on files located on the server, you must have the FILE privilege on the server host.
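For example, a typical invocation looks like the following; the file path, table name, and column list are illustrative only and are not taken from the tests below:
LOAD DATA LOCAL INFILE '/tmp/users.txt'
INTO TABLE users
FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(id, name, email);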
Writing tens of millions or even hundreds of millions of rows into MySQL is a real headache.
LOAD DATA LOCAL INFILE seems to be the fastest method, but loading a file on the order of 100 million rows still takes tens of minutes.
With a master-slave setup, the load runs on the master for tens of minutes, the file data is written to the binary log and then shipped to the slave,
and the slave then spends another tens of minutes performing the same load. Because the slave has only a single SQL thread executing the binary log,
during those tens of minutes it can do nothing but this load and cannot apply any subsequent SQL statements, so the slave ends up tens of minutes behind the master.
On top of that, if the load has already run for 20 minutes and you then discover a problem with the data source and cancel it, MySQL will need at least another 20 minutes to roll it back...
That kind of wait is very frustrating.
It is therefore worth splitting the large file into several smaller files and loading them separately, for example as sketched below.
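A minimal sketch of that approach, assuming the standard split utility and the cfi.txt file / cfi table used in the tests below (the cfi_part_ prefix is made up for illustration):
# split the source file into 1,000,000-line pieces, then load them one by one
split -l 1000000 cfi.txt cfi_part_
for f in cfi_part_*; do
    mysql test -e "load data local infile '$f' into table cfi"
done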
The following are some test data:
[root@yw-0-0 huarong]# wc -l cfi.txt
20894227 cfi.txt
That is about 20 million rows; nine fields, all varchar(255), and no keys.
The file size is 4,078,099,848 bytes (3.8 GB), an average of about 195 bytes per line.
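The post does not show the table definition, but given nine varchar(255) fields and no keys it would look roughly like this (the column names c1 through c9 are made up; the last test uses ENGINE=MyISAM instead):
CREATE TABLE cfi (
    c1 VARCHAR(255), c2 VARCHAR(255), c3 VARCHAR(255),
    c4 VARCHAR(255), c5 VARCHAR(255), c6 VARCHAR(255),
    c7 VARCHAR(255), c8 VARCHAR(255), c9 VARCHAR(255)
) ENGINE=InnoDB;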
The first attempt simply pushed the data in with t1.sh, which loads the file directly.
The data from that run was lost...
InnoDB, t1.sh: load directly (run again):
time mysql test -e "load data local infile '/d01/huarong/cfi.txt' into table cfi"
[root@yw-0-0 huarong]# ./t1.sh
real    6m4.720s
user    0m0.492s
sys     0m2.213s
InnoDB, t2.sh: load without writing the binlog (foreign key and unique checks disabled as well):
time mysql test -e "set foreign_key_checks = 0; set sql_log_bin = 0; set unique_checks = 0; load data local infile '/d01/huarong/cfi.txt' into table cfi"
[root@yw-0-0 huarong]# ./t2.sh
real    5m3.964s
user    0m0.586s
sys     0m2.788s
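Note that these three switches only affect the current session, and setting sql_log_bin normally requires the SUPER privilege; if in doubt, the session values can be checked with something like:
mysql test -e "select @@foreign_key_checks, @@unique_checks, @@sql_log_bin"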
InnoDB, t3.sh: feed the load through a FIFO, 1 million rows at a time:
wget http://www.maatkit.org/get/mk-fifo-split
perl ./mk-fifo-split ./cfi.txt --fifo /tmp/cfi.fifo --lines 1000000 &
while [ -e /tmp/cfi.fifo ]; do
    time mysql test -e "set foreign_key_checks = 0; set sql_log_bin = 0; set unique_checks = 0; load data local infile '/tmp/cfi.fifo' into table cfi"
    sleep 1
done
real:   5m25.84s
user:   0m2.197s
sys:    0m11.244s
MyISAM, t2.sh: load without writing the binlog:
real    3m24.838s
user    0m0.626s
sys     0m2.939s
For more details, see: http://www.111cn.net/database/mysql/56628.htm