Recently I wrote a program that imports Excel data into a database. Because of the large volume of data, I used JDBC batch insert: call PreparedStatement.addBatch() for each row, and every 10,000 rows execute the insert with PreparedStatement.executeBatch(). I expected this to be fast, but inserting 65,536 rows took more than 30 minutes, which was completely unexpected. I asked my colleagues how they handled bulk data imports and found that they also used JDBC batch insert, but with one difference: they first called con.setAutoCommit(false), then ran PreparedStatement.executeBatch(), and finally called con.commit(). I tried the same thing, and the result seemed like a miracle: an import that had taken half an hour finished in about 15 seconds after adding those two calls. So I went looking for the reason and found the following explanation online:
* When importing data into InnoDB, make sure that MySQL does not have autocommit mode enabled, because autocommit
requires a log flush to disk for every insert. To disable autocommit during your import operation, surround it with
SET autocommit and COMMIT statements:
SET autocommit=0;
... SQL Import Statements ...
COMMIT;
My first attempt was slow because setAutoCommit(false) was never called, so every INSERT statement forced a log flush to disk. Even though the statements were batched, the effect was the same as executing single inserts one at a time, which made the insertion very slow.
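The difference can be illustrated without a database by counting how many disk-flushing commits each approach performs. The sketch below is a back-of-the-envelope model, not real JDBC code; it assumes one log flush per committed transaction, as the MySQL documentation quoted above describes.

```java
// Back-of-the-envelope model: count commit/flush operations for each approach.
// The one-flush-per-commit assumption comes from the MySQL documentation
// quoted above, not from measuring a real server.
public class CommitCount {
    // With autocommit enabled, every single INSERT is its own transaction,
    // so each row forces its own log flush.
    static int flushesWithAutocommit(int rows) {
        return rows;
    }

    // With autocommit disabled, there is one commit per executed batch,
    // plus one final commit for any leftover rows.
    static int flushesWithBatching(int rows, int batchSize) {
        int fullBatches = rows / batchSize;
        int leftover = rows % batchSize;
        return fullBatches + (leftover > 0 ? 1 : 0);
    }

    public static void main(String[] args) {
        System.out.println(flushesWithAutocommit(65536));      // 65536 flushes
        System.out.println(flushesWithBatching(65536, 10000)); // 7 flushes
    }
}
```

Going from 65,536 flushes to 7 is consistent with the order-of-magnitude speedup observed above (half an hour down to about 15 seconds).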
Some of the code is as follows:
String sql = "INSERT INTO table ***";          // actual column/value list elided in the original
con.setAutoCommit(false);
PreparedStatement ps = con.prepareStatement(sql);
for (int i = 1; i <= 65536; i++) {
    // ... set the statement parameters for row i here ...
    ps.addBatch();
    if (i % 10000 == 0) {                      // execute the batch every 10,000 rows
        ps.executeBatch();
        con.commit();
    }
}
// finally, flush the remaining batch of fewer than 10,000 rows
ps.executeBatch();
con.commit();
In short, JDBC batch insert combined with manual transaction commits enables fast insertion of large volumes of data.
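One further MySQL-specific note, which is an addition of mine rather than part of the original experiment: even with batching, MySQL Connector/J by default still sends one INSERT statement per row over the wire. The driver's rewriteBatchedStatements option lets it rewrite a batch into multi-row INSERT statements, which often speeds up bulk inserts further. A hypothetical connection URL (host and database name are placeholders):

```
jdbc:mysql://localhost:3306/mydb?rewriteBatchedStatements=true
```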