Using JDBC to Quickly Insert Data in Batches into a MySQL Database
When using JDBC to insert data into a MySQL database, especially when continuously inserting large volumes of data (100,000+ rows), how can you improve efficiency?
In the JDBC API, PreparedStatement has two methods worth noting:
void addBatch() throws SQLException
Adds a set of parameters to this PreparedStatement object's batch of commands.
int[] executeBatch() throws SQLException
Submits a batch of commands to the database for execution and if all commands execute successfully, returns an array of update counts. The int elements of the array that is returned are ordered to correspond to the commands in the batch, which are ordered according to the order in which they were added to the batch.
You can use the addBatch() and executeBatch() methods to insert data in batches. However, you must first switch the connection to manual commit mode with Connection.setAutoCommit(false), and then, after executing the batch, call Connection.commit(), as shown in the schematic sketch and the full example below.
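Schematically, the pattern looks like this (a minimal sketch: the URL, credentials, table, and column names are placeholders, not part of the original example):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BatchInsertSketch {
    // Inserts every row in one batch, with a single commit at the end.
    static void insertAll(String url, String user, String pass, List<String[]> rows)
            throws SQLException {
        Connection conn = DriverManager.getConnection(url, user, pass);
        conn.setAutoCommit(false);                       // switch to manual commit
        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO some_table (col1, col2) VALUES (?, ?)");
        for (String[] row : rows) {
            ps.setString(1, row[0]);
            ps.setString(2, row[1]);
            ps.addBatch();                               // queue one row
        }
        ps.executeBatch();                               // send the whole batch
        conn.commit();                                   // commit once at the end
        conn.close();
    }
}

The complete example: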
package cyl.demo.ipsearcher;

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class DbStoreHelper {

    private String insert_SQL;
    private String charset;
    private boolean debug;
    private String connectStr;
    private String username;
    private String password;

    public DbStoreHelper() {
        connectStr = "jdbc:mysql://localhost:3306/db_ip";
        // connectStr += "?useServerPrepStmts=false&rewriteBatchedStatements=true";
        insert_SQL = "INSERT INTO tb_ipinfos (iplong1, iplong2, ipstr1, ipstr2, ipdesc) "
                + "VALUES (?, ?, ?, ?, ?)";
        charset = "gbk";
        debug = true;
        username = "root";
        password = "***";
    }

    public void storeToDb(String srcFile) throws IOException {
        BufferedReader bfr = new BufferedReader(
                new InputStreamReader(new FileInputStream(srcFile), charset));
        try {
            doStore(bfr);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            bfr.close();
        }
    }

    private void doStore(BufferedReader bfr)
            throws ClassNotFoundException, SQLException, IOException {
        Class.forName("com.mysql.jdbc.Driver");
        Connection conn = DriverManager.getConnection(connectStr, username, password);
        conn.setAutoCommit(false); // switch to manual commit
        int count = 0;
        PreparedStatement psts = conn.prepareStatement(insert_SQL);
        String line = null;
        while (null != (line = bfr.readLine())) {
            String[] infos = line.split(";");
            if (infos.length < 5) continue;
            if (debug) {
                System.out.println(line);
            }
            psts.setLong(1, Long.valueOf(infos[0]));
            psts.setLong(2, Long.valueOf(infos[1]));
            psts.setString(3, infos[2]);
            psts.setString(4, infos[3]);
            psts.setString(5, infos[4]);
            psts.addBatch();     // add the row to the batch
            count++;
        }
        psts.executeBatch();     // execute the whole batch
        conn.commit();           // commit once at the end
        System.out.println("All down : " + count);
        conn.close();
    }
}
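For reference, a minimal sketch of how the helper might be invoked (the file path is hypothetical; each input line is expected to hold five semicolon-separated fields):

public class Main {
    public static void main(String[] args) throws java.io.IOException {
        DbStoreHelper helper = new DbStoreHelper();
        // Hypothetical input file; each line: iplong1;iplong2;ipstr1;ipstr2;ipdesc
        helper.storeToDb("ip_src.txt");
    }
}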
After the execution completes, the program prints:

All down : 103498
Convert finished. All spend time/s : 47

The full run takes 47 seconds. This is still not very efficient; it falls short of the desired result and needs further improvement. You can also add parameters to the MySQL JDBC connection string:
rewriteBatchedStatements=true: MySQL's driver disables batch statement rewriting by default; this parameter enables it, allowing the driver to rewrite the SQL statements submitted to the database. See: http://www.cnblogs.com/chenjianjx/archive/2012/08/14/2637914.html
useServerPrepStmts=false: if server-side prepared statements are not enabled (useServerPrepStmts=false), com.mysql.jdbc.PreparedStatement assembles the SQL statement locally, substituting the parameter values for the ? placeholders before the final SQL is sent to the database.
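To illustrate the effect (the rows below are made up for the sketch, not actual driver output): with rewriteBatchedStatements=true, a batch of single-row inserts such as

INSERT INTO tb_ipinfos (iplong1, iplong2, ipstr1, ipstr2, ipdesc) VALUES (16777216, 16777471, '1.0.0.0', '1.0.0.255', 'demo1');
INSERT INTO tb_ipinfos (iplong1, iplong2, ipstr1, ipstr2, ipdesc) VALUES (16777472, 16777727, '1.0.1.0', '1.0.1.255', 'demo2');

can be rewritten by the driver into a single multi-row statement, which saves round trips and per-statement parsing:

INSERT INTO tb_ipinfos (iplong1, iplong2, ipstr1, ipstr2, ipdesc) VALUES
  (16777216, 16777471, '1.0.0.0', '1.0.0.255', 'demo1'),
  (16777472, 16777727, '1.0.1.0', '1.0.1.255', 'demo2');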
With this slight improvement, append the parameters to the connection string (that is, uncomment the line in the constructor above):

connectStr += "?useServerPrepStmts=false&rewriteBatchedStatements=true";
Run the program again:
All down : 103498
Convert finished. All spend time/s : 10
With the same data volume, this run takes only 10 seconds; the processing efficiency is greatly improved.