Fast batch inserts into a MySQL database
Recently, while processing a batch of data, I needed to transform the fields of a database table and export the results into a new table. The table holds nearly a million records, and at that volume the processing takes a very long time.
My first attempt was to insert the rows one statement at a time, but with this much data that is far too slow, so I dropped the idea.
Next I considered multi-threaded inserts, but access to the database connection would have to be synchronized, so that does not help much either.
Finally, I settled on PreparedStatement: pre-compile the SQL statement once and insert the rows using batch processing.
Now let's test the batch insert operation.
1. Simple batch
```java
public static void main(String[] args) {
    Connection conn = getConn(lsqlurl, luser, lpassword);
    long startTime = System.currentTimeMillis();
    try {
        PreparedStatement pst = conn.prepareStatement(
                "insert into testmy (id,name,age) values (?,?,?)");
        for (int i = 0; i < 2000; i++) {
            pst.setInt(1, 3);
            pst.setString(2, "xx");
            pst.setInt(3, 10);
            pst.addBatch();
        }
        pst.executeBatch();
        long endTime = System.currentTimeMillis();
        System.out.println((endTime - startTime) / 1000 + "s");
        System.out.println("test sql batch--->2000.....");
    } catch (SQLException e) {
        e.printStackTrace();
    } finally {
        if (conn != null) {
            try {
                conn.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
}
```
You will find that this takes about 30 s.
That is 2,000 rows inserted in 30 seconds.
Inserting the full data set of roughly 940,000 rows took about 16 minutes.
2. Batch with auto-commit turned off
```java
public static void main(String[] args) {
    Connection conn = getConn(lsqlurl, luser, lpassword);
    long startTime = System.nanoTime();
    try {
        conn.setAutoCommit(false);
        PreparedStatement pst = conn.prepareStatement(
                "insert into test (id,name,age) values (?,?,?)");
        for (int i = 0; i < 2000; i++) {
            pst.setInt(1, 3);
            pst.setString(2, "xx");
            pst.setInt(3, 10);
            pst.addBatch();
        }
        pst.executeBatch();
        conn.commit();
        long endTime = System.nanoTime();
        System.out.println((endTime - startTime) / 1000000 + "ms");
        System.out.println("test sql batch--->2000.....");
    } catch (SQLException e) {
        try {
            conn.rollback();
        } catch (SQLException e1) {
            e1.printStackTrace();
        }
        e.printStackTrace();
    } finally {
        if (conn != null) {
            try {
                conn.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
}
```
Inserting 2,000 rows now takes only about 260 ms.
At that rate, the full data set of roughly 940,000 rows goes in in about two minutes.
The improvement is obvious.
The reason: when auto-commit is on, the database commits automatically once the batched data reaches a certain volume, rather than waiting for executeBatch(), and all those commits cost time. So automatic commits must be switched to manual ones.
There is one more catch: with auto-commit, the database rolls the transaction back for you when an error occurs; once you commit manually, you have to perform the rollback yourself.
That is why a conn.rollback() call is added to the catch block.
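For data sets in the hundreds of thousands of rows, it can also help to flush and commit the batch in fixed-size chunks rather than accumulating every row in a single batch. The sketch below is an addition to the original post, not the blogger's code: the table name and columns are reused from the examples above, while BATCH_SIZE, the row values, and the insertAll/flushCount helpers are assumptions for illustration.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ChunkedBatchInsert {
    static final int BATCH_SIZE = 1000; // assumed chunk size; tune for your setup

    // Pure helper: how many executeBatch() flushes a run of `rows` inserts needs.
    static int flushCount(int rows, int batchSize) {
        return (rows + batchSize - 1) / batchSize; // ceiling division
    }

    // Sketch: insert `rows` rows, flushing and committing every BATCH_SIZE rows.
    static void insertAll(Connection conn, int rows) throws SQLException {
        conn.setAutoCommit(false); // manual commit, as in the second example above
        try (PreparedStatement pst = conn.prepareStatement(
                "insert into testmy (id,name,age) values (?,?,?)")) {
            for (int i = 0; i < rows; i++) {
                pst.setInt(1, i);
                pst.setString(2, "xx");
                pst.setInt(3, 10);
                pst.addBatch();
                if ((i + 1) % BATCH_SIZE == 0) {
                    pst.executeBatch(); // flush a full chunk
                    conn.commit();
                }
            }
            pst.executeBatch(); // flush the final, possibly partial chunk
            conn.commit();
        } catch (SQLException e) {
            conn.rollback(); // manual commit means manual rollback
            throw e;
        }
    }
}
```

Chunking keeps the driver's batch buffer and the transaction log from growing without bound during a very long insert run.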
In summary, to insert large amounts of data quickly, use batch processing with auto-commit disabled and commit the transaction manually.
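One more option worth knowing, not covered in the post above: MySQL Connector/J can rewrite a batched INSERT into multi-row INSERT statements on the wire when the connection property rewriteBatchedStatements=true is set in the JDBC URL, which usually speeds batches up further. The host, port, and database name below are placeholders.

```java
// Connector/J connection option: rewrite batched inserts into
// multi-row "insert ... values (...),(...)" statements on the wire.
String url = "jdbc:mysql://localhost:3306/test?rewriteBatchedStatements=true";
```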
Copyright Disclaimer: This article is an original article by the blogger and cannot be reproduced without the permission of the blogger.