Recently I needed to process a batch of data from fields in a database table and export the results to a new table. The table holds nearly 5 million rows, and with that volume the processing takes a very long time.
First thought: insert the rows one statement at a time. With this much data that takes far too long, so I ruled it out.
Second thought: multi-threaded INSERTs. But access to the database connection would need to be synchronized, so that did not seem very useful either.
In the end I used PreparedStatement to precompile the SQL and insert with batch processing.
OK, let's run a batch-insert test.
1. Using a simple batch
```java
public static void main(String[] args) {
    Connection conn = getConn(lsqlurl, luser, lpassword);
    long startTime = System.currentTimeMillis();
    try {
        PreparedStatement pst = conn.prepareStatement(
                "INSERT INTO testmy (id, name, age) VALUES (?, ?, ?)");
        for (int i = 0; i < 2000; i++) {
            pst.setInt(1, 3);
            pst.setString(2, "xx");
            pst.setInt(3, 10);
            pst.addBatch();
        }
        pst.executeBatch();
        long endTime = System.currentTimeMillis();
        System.out.println((endTime - startTime) / 1000 + "s");
        System.out.println("Test SQL Batch--->2000 ...");
    } catch (SQLException e) {
        e.printStackTrace();
    } finally {
        if (conn != null) {
            try {
                conn.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
}
```
You will find the run takes around 30 seconds:
2,000 rows insert in about 30 s.
20,000 rows take about 940 s (roughly 16 min).
2. Turning off auto-commit for the batch
```java
public static void main(String[] args) {
    Connection conn = getConn(lsqlurl, luser, lpassword);
    long startTime = System.nanoTime();
    try {
        conn.setAutoCommit(false);
        PreparedStatement pst = conn.prepareStatement(
                "INSERT INTO test (id, name, age) VALUES (?, ?, ?)");
        for (int i = 0; i < 2000; i++) {
            pst.setInt(1, 3);
            pst.setString(2, "xx");
            pst.setInt(3, 10);
            pst.addBatch();
        }
        pst.executeBatch();
        conn.commit();
        long endTime = System.nanoTime();
        System.out.println((endTime - startTime) / 1000000 + "ms");
        System.out.println("Test SQL Batch--->2000 ...");
    } catch (SQLException e) {
        try {
            conn.rollback();
        } catch (SQLException e1) {
            e1.printStackTrace();
        }
        e.printStackTrace();
    } finally {
        if (conn != null) {
            try {
                conn.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
}
```
Now 2,000 rows insert in about 260 ms, and 20,000 rows in about 1.4 s.
The difference is obvious.
The reason: with auto-commit enabled, once the batched data reaches a certain volume the database commits it automatically, instead of waiting for your executeBatch() call. So we need to switch from automatic commit to a manual commit.
One more point: in a real transaction, auto-commit rolls back for you when something goes wrong; with manual commit you must roll back yourself. So add conn.rollback() in the catch block.
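A common refinement of the pattern above (not shown in the article's code, so treat it as a sketch) is to flush the batch every N rows instead of buffering all of them, which keeps the driver's buffer bounded when inserting millions of rows. The table name `testmy`, the column values, and the batch size of 500 are assumptions for illustration; `flushCount` is a hypothetical helper that just computes how many `executeBatch()` calls the loop will make.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ChunkedBatchInsert {

    // Hypothetical helper: how many executeBatch() calls a chunked
    // loop makes for totalRows rows flushed every batchSize rows
    // (ceiling division).
    static int flushCount(int totalRows, int batchSize) {
        return (totalRows + batchSize - 1) / batchSize;
    }

    // Sketch: same manual-commit batch as in the article, but the
    // batch is flushed every batchSize rows; one commit at the end,
    // one rollback on failure.
    static void bulkInsert(Connection conn, int totalRows, int batchSize)
            throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement pst = conn.prepareStatement(
                "INSERT INTO testmy (id, name, age) VALUES (?, ?, ?)")) {
            for (int i = 0; i < totalRows; i++) {
                pst.setInt(1, i);
                pst.setString(2, "xx");
                pst.setInt(3, 10);
                pst.addBatch();
                if ((i + 1) % batchSize == 0) {
                    pst.executeBatch(); // flush a full chunk
                }
            }
            pst.executeBatch();         // flush any remainder
            conn.commit();
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }

    public static void main(String[] args) {
        // 2000 rows flushed every 500 rows -> 4 executeBatch() calls
        System.out.println(flushCount(2000, 500));
    }
}
```

The chunk size is a tuning knob: larger chunks mean fewer round trips, smaller chunks mean less memory held in the driver.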
All right. We can now insert large amounts of data quickly using a batch with manual commit.
Copyright notice: this is an original article by the blogger; do not reproduce without permission.