First, JDBC bulk insert
JDBC bulk insert is mainly used for data import and for logging, because log records are usually written to a file first and then loaded into the database in batches.
I tested three common approaches with the MySQL 5.1.5 JDBC driver.
Method one: using PreparedStatement plus batch
Java code
try {
    Class.forName("com.mysql.jdbc.Driver");
    Connection conn = DriverManager.getConnection(o_url, username, password);
    conn.setAutoCommit(false);
    String sql = "INSERT INTO adlogs (ip, website, yyyymmdd, hour, object_id) VALUES (?, ?, ?, ?, ?)";
    PreparedStatement prest = conn.prepareStatement(sql,
            ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_READ_ONLY);
    for (int x = 0; x < size; x++) {
        prest.setString(1, "192.168.1.1");
        prest.setString(2, "localhost");
        prest.setString(3, "20081009");
        prest.setInt(4, 8);
        prest.setString(5, "11111111");
        prest.addBatch();          // collect this parameter set into the batch
    }
    prest.executeBatch();          // send the whole batch to the server
    conn.commit();
    conn.close();
} catch (SQLException ex) {
    Logger.getLogger(MyLogger.class.getName()).log(Level.SEVERE, null, ex);
} catch (ClassNotFoundException ex) {
    Logger.getLogger(MyLogger.class.getName()).log(Level.SEVERE, null, ex);
}
Note: when constructing the statement, the two extra parameters have the following meanings.
The first parameter specifies the type of ResultSet. The options are:
TYPE_FORWARD_ONLY: the default type. The cursor can only move forward, and the ResultSet is not affected by changes other users make to the database.
TYPE_SCROLL_INSENSITIVE: allows the cursor to move forward or backward and to jump to a specific position, for example to the fourth record in the result or back two records from the current position. It is not affected by changes other users make to the database.
TYPE_SCROLL_SENSITIVE: like TYPE_SCROLL_INSENSITIVE, it allows positioning within the records, but it is affected by changes made by other users. If a user deletes a record after the query has executed, that record disappears from the ResultSet; likewise, changes to data values are reflected in the ResultSet.
The second parameter sets the concurrency of the ResultSet, which determines whether the ResultSet can be updated. The options are:
CONCUR_READ_ONLY: the default; the ResultSet cannot be updated.
CONCUR_UPDATABLE: the ResultSet can be updated.
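To make the effect of these two parameters concrete, here is a small hedged sketch that assumes an open Connection conn and the adlogs table from the example above; it creates a scrollable, read-only ResultSet and moves the cursor around.
Java code
// A scrollable but read-only ResultSet
Statement stmt = conn.createStatement(
        ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
ResultSet rs = stmt.executeQuery("SELECT ip FROM adlogs");

// TYPE_SCROLL_INSENSITIVE allows free positioning of the cursor
rs.absolute(4);        // jump directly to the fourth row
rs.relative(-2);       // move back two rows from the current position
System.out.println(rs.getString("ip"));

// Because of CONCUR_READ_ONLY, update calls are not allowed here;
// rs.updateString("ip", "10.0.0.1") would throw an SQLException
rs.close();
stmt.close();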
Method two: using Statement plus batch
Java code
conn.setAutoCommit(false);
Statement stmt = conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_READ_ONLY);
for (int x = 0; x < size; x++) {
    stmt.addBatch("INSERT INTO adlogs (ip, website, yyyymmdd, hour, object_id) "
            + "VALUES ('192.168.1.3', 'localhost', '20081009', 8, '23123')");
}
stmt.executeBatch();
conn.commit();
Method three: using Statement directly
Java code
conn.setAutoCommit(false);
Statement stmt = conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_READ_ONLY);
for (int x = 0; x < size; x++) {
    stmt.execute("INSERT INTO adlogs (ip, website, yyyymmdd, hour, object_id) "
            + "VALUES ('192.168.1.3', 'localhost', '20081009', 8, '23123')");
}
conn.commit();
The average time to insert 100,000 rows with each of the above methods was:
Method one: 844 s; Method two: 421 s; Method three: 16.359 s
As you can see, JDBC's batch insert gave no performance improvement here; it was actually slower than not using a batch at all. Of course, this may be related to the implementation of the particular JDBC driver. The attachment contains my test code, which you can run on your own machine.
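The attachment itself is not reproduced here, so the following is only a minimal sketch of what such a timing test might look like; the connection URL, user, password and the existing adlogs table are assumptions for this sketch. It times method three (plain execute() calls with auto-commit disabled).
Java code
import java.sql.*;

public class InsertTimingTest {
    // Assumed connection settings; adjust for your own environment
    private static final String URL = "jdbc:mysql://localhost:3306/test";
    private static final String USER = "root";
    private static final String PASSWORD = "password";
    private static final int SIZE = 100_000;

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(URL, USER, PASSWORD);
             Statement stmt = conn.createStatement()) {
            conn.setAutoCommit(false);
            long start = System.currentTimeMillis();
            // "Method three": plain execute() calls, committed once at the end
            for (int x = 0; x < SIZE; x++) {
                stmt.execute("INSERT INTO adlogs (ip, website, yyyymmdd, hour, object_id) "
                        + "VALUES ('192.168.1.3', 'localhost', '20081009', 8, '23123')");
            }
            conn.commit();
            long elapsed = System.currentTimeMillis() - start;
            System.out.println("Inserted " + SIZE + " rows in " + (elapsed / 1000.0) + " s");
        }
    }
}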
The main thing to do when performing bulk inserts is to turn off auto-commit; once that is done, it matters much less whether or not JDBC's batch syntax is used.
Java code
conn.setAutoCommit(false);
Second, JDBC batch update
In an old project I worked with raw JDBC and used its batch operations, and a while ago, while reading Li Gang's book Crazy Java Handout, I also came across his introduction to the topic. Since others are bound to need JDBC for this kind of work as well, I might as well put it all together into the text below.

When executing JDBC DML statements (INSERT, DELETE, UPDATE), we often need to perform DML operations on several tables at the same time. For example, when inserting into table A we may also need to insert related records into table B and update associated records in table C. That involves three DML SQL statements; without the batch update facility we would have to execute each one separately, which is less efficient. With a batch update, multiple SQL statements are collected together and submitted to the database as a single batch. Note that batch updates must be supported by the underlying database; you can check whether they are with the supportsBatchUpdates() method of DatabaseMetaData:
// conn is a Connection object
DatabaseMetaData dbmd = conn.getMetaData();
// If a is true, the underlying database supports batch updates
boolean a = dbmd.supportsBatchUpdates();
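As a hedged illustration of how that check might be used (the SQL strings, the table names, and the open Connection conn here are assumptions, not part of the original), a program could fall back to executing the statements one at a time when the driver reports no batch support.
Java code
String[] sqls = {
        "INSERT INTO a_table (col) VALUES ('x')",   // hypothetical statements
        "INSERT INTO b_table (col) VALUES ('y')",
        "UPDATE c_table SET col = 'z' WHERE id = 1"
};

DatabaseMetaData dbmd = conn.getMetaData();
try (Statement stmt = conn.createStatement()) {
    if (dbmd.supportsBatchUpdates()) {
        // Collect all statements and send them in one round trip
        for (String sql : sqls) {
            stmt.addBatch(sql);
        }
        stmt.executeBatch();
    } else {
        // Fallback: execute the statements one by one
        for (String sql : sqls) {
            stmt.executeUpdate(sql);
        }
    }
}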
To make sure errors in a batch update are handled correctly, the whole batch must be treated as a single transaction: if the batch fails partway through execution, the transaction is rolled back to the state before the operation started. To achieve this, the program should turn off auto-commit before starting the batch operation, then collect the update statements, commit the transaction after the batch completes, and finally restore the previous auto-commit mode, as shown in the following code snippet:
try {
    // Save the current auto-commit mode
    boolean autoCommit = conn.getAutoCommit();
    // Turn off auto-commit
    conn.setAutoCommit(false);
    // Collect several SQL statements with the same Statement object
    stmt.addBatch(INSERT_SQL1);
    stmt.addBatch(INSERT_SQL2);
    stmt.addBatch(UPDATE_SQL3);
    // ...
    // Execute all collected SQL statements at once
    stmt.executeBatch();
    // Commit the changes
    conn.commit();
    // Restore the previous auto-commit mode
    conn.setAutoCommit(autoCommit);
} catch (Exception e) {
    e.printStackTrace();
    // Roll back to the state before the batch started
    conn.rollback();
}
Note: if a SELECT query is added to the batch with the addBatch() method, the program will fail with an exception.
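A hedged illustration of that point follows (the customers table, the SQL text, and the open Connection conn are assumptions for this sketch; the JDBC specification only requires that a batched command which returns a result set fail, and the exact exception and the moment it is thrown can vary by driver).
Java code
try (Statement stmt = conn.createStatement()) {
    stmt.addBatch("UPDATE customers SET age = age + 1 WHERE age > 0");
    // A query is not allowed in a batch; depending on the driver this fails
    // either here or when executeBatch() is called
    stmt.addBatch("SELECT COUNT(*) FROM customers");
    stmt.executeBatch();
} catch (SQLException e) {
    // With most drivers this arrives as a BatchUpdateException
    System.out.println("Batch failed: " + e.getMessage());
}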
Here is another way to update in bulk, although I think it is really just a single ordinary UPDATE statement that happens to affect many rows at once.
PreparedStatement stmt = con.prepareStatement("UPDATE CUSTOMERS SET age = age + 1 WHERE age > 0");
stmt.executeUpdate();
con.commit();
Third, JDBC bulk delete
String sql= "Delete from table where IdIn (0"; String id[]=for (int i=0;i<id.length;i++) { sql+ = "," +Id[i];} SQL+ = ")"; Preparedstatementps=
// Alternatively, delete row by row with a PreparedStatement batch
public void deleteBat(Integer[] catno) {
    try {
        Connection con = DBUtil.getInstance().getCon();
        String sql = "DELETE FROM cat WHERE catno = ?";
        con.setAutoCommit(false);
        PreparedStatement ps = con.prepareStatement(sql);
        for (Integer in : catno) {
            ps.setInt(1, in);
            ps.addBatch();
        }
        int[] result = ps.executeBatch();   // one update count per batched delete
        con.commit();
        for (int i : result) {
            System.out.println(i);
        }
    } catch (ClassNotFoundException e) {
        e.printStackTrace();
    } catch (SQLException e) {
        e.printStackTrace();
    }
}
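For completeness, here is a hedged sketch of a third variant that combines the two ideas above: it keeps a single DELETE ... WHERE catno IN (...) statement but binds the ids as parameters instead of concatenating them into the SQL text. The cat table and catno column come from the previous example; the method name and everything else are assumptions.
Java code
public void deleteIn(Connection con, Integer[] catno) throws SQLException {
    // Build "DELETE FROM cat WHERE catno IN (?, ?, ..., ?)" with one placeholder per id
    StringBuilder sql = new StringBuilder("DELETE FROM cat WHERE catno IN (");
    for (int i = 0; i < catno.length; i++) {
        sql.append(i == 0 ? "?" : ", ?");
    }
    sql.append(")");

    try (PreparedStatement ps = con.prepareStatement(sql.toString())) {
        for (int i = 0; i < catno.length; i++) {
            ps.setInt(i + 1, catno[i]);   // JDBC parameters are 1-based
        }
        int deleted = ps.executeUpdate();
        System.out.println(deleted + " rows deleted");
    }
}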