Java implementation of JDBC bulk insert data

Source: Internet
Author: User
Tags: bulk insert, commit, stmt

This article introduces three JDBC batch insert programming methods; the details are as follows.

JDBC bulk inserts are used primarily for data import and for log processing, because logs are generally written to a file first.
I used the MySQL 5.1.5 JDBC driver to test three of the more commonly used methods.

Method one: PreparedStatement with addBatch()

 try {
     Class.forName("com.mysql.jdbc.Driver");
     conn = DriverManager.getConnection(o_url, userName, password);
     conn.setAutoCommit(false);
     String sql = "INSERT INTO adlogs (ip,website,yyyymmdd,hour,object_id) VALUES (?,?,?,?,?)";
     PreparedStatement prest = conn.prepareStatement(sql,
             ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_READ_ONLY);
     for (int x = 0; x < size; x++) {
         prest.setString(1, "192.168.1.1");
         prest.setString(2, "localhost");
         prest.setString(3, "20081009");
         prest.setInt(4, 8);
         prest.setString(5, "11111111");
         prest.addBatch();
     }
     prest.executeBatch();
     conn.commit();
     conn.close();
 } catch (SQLException ex) {
     Logger.getLogger(MyLogger.class.getName()).log(Level.SEVERE, null, ex);
 } catch (ClassNotFoundException ex) {
     Logger.getLogger(MyLogger.class.getName()).log(Level.SEVERE, null, ex);
 }

Note the meaning of the two extra parameters when creating the statement:
The first parameter specifies the type of ResultSet. The options are:
TYPE_FORWARD_ONLY: the default type. The cursor may only move forward, once, and is not affected by other users' changes to the database.
TYPE_SCROLL_INSENSITIVE: allows moving forward or backward in the result set, or even jumping to a specific position, such as the fourth record or two records back from the current position. Not affected by other users' changes to the database.
TYPE_SCROLL_SENSITIVE: like TYPE_SCROLL_INSENSITIVE, allows positioning within the records, but this type is affected by changes made by other users. If a user deletes a record after the query has executed, that record disappears from the ResultSet; similarly, changes to data values are reflected in the ResultSet.
The second parameter sets the concurrency of the ResultSet, which determines whether the ResultSet can be updated. The options are:
CONCUR_READ_ONLY: the default value; the ResultSet cannot be updated.
CONCUR_UPDATABLE: the ResultSet can be updated.
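
As a quick illustration (a minimal sketch of my own, assuming an adlogs table like the one above already contains rows), a scrollable, read-only ResultSet can be repositioned like this:

 Statement st = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,
         ResultSet.CONCUR_READ_ONLY);
 ResultSet rs = st.executeQuery("SELECT ip, website FROM adlogs");
 rs.absolute(4);   // jump straight to the fourth row (scrollable types only)
 rs.relative(-2);  // move two rows back from the current position
 System.out.println(rs.getString("ip"));

With TYPE_FORWARD_ONLY, the absolute() and relative() calls would throw an SQLException.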

Method two: Statement with addBatch()

conn.setAutoCommit(false);
 Statement stmt = conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
         ResultSet.CONCUR_READ_ONLY);
 for (int x = 0; x < size; x++) {
     stmt.addBatch("INSERT INTO adlogs (ip,website,yyyymmdd,hour,object_id) "
             + "VALUES ('192.168.1.3','localhost','20081009',8,'23123')");
 }
stmt.executeBatch();
conn.commit();

Method three: direct execution with Statement

conn.setAutoCommit(false);
Statement stmt = conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
        ResultSet.CONCUR_READ_ONLY);
for (int x = 0; x < size; x++) {
    stmt.execute("INSERT INTO adlogs (ip,website,yyyymmdd,hour,object_id) "
            + "VALUES ('192.168.1.3','localhost','20081009',8,'23123')");
}
conn.commit();

The average time measured when inserting 100,000 rows with each of the above methods was:
Method one: 17.844 s
Method two: 18.421 s
Method three: 16.359 s

As you can see, JDBC batching did not improve performance here; it was actually slower than not using batch at all, which is probably related to how the specific JDBC driver implements batching. The attachment contains my test code, which you can run on your own machine.
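
One likely explanation, which the original test did not explore: by default, MySQL Connector/J sends each batched statement to the server individually, so executeBatch() saves little over a loop of single inserts. The driver only rewrites a batch into a single multi-row INSERT when the rewriteBatchedStatements connection property is enabled. A sketch of a URL that turns it on (host, port, and schema are placeholders):

 String url = "jdbc:mysql://localhost:3306/test?rewriteBatchedStatements=true";
 Connection conn = DriverManager.getConnection(url, userName, password);
 // With this property set, method one's executeBatch() can be sent as
 // one multi-row INSERT instead of 100,000 individual statements.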

The key thing to do when performing bulk inserts is to disable auto-commit; whether or not you then use JDBC's batch syntax matters far less:
conn.setAutoCommit(false);
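
For completeness, here is a minimal sketch of the full pattern (my addition, not in the original), including rolling back a failed batch and restoring auto-commit afterwards:

 conn.setAutoCommit(false);
 try {
     // ... addBatch()/executeBatch() calls as in the methods above ...
     conn.commit();
 } catch (SQLException e) {
     conn.rollback();           // discard the partially applied batch
     throw e;
 } finally {
     conn.setAutoCommit(true);  // restore the default commit behavior
 }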

Personally, I find the first method the most convenient and practical.

JDBC batch insert example, explained:

I recently wrote a program to import Excel data into a database, and because of the large data volume I adopted JDBC bulk inserts: PreparedStatement.addBatch() for each row, then PreparedStatement.executeBatch() every 10,000 rows. I expected it to be fast, yet inserting 65,536 rows took me more than 30 minutes, which was completely unexpected. So I asked my colleagues how they handled this kind of mass data import and found that they also used JDBC batch inserts, but with one difference: they first called con.setAutoCommit(false), then ran PreparedStatement.executeBatch(), and then called con.commit(). I tried again, and what a miracle: the import that had previously taken half an hour finished in only 15 seconds after adding those two lines. Searching the web for the reason, I found the following explanation:

* When importing data into InnoDB, make sure that MySQL does not have autocommit mode enabled, because that requires a log flush to disk for every insert. To disable autocommit during your import operation, surround it with SET autocommit and COMMIT statements:

SET autocommit=0;
... SQL import statements ...
COMMIT;

The first attempt was slow precisely because setAutoCommit(false) was missing: every INSERT statement produced a log flush to disk, so even though the inserts were batched, the effect was the same as single-row inserts, resulting in a slow insertion speed.

Part of the code is as follows:

String sql = "INSERT INTO table * * *"; // the actual SQL is elided in the original
con.setAutoCommit(false);
ps = con.prepareStatement(sql);
for (int i = 1; i < 65536; i++) {
    // ... set the statement parameters for row i here ...
    ps.addBatch();
    // flush once every 10,000 records
    if (i % 10000 == 0) {
        ps.executeBatch();
        con.commit();
    }
}
// finally insert the remaining (fewer than 10,000) records
ps.executeBatch();
con.commit();

The above was just the appetizer; now for the main course:

1. Timing the bulk write

 long start = System.currentTimeMillis();
 DaoRecord daoRecord = new DaoRecord();
 List<T> list = new ArrayList<T>();
 for (int i = 1; i <= 1000; i++) {
     for (int j = 1; j <= 1000; j++) {
         T t = new T();
         t.setI(i);
         t.setJ(j);
         list.add(t);
     }
 }
 daoRecord.insertBatch(list);
 System.out.println("Time consumed: " + (System.currentTimeMillis() - start) + " milliseconds");

2. The batch write method

 public void insertBatch(List<T> list) {
     String sql = "INSERT INTO t (go, back) VALUES (?,?)";
     DBHelper dbh = new DBHelper(sql);
     Connection conn = dbh.returnConn();
     try {
         conn.setAutoCommit(false); // this must be false; see the explanation above
         PreparedStatement ps = conn.prepareStatement(sql);
         for (int i = 0; i < list.size(); i++) {
             ps.setInt(1, list.get(i).getI());
             ps.setInt(2, list.get(i).getJ());
             ps.addBatch();
             // flush once every 10,000 records
             if (i % 10000 == 0) {
                 ps.executeBatch();
                 conn.commit();
             }
         }
         // flush the remaining records
         ps.executeBatch();
         conn.commit();
         conn.close();
     } catch (SQLException e) {
         // TODO auto-generated catch block
         e.printStackTrace();
     }
 }
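
The original post does not show the helper classes. For readers who want to run the test, here is a minimal sketch of what T and DBHelper might look like, inferred purely from how they are used above; the JDBC URL and credentials are placeholders, not from the original:

 // Hypothetical helpers, inferred from usage above (assumes java.sql.* imports).
 class T {
     private int i, j;
     public void setI(int i) { this.i = i; }
     public void setJ(int j) { this.j = j; }
     public int getI() { return i; }
     public int getJ() { return j; }
 }

 class DBHelper {
     public DBHelper(String sql) { /* the original passes the SQL in; its use is not shown */ }
     public Connection returnConn() {
         try {
             // placeholder URL and credentials
             return DriverManager.getConnection("jdbc:mysql://localhost:3306/test", "user", "password");
         } catch (SQLException e) {
             throw new RuntimeException(e);
         }
     }
 }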

Data table and experimental results: shown as images in the original post.

That is the entire content of this article; I hope it helps you learn.
