How to quickly bulk insert data into a MySQL database using JDBC


When using JDBC to insert data into a MySQL database, especially when continuously inserting a large amount of data (100,000+ rows), how can we improve efficiency?

In the JDBC API, two methods of PreparedStatement are particularly noteworthy:

void addBatch() throws SQLException

Adds a set of parameters to this PreparedStatement object's batch of commands.

int[] executeBatch() throws SQLException

Submits a batch of commands to the database for execution and, if all commands execute successfully, returns an array of update counts. The int elements of the returned array are ordered to correspond to the commands in the batch, which are ordered according to the order in which they were added to the batch.

Batch processing of data can be achieved by using the addBatch() and executeBatch() methods.

It is worth noting, however, that you first need to switch the connection to manual commit with connection.setAutoCommit(false), and then call connection.commit() after the batch has been executed.
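Stripped down to its essentials, the pattern looks like the following minimal sketch (the connection URL, credentials, and the table and column names here are placeholders, not the ones used later in this article):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Minimal sketch of the addBatch()/executeBatch() pattern with manual commit.
// URL, credentials, table and column names are placeholders for illustration only.
public class BatchInsertSketch {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "pass");
        conn.setAutoCommit(false);                 // switch to manual commit
        PreparedStatement psts = conn.prepareStatement(
                "INSERT INTO t_demo (id, name) VALUES (?, ?)");
        for (int i = 0; i < 1000; i++) {
            psts.setInt(1, i);
            psts.setString(2, "name-" + i);
            psts.addBatch();                       // queue this row in the batch
        }
        psts.executeBatch();                       // send the whole batch at once
        conn.commit();                             // commit once after the batch
        conn.close();
    }
}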

package cyl.demo.ipsearcher;

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class DbStoreHelper {

    private String insert_sql;
    private String charset;
    private boolean debug;
    private String connectStr;
    private String username;
    private String password;

    public DbStoreHelper() {
        connectStr = "jdbc:mysql://localhost:3306/db_ip";
        // connectStr += "?useServerPrepStmts=false&rewriteBatchedStatements=true";
        insert_sql = "INSERT INTO tb_ipinfos (iplong1, iplong2, ipstr1, ipstr2, ipdesc) VALUES (?, ?, ?, ?, ?)";
        charset = "GBK";
        debug = true;
        username = "root";
        password = "***";
    }

    public void storeToDb(String srcFile) throws IOException {
        BufferedReader bfr = new BufferedReader(
                new InputStreamReader(new FileInputStream(srcFile), charset));
        try {
            doStore(bfr);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            bfr.close();
        }
    }

    private void doStore(BufferedReader bfr) throws ClassNotFoundException, SQLException, IOException {
        Class.forName("com.mysql.jdbc.Driver");
        Connection conn = DriverManager.getConnection(connectStr, username, password);
        conn.setAutoCommit(false);  // switch to manual commit
        int count = 0;
        PreparedStatement psts = conn.prepareStatement(insert_sql);
        String line = null;
        while (null != (line = bfr.readLine())) {
            String[] infos = line.split(";");
            if (infos.length < 5) continue;
            if (debug) {
                System.out.println(line);
            }
            psts.setLong(1, Long.valueOf(infos[0]));
            psts.setLong(2, Long.valueOf(infos[1]));
            psts.setString(3, infos[2]);
            psts.setString(4, infos[3]);
            psts.setString(5, infos[4]);
            psts.addBatch();        // add this row to the batch
            count++;
        }
        psts.executeBatch();        // execute the whole batch
        conn.commit();              // commit manually
        System.out.println("All down: " + count);
        conn.close();
    }

}

After execution completes, the output is:

All down:103498
Convert finished.
All spend time/s: 47

For a little over 100,000 records in total, the execution took 47 seconds.

This is still not very efficient; it does not achieve the desired effect and needs further improvement.

Parameters can also be added to the MySQL JDBC connection string.

rewriteBatchedStatements=true: batch rewriting is disabled by default in the MySQL JDBC driver; enabling this parameter makes the driver rewrite the batched SQL statements before submitting them to the database, combining the batched single-row INSERTs into one multi-row INSERT.

useServerPrepStmts=false: with server-side prepared statements disabled, com.mysql.jdbc.PreparedStatement assembles the SQL locally on the client side, so what is finally sent to the database is the statement with the parameter values already substituted in.

As a slight improvement, append these parameters to the connection string (uncomment the corresponding line in the constructor of the code above):

connectStr += "?useServerPrepStmts=false&rewriteBatchedStatements=true";
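With both parameters applied, the full connection string passed to DriverManager.getConnection ends up looking like this (host, port, and database name are the ones from the sample code above):

String connectStr = "jdbc:mysql://localhost:3306/db_ip"
        + "?useServerPrepStmts=false&rewriteBatchedStatements=true";
Connection conn = DriverManager.getConnection(connectStr, username, password);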

Run the import again:

All down:103498
Convert finished.
All spend time/s: 10

With the same amount of data, this run took only 10 seconds; processing efficiency is greatly improved.

That is the entire content of this article. I hope it helps with your learning, and I hope you will continue to support the Yunqi Community.
