Some time ago in an interview I was asked how to efficiently insert 100,000 records into a database. I had never dealt with that kind of problem before and had not read up on it, so I could not answer. Today I went through some material and summarized three approaches.
The database used for the tests is MySQL!
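None of the snippets below show how the shared JDBC connection conn or the table tb_big_data are created. Here is a minimal sketch of the setup I assume they rely on; the URL, credentials, driver and column types are my guesses, inferred from the INSERT statements:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.Date;

    public class BulkInsertTest {
        // shared connection used by the three insert methods below (assumed)
        static Connection conn;

        static void init() throws Exception {
            // hypothetical URL and credentials; adjust to your own environment
            conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/test", "root", "password");
            // assumed schema, inferred from the INSERT statements in this post
            Statement st = conn.createStatement();
            st.execute("CREATE TABLE IF NOT EXISTS tb_big_data ("
                    + " id BIGINT AUTO_INCREMENT PRIMARY KEY,"
                    + " `count` BIGINT,"
                    + " `create_time` DATETIME,"
                    + " `random` DOUBLE)");
            st.close();
        }
    }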
Method One:
public static void insert() {
    // start time
    long begin = new Date().getTime();
    // sql prefix
    String prefix = "INSERT INTO tb_big_data (count, create_time, random) VALUES ";
    try {
        // holds the sql suffix (the value lists built in each round)
        StringBuffer suffix = new StringBuffer();
        // turn off auto-commit so the transaction is committed manually
        conn.setAutoCommit(false);
        // Statement st = conn.createStatement();
        // a PreparedStatement performs better than a plain Statement here
        PreparedStatement pst = conn.prepareStatement("");
        // outer loop: number of commits (10 commits x 10,000 rows = 100,000 rows)
        for (int i = 1; i <= 10; i++) {
            // build the value lists for this commit
            for (int j = 1; j <= 10000; j++) {
                suffix.append("(" + j * i + ", sysdate(), " + i * j * Math.random() + "),");
            }
            // build the full sql, dropping the trailing comma
            String sql = prefix + suffix.substring(0, suffix.length() - 1);
            // add the statement to the batch and execute it
            pst.addBatch(sql);
            pst.executeBatch();
            // commit the transaction
            conn.commit();
            // clear the value lists from this round
            suffix = new StringBuffer();
        }
        // close statement and connection
        pst.close();
        conn.close();
    } catch (SQLException e) {
        e.printStackTrace();
    }
    // end time and elapsed seconds
    long end = new Date().getTime();
    System.out.println("cast : " + (end - begin) / 1000 + " s");
}
Output time: cast : 23 s
So far this is the most efficient method I have tested!
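For reference, a minimal way to run these tests, assuming the init() sketch above; note that each method closes the connection when it finishes, so run one method per run:

    public static void main(String[] args) throws Exception {
        init();              // hypothetical setup from the sketch above
        insert();            // method one; prints something like "cast : 23 s"
        // insertRelease();          // method two
        // insertBigData(sbh);       // method three, needs a Spring context
    }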
Method Two:
public static void insertRelease() {
    long begin = new Date().getTime();
    String sql = "INSERT INTO tb_big_data (count, create_time, random) VALUES (?, sysdate(), ?)";
    try {
        conn.setAutoCommit(false);
        PreparedStatement pst = conn.prepareStatement(sql);
        // 10 commits x 10,000 rows = 100,000 rows
        for (int i = 1; i <= 10; i++) {
            for (int k = 1; k <= 10000; k++) {
                pst.setLong(1, k * i);
                pst.setLong(2, k * i);
                pst.addBatch();
            }
            // execute the batch and commit once per outer iteration
            pst.executeBatch();
            conn.commit();
        }
        pst.close();
        conn.close();
    } catch (SQLException e) {
        e.printStackTrace();
    }
    long end = new Date().getTime();
    System.out.println("cast : " + (end - begin) / 1000 + " s");
}
Note: the code has no comments; it is similar to the code above and is analyzed below!
Console output: cast : 111 s
The execution time is roughly 5 times that of method one!
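One factor I did not test here, but which is often pointed out: with MySQL Connector/J, the batch in method two is only rewritten into a single multi-row INSERT (essentially what method one builds by hand) if the connection property rewriteBatchedStatements=true is set; otherwise each row is sent as its own statement. A sketch of the connection URL, with host, database and credentials as placeholders:

    // assumption: enable batch rewriting on the MySQL JDBC driver so that
    // addBatch()/executeBatch() is sent as one multi-row INSERT
    conn = DriverManager.getConnection(
            "jdbc:mysql://localhost:3306/test?rewriteBatchedStatements=true",
            "root", "password");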
Method Three:
public static void insertBigData(SpringBatchHandler sbh) {
    long begin = new Date().getTime();
    JdbcTemplate jdbcTemplate = sbh.getJdbcTemplate();
    final int count = 10000;
    String sql = "INSERT INTO tb_big_data (count, create_time, random) VALUES (?, sysdate(), ?)";
    jdbcTemplate.batchUpdate(sql, new BatchPreparedStatementSetter() {
        // sets the parameters of the prepared statement for row i;
        // called once per row, i.e. getBatchSize() times in total
        public void setValues(PreparedStatement pst, int i) throws SQLException {
            pst.setLong(1, i);
            pst.setInt(2, i);
        }
        // returns the number of rows in the batch
        public int getBatchSize() {
            return count;
        }
    });
    long end = new Date().getTime();
    System.out.println("cast : " + (end - begin) / 1000 + " s");
}
This method uses Spring's batchUpdate for the insert; because of its poor efficiency, the data volume was only 10,000 rows!
Execution time: cast : 387 s
Summary: Methods one and two are very similar. The only difference is that method one performs the insert as "INSERT INTO tb (...) VALUES (...), (...), ...;"
while method two sends "INSERT INTO tb (...) VALUES (...); INSERT INTO tb (...) VALUES (...); ...". Before testing I had no idea the difference would be this large!
Of course, this is only the result of my current tests; the exact execution time also depends heavily on the step size (the number of rows per commit)! If the step size were changed to 100, method one might well take several minutes too; you can test that yourself...
Method three is highly recommended online, but as you can see the efficiency is not ideal: 10,000 records took over 6 minutes! Method three also requires a Spring applicationContext environment to be configured.
Still, method three is very usable inside an SSH / Spring MVC project!
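If you want to try method three outside a full XML applicationContext, the JdbcTemplate can also be built programmatically; a minimal sketch, with the DataSource settings as my assumptions:

    import org.springframework.jdbc.core.JdbcTemplate;
    import org.springframework.jdbc.datasource.DriverManagerDataSource;

    // hypothetical standalone setup instead of a Spring XML configuration
    DriverManagerDataSource ds = new DriverManagerDataSource();
    ds.setDriverClassName("com.mysql.jdbc.Driver");
    ds.setUrl("jdbc:mysql://localhost:3306/test");
    ds.setUsername("root");
    ds.setPassword("password");
    JdbcTemplate jdbcTemplate = new JdbcTemplate(ds);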
I have only just started to look into large-data-volume problems. The results above are only what I actually measured and are not necessarily the whole truth; if you have better suggestions, please correct me, thanks!
By learning from each other we can all progress faster!
I will post the source code later so you can download it and test it directly!