How can I optimize Java JDBC to reduce interaction with Oracle and improve batch processing performance?


I have not written Java for several years (N > 3); the reason I wrote this article is simply to show developers how to use JDBC to reduce round trips and improve batch update performance. If you are a DBA, you can recommend this article to your developers after work; maybe they already know these examples, and if they do not, it is a small bonus.

Because batch update processing cuts down the interaction time between the application client and the database server, it is definitely a good tool for restructuring and optimizing code. You must be careful with this kind of optimization, though: if you re-invent the wheel on your own, the result is often unsatisfactory. For example, Tom Kyte mentions two cases in his book where user profiles and advanced queuing were implemented by the application inside Oracle; if you have some experience you will know that both features are already provided by Oracle Enterprise Edition, and the home-grown in-database implementations naturally ended up as failed projects.

Similarly, some developers who want to optimize batch update operations with Oracle JDBC come up with the following approach: to INSERT 15000 rows, they splice 15000 INSERT statements into a single PL/SQL block in Java. The bind variables of those 15000 statements are still set with the PreparedStatement setXXX methods, and the SQL string built in Java looks like this:

begin
  -- I am a spliced SQL anonymous block
  insert into insertit values (?,?,?,?);
  insert into insertit values (?,?,?,?);
  insert into insertit values (?,?,?,?);
  insert into insertit values (?,?,?,?);
  .....................
  commit;
end;

The 15000 INSERT statements spliced into one PL/SQL block are then submitted to the database with a single PreparedStatement.execute(), which does reduce the number of round trips between the JDBC thin client and the DB server. Leaving aside the time it takes to build the concatenated SQL string in a Java loop, how does this approach compare with PreparedStatement.setExecuteBatch in Oracle JDBC, or with PreparedStatement + addBatch() + executeBatch()? We test the actual performance of the three approaches with a simple Java program and check their behavior in the database. Here is the Java code (please do not dwell on the coding style; I have not written Java for many years):

package apptest;

import oracle.jdbc.*;
import java.sql.*;

/**
 * @author xiangbli
 */
public class Apptest {

    public static void main(String[] args) throws SQLException {
        try {
            Class.forName("oracle.jdbc.driver.OracleDriver");
        } catch (Exception e) {
        }
        Connection cnn1 = DriverManager.getConnection(
                "jdbc:oracle:thin:@192.168.56.101:1521:cdb1", "c#maclean", "oracle");
        Statement stat1 = cnn1.createStatement();
        cnn1.setAutoCommit(false);
        ResultSet rst1 = stat1.executeQuery("select * from v$version");
        while (rst1.next()) {
            System.out.println(rst1.getString(1));
        }
        long startTime = System.currentTimeMillis();
        long stopTime = System.currentTimeMillis();

        // Approach 1: splice 15000 INSERT statements into one PL/SQL anonymous block
        String str = "begin \n -- I am a spliced SQL anonymous block \n";
        int i;
        for (i = 0; i <= 15000; i++) {
            str = str.concat("insert into insertit values (?,?,?,?); \n");
        }
        str = str.concat("commit; end;");
        System.out.print(str);

        cnn1.createStatement().execute("alter system flush shared_pool");
        System.out.print("\n alter system flush shared_pool flushed the shared pool to avoid the impact of the SQL cursor cache on the first test \n");

        PreparedStatement pstmt = cnn1.prepareStatement(str);
        int j;
        for (j = 0; j <= 15000; j++) {
            pstmt.setInt(1 + j * 4, 1);
            pstmt.setInt(2 + j * 4, 1);
            pstmt.setInt(3 + j * 4, 1);
            pstmt.setInt(4 + j * 4, 1);
        }
        // System.out.println("Statement Execute Batch Value " + ((OraclePreparedStatement) pstmt).getExecuteBatch());
        startTime = System.currentTimeMillis();
        pstmt.execute();
        stopTime = System.currentTimeMillis();
        System.out.println("Splicing 15000 INSERT statements, first run: Elapsed time was " + (stopTime - startTime) + " milliseconds.");

        startTime = System.currentTimeMillis();
        pstmt.execute();
        stopTime = System.currentTimeMillis();
        System.out.println("Splicing 15000 INSERT statements, second run: Elapsed time was " + (stopTime - startTime) + " milliseconds.");

        cnn1.createStatement().execute("alter system flush shared_pool");
        System.out.print("\n alter system flush shared_pool flushed the shared pool to avoid the impact of the SQL cursor cache on the second test \n");

        // Approach 2: Oracle-specific implicit batching via setExecuteBatch / sendBatch
        startTime = System.currentTimeMillis();
        int batch = 1000;
        PreparedStatement pstmt2 = cnn1.prepareStatement("insert into insertit values (?,?,?,?)");
        ((OraclePreparedStatement) pstmt2).setExecuteBatch(batch);
        for (int z = 0; z <= 15000; z++) {
            pstmt2.setInt(1, z);
            pstmt2.setInt(2, z);
            pstmt2.setInt(3, z);
            pstmt2.setInt(4, z);
            pstmt2.executeUpdate();
        }
        ((OraclePreparedStatement) pstmt2).sendBatch();
        cnn1.commit();
        stopTime = System.currentTimeMillis();
        System.out.println("batch size = " + batch + ", loop of 15000 INSERTs, first run: Elapsed time was " + (stopTime - startTime) + " milliseconds.");

        startTime = System.currentTimeMillis();
        PreparedStatement pstmt3 = cnn1.prepareStatement("insert into insertit values (?,?,?,?)");
        ((OraclePreparedStatement) pstmt3).setExecuteBatch(batch);
        for (int z = 0; z <= 15000; z++) {
            pstmt3.setInt(1, z);
            pstmt3.setInt(2, z);
            pstmt3.setInt(3, z);
            pstmt3.setInt(4, z);
            pstmt3.executeUpdate();
        }
        ((OraclePreparedStatement) pstmt3).sendBatch();
        cnn1.commit();
        stopTime = System.currentTimeMillis();
        System.out.println("batch size = " + batch + ", loop of 15000 INSERTs, second run: Elapsed time was " + (stopTime - startTime) + " milliseconds.");

        // Approach 3: standard JDBC addBatch() + executeBatch()
        String insert = "insert into insertit values (?,?,?,?)";
        PreparedStatement pstmt4 = cnn1.prepareStatement(insert);
        startTime = System.currentTimeMillis();
        for (int u = 0; u <= 15000; u++) {
            pstmt4.setInt(1, u);
            pstmt4.setInt(2, u);
            pstmt4.setInt(3, u);
            pstmt4.setInt(4, u);
            pstmt4.addBatch();
        }
        pstmt4.executeBatch();
        cnn1.commit();
        stopTime = System.currentTimeMillis();
        System.out.println("BATCH update, first run: Elapsed time was " + (stopTime - startTime) + " milliseconds.");
    }
}

The concatenated SQL statement and PreparedStatement.setExecuteBatch are each executed twice: the first run has no cursor cache, while the second run benefits from the cursor cache.
PreparedStatement + addBatch() + executeBatch() is executed only once. The output of the Java program is as follows:

alter system flush shared_pool flushed the shared pool to avoid the impact of the SQL cursor cache on the first test
Splicing 15000 INSERT statements, first run: Elapsed time was 441299 milliseconds.
Splicing 15000 INSERT statements, second run: Elapsed time was 5938 milliseconds.
alter system flush shared_pool flushed the shared pool to avoid the impact of the SQL cursor cache on the second test
batch size = 1000, loop of 15000 INSERTs, first run: Elapsed time was 322 milliseconds.
batch size = 1000, loop of 15000 INSERTs, second run: Elapsed time was 131 milliseconds.
BATCH update, first run: Elapsed time was 80 milliseconds.

The following is the 10046 trace result on the DB server. For the spliced PL/SQL block:

begin
  -- I am a spliced SQL anonymous block
  insert into insertit values (:1,:2,:3,:4);
  insert into insertit values (:5,:6,:7,:8);
  insert into insertit values (:9,:10,:11,:12);
  insert into insertit values (:13,:14,:15,:16);
  insert into insertit values (:17,:18,:19,:20);
  ...................................
  insert into insertit values (:59997,:59998,:59999,:60000);
  insert into insertit values (:60001,:60002,:60003,:60004);
  commit;
end;

call     count      cpu    elapsed     disk    query   current     rows
------- ------  -------  ---------  -------  -------  --------  -------
Parse        1   176.10     179.33        0       97         0        0
Execute      2   150.51     155.37        2        4         0        2
Fetch        0     0.00       0.00        0        0         0        0
------- ------  -------  ---------  -------  -------  --------  -------
total        3   326.61     334.71        2      101         0        2

==> This is the total cost of the spliced SQL on the DB server: about 334 seconds of elapsed time, of which roughly 326 seconds is CPU time.

For PreparedStatement.setExecuteBatch:

insert into insertit values (:1,:2,:3,:4)

call     count      cpu    elapsed     disk    query   current     rows
------- ------  -------  ---------  -------  -------  --------  -------
Parse        2     0.00       0.00        0        0         0        0
Execute     32     0.09       0.11        4      823      1000    30002
Fetch        0     0.00       0.00        0        0         0        0
------- ------  -------  ---------  -------  -------  --------  -------
total       34     0.09       0.11        4      823      1000    30002

==> This is the result of PreparedStatement.setExecuteBatch: 0.11 seconds of elapsed time and 0.09 seconds of CPU time. Because the batch size is 1000, the driver actually sends an execution to the server roughly every 1000 INSERT operations, so the two runs together account for the 32 Execute calls and about 30000 rows shown in the trace.
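For readers who only want the Oracle-specific implicit batching API exercised above, the following is a minimal sketch, assuming an open Connection, the Oracle JDBC driver on the classpath, and the same four-column INSERTIT table; the class name, method name, and parameters are my own illustration and are not part of the original test program.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import oracle.jdbc.OraclePreparedStatement;

public class ImplicitBatchSketch {

    // Inserts 'rows' rows using Oracle's implicit statement batching.
    // The driver buffers executeUpdate() calls and sends them to the server
    // in groups of 'batchSize'; sendBatch() flushes any remaining buffered rows.
    static void insertWithImplicitBatch(Connection conn, int rows, int batchSize) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement("insert into insertit values (?,?,?,?)")) {
            ((OraclePreparedStatement) ps).setExecuteBatch(batchSize);
            for (int i = 0; i < rows; i++) {
                ps.setInt(1, i);
                ps.setInt(2, i);
                ps.setInt(3, i);
                ps.setInt(4, i);
                ps.executeUpdate(); // buffered by the driver; sent automatically every batchSize calls
            }
            ((OraclePreparedStatement) ps).sendBatch(); // flush the partial last batch
        }
        conn.commit();
    }
}

Because setExecuteBatch() and sendBatch() come from the OraclePreparedStatement extension interface, this pattern ties the code to the Oracle driver; the standard addBatch()/executeBatch() pattern discussed below is portable across JDBC drivers.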
For PreparedStatement + addBatch() + executeBatch():

insert into insertit values (:1,:2,:3,:4)

call     count      cpu    elapsed     disk    query   current     rows
------- ------  -------  ---------  -------  -------  --------  -------
Parse        1     0.00       0.00        0        0         0        0
Execute      1     0.03       0.04        1       93       475    15001
Fetch        0     0.00       0.00        0        0         0        0
------- ------  -------  ---------  -------  -------  --------  -------
total        2     0.03       0.04        1       93       475    15001

==> With addBatch() + executeBatch() there is only one Parse and one Execute: the 15000 rows are parsed and executed in a single call, similar to a bulk collect INSERT in PL/SQL.

The figures above show that the concatenated-SQL approach is several hundred times slower than setExecuteBatch and executeBatch. Splicing 15000 INSERT statements into one PL/SQL block is slow mainly because the block is extremely long: during the first parse, Oracle has to scan the entire block with the PL/SQL engine, and the tkprof output above shows 179 seconds spent on Parse alone. Even without considering parsing, execution has to switch constantly between the PL/SQL engine and the SQL engine, so Execute is also very slow, and the block needs more than 60000 bind variables, which puts heavy pressure on the PGA and may lead to swapping; such long variable lists and SQL texts can even trigger obscure bugs.

In fact, setExecuteBatch and executeBatch in JDBC were designed precisely to reduce client-server interaction. With setExecuteBatch, the buffered statements are submitted to Oracle in one round trip once the specified number of executions has accumulated. The commonly recommended batch size is 20~50. I tested different batch sizes in this example; the measured results (with the shared pool flushed before each pair of runs) are:

batch size   first run (ms)   second run (ms)
         1             4990             3778
        10              829              474
        20              694              366
        30              516              204
        40              481              188
        50              456              171
       100              450              123
       200              402               92
       500              386               95
      1000              369               87
As you can see, an execute batch size in the 200~500 range already achieves good performance.

The PreparedStatement + addBatch() + executeBatch() approach is similar to bulk collect in PL/SQL: the bound values are placed into batch arrays on the client, and at execution time the whole array is inserted into the table in one go, so parsing and execution each happen only once. Although this puts a small amount of pressure on the PGA, the example above shows that it is the fastest approach.

To sum up, the best way for Java JDBC to reduce interaction and improve batch processing performance is to use PreparedStatement with setExecuteBatch, or addBatch() and executeBatch(), as much as possible. For software development, and especially for upper-layer software (anything other than the OS, the database, and the programming language itself), never invent a new "wheel" on your own. The basic tools you need, the "wrenches", "screws", and "tires", are all ready-made and proven to be highly reliable; if you re-invent the wheel yourself, the effort rarely pays off.
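To close, here is a minimal sketch of the standard PreparedStatement + addBatch() + executeBatch() pattern recommended above, extended with a periodic flush so the client-side batch does not grow without bound on very large loads; the chunked flush, the class and method names, and the parameters are my own illustration and are not taken from the original test.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class StandardBatchSketch {

    // Inserts 'rows' rows with standard JDBC batching, sending the batch to
    // the server every 'batchSize' rows instead of accumulating all of them
    // in client memory before a single executeBatch() call.
    static void insertWithExplicitBatch(Connection conn, int rows, int batchSize) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement("insert into insertit values (?,?,?,?)")) {
            for (int i = 0; i < rows; i++) {
                ps.setInt(1, i);
                ps.setInt(2, i);
                ps.setInt(3, i);
                ps.setInt(4, i);
                ps.addBatch();                 // queue the row on the client
                if ((i + 1) % batchSize == 0) {
                    ps.executeBatch();         // one round trip for batchSize rows
                }
            }
            ps.executeBatch();                 // flush any remaining queued rows
        }
        conn.commit();
    }
}

Flushing every few hundred rows keeps client memory bounded while still amortizing each round trip over many rows, which matches the observation above that batch sizes of roughly 200~500 already capture most of the benefit.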
