Mode 1:
A for loop that executes one INSERT per iteration.
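A minimal sketch of Mode 1. The table and column names (demo_table, a, b) are placeholders, not from the original article:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class SingleInsertDemo {
    // demo_table and its columns are hypothetical names for illustration.
    static final String SQL = "INSERT INTO demo_table (a, b) VALUES (?, ?)";

    // Mode 1: one executeUpdate() per row, i.e. one database round trip
    // for every record. Simple, but the slowest of the three modes.
    public static int insertOneByOne(Connection conn, List<String[]> rows)
            throws SQLException {
        int inserted = 0;
        try (PreparedStatement ps = conn.prepareStatement(SQL)) {
            for (String[] row : rows) {
                ps.setString(1, row[0]);
                ps.setString(2, row[1]);
                inserted += ps.executeUpdate();
            }
        }
        return inserted;
    }
}
```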
Mode 2:
The batch operation of the JDBC PreparedStatement:
preparedStatement.addBatch(); preparedStatement.executeBatch();
Do not exceed about 50 records per batch:
1. The database takes locks while inserting; pushing too many rows in one batch makes other business transactions wait on those locks.
2. An oversized batch can cause a memory overflow.
The essence of Mode 2 is: INSERT INTO table (A,B,C,D) VALUES (AV,BV,CV,DV); INSERT INTO table (A,B,C,D) VALUES (...); ... — one INSERT statement per batched row.
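A minimal sketch of Mode 2, flushing every 50 rows as the article advises. demo_table and its columns are illustrative names:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BatchInsertDemo {
    static final int BATCH_SIZE = 50;  // the article's suggested ceiling

    // True when the i-th (1-based) row should trigger executeBatch():
    // either the batch is full, or it is the last row overall.
    public static boolean shouldFlush(int i, int total) {
        return i % BATCH_SIZE == 0 || i == total;
    }

    public static void insertInBatches(Connection conn, List<String[]> rows)
            throws SQLException {
        String sql = "INSERT INTO demo_table (a, b) VALUES (?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            int total = rows.size();
            for (int i = 1; i <= total; i++) {
                ps.setString(1, rows.get(i - 1)[0]);
                ps.setString(2, rows.get(i - 1)[1]);
                ps.addBatch();
                if (shouldFlush(i, total)) {
                    ps.executeBatch();  // send the accumulated rows (at most 50)
                }
            }
        }
    }
}
```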
Mode 3:
A for loop that concatenates the values into a single multi-row statement of the form INSERT INTO table (A,B,C,D) VALUES (AV,BV,CV,DV), (...), (...).
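A minimal sketch of Mode 3 as a pure string builder. Values are escaped naively here by doubling single quotes; real code should prefer bound parameters where the driver allows it:

```java
import java.util.List;

public class MultiRowInsertBuilder {
    // Builds: INSERT INTO table (c1,c2) VALUES ('v1','v2'), ('v3','v4')
    public static String build(String table, String[] cols, List<String[]> rows) {
        StringBuilder sb = new StringBuilder("INSERT INTO ").append(table)
                .append(" (").append(String.join(",", cols)).append(") VALUES ");
        for (int r = 0; r < rows.size(); r++) {
            if (r > 0) sb.append(", ");
            sb.append("(");
            String[] row = rows.get(r);
            for (int c = 0; c < row.length; c++) {
                if (c > 0) sb.append(",");
                // naive escaping: double any embedded single quote
                sb.append("'").append(row[c].replace("'", "''")).append("'");
            }
            sb.append(")");
        }
        return sb.toString();
    }
}
```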
The experience of others:
A recent project used SSH + an Oracle database with a C3P0 connection pool, and one action had to insert 10,000 rows into 2 tables. The inserts went through JDBC, fetching a sequence value before inserting each row into the database. The first test was a disaster: it took nearly 3 minutes (fetching the sequences for the two tables was the time sink). Recalling a similar earlier project that used a primary-key auto-increment strategy, the primary keys were switched to auto-increment. On retest the same load finished in under 3 seconds. The code is recorded here for future reference.
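A related general Oracle technique (my addition, not from the article's anecdote): rather than a separate SELECT my_seq.NEXTVAL FROM dual round trip per row, let the INSERT itself pull the sequence value server-side. my_seq and demo_table are hypothetical names:

```java
public class SequenceInsertDemo {
    // The sequence is resolved inside the INSERT, so each row costs one
    // round trip instead of two (no separate SELECT ... FROM dual).
    public static final String SQL =
            "INSERT INTO demo_table (id, a) VALUES (my_seq.NEXTVAL, ?)";
}
```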
This article is also good:
This forum post, based on the author's personal experience, covers common techniques for inserting large amounts of data with INSERT.
Summary: in many cases we need to insert a large amount of data into a table and want to finish the work in as short a time as possible. Here are some of the lessons from doing large data inserts regularly.
Premise: before doing the insert, if this is a non-production environment, drop the table's indexes and constraints first, then rebuild them once the insert completes.
1. INSERT INTO tab1 SELECT * FROM tab2; COMMIT;
This is the most basic INSERT statement: we insert the data from the tab2 table into the tab1 table. In the author's experience, tens of millions of rows can be completed within 1 hour. However, this method generates archive logs very quickly, so on production keep an eye on the archiving and start the backup software promptly to keep the archive directory from filling up.
2. ALTER TABLE tab1 NOLOGGING; INSERT /*+ APPEND */ INTO tab1 SELECT * FROM tab2; COMMIT; ALTER TABLE tab1 LOGGING;
This method greatly reduces the archive logs generated and improves the elapsed time to a certain extent; in the author's experience, tens of millions of rows can finish in 45 minutes. Note, however, that it suits a single serial process: if multiple processes run it concurrently, the later-started ones will wait on an enqueue. Also note that this method must not be used on Data Guard (though if the database has FORCE LOGGING there is nothing to fear).
3. INSERT INTO tab1 SELECT /*+ PARALLEL */ * FROM tab2; COMMIT;
For a statement whose SELECT does a full-table scan, a PARALLEL hint can raise its concurrency; note that the maximum degree of parallelism is limited by the initialization parameter parallel_max_servers. The parallel processes can be viewed through v$px_session, or with ps -ef | grep ora_p.
4. ALTER SESSION ENABLE PARALLEL DML; INSERT /*+ PARALLEL */ INTO tab1 SELECT * FROM tab2; COMMIT;
In contrast to method 2, this is a concurrent insert. The author has not yet compared it with method 2 to see which is more efficient (the estimate is that method 2 is faster); friends who have tested it are welcome to add their results.
5. INSERT INTO tab1 SELECT * FROM tab2 PARTITION (p1); INSERT INTO tab1 SELECT * FROM tab2 PARTITION (p2); INSERT INTO tab1 SELECT * FROM tab2 PARTITION (p3); INSERT INTO tab1 SELECT * FROM tab2 PARTITION (p4);
For a partitioned table, multiple processes can insert into tab1 concurrently, one per partition; the more partitions there are, the more processes can be started. The author tried inserting a table of about 260 million rows, 8 partitions, with 8 processes: using method 2, a single process might take 40 minutes, but because the 8 processes contend, the later ones wait on an enqueue, so the time needed is roughly 40 minutes x 8. Using method 5, a single process takes 110 minutes, but because the processes can execute concurrently, the total time needed is approximately 110 minutes.
6. Bulk binding in PL/SQL (the VARCHAR2 length was garbled in the source and is left open here):
DECLARE
  TYPE dtarray IS TABLE OF VARCHAR2(...) INDEX BY BINARY_INTEGER;
  v_col1 dtarray; v_col2 dtarray; v_col3 dtarray;
BEGIN
  SELECT col1, col2, col3 BULK COLLECT INTO v_col1, v_col2, v_col3 FROM tab2;
  FORALL i IN 1 .. v_col1.COUNT
    INSERT INTO tab1 ...;
END;
When a SQL statement with bind variables executes in a loop, a large number of context switches occur between the PL/SQL engine and the SQL engine. With bulk binding, data is passed in batches from the PL/SQL engine to the SQL engine, reducing the context switching and improving efficiency. This method is better suited to online processing without downtime.
7. sqlplus -s user/pwd < runlog.txt
SET COPYCOMMIT 2;
SET ARRAYSIZE 5000;
COPY FROM user/[email protected] -
TO user/[email protected] -
INSERT tab1 USING SELECT * FROM tab2;
exit
EOF
Insert with the COPY command; note that there is no INTO keyword after INSERT here. The advantage of this method is that COPYCOMMIT and ARRAYSIZE together control the commit frequency; with the settings above it commits once per 10,000 rows (2 batches x 5,000 rows).
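A JDBC-side sketch (my analogy, not from the article) of the same commit-frequency idea as COPYCOMMIT/ARRAYSIZE: commit once per fixed number of rows instead of once per statement. demo_table and ROWS_PER_COMMIT are illustrative:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class CommitEveryNDemo {
    static final int ROWS_PER_COMMIT = 10_000;  // mirrors the 10,000-row example

    // How many commits a load of `rows` rows will issue (ceiling division).
    public static int commitsNeeded(int rows, int perCommit) {
        return (rows + perCommit - 1) / perCommit;
    }

    public static void load(Connection conn, List<String[]> rows)
            throws SQLException {
        conn.setAutoCommit(false);
        String sql = "INSERT INTO demo_table (a, b) VALUES (?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (int i = 1; i <= rows.size(); i++) {
                ps.setString(1, rows.get(i - 1)[0]);
                ps.setString(2, rows.get(i - 1)[1]);
                ps.executeUpdate();
                if (i % ROWS_PER_COMMIT == 0) conn.commit();  // commit per 10,000 rows
            }
            conn.commit();  // flush the final partial chunk
        }
    }
}
```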
From: http://freebile.blog.51cto.com/447744/587120/
Java BULK INSERT data into the database