Here's a way to use the APPEND hint to insert hundreds of millions of rows in about 10 minutes.
--Create the table
CREATE TABLE tmp_test_chas_lee
(
  f01 VARCHAR2(20),
  f02 NUMBER NOT NULL,
  f03 VARCHAR2(21),
  f04 VARCHAR2(21),
  f05 NUMBER,
  f06 NUMBER(20)
);
--Create a temporary table to provide the serial numbers
CREATE GLOBAL TEMPORARY TABLE t_sequence_num (
  sequencenum NUMBER(8) NOT NULL
)
ON COMMIT PRESERVE ROWS;
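If you prefer not to fill the helper table row by row, the same 10,000 serial numbers can be generated with a single set-based statement. This is a sketch using Oracle's CONNECT BY LEVEL row-generation trick; the table and column names follow the example above:

```sql
--Fill t_sequence_num with 0..9999 in one statement instead of a PL/SQL loop
INSERT INTO t_sequence_num (sequencenum)
SELECT LEVEL - 1
FROM dual
CONNECT BY LEVEL <= 10000;
COMMIT;
```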
--Start inserting data
BEGIN
  --Generate 10,000 serial numbers
  DELETE FROM t_sequence_num;
  FOR i IN 0..9999 LOOP
    INSERT INTO t_sequence_num (sequencenum) VALUES (i);
  END LOOP;
  --Use the APPEND hint to insert the data, 10,000 rows per batch
  FOR i IN 1..10 LOOP
    INSERT /*+ APPEND */ INTO tmp_test_chas_lee
      (f01, f02, f03, f04, f05, f06)
    SELECT
      8613800000000 + i * 10000 + t_sequence_num.sequencenum AS msisdn,
      '12106000',
      0,
      '20120312072000',
      '500231891000',
      NULL
    FROM t_sequence_num;
    --Each batch must be committed
    COMMIT;
  END LOOP;
END;
/
As you can see, the key to this approach is the APPEND hint, which enables direct-path insert. The effect is impressive: hundreds of millions of rows in about 10 minutes, that is, more than 100,000 records inserted per second. The method itself is very simple: use the APPEND hint, plus a sequence table to assist data generation. Performance testers no longer have to spend a long time waiting for test data to be generated.

Issues to note:
1. Do not have indexes on the table while inserting the data.
2. You can create the indexes with NOLOGGING and PARALLEL after the data has been inserted.
3. The example commits every 10,000 records; you can make the batch larger, which should be faster.
4. After a direct-path (APPEND) insert, you must commit before you can perform any other operation on the table.
5. Each field of the generated data can be generated flexibly according to your own needs.
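For the first two points, the post-load index build might look like the following sketch; the index name and indexed column here are illustrative assumptions, not part of the original example:

```sql
--Build the index only after the bulk load, skipping redo logging and running in parallel
CREATE INDEX idx_tmp_test_f01 ON tmp_test_chas_lee (f01)
  NOLOGGING PARALLEL 4;
--Optionally return the index to default settings afterwards
ALTER INDEX idx_tmp_test_f01 NOPARALLEL LOGGING;
```

NOLOGGING skips most redo generation for the build, and PARALLEL spreads the sort across multiple server processes; both only affect the build itself, not later DML on the table.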