Preface:
Recently, a job required performance testing against large-scale data: 5 million records were needed. That is a lot of data, and we had no ready-made CSV file to import into the database.
My first idea was a Java program that executes an INSERT statement 5 million times, one row at a time. That approach is easy to write, but it is bound to be very slow.
The following shows a much faster solution.
Insert data using a stored procedure
create or replace function insert_users_test() returns void as $body$
declare
    randomsid text;
    randomna_id text;
    p_source text := 'hangzhou';
    p_sourcen text := '2016';
    p_length int := 9;
    w_result text := '';
    w_index int := 0;
    curtime timestamp;
    enttime timestamp;
begin
    for i in 1 .. 5000000 loop
        begin
            -- user_id column generation (digit combination)
            w_result := '';
            w_index := 0;
            p_length := 9;
            for j in 1 .. p_length loop
                w_index := floor(random() * length(p_sourcen))::integer + 1;
                w_result := w_result || substring(p_sourcen, w_index, 1);
            end loop;
            randomsid := w_result;

            -- user_name column generation (letter combination)
            p_length := 8;
            w_result := '';
            w_index := 0;
            for j in 1 .. p_length loop
                w_index := floor(random() * length(p_source))::integer + 1;
                w_result := w_result || substring(p_source, w_index, 1);
            end loop;
            randomna_id := w_result;

            curtime := now();
            enttime := curtime - interval '1 hour';
            -- target table is named "users": the bare word "user" is reserved in PostgreSQL
            insert into users (user_id, user_name, last_login_time, update_time)
            values (randomsid, randomna_id, enttime, curtime);
        exception when unique_violation then
            null;  -- duplicate key: skip this row and continue with the next one
        end;
    end loop;
end;
$body$ language plpgsql;
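Before calling the function, the target table must exist. The original article does not show its definition, so the following is only a minimal sketch inferred from the INSERT statement above; the column names and the unique constraint on user_id (which is what makes a unique_violation possible) are assumptions:

create table users (
    user_id text,                  -- 9 random digits
    user_name text,                -- 8 random letters
    last_login_time timestamp,
    update_time timestamp,
    constraint users_user_id_key unique (user_id)  -- assumed; raises unique_violation on duplicates
);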
Run: select insert_users_test();
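In psql you can turn on \timing before the call to measure how long the run takes, and afterwards count the rows that actually landed (duplicates that hit the unique constraint were skipped, so the count may fall short of 5 million):

\timing
select insert_users_test();
select count(*) from users;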
Notes on the functions and operators used above (a one-line example combining them follows this list):
random() returns a random number between 0 and 1.
length(str) returns the length of the string str.
substring(str, beginindex, length) returns the part of str that starts at beginindex and has the given length.
:: is the cast (forced type conversion) operator.
:= is the assignment operator.
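Putting these together, the following one-liner picks a single random character from 'hangzhou', exactly as the inner loop in the function does (runs on a stock PostgreSQL installation):

select substring('hangzhou', floor(random() * length('hangzhou'))::integer + 1, 1);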
--------------------------------------------------
Each loop iteration picks one random character from the source string; concatenating these characters produces the random value we need, as the standalone sketch below shows.
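As an isolated illustration, this DO block (PostgreSQL 9.0+) builds one 9-digit value with the same loop; the source string '2016' is taken from the function above:

do $$
declare
    w_result text := '';
begin
    for i in 1 .. 9 loop
        -- pick one random character from '2016' and append it
        w_result := w_result || substring('2016', floor(random() * length('2016'))::integer + 1, 1);
    end loop;
    raise notice 'generated id: %', w_result;
end;
$$;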
Exceptions are captured: if an insert violates the unique constraint, the error is swallowed and the loop continues with the next row.
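In isolation the pattern looks like this; it reuses the hypothetical users table sketched earlier, and only the failing iteration is skipped, because the begin/exception/end sub-block sits inside the loop:

do $$
begin
    for i in 1 .. 3 loop
        begin
            -- every iteration tries the same key, so only the first insert succeeds
            insert into users (user_id, user_name, last_login_time, update_time)
            values ('dup_key', 'demo', now() - interval '1 hour', now());
        exception when unique_violation then
            null;  -- duplicate key: skip this row, keep looping
        end;
    end loop;
end;
$$;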