Environment: Windows 2000 + Oracle9i, CPU 1.8 GHz, RAM 512 MB.

We have two tables:

t1 -- large table, 10,000 rows, foreign key t1_fk_id
t2 -- small table,  5,000 rows, primary key t2_pk_id

t1 joins t2 through the id column, which is t2's primary key. The test data is built as follows:

-- t2 holds 5,000 rows
create table t2 as
select rownum id, a.* from all_objects a where 1 = 0;

-- create/recreate primary, unique and foreign key constraints
alter table t2 add constraint t2_pk_id primary key (id);

insert /*+ append */ into t2
select rownum id, a.* from all_objects a where rownum <= 5000;

-- t1 holds 10,000 rows
create table t1 as
select rownum sid, t2.* from t2 where 1 = 0;

-- create/recreate primary, unique and foreign key constraints
alter table t1 add constraint t1_fk_id foreign key (id) references t2 (id);

insert /*+ append */ into t1 select rownum sid, t2.* from t2;
insert /*+ append */ into t1 select rownum sid, t2.* from t2;

-- populate subobject_name in t2 (previously null)
update t2 set t2.subobject_name = 'sequinehuang';

The goal is to propagate subobject_name from t2 to every matching row of t1, i.e. to update all 10,000 rows of t1 to 'sequinehuang'.

Method 1: PL/SQL with an explicit cursor

declare
  l_varid      varchar2(20);
  l_varsubname varchar2(30);
  cursor mycur is
    select t2.id, t2.subobject_name from t2;
begin
  open mycur;
  loop
    fetch mycur into l_varid, l_varsubname;
    exit when mycur%notfound;
    update t1
       set t1.subobject_name = l_varsubname
     where t1.id = l_varid;
  end loop;
  close mycur;
end;
/

-- elapsed: 39.716 s

This is clearly the most traditional approach. With a large data volume (say 40 million rows) it tends to abort with an ORA-01555 "snapshot too old" error.
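The row-at-a-time cursor in method 1 can usually be sped up considerably with bulk binds. The following is a sketch that was not part of the original timed tests; it assumes the same t1/t2 tables, fetching in batches with BULK COLLECT and applying the updates with FORALL:

```sql
declare
  type t_id   is table of t2.id%type             index by pls_integer;
  type t_name is table of t2.subobject_name%type index by pls_integer;
  l_ids   t_id;
  l_names t_name;
  cursor mycur is
    select t2.id, t2.subobject_name from t2;
begin
  open mycur;
  loop
    -- fetch up to 1,000 rows per round trip instead of one at a time
    fetch mycur bulk collect into l_ids, l_names limit 1000;
    -- one bulk-bound update per batch instead of one update per row
    forall k in 1 .. l_ids.count
      update t1
         set t1.subobject_name = l_names(k)
       where t1.id = l_ids(k);
    exit when mycur%notfound;
  end loop;
  close mycur;
  commit;
end;
/
```

This keeps the cursor structure of method 1 but cuts the number of context switches between the PL/SQL and SQL engines roughly by the batch size.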
Method 2: batch the update in a loop

declare
  i number;
  j number;
begin
  i := 1;
  select count(*) into j from t1;
  loop
    exit when i > j;
    update t1
       set t1.subobject_name = (select t2.subobject_name
                                  from t2
                                 where t1.id = t2.id)
     where t1.id >= i and t1.id <= (i + 1000);
    i := i + 1000;
  end loop;
end;
/

-- elapsed: 0.656 s

Ten iterations are performed in total. With a large data volume the job still completes, but the speed remains unsatisfactory: growing t1 to 100,000 rows and t2 to 50,000 rows takes 10.139 s.

Method 3: update through an inline (join) view. When the data volume is large, this is far more efficient than method 2.

update (select t1.subobject_name a1, t2.subobject_name b1
          from t1, t2
         where t1.id = t2.id)
   set a1 = b1;

-- elapsed: 3.234 s (with t1 grown to 100,000 rows and t2 to 50,000)

This works because t2.id carries a primary key, so the join view is key-preserved with respect to t1 and Oracle allows the update.

Method 4: replace the update with an insert into a new table. Because update is a resource-intensive operation (it generates both redo and undo), and insert is much faster than update, we can create a new table instead. Afterwards you have the old table and the new one, and can rename or swap them as the situation requires.

create table t3 as select * from t1 where rownum < 1;
alter table t3 nologging;
insert /*+ append */ into t3
select t1.* from t1, t2 where t1.id = t2.id;

-- elapsed: 0.398 s (with t1 grown to 100,000 rows and t2 to 50,000)

(In practice the select list would take t2.subobject_name in place of t1's own column, so that the new table carries the updated value.)
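On Oracle 9i and later, the same join-driven update can also be expressed as a single MERGE statement. This is an additional sketch, not one of the timed methods above, and assumes the same t1/t2 tables; note that 9i requires a WHEN NOT MATCHED clause (the update-only form needs 10g), although with this data it never fires because every t2.id already exists in t1:

```sql
merge into t1
using t2
   on (t1.id = t2.id)
 when matched then
   update set t1.subobject_name = t2.subobject_name
 when not matched then
   -- required by the 9i syntax; never fires here since all t2 ids match
   insert (id, subobject_name) values (t2.id, t2.subobject_name);
```

Like method 3, this pushes the whole operation into one SQL statement and lets the optimizer choose the join method.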
All of the timings above exclude the time spent parsing and analyzing the execution plan.