Recently, while deduplicating a business table holding millions of rows, the delete operation hung outright. Here's a quick write-up for the record.
1. Check the total row count of the business table: just over 2.55 million rows.
SQL> select count(*) from tb_bj_banker_etl;
2552381
2. Check how many duplicate rows need to be removed: just over 830,000.
SQL> select count(*) from tb_bj_banker_etl
     where id in (select id from tb_bj_banker_etl group by id having count(*) > 1)
       and rowid not in (select max(rowid) from tb_bj_banker_etl group by id having count(*) > 1);
830099
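As an aside, the same duplicate count can be obtained more cheaply without the ROWID subqueries; a minimal sketch, assuming id is the dedup key:

SQL> select count(*) - count(distinct id) from tb_bj_banker_etl;

This counts every row beyond the first occurrence of each id, which is exactly the number of rows the cleanup should remove.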
3. So, before leaving work that evening, I kicked off the following delete script to remove the duplicates:
SQL> delete from tb_bj_banker_etl
     where id in (select id from tb_bj_banker_etl group by id having count(*) > 1)
       and rowid not in (select max(rowid) from tb_bj_banker_etl group by id having count(*) > 1);
SQL> commit;
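Note that the first subquery is actually redundant: keeping only the row with the maximum ROWID per id already leaves rows with unique ids untouched, since a unique id's max(rowid) is its own rowid. An equivalent, simpler form (a sketch, not what was run that evening) would be:

SQL> delete from tb_bj_banker_etl
     where rowid not in (select max(rowid) from tb_bj_banker_etl group by id);
SQL> commit;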
4. The next day, back on site, I found the statement launched from PL/SQL Developer the night before was still running.

My first thought: deleting 830,000-odd duplicate rows ran all night and still hadn't finished? Something had to be wrong. I suspected a table lock, so I queried for the sessions holding locks on the table:
SELECT a.owner,            -- owner of the locked object
       a.object_name,      -- object name
       b.xidusn,
       b.xidslot,
       b.xidsqn,
       b.session_id,       -- session holding the lock
       b.oracle_username,  -- Oracle username of the locking session
       b.os_user_name,     -- OS login name of the locking user
       b.process,
       b.locked_mode,
       c.machine,          -- machine name of the locking user
       c.status,           -- session status
       c.server,
       c.sid,
       c.serial#,
       c.program           -- client program the locking session came from
FROM   all_objects a,
       v$locked_object b,
       sys.gv_$session c
WHERE  a.object_id = b.object_id
AND    b.process = c.process;
From the results below, the only locks on the table were held by the very session that had issued the dedup delete; no other user was locking it. So the statement genuinely was still executing. Puzzled, I started looking for a workaround.
1  bjhyl  tb_bj_banker_etl  9000  913  bjhyl  Administrator  4036:972  3  workgroup\backdb  ACTIVE    DEDICATED  913  3381   plsqldev.exe
2  bjhyl  tb_bj_banker_etl  9000  913  bjhyl  Administrator  4036:972  3  workgroup\backdb  INACTIVE  DEDICATED  649  41791  plsqldev.exe
3  bjhyl  tb_bj_banker_etl  9000  913  bjhyl  Administrator  4036:972  3  workgroup\backdb  INACTIVE  DEDICATED  817  27777  plsqldev.exe
4  bjhyl  tb_bj_banker_etl  9000  913  bjhyl  Administrator  4036:972  3  workgroup\backdb  INACTIVE  DEDICATED  841  1981   plsqldev.exe
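For reference, had a different user's session been holding the lock, it could have been terminated with the sid and serial# returned by the query above. A hypothetical example using the second row (not something I ran here, since the lock belonged to my own delete):

SQL> alter system kill session '649,41791';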
5. Working around the hang by deleting in batches

Since the one-shot delete would not complete, I tried deduplicating in batches instead: each statement below deletes at most 100,000 duplicate rows and then commits (a loop that automates this is sketched after the results below).
First batch:
delete from tb_bj_banker_etl
where id in (select id from tb_bj_banker_etl group by id having count(*) > 1)
  and rowid not in (select max(rowid) from tb_bj_banker_etl group by id having count(*) > 1)
  and rownum <= 100000;
commit;
Second batch:
delete from tb_bj_banker_etl
where id in (select id from tb_bj_banker_etl group by id having count(*) > 1)
  and rowid not in (select max(rowid) from tb_bj_banker_etl group by id having count(*) > 1)
  and rownum <= 100000;
commit;
.....
.....
Eighth batch (no ROWNUM cap this time, to remove whatever duplicates remain):
delete from tb_bj_banker_etl
where id in (select id from tb_bj_banker_etl group by id having count(*) > 1)
  and rowid not in (select max(rowid) from tb_bj_banker_etl group by id having count(*) > 1);
commit;
Result: by splitting the 830,000-odd duplicates into deletes of 100,000 rows each, the whole cleanup finished in just over 140 seconds, and the goal of deduplicating 800,000+ rows was met. Why the single direct delete hung is still awaiting follow-up analysis.
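Rather than pasting eight nearly identical statements by hand, the batching can be wrapped in a loop. A minimal PL/SQL sketch, assuming the same table, key, and batch size of 100,000 as above:

begin
  loop
    delete from tb_bj_banker_etl
     where id in (select id from tb_bj_banker_etl group by id having count(*) > 1)
       and rowid not in (select max(rowid) from tb_bj_banker_etl group by id having count(*) > 1)
       and rownum <= 100000;
    exit when sql%rowcount = 0;  -- no duplicates left to delete
    commit;                      -- commit each batch before the next one
  end loop;
  commit;
end;
/

Committing per batch keeps each transaction's undo footprint small, which is one plausible reason the batched approach behaved so much better than the single big delete.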
That's the whole story of the hang while deduplicating 800,000+ rows and the temporary batch workaround. I hope it helps.