My question:
In the statement below, operation 1 of the execution plan goes through the index, but operation 4 does a full table scan instead. I want operation 4 to use the index as well, but forcing it with an INDEX hint has no effect. What should I do?
My statement:
update t_mt partition(P_MT5_1) mt
   set (mt.stat, mt.rpttime) =
       (select stat, rpttime
          from t_statbuf buf
         where sj >= to_date('2013-05-04 00:00:00','yyyy-mm-dd hh24:mi:ss')
           and sj <  to_date('2013-05-05 00:00:00','yyyy-mm-dd hh24:mi:ss')
           and buf.msgid = mt.msgid)
 where mt.ctime >= to_date('2013-05-04 00:00:00','yyyy-mm-dd hh24:mi:ss')
   and mt.ctime <  to_date('2013-05-05 00:00:00','yyyy-mm-dd hh24:mi:ss')
   and mt.ourgetstatus = '0'
   and mt.stat = '0'
   --and mt.ret='0'
   and exists (select 1
                 from t_statbuf buf
                where sj >= to_date('2013-05-04 00:00:00','yyyy-mm-dd hh24:mi:ss')
                  and sj <  to_date('2013-05-05 00:00:00','yyyy-mm-dd hh24:mi:ss')
                  and buf.msgid = mt.msgid);
An expert replied:
You'd better use MERGE. A correlated UPDATE between two tables has to run the subquery twice per row (once for the SET and once for the EXISTS), and multi-column correlated updates perform poorly.
Modified SQL:
merge into t_mt partition(P_MT5_1) mt
using (select *
         from t_statbuf
        where sj >= to_date('2013-05-05','yyyy-mm-dd')
          and sj <  to_date('2013-05-06','yyyy-mm-dd')) buf
   on (buf.msgid = mt.msgid)
 when matched then
update set mt.stat = buf.stat,
           mt.rpttime = buf.rpttime
     where mt.ctime >= to_date('2013-05-05','yyyy-mm-dd')
       and mt.ctime <  to_date('2013-05-06','yyyy-mm-dd')
       and mt.ourgetstatus = '0'
       and mt.stat = '0'
       --and mt.ret='0'
;
Result:
With my original UPDATE statement, more than 20 minutes passed and no result was returned. The MERGE, by contrast, came back almost immediately (how can it be so fast!!): 66334 rows, 121777 rows.
Follow-up question:
Can MERGE INTO also delete data? For the rows that were just updated, the matching rows in the other table still need to be deleted, either with the original statement or with a full-table operation.
A:
Whether MERGE INTO can delete depends on the Oracle version; newer versions (10g and later) support a DELETE clause in the MERGE.
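As an illustration of the syntax (table and column names here are hypothetical, not from this thread), the DELETE clause attaches to the WHEN MATCHED THEN UPDATE branch and only removes rows that the UPDATE branch actually touched:

```sql
-- Hypothetical sketch: update matched rows, then delete those now flagged 'DONE'.
-- Note: DELETE WHERE is evaluated against the post-update values, and only
-- on rows the UPDATE branch touched.
merge into orders o
using (select order_id, status from order_feed) f
   on (o.order_id = f.order_id)
 when matched then
update set o.status = f.status
delete where (o.status = 'DONE');
```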
If an error is reported part-way through (typically ORA-30926):
That is because the source rows you are updating from are not unique — more than one source row matches the same target row.
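A common workaround (a sketch, using the thread's table names but an assumed ordering column) is to deduplicate the USING subquery so that each target row matches at most one source row, for example by keeping only the latest row per key:

```sql
-- Sketch: keep only the most recent t_statbuf row per msgid so the MERGE
-- sees exactly one source row per target row (avoids ORA-30926).
merge into t_mt mt
using (select msgid, stat, rpttime
         from (select msgid, stat, rpttime,
                      row_number() over (partition by msgid
                                         order by rpttime desc) rn
                 from t_statbuf)
        where rn = 1) buf
   on (buf.msgid = mt.msgid)
 when matched then
update set mt.stat = buf.stat,
           mt.rpttime = buf.rpttime;
```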
What I did:
1. Where MT has a status update that has not yet been carried over to MX, update MX. From 2013-04-27 onward about 2 million rows will be updated; estimated time about 30 minutes:
merge into t_busi_presend_mx mx
using (select *
         from t_mt
        where ctime >= to_date('2013-04-27','yyyy-mm-dd')) mt
   on (mx.id = mt.mcid)
 when matched then
update set mx.sjsendstatus = mt.stat,
           mx.rpttime = mt.rpttime
     where mx.cjsj >= to_date('2013-04-27','yyyy-mm-dd')
       and mx.sjsendstatus = 'NONE'
2. Delete the records sharing the same msgid from Buf (2.04 million rows) and from MT (10 million rows):
Option 1: use MERGE to delete the 2 million rows; it takes 535 seconds. (An UPDATE SET clause must be present before the DELETE clause in a MERGE.)
merge into t_statbuf buf
using (select *
         from t_mt
        where ctime >= to_date('2013-04-27','yyyy-mm-dd')) mt
   on (buf.msgid = mt.msgid)
 when matched then
update set buf.stat = null
delete where (buf.msgid = mt.msgid)
Option 2: I did not test this myself, but someone else did it with intermediate tables and it took only dozens of seconds. So it seems MERGE is not needed for the delete; a plain DELETE with EXISTS works.
You can try it like this:
create table tmp_t1 nologging as
  select buf.msgid from buf, mt where buf.id = mt.id;
create table tmp_mt nologging as
  select * from mt;
create index ... on tmp_mt(msgid);
delete from tmp_mt mt
 where exists (select 1 from tmp_t1 t1 where mt.msgid = t1.msgid);
commit;
delete from buf
 where exists (select 1 from tmp_t1 t1 where buf.msgid = t1.msgid);
commit;
insert into buf select * from tmp_mt;
commit;
DELETE t_statbuf m2
 WHERE EXISTS (SELECT m1.msgid
                 FROM t_mt m1
                WHERE ctime >= to_date('2013-04-27','yyyy-mm-dd')
                  AND m1.msgid = m2.msgid)
Lessons learned:
MERGE is practical, but with very large updates its efficiency also degrades, so you may need to update in batches. Here the update volume was on the order of tens of millions of rows.
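One common batching pattern (a sketch only; the predicate, target value, and batch size of 100,000 are assumptions, not from this thread) is to update a bounded chunk per iteration and commit between chunks so undo stays small:

```sql
-- Sketch of batched updating in PL/SQL: each pass updates at most
-- 100,000 rows that still match the predicate, then commits; the loop
-- ends when no rows remain to update.
begin
  loop
    update t_mt mt
       set mt.stat = '1'          -- assumed target value
     where mt.stat = '0'          -- assumed predicate
       and rownum <= 100000;      -- assumed batch size
    exit when sql%rowcount = 0;
    commit;
  end loop;
  commit;
end;
/
```

This works because each committed batch shrinks the set matching the predicate, so the loop terminates once the predicate selects nothing.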
If the amount to delete from Buf is small while the MT data volume is large, first extract the common rows into a temporary table, then delete with EXISTS.
With a correlated update, the source must be scanned once for every row updated.
A hash join is faster when the two result sets being joined are of similar size; when one side is much smaller, a nested loop driven by the small side (through an index on the large side) is usually faster. That said, sometimes the two tables differ greatly in size and you would expect hash join to lose, but when I tried it, the gap between hash join and nested loops was not very large.
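When comparing the two join methods, Oracle's optimizer hints can force one plan or the other (the table names and aliases below are illustrative, not from this thread):

```sql
-- Force a hash join between the two row sources:
select /*+ use_hash(a b) */ count(*)
  from big_a a, big_b b
 where a.msgid = b.msgid;

-- Force nested loops with b as the inner (indexed) side,
-- driving from a:
select /*+ leading(a) use_nl(b) */ count(*)
  from small_a a, big_b b
 where a.msgid = b.msgid;
```

Comparing the two plans on your own data (e.g. with autotrace) settles which method wins for a given pair of row sources.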
Q:
What counts as "similar"?
A:
A similar data volume on both sides.
Q:
What about tens of millions of rows in table A joined to hundreds of millions in table B?
A:
HJ (hash join) is faster there.
If A has thousands of rows and B has millions, NL (nested loop) is of course faster.
Even then, it will be slow if the joined result set itself runs to several million rows.
What matters is the size of the result set.
Q:
The size of the result set?
A:
Yes, the number of rows the join produces, i.e. the COUNT of the joined result.