Recently I ran into a performance problem caused by a wrong cardinality estimate on range filter predicates in a join. Let's look at the following query.
The real number of records returned by this query is about 38,000,000.
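(The real count can be checked by simply running the same join through a COUNT(*); a sketch of the check, nothing more, and on tables this size it takes a while:)

SELECT COUNT(*)
FROM   TKCSOWNER.wfctotal T,
       TKCSOWNER.PAYCODE1MMFLAT MP
WHERE  MP.EFFECTIVEDTM <= T.APPLYDTM
AND    MP.EXPIRATIONDTM > T.APPLYDTM
AND    MP.PAYCODEID = T.PAYCODEID;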
The execution plan shows 72,838 rows, so the optimizer believes these predicates filter very well and places this join first in the multi-table join order. That is completely wrong.
SQL> set autotrace traceonly explain;
SQL> set linesize 999
SQL> SELECT
       T.durationsecsqty timeinseconds,
       T.moneyamt moneyamount,
       T.wageamt wageamount,
       T.applydtm applydate,
       T.adjapplydtm adjustedapplydate,
       T.STARTDTM,
       T.ENDDTM,
       T.HOMEACCOUNTSW
     FROM
       TKCSOWNER.wfctotal T,
       TKCSOWNER.PAYCODE1MMFLAT MP
     WHERE
       MP.EFFECTIVEDTM <= T.APPLYDTM
       AND MP.EXPIRATIONDTM > T.APPLYDTM
       AND MP.PAYCODEID = T.PAYCODEID
     /
---------------------------------------------------------------------------
| Id  | Operation           | Name           | Rows  | Bytes | Cost  |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                | 72838 | 5192K | 37450 |
|*  1 |  HASH JOIN          |                | 72838 | 5192K | 37450 |
|   2 |   TABLE ACCESS FULL | PAYCODE1MMFLAT |   323 |  6783 |     3 |
|   3 |   TABLE ACCESS FULL | WFCTOTAL       | 8938K |  443M | 37317 |
---------------------------------------------------------------------------
Now let me comment out the range filter predicates:

MP.EFFECTIVEDTM <= T.APPLYDTM
AND MP.EXPIRATIONDTM > T.APPLYDTM
SQL> SELECT
       T.durationsecsqty timeinseconds,
       T.moneyamt moneyamount,
       T.wageamt wageamount,
       T.applydtm applydate,
       T.adjapplydtm adjustedapplydate,
       T.STARTDTM,
       T.ENDDTM,
       T.HOMEACCOUNTSW
     FROM
       TKCSOWNER.wfctotal T,
       TKCSOWNER.PAYCODE1MMFLAT MP
     WHERE
       /* MP.EFFECTIVEDTM <= T.APPLYDTM
       AND MP.EXPIRATIONDTM > T.APPLYDTM */
       MP.PAYCODEID = T.PAYCODEID
     /
Execution Plan
----------------------------------------------------------
Plan hash value: 564403449
------------------------------------------------------------------------------
| Id  | Operation              | Name              | Rows  | Bytes | Cost  |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |                   |   29M | 1583M | 37405 |
|*  1 |  HASH JOIN             |                   |   29M | 1583M | 37405 |
|   2 |   INDEX FAST FULL SCAN | PK_PAYCODE1MMFLAT |   323 |  1615 |     1 |
|   3 |   TABLE ACCESS FULL    | WFCTOTAL          | 8938K |  443M | 37317 |
------------------------------------------------------------------------------
This time the cardinality estimate is 29,135,142, which is already close to the correct value.
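As an aside, this 29M figure is just the standard equi-join cardinality formula at work (the NDV below is back-derived from the plan, not read from the dictionary, so treat it as an estimate):

  card = card(WFCTOTAL) * card(PAYCODE1MMFLAT) / max(NDV(T.PAYCODEID), NDV(MP.PAYCODEID))

Working backwards: 8,938,000 * 323 / 29,135,142 is roughly 99, so the optimizer apparently sees about 99 distinct PAYCODEID values.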
So how does the optimizer compute the join cardinality when range filter predicates are involved?
The answer is 5%, always 5%. When a range predicate compares columns from two different tables, the CBO has no usable column statistics to estimate its selectivity, so it falls back on a hard-coded 5% default for each such predicate.
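If you want to see this for yourself, a 10053 optimizer trace shows the selectivity the CBO assigns to each predicate. A minimal sketch (the trace file lands in the session's trace directory, e.g. per V$DIAG_INFO; the exact trace wording varies by version):

SQL> ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
SQL> EXPLAIN PLAN FOR
     SELECT T.applydtm
     FROM   TKCSOWNER.wfctotal T, TKCSOWNER.PAYCODE1MMFLAT MP
     WHERE  MP.EFFECTIVEDTM <= T.APPLYDTM
     AND    MP.EXPIRATIONDTM > T.APPLYDTM
     AND    MP.PAYCODEID = T.PAYCODEID;
SQL> ALTER SESSION SET EVENTS '10053 trace name context off';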
Check the math: 29,135,142 * 5% * 5% = 72,837.86, which rounds to 72,838, exactly the estimate in the first test.
So if you hit a performance problem with range filter predicates in a table join, I am not surprised. I think Oracle needs to improve the CBO to handle this situation better.
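Until that happens, the practical workaround is to take the decision away from the optimizer. A sketch using the documented LEADING and USE_HASH hints (in the real multi-table statement you would list whichever tables genuinely belong first; the hints below only illustrate the idea on this two-table example):

SELECT /*+ LEADING(T) USE_HASH(MP) */
       T.durationsecsqty timeinseconds,
       T.moneyamt moneyamount,
       T.wageamt wageamount,
       T.applydtm applydate,
       T.adjapplydtm adjustedapplydate,
       T.STARTDTM, T.ENDDTM, T.HOMEACCOUNTSW
FROM   TKCSOWNER.wfctotal T,
       TKCSOWNER.PAYCODE1MMFLAT MP
WHERE  MP.EFFECTIVEDTM <= T.APPLYDTM
AND    MP.EXPIRATIONDTM > T.APPLYDTM
AND    MP.PAYCODEID = T.PAYCODEID;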