Risks of choosing the wrong field types in Oracle database design
Picking an inaccurate Oracle data type carries hidden risks. For this test we build a table with a VARCHAR2 column that stores dates as strings and a genuine DATE column, then compare how the optimizer handles each:
Problems caused by field types that do not match the actual business data
SQL> drop table test purge;
SQL> create table test as
       select to_char(to_date('1970-01-01', 'yyyy-mm-dd') + rownum, 'yyyymmdd') s_date,
              to_date('1970-01-01', 'yyyy-mm-dd') + rownum d_date
         from all_objects;
SQL> create index ind_t_sdate on test(s_date) nologging;
SQL> create index ind_t_ddate on test(d_date) nologging;
SQL> exec dbms_stats.gather_table_stats(user, 'test', cascade => true);
SQL> set timing on
SQL> set autotrace traceonly
SQL> select * from test where s_date between '20140901' and '20140901';
22 rows selected.
Elapsed: 00:00:00.00
Execution Plan
----------------------------------------------------------
Plan hash value: 953148778
-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |     3 |    51 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST        |     3 |    51 |     3   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IND_T_SDATE |     3 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------
-- Note that the CBO estimates 3 rows, but 22 rows are actually returned.
Predicate Information (identified by operation id ):
---------------------------------------------------
   2 - access("S_DATE">='20140901' AND "S_DATE"<='20140901')
Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          7  consistent gets
          0  physical reads
          0  redo size
        944  bytes sent via SQL*Net to client
        349  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         22  rows processed
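Why does the optimizer misjudge the string range? For a range predicate on a character column, selectivity is interpolated linearly over the strings' byte values. But in 'yyyymmdd' strings, the byte-value "distance" between two consecutive days is wildly uneven, and the string space also contains impossible values such as '20140950' that dilute the density. The sketch below illustrates the idea; `str_as_value` is a simplified stand-in and not Oracle's exact internal formula:

```python
from datetime import date

def str_as_value(s):
    # Treat the string's leading bytes as a positional number, roughly how an
    # optimizer might linearize character values for selectivity estimation.
    # This is an illustration, not Oracle's actual algorithm.
    val = 0
    for ch in s[:8]:
        val = val * 256 + ord(ch)
    return val

# As DATE values, consecutive days are always exactly one day apart:
assert (date(2015, 1, 1) - date(2014, 12, 31)).days == 1

# As strings, the "distance" between consecutive days varies enormously:
within_month = str_as_value("20141231") - str_as_value("20141230")  # tiny
across_year  = str_as_value("20150101") - str_as_value("20141231")  # huge
print(within_month, across_year)
```

Uniform interpolation over such an uneven value space cannot track the real row density, which is why the estimate of 3 rows lands so far from the actual 22. A DATE column has no such distortion: one day is always the same distance.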
SQL> select * from test
       where d_date between to_date('20140901', 'yyyymmdd')
                        and to_date('20140901', 'yyyymmdd');
22 rows selected.
Elapsed: 00:00:00.00
Execution Plan
----------------------------------------------------------
Plan hash value: 112387541
-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |    23 |   391 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST        |    23 |   391 |     3   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IND_T_DDATE |    23 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------
-- Here the CBO estimate (23 rows) is essentially accurate.
Predicate Information (identified by operation id ):
---------------------------------------------------
   2 - access("D_DATE">=TO_DATE(' 2014-09-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')
       AND "D_DATE"<=TO_DATE(' 2014-09-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          7  consistent gets
          0  physical reads
          0  redo size
        944  bytes sent via SQL*Net to client
        349  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         22  rows processed
Summary: although the two SQL statements consume the same resources here, the cardinality estimate for the first one is wrong. Once several tables are joined, that becomes a hidden risk and can easily produce a bad execution plan. Beyond the estimation problem, storing dates in VARCHAR2 invites a proliferation of formats. I once saw a system where the stored date strings came in every imaginable variety: year-month-day, year-month-day-hour, year-month-day-hour-minute; full-width and half-width characters; NULLs; even the literal string "undefined" (probably written by JavaScript). On top of that, the index could not be used and the column's data type had to be changed; writing the conversion script took more than a day.
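To give a feel for the cleanup effort such a column forces on you, here is a minimal normalization sketch. The format list, the handling of "undefined", and the function name are all assumptions for illustration, not the actual conversion script from that incident:

```python
import re
import unicodedata
from datetime import datetime

# Candidate input formats, tried most-specific first (an assumed list).
FORMATS = ["%Y-%m-%d %H:%M", "%Y-%m-%d %H", "%Y-%m-%d", "%Y%m%d"]

def normalize(raw):
    """Return a canonical 'YYYY-MM-DD HH24:MI:SS' string, or None if unusable."""
    if raw is None:
        return None
    # Fold full-width digits and punctuation to their half-width equivalents.
    s = unicodedata.normalize("NFKC", raw).strip()
    if s.lower() in ("", "undefined", "null"):
        return None
    # Unify separators such as '2014/09/01', '2014.09.01', '2014年09月01日'.
    s = re.sub(r"[年月/.]", "-", s).replace("日", "")
    s = re.sub(r"-+", "-", s).strip("-")
    for fmt in FORMATS:
        try:
            return datetime.strptime(s, fmt).strftime("%Y-%m-%d %H:%M:%S")
        except ValueError:
            continue
    return None  # leave truly unparseable values for manual review
```

Every messy variant that does parse lands in a single canonical form that a DATE column (or a one-shot `TO_DATE` conversion) can accept; with a proper DATE column from the start, none of this code would be needed.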