Resolving slow queries caused by a high water mark (HWM)
As we all know, continuous DML against a table's records keeps pushing its high water mark (HWM) upward. A DELETE removes the table's data but does not lower the HWM; only a TRUNCATE resets it. When performing a full table scan, Oracle reads every block below the HWM, so the scan time does not decrease. Therefore, deleting data with DELETE does not improve full-scan query performance on the table.
The following example demonstrates how to resolve a slow query caused by a high water mark:
-- In this example, the segment of the test table occupies 128 MB:
SQL> SELECT a.bytes/1024/1024 || 'M' FROM user_segments a WHERE a.segment_name = 'TC_RES_PHY_EQP_PRO_MID_517';

A.BYTES/1024/1024||'M'
-----------------------------------------
128M

-- Querying a single record costs 4357 with 15730 consistent gets, and takes 0.53 seconds:
SQL> set autotrace on
SQL> SELECT 1 FROM TC_RES_PHY_EQP_PRO_MID_517 a WHERE a.obj_id = 17202000000001;

         1
----------
         1

Execution Plan
----------------------------------------------------------
Plan hash value: 854298875

------------------------------------------------------------------------------------------------
| Id  | Operation         | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                            |   175 |  2275 |  4357   (2)| 00:00:53 |
|*  1 | TABLE ACCESS FULL | TC_RES_PHY_EQP_PRO_MID_517 |   175 |  2275 |  4357   (2)| 00:00:53 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("A"."OBJ_ID"=17202000000001)

Note
-----
   - dynamic sampling used for this statement (level=2)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      15730  consistent gets
          0  physical reads
          0  redo size
        520  bytes sent via SQL*Net to client
        520  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

-- Now DELETE most of the data, leaving a single test row:
SQL> DELETE FROM TC_RES_PHY_EQP_PRO_MID_517 a WHERE a.obj_id <> 17202000000001;

1172857 rows deleted.

-- The segment still occupies 128 MB:
SQL> set autotrace off
SQL> SELECT a.bytes/1024/1024 || 'M' FROM user_segments a WHERE a.segment_name = 'TC_RES_PHY_EQP_PRO_MID_517';

A.BYTES/1024/1024||'M'
-----------------------------------------
128M

SQL> COMMIT;

Commit complete.

SQL> SELECT a.bytes/1024/1024 || 'M' FROM user_segments a WHERE a.segment_name = 'TC_RES_PHY_EQP_PRO_MID_517';

A.BYTES/1024/1024||'M'
-----------------------------------------
128M

-- Querying a single record still costs 4316 with 15730 consistent gets, and takes 0.52 seconds:
SQL> set autotrace on
SQL> SELECT 1 FROM TC_RES_PHY_EQP_PRO_MID_517 a WHERE a.obj_id = 17202000000001;

         1
----------
         1

Execution Plan
----------------------------------------------------------
Plan hash value: 854298875

------------------------------------------------------------------------------------------------
| Id  | Operation         | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                            |     1 |    13 |  4316   (1)| 00:00:52 |
|*  1 | TABLE ACCESS FULL | TC_RES_PHY_EQP_PRO_MID_517 |     1 |    13 |  4316   (1)| 00:00:52 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("A"."OBJ_ID"=17202000000001)

Note
-----
   - dynamic sampling used for this statement (level=2)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      15730  consistent gets
          0  physical reads
          0  redo size
        520  bytes sent via SQL*Net to client
        520  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

-- Normally a table's rowids do not change, so row movement must be enabled first:
SQL> ALTER TABLE TC_RES_PHY_EQP_PRO_MID_517 ENABLE ROW MOVEMENT;

Table altered.

-- Defragment the table and reclaim space. Compared with ALTER TABLE MOVE, this operation:
-- 1. does not consume additional tablespace;
-- 2. can be performed online and does not invalidate indexes;
-- 3. can shrink the table's indexes at the same time via the CASCADE option;
-- 4. ALTER TABLE MOVE always relocates the segment, whereas SHRINK leaves the segment where it is.
SQL> ALTER TABLE TC_RES_PHY_EQP_PRO_MID_517 SHRINK SPACE;

Table altered.

-- Querying the record now costs only 2 with 4 consistent gets, and takes 0.01 seconds:
SQL> SELECT 1 FROM TC_RES_PHY_EQP_PRO_MID_517 a WHERE a.obj_id = 17202000000001;

         1
----------
         1

Execution Plan
----------------------------------------------------------
Plan hash value: 854298875

------------------------------------------------------------------------------------------------
| Id  | Operation         | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                            |     1 |    13 |     2   (0)| 00:00:01 |
|*  1 | TABLE ACCESS FULL | TC_RES_PHY_EQP_PRO_MID_517 |     1 |    13 |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("A"."OBJ_ID"=17202000000001)

Note
-----
   - dynamic sampling used for this statement (level=2)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          4  consistent gets
          0  physical reads
          0  redo size
        520  bytes sent via SQL*Net to client
        520  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
-- The segment now occupies only 4 MB:
SQL> SELECT a.bytes/1024/1024 || 'M' FROM user_segments a WHERE a.segment_name = 'TC_RES_PHY_EQP_PRO_MID_517';

A.BYTES/1024/1024||'M'
-----------------------------------------
4M
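Putting the session above together, the whole reclaim procedure can be sketched as follows. Note that the CASCADE option and the final DISABLE ROW MOVEMENT step are common additions not shown in the original session:

```sql
-- Allow Oracle to change rowids while compacting the segment.
ALTER TABLE tc_res_phy_eqp_pro_mid_517 ENABLE ROW MOVEMENT;

-- Compact the rows below the HWM, lower the HWM, and release the freed space;
-- CASCADE also shrinks the table's dependent indexes.
ALTER TABLE tc_res_phy_eqp_pro_mid_517 SHRINK SPACE CASCADE;

-- Optional: restore the default once the reorganization is done.
ALTER TABLE tc_res_phy_eqp_pro_mid_517 DISABLE ROW MOVEMENT;
```

On a busy system, SHRINK SPACE COMPACT can be run first to compact the rows without moving the HWM, deferring the brief HWM adjustment (which requires an exclusive lock) to a quiet period.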
Of course, ENABLE ROW MOVEMENT also has an impact on system performance. Tom Kyte's AskTom site has this Q&A about row movement:
You Asked
Hi Tom
I have seen your posting on ENABLE ROW MOVEMENT which is available in 10g. It looks
like a very nice option since we can relocate and reorganize heap tables without any outage,
since it does not invalidate indexes. But is there any performance hit or any other
disadvantage to using this? I would like to use this in our new application.
Rgds
Anil
And we said...
Well, the tables have to be in an ASSM (Automatic Segment Space Management) tablespace
for this to work (so if they are not, you have to move them there first in order to do this
over time).
It will necessarily consume processing resources on your machine while running (it will
read the table, it will delete/insert the rows at the bottom of the table to move them
up, it will generate redo, it will generate undo).
I would suggest benchmarking -- collect performance metrics about the table before and
after performing the operation. You would expect full scans to operate more efficiently
afterwards, and you would expect index range scans to be either unchanged or "better" as you
have more rows per block packed together (less data spread). You would be looking for that to
happen -- statspack or the tools available in dbconsole would be useful for measuring
that (the amount of work performed by your queries over time).
That is to say, ENABLE ROW MOVEMENT has side effects: once row movement is enabled, when data is updated in a way that requires the row to move, the row is deleted and then re-inserted, which generates additional redo and undo, and the row's rowid changes as well.
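A quick way to observe this side effect is to compare a row's rowid across a shrink. This is a sketch against the test table from the example above, assuming row movement is already enabled; after rows have been moved, the second query will generally return a different rowid:

```sql
SELECT rowid FROM tc_res_phy_eqp_pro_mid_517 WHERE obj_id = 17202000000001;

ALTER TABLE tc_res_phy_eqp_pro_mid_517 SHRINK SPACE;

-- Re-running the query typically shows the rowid has changed.
SELECT rowid FROM tc_res_phy_eqp_pro_mid_517 WHERE obj_id = 17202000000001;
```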
The following is an explanation of row chaining and row migration:
Row chaining: when a row is too large to fit into any single block, row chaining occurs. In this case, Oracle divides the row into smaller chunks; each chunk is stored in a block along with the pointers necessary to retrieve and assemble the entire row.
Row migration: when a row is updated and Oracle cannot find the necessary free space in its block, Oracle moves the entire row into a new block and leaves a pointer from the original block to the new location. This process is called row migration.
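To check whether a table actually suffers from chaining or migration, the CHAIN_CNT column of USER_TABLES can be consulted; it counts chained and migrated rows together, and is populated by ANALYZE rather than DBMS_STATS. A sketch against the test table above:

```sql
-- Populate CHAIN_CNT for the table.
ANALYZE TABLE tc_res_phy_eqp_pro_mid_517 COMPUTE STATISTICS;

-- A large CHAIN_CNT relative to NUM_ROWS suggests chaining or migration.
SELECT table_name, num_rows, chain_cnt
  FROM user_tables
 WHERE table_name = 'TC_RES_PHY_EQP_PRO_MID_517';
```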