Impact of direct path reads in Oracle 11g on delayed block cleanout


In Oracle 11g, a serial full table scan may use direct path read instead of the db file scattered read used in earlier versions. Direct path read has clear advantages:

1. It reduces latch usage on the buffer cache and avoids possible latch contention.

2. The size of a physical IO no longer depends on which blocks are already in the buffer cache. Imagine an extent of eight blocks in which blocks 1, 3, 5, and 7 are cached while blocks 2, 4, 6, and 8 are not. When reading this extent, the traditional method performs four db file sequential read operations on blocks 2, 4, 6, and 8, which is often far less efficient than reading all eight blocks of the extent in a single IO. Oracle tries to avoid this situation by not keeping large-table blocks in the cache (after reading, they are placed at the cold end of the LRU list), but direct path read avoids the problem entirely and reads as many contiguous blocks as possible in one IO.
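Whether a given serial scan actually bypassed the buffer cache can be checked from the session statistics, and for comparison the serial direct read behavior can be switched off. A sketch (event 10949 is an undocumented 11g setting that disables serial direct path reads; use it only for testing):

```sql
-- After the scan, compare 'physical reads' with 'physical reads direct';
-- if they match, the scan bypassed the buffer cache.
select vs.name, vm.value
  from v$mystat vm, v$sysstat vs
 where vm.statistic# = vs.statistic#
   and vs.name in ('physical reads', 'physical reads direct');

-- For comparison, disable serial direct path reads in this session,
-- forcing the scan back through the buffer cache (test systems only):
alter session set events '10949 trace name context forever, level 1';
```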

Of course, direct path read also introduces some drawbacks:

1. A segment-level checkpoint must be performed on the object before its segment can be read via direct path.

2. Delayed block cleanout may be repeated on every read (we assume you know what delayed block cleanout is).


http://www.oracledatabase12g.com/archives/direct-read-impact-on-delayed-block-read.html


The Metalink document [ID 793845.1] describes the change in the new version:

Applies to:

Oracle Server - Enterprise Edition - Version: 11.1.0.6 to 11.1.0.7
This problem can occur on any platform.

Symptoms

After migrating an 11g database from a standalone instance to a 4-node RAC, a noticeable increase in 'direct path read' waits was observed at times.

Here are the cache sizes and Top 5 events:

Cache Sizes                    Begin        End
~~~~~~~~~~~               ---------- ----------
Buffer Cache:                 3,232M     3,616M   Std Block Size:     8K
Shared Pool Size:             6,736M     6,400M   Log Buffer:     8,824K

Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                        Avg
                                                       Wait   % DB
Event                        Waits   Time (s)          (ms)   time   Wait Class
------------------------- ---------- ---------- ---------- ------ ----------
DB CPU                                   13,916               42.1
direct path read           1,637,344     13,359          8    40.4   User I/O
db file sequential read       47,132      1,111         24     3.4   User I/O
DFS lock handle              301,278      1,028          3     3.1   Other
db file parallel read         14,724        554         38     1.7   User I/O

Changes

Migrated from a standalone database to a 4-node RAC.
Moved from Unix file system storage to ASM.

Using Automatic Shared Memory Management (ASMM).
The setting of db_cache_size in the spfile/pfile is low compared to normal workload requirements.
Cause

There have been changes in 11g in the heuristics used to choose between direct path reads and reads through the buffer cache for serial table scans.
In 10g, serial scans of "large" tables went through the buffer cache by default; this is no longer the case. In 11g, the decision to read via direct path or through the cache is based on the size of the table, the buffer cache size, and various other statistics.
Direct path reads are faster than scattered reads and have less impact on other processes because they avoid latches.
Solution

When using Automatic Shared Memory Management (ASMM) with the buffer cache low limit set well below normal workload requirements, and usually just after startup, 11g might choose to do serial direct path read scans for large tables that do not fit in the SGA. When ASMM later increases the buffer cache due to demand, 11g might no longer do serial direct path read scans for those same large tables. If you want to avoid this, note the buffer cache and shared pool requirements for a normal workload and set the low limits of the buffer cache and shared pool in the spfile/pfile close to those normal workload values:

db_cache_size
shared_pool_size
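Setting those floors is just a matter of assigning the two parameters while sga_target remains in effect; a minimal sketch (the 3G/6G figures below are placeholders taken from nothing in particular, not recommendations):

```sql
-- Under ASMM (sga_target > 0), non-zero values of db_cache_size and
-- shared_pool_size act as lower bounds rather than fixed sizes.
alter system set db_cache_size    = 3G scope=both;
alter system set shared_pool_size = 6G scope=both;
```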

Next we will test the impact of direct path reads on delayed block cleanout:

SQL> create table tv as select rownum rn, rpad('A', 600, 'z') rp from dual
  2  connect by level <= 300000;

Table created.

Open a session A:

SQL> set linesize 200 pagesize 1400;
SQL> select count(*) from tv;

  COUNT(*)
----------
    300000

SQL> select vm.sid, vs.name, vm.value
  2    from v$mystat vm, v$sysstat vs
  3   where vm.statistic# = vs.statistic#
  4     and vs.name in ('cleanouts only - consistent read gets',
  5                     'session logical reads',
  6                     'physical reads',
  7                     'physical reads direct');

       SID NAME                                          VALUE
---------- ---------------------------------------- ----------
        25 session logical reads                         27281
        25 physical reads                                27273
        25 physical reads direct                         27273
        25 cleanouts only - consistent read gets             0

-- Clearly the query used direct path reads: 'physical reads' equals 'physical reads direct'.

SQL> update tv set rn = rn + 1;  -- bulk update

SQL> alter system flush buffer_cache;
-- Flush the buffer cache before committing, so the commit cannot clean out
-- the flushed blocks and delayed block cleanout is left for later readers

System altered.

SQL> commit;

Commit complete.

Open a new session B:

SQL> set linesize 200 pagesize 1400;
SQL> select count(*) from tv;

  COUNT(*)
----------
    300000

SQL> select vm.sid, vs.name, vm.value
  2    from v$mystat vm, v$sysstat vs
  3   where vm.statistic# = vs.statistic#
  4     and vs.name in ('cleanouts only - consistent read gets',
  5                     'session logical reads',
  6                     'physical reads',
  7                     'physical reads direct', 'redo size');

       SID NAME                                          VALUE
---------- ---------------------------------------- ----------
        25 session logical reads                         54554
        25 physical reads                                27273
        25 physical reads direct                         27273
        25 redo size                                         0
        25 cleanouts only - consistent read gets         27273
-- The direct path read performs a delayed block cleanout on every block
-- ('cleanouts only - consistent read gets' = 27273), yet no redo is generated:
-- the cleanout happens on private copies of the blocks read into the PGA and
-- is never written back, so every subsequent direct path read must repeat it.
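By contrast, one would expect a read through the buffer cache to perform the cleanout once, write the cleaned blocks back, and generate redo. A sketch of how to check this in a third session (again using the undocumented 11g event 10949 to disable serial direct reads; for test systems only):

```sql
-- Force the serial scan back through the buffer cache
alter session set events '10949 trace name context forever, level 1';

select count(*) from tv;

-- 'redo size' should now be non-zero (redo from the cleanout), and repeating
-- the query should add no further 'cleanouts only - consistent read gets'
select vs.name, vm.value
  from v$mystat vm, v$sysstat vs
 where vm.statistic# = vs.statistic#
   and vs.name in ('redo size', 'cleanouts only - consistent read gets');
```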
