Notes on understanding wait events and statistics

Source: Internet
Author: User

Buffer busy waits

This event indicates that multiple processes are contending for the same buffers in the buffer cache at the same time.
View the counts of the various buffer wait classes through V$WAITSTAT:
SELECT class, count FROM v$waitstat WHERE count > 0 ORDER BY count DESC;

You can also query V$SESSION_WAIT to observe current buffer waits, where P1 = FILE_ID and P2 = BLOCK_ID.

Through DBA_EXTENTS you can then find out which segment is being contended for:
SELECT * FROM v$session_wait WHERE event = 'buffer busy waits';

SELECT segment_owner, segment_name
FROM dba_extents
WHERE file_id = &p1
AND &p2 BETWEEN block_id AND block_id + blocks - 1;

For segment header contention:
1. It is probably freelist contention; ASSM in a locally managed tablespace (LMT) can solve the problem.
2. If you cannot use ASSM, increase the FREELISTS setting; in a multi-instance (RAC) environment, also use FREELIST GROUPS.
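A hedged sketch of the two fixes above. The tablespace, schema, and table names are hypothetical, and ASSM requires a locally managed tablespace:

```sql
-- Create an ASSM tablespace and move the hot table into it
-- (names and sizes here are placeholders)
CREATE TABLESPACE assm_data DATAFILE 'assm_data01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

ALTER TABLE app.orders MOVE TABLESPACE assm_data;
-- Note: indexes on a moved table become UNUSABLE and must be rebuilt.

-- If ASSM is not an option, raise FREELISTS on the segment instead:
ALTER TABLE app.orders STORAGE (FREELISTS 4);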
View a segment's freelists:
SELECT segment_name, freelists
FROM dba_segments
WHERE segment_name = '&segment_name'
AND segment_type = '&segment_type';

For data block contention:
1. Optimize SQL to avoid using indexes with poor selectivity.
2. Use ASSM in an LMT, or increase FREELISTS, to prevent multiple processes from inserting into the same block at the same time.

For undo header contention:
Use automatic undo management, or add rollback segments.

For undo block contention:
Use automatic undo management, or increase the rollback segment size.
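Switching to automatic undo management might look like the following sketch. The undo tablespace name is a placeholder, and UNDO_MANAGEMENT is a static parameter, so an instance restart is required:

```sql
-- Assumes an undo tablespace named UNDOTBS1 already exists
ALTER SYSTEM SET undo_management = AUTO SCOPE = SPFILE;
ALTER SYSTEM SET undo_tablespace = UNDOTBS1 SCOPE = SPFILE;
-- Restart the instance for UNDO_MANAGEMENT to take effect
```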

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Free buffer waits

No free buffer can be found, so DBWR is posted to write dirty buffers to disk in order to free buffers.
Factors that make the session wait for DBWR to finish include:
1. Slow I/O (consider asynchronous I/O or raw devices).
2. DBWR is waiting on some resource, such as a latch.
3. The buffer cache is too small, so DBWR spends a long time writing dirty buffers.
4. The buffer cache is so large that a single DBWR process cannot write enough dirty buffers to keep up with demand.

Check where DBWR is blocked:
1. Check V$FILESTAT to see where most write operations occur.
2. Check the I/O status of the operating system.
3. Check whether the cache is too small: look for a low buffer cache hit ratio, and use V$DB_CACHE_ADVICE to decide whether to grow the cache.
4. If the cache is large enough and there is no I/O problem, consider asynchronous I/O or multiple DBWR processes to handle the load.
Adjust the DB_WRITER_PROCESSES parameter (DBW0-DBW9 and DBWa-DBWj) on systems with multiple CPUs (at least one DB writer for every 8 CPUs) or multiple processor groups (at least as many DB writers as processor groups).
5. Alternatively, set DBWR_IO_SLAVES to simulate asynchronous I/O where it is unavailable.
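The V$DB_CACHE_ADVICE check in step 3 above might be run as the following sketch (the advisory must be enabled via the DB_CACHE_ADVICE parameter):

```sql
-- Estimated physical reads at different cache sizes for the DEFAULT pool;
-- a read factor well below 1 at larger sizes suggests growing the cache
SELECT size_for_estimate, buffers_for_estimate,
       estd_physical_read_factor, estd_physical_reads
FROM v$db_cache_advice
WHERE name = 'DEFAULT'
  AND advice_status = 'ON'
ORDER BY size_for_estimate;
```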

LATCH FREE

1. Check which types of latch are being waited for, such as the shared pool latch or the cache buffers lru chain latch.
Check V$SESSION_WAIT:
P1 = address of latch, P2 = latch number, P3 = number of sleeps
SELECT n.name, SUM(w.p3) sleeps
FROM v$session_wait w, v$latchname n
WHERE w.event = 'latch free'
AND w.p2 = n.latch#
GROUP BY n.name;

2. Check the resource usage behind the latch. For example, if the library cache latch is highly contended, check the hard and soft parse rates.
3. Check whether the SQL statements executed by the sessions involved in latch contention need to be optimized.
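Cumulative latch activity since instance startup can also be examined in V$LATCH, as a sketch:

```sql
-- Latches ranked by sleeps; a high sleeps-to-misses ratio
-- signals heavy contention on that latch
SELECT name, gets, misses, sleeps
FROM v$latch
WHERE sleeps > 0
ORDER BY sleeps DESC;
```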

Shared Pool and Library Cache Latch Contention

The main problem lies in parsing.
1. Unshared SQL
Manually check whether the SQL statements executed only a few times are similar:
SELECT sql_text FROM v$sqlarea
WHERE executions < 4 ORDER BY sql_text;
Or:
SELECT SUBSTR(sql_text, 1, 60), COUNT(*)
FROM v$sqlarea
WHERE executions < 4
GROUP BY SUBSTR(sql_text, 1, 60)
HAVING COUNT(*) > 1;
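The usual cure for unshared SQL is bind variables. A minimal sketch (the table and column names are hypothetical; the bind-variable syntax shown is SQL*Plus):

```sql
-- Literal values make each statement textually unique,
-- so every execution hard parses a new cursor:
SELECT * FROM orders WHERE order_id = 1001;
SELECT * FROM orders WHERE order_id = 1002;

-- A bind variable keeps the text identical, so the cursor is shared:
VARIABLE oid NUMBER
EXEC :oid := 1001
SELECT * FROM orders WHERE order_id = :oid;
```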

2. Reparsed Sharable SQL
SELECT sql_text, parse_calls, executions
FROM v$sqlarea
ORDER BY parse_calls DESC;
When parse_calls is close to executions, the statement is being reparsed on almost every execution; optimize these SQL statements.

3. By Session
Check whether a session performs many parses; it is best to look at the parse rate over time.
SELECT pa.sid, pa.value "Hard Parses", ex.value "Execute Count"
FROM v$sesstat pa, v$sesstat ex
WHERE pa.sid = ex.sid
AND pa.statistic# = (SELECT statistic#
FROM v$statname WHERE name = 'parse count (hard)')
AND ex.statistic# = (SELECT statistic#
FROM v$statname WHERE name = 'execute count')
AND pa.value > 0;

Cache buffer lru chain

This latch protects the buffer LRU chains in the cache; it must be obtained before adding, moving, or removing a buffer on the list.
Contention is caused by heavy buffer traffic, such as inefficient SQL repeatedly accessing unsuitable indexes (large index range scans) or needless full table scans; reading a buffer causes it to move along the LRU list.
Look for statements with very high logical or physical I/O that use unselective indexes.
Contention also occurs when the cache is too small and DBWR cannot write dirty buffers in time, so foreground processes hold the latch for a long time while searching for a free buffer.

Cache buffers chains

This latch is obtained when a buffer is searched for, added to, or removed from the buffer cache. Contention shows up as severe contention for particular data blocks, i.e., hot blocks.

The difference between cache buffer lru chain latch and cache buffers chains latch:

The key point is that they protect different data structures. The former protects the LRU chain pointers; the LRU chain is used to find free buffers, move hot buffers toward the MRU end, and assist dirty-data writing and checkpoint operations. The latter protects the hash chains; a hash chain is used to locate cached blocks via a hash algorithm based on file and block id.
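To narrow cache buffers chains contention down to hot blocks, one approach is to look at the busiest child latches, as a sketch (the ROWNUM filter is just one way to limit the output):

```sql
-- Child latches of cache buffers chains with the most sleeps;
-- the hot blocks hash to the chains these children protect
SELECT *
FROM (SELECT addr, child#, gets, misses, sleeps
      FROM v$latch_children
      WHERE name = 'cache buffers chains'
      ORDER BY sleeps DESC)
WHERE ROWNUM <= 10;
```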

Db file scattered read

A multiblock read loads data into many non-contiguous buffers (each block is hashed by file_id/block_id into a different place in the cache); this usually occurs during an index fast full scan or a full table scan.
View V$SESSION_WAIT: P1 = FILE_ID, P2 = BLOCK_ID, P3 = number of blocks (> 1).

In large databases, physical read waits and idle waits are usually at the top. Also consider whether the following symptoms are present:
direct read waits (full table scans with parallel query), db file scattered read waits, a poor buffer cache hit ratio, or slow user response times.

Check which sessions are doing full table scans:
SELECT s.sql_address, s.sql_hash_value, w.p1, w.p2
FROM v$session s, v$session_wait w
WHERE w.event LIKE 'db file%read' AND w.sid = s.sid;

View the object:
SELECT segment_owner, segment_name
FROM dba_extents
WHERE file_id = &p1 AND &p2 BETWEEN block_id AND block_id + blocks - 1;

View which SQL:
SELECT sql_text
FROM v$sql
WHERE hash_value = &sql_hash_value AND address = '&sql_address';

Db file sequential read

This indicates that data is read one block at a time (single block read) into a buffer, most often caused by index access.
View V$SESSION_WAIT: P1 = FILE_ID, P2 = BLOCK_ID, P3 = number of blocks (= 1).
 

Direct path read and direct path read (lob)

Data is read directly from disk into the PGA, bypassing the SGA; this usually occurs in DSS or data warehouse workloads.
Causes:
1. A sort is too large to complete in sort memory and spills to the temporary tablespace on disk; reading it back produces direct reads.
2. Parallel query slaves.
3. The server process is processing buffers faster than the I/O system can return them.
Solutions:
1. Query V$TEMPSEG_USAGE to find the SQL statements that create sort segments, and V$SESSTAT to check the size of the sorts.
2. Tune the SQL. If WORKAREA_SIZE_POLICY = MANUAL, increase SORT_AREA_SIZE;
if WORKAREA_SIZE_POLICY = AUTO, increase PGA_AGGREGATE_TARGET.
3. If a table is defined with a high degree of parallelism, the optimizer is steered toward parallel-slave full table scans.
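A sketch of solution 1 above, plus the AUTO-policy adjustment (the 1G target is only an example value):

```sql
-- Sessions currently using temporary sort segments
SELECT username, tablespace, segtype, blocks
FROM v$tempseg_usage
WHERE segtype = 'SORT';

-- With WORKAREA_SIZE_POLICY = AUTO, raise the overall PGA target
ALTER SYSTEM SET pga_aggregate_target = 1G;
```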

Direct path write

The situation is analogous to direct path read: the session waits while buffers are written directly from the PGA to disk.
