Wait events on the buffer cache caused by buffer locks

Source: Internet
Author: User
What is a buffer lock? Block modification in the buffer cache must be protected by a buffer lock. When a session wants to modify or read a block, a shared or exclusive lock is applied to the block header. Although the lock is held only for an instant, other sessions that want to access the same block at that moment must wait on events such as buffer busy waits or read by other session.


A session that reads or modifies a buffer in the SGA must first acquire the cache buffers chains latch and traverse the buffer chain until it finds the necessary buffer header. It must then acquire a buffer lock, or pin, on the buffer header in shared or exclusive mode, depending on the operation it intends to perform. Once the buffer header is pinned, the session releases the cache buffers chains latch and performs the intended operation on the buffer itself. If a pin cannot be obtained, the session waits on the buffer busy waits wait event. This wait event does not apply to read or write operations on blocks held in sessions' private PGAs.

To read or write a block in the buffer cache, a session must first obtain the cache buffers chains latch and then scan the block headers along the chain until it finds the appropriate one. While it holds the latch, other sessions that need the same latch must wait on the latch: cache buffers chains event. The hash latches, hash buckets, hash chains, and buffer headers live in the shared pool; the actual block contents are in the buffer cache. After the session locates the right block header and, from the address in it, the block itself, it marks the header shared or exclusive, depending on the operation it intends to perform, and then releases the cache buffers chains latch. Only then does it carry out the operation, such as a select or an update. If a conflict is detected when marking the block header shared or exclusive, that is a buffer lock conflict, and the buffer busy waits event occurs.
The latch: cache buffers chains event, by contrast, is contention for the same chain itself.
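The two layers described above can be observed directly. A minimal diagnostic sketch, assuming access to the standard v$session_wait view, lists sessions currently stuck on the buffer-lock-related events together with the P1/P2/P3 parameters discussed later:

```sql
-- Sketch: sessions currently waiting on buffer-lock contention.
-- P1 = file#, P2 = block#, P3 = reason code (9i) or block class (10g+).
SELECT sid, event, p1 AS file#, p2 AS block#, p3
FROM   v$session_wait
WHERE  event IN ('buffer busy waits', 'read by other session');
```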

Buffer busy waits
The buffer busy waits event occurs when a session wants to access a data block in the buffer cache that is currently in use by some other session. The other session is either reading the same data block into the buffer cache from the datafile, or it is modifying the one in the buffer cache.


To guarantee that a reader session sees a coherent image of the block, with either all of the changes or none of them, the session modifying the block marks the block header with a flag, letting other sessions know that a change is taking place and that they must wait until the complete change is applied.

Wait events caused by buffer locks: buffer busy waits; buffer busy global cache/CR (renamed gc buffer busy in 10g); read by other session (split out as its own event in 10g).
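To see how much these events matter on a given instance, a simple sketch against the standard v$system_event view sums the accumulated waits:

```sql
-- Sketch: accumulated instance-wide waits for the buffer-lock events.
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event IN ('buffer busy waits', 'read by other session',
                 'gc buffer busy')
ORDER  BY time_waited DESC;
```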
In 9i, the buffer busy waits wait event has these parameters. P1: file# (absolute file number; select * from v$datafile where file# = 10). P2: block#. P3: the reason code. If the reason code is 130, it is equivalent to 10g's read by other session; if it is 220, it is equivalent to 10g's buffer busy waits. These two are the most common reasons. As a rule of thumb, reason codes below 200 are IO-related.
From 10g onward, the reason code is no longer used; the P3 value becomes the class (class#) of the contended block. Note this is not the WAIT_CLASS# of the wait event; see select * from v$waitstat. If P3's class# is 1 and the session's SQL is a query, it is equivalent to the old code 130; if the session's SQL is DML, it is equivalent to the old code 220.

read by other session caused by select/select: buffer lock contention between two selects occurs when the same block is being loaded into memory. In other words, if the block you want to read is at that moment being read from disk by another session, you have to wait on read by other session. Why does this happen? Because the buffer lock must be obtained in exclusive mode when the buffer is first created, and other sessions that want to read the block in shared mode must wait for the exclusive buffer lock to be released. (This is similar to obtaining the library cache pin in exclusive mode on the corresponding SQL cursor during a hard parse.) When this wait event occurs, physical IO waits such as db file sequential read and db file scattered read appear alongside it. If the block has already been loaded into the SGA, multiple sessions simply obtain the buffer lock on it in shared mode, and naturally there is no buffer lock contention. (However, setting and releasing the shared flag on the block header still requires holding the corresponding cache buffers chains latch in exclusive mode; latch: cache buffers chains is a topic of its own.)
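Once P1 and P2 are known, the contended segment can be identified. A sketch using the standard dba_extents view, where the literals 10 and 12345 are placeholders for the P1 and P2 values observed in the wait:

```sql
-- Sketch: map P1 (file#) and P2 (block#) to the owning segment.
-- Replace 10 and 12345 with the actual P1/P2 from the wait event.
SELECT owner, segment_name, segment_type
FROM   dba_extents
WHERE  file_id = 10
AND    12345 BETWEEN block_id AND block_id + blocks - 1;
```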
Solution: reduce physical IO. Reducing logical IO reduces physical IO, so tune the SQL to lower its buffer gets. A larger SGA also reduces physical IO, so setting a somewhat larger SGA can reduce this kind of wait by avoiding repeated physical reads from disk. Finally, speed up the physical IO itself, which is a matter of storage performance.
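Finding the SQL worth tuning first is straightforward. A sketch against the standard v$sql view (the ROWNUM form works on 10g-era releases) ranks statements by logical IO:

```sql
-- Sketch: the ten statements with the highest buffer gets,
-- the first candidates when reducing logical IO.
SELECT * FROM (
  SELECT sql_id, buffer_gets, executions
  FROM   v$sql
  ORDER  BY buffer_gets DESC
) WHERE ROWNUM <= 10;
```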
buffer busy waits/read by other session caused by select/update has two cases:

1. When a session issues a select and the data has since been modified, it must read the CR (consistent read) image of the past. If the CR block is not in the buffer cache, the undo block must be read from disk. While it is being read, other sessions that also need these CR blocks wait on read by other session. This is select/select contention for consistent reads.

2. During an update, the session modifies the undo block, so it must obtain the buffer lock on the undo block in exclusive mode. If other sessions need the same undo block for consistent reads, in shared mode, a buffer busy waits event occurs. Imagine an undo block that covers many rows: every updated row needs the buffer lock on that block in exclusive mode, so contention arises easily. In an AUM (automatic undo management) environment, however, the probability is very low.

Why the undo block? You might ask: for select/update contention, wouldn't this also happen on ordinary data blocks? To update an ordinary data block, an exclusive lock must be taken for an instant, and reading a block requires its shared lock so the undo information in the block header can be read to locate the undo block needed to construct the CR block; shared and exclusive requests can conflict. But ordinary blocks are managed through the hash chain: to update or select an ordinary block you must first obtain the cache buffers chains child latch, which moves the contention up to that latch. Undo blocks are not managed through the hash chain. This also shows that fine-grained locking in memory has two layers: the upper latch and the lower buffer lock.
Solution: reduce the SQL's logical reads and increase the buffer cache.
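Whether contention is on data blocks or undo blocks can be confirmed from the class breakdown. A sketch against the standard v$waitstat view; high counts for the undo-related classes point at the select/update scenario above:

```sql
-- Sketch: buffer busy waits broken down by block class.
-- 'data block' vs 'undo header'/'undo block' tells the scenarios apart.
SELECT class, count, time
FROM   v$waitstat
WHERE  count > 0
ORDER  BY time DESC;
```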
buffer busy waits caused by update/update: when multiple sessions update rows of the same block at the same time, transactions touching the same row are synchronized through the TX lock, but for different rows the buffer lock must be used for synchronization. Contention in this process produces the buffer busy waits event. Multiple concurrent modifications to index leaf blocks can also cause buffer busy waits, and the undo segment header can likewise become a point of contention, for example when multiple sessions execute updates at the same time. Solution: buffer busy waits caused by update/update can be resolved by avoiding simultaneous updates to the same block. Designing partitions around the update pattern is often the best solution. Using a higher PCTFREE or smaller blocks spreads the rows across more blocks and so reduces buffer lock contention, but it can have side effects, so it needs testing.
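The PCTFREE approach mentioned above looks like this in practice. A sketch only; the table name `orders` and the value 40 are illustrative, and the new setting affects existing rows only after the segment is rebuilt:

```sql
-- Sketch: raise PCTFREE so fewer rows land in each block,
-- spreading hot rows across more blocks.
ALTER TABLE orders PCTFREE 40;
-- Rebuild so existing rows are redistributed under the new setting
-- (this invalidates indexes, which must then be rebuilt).
ALTER TABLE orders MOVE;
```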
write complete waits: DBWR holds the buffer lock in exclusive mode while writing a dirty buffer to disk. Other processes that want to read or modify that buffer must wait for the write to finish, waiting on the write complete waits event. This, too, is an exclusive/shared buffer lock conflict.
Solution: write complete waits generally points at a DBWR throughput problem. A slow IO subsystem, visible as a long db file parallel write, or too small a db_writer_processes value leads to excessive waiting. Overly frequent checkpoints can also overload DBWR: a too-small FAST_START_MTTR_TARGET or undersized redo log files cause frequent log switches and frequent incremental checkpoints. Direct path reads triggered by parallel query, as well as truncate, drop, and hot backup, also force checkpoints, placing unnecessary load on DBWR and increasing these waits.
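The settings named above can be checked in one query. A sketch against the standard v$parameter view:

```sql
-- Sketch: current values of the DBWR/checkpoint parameters
-- discussed in the write complete waits section.
SELECT name, value
FROM   v$parameter
WHERE  name IN ('db_writer_processes', 'fast_start_mttr_target');
```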