Oracle Common Wait Event description

Source: Internet
Author: User
Tags: metalink

Oracle's wait events are an important basis for measuring the health of an Oracle instance. The concept of wait events was introduced in Oracle 7.0.12 with roughly 100 events. The number grew to approximately 150 in Oracle 8.0, approximately 200 in Oracle 8i, and approximately 360 in Oracle 9i. Wait events fall into two main categories: idle wait events and non-idle wait events.

Idle events mean that Oracle is waiting for some work to do; we usually do not need to pay much attention to them when diagnosing and tuning a database. Common idle events are:

- dispatcher timer
- lock element cleanup
- Null event
- parallel query dequeue wait
- parallel query idle wait - Slaves
- pipe get
- PL/SQL lock timer
- pmon timer
- rdbms ipc message
- slave wait
- smon timer
- SQL*Net break/reset to client
- SQL*Net message from client
- SQL*Net message to client
- SQL*Net more data to client
- virtual circuit status
- client message

Non-idle wait events are specific to Oracle activity: they are the waits that occur while database tasks or applications are running, and they are what we should focus on when tuning the database. Some common non-idle wait events are:

- db file scattered read
- db file sequential read
- buffer busy waits
- free buffer waits
- enqueue
- latch free
- log file parallel write
- log file sync

1. db file scattered read

This wait is usually associated with full table scans. When the database performs a full table scan, the blocks read are scattered into the buffer cache for performance reasons. If this wait event is significant, it may indicate that tables being scanned in full have no indexes, or no appropriate indexes, and we should check whether those tables are set up correctly.

However, this wait event does not necessarily mean poor performance. Under certain conditions Oracle deliberately uses a full table scan instead of an index scan to improve performance; this depends on the amount of data accessed. Under the CBO Oracle makes a smarter choice, whereas under the RBO it is more inclined to use indexes.

Because blocks read by a full table scan are placed at the cold end of the LRU (least recently used) list, frequently accessed small tables can be cached in memory to avoid repeated reads.

When this wait event is significant, it can be diagnosed together with the V$SESSION_LONGOPS dynamic performance view, which records operations that run for a long time (longer than 6 seconds); many of them are full table scans, and either way these operations deserve our attention.
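As a minimal sketch of that V$SESSION_LONGOPS check (column names as in Oracle 9i; adjust for your release), the following query lists operations still in progress so that long-running full table scans stand out:

-- Operations recorded in V$SESSION_LONGOPS that are still running;
-- full table scans typically appear with OPNAME 'Table Scan'.
SELECT sid, serial#, opname, target, sofar, totalwork,
       ROUND(sofar / totalwork * 100, 2) AS pct_done,
       elapsed_seconds
  FROM v$session_longops
 WHERE totalwork > 0
   AND sofar < totalwork
 ORDER BY elapsed_seconds DESC;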
2. db file sequential read

This event typically indicates single-block reads, such as index reads. If this wait event is significant, it may indicate a join-order problem in a multi-table join, that the wrong driving table is being used, or that an unselective index is being used.

In most cases we say that indexes let you retrieve records faster, so for a well-coded, well-tuned database this wait is largely normal. However, in many cases an index is not the best choice: when reading a large proportion of the rows in a big table, a full table scan can be significantly faster than an index scan, so in development we should be aware that such queries should avoid index scans.

3. free buffer waits

This wait event indicates that the system is waiting for a free buffer in memory, i.e. there are currently no free buffers in the buffer cache. If the application is well designed, the SQL is well written and bind variables are fully used, this wait may indicate that the buffer cache is too small, and you may need to increase it (DB_CACHE_SIZE, or DB_BLOCK_BUFFERS in earlier releases).

free buffer waits may also indicate that DBWR is not writing fast enough, or that there is serious disk contention. You may want to consider more frequent checkpoints, more DBWR processes, or more physical disks to spread the load and balance I/O.

4. buffer busy waits

This wait event indicates that a buffer is being waited for because it is in use in an unshareable way, or because it is currently being read into the buffer cache. In general, buffer busy waits should not be greater than 1%. Check the buffer wait statistics section (or V$WAITSTAT) to see where the waits occur.

If the waits are on the segment header, consider increasing the freelists (for dictionary-managed tablespaces in Oracle 8i) or adding freelist groups; in many cases this adjustment takes effect immediately. Before 8.1.6 the FREELISTS parameter cannot be modified dynamically; in 8.1.6 and later, dynamic modification of FREELISTS requires COMPATIBLE to be at least 8.1.6.

If the waits are on the undo header, they can be resolved by adding rollback segments. If the waits are on undo blocks, we may need to examine the application to reduce large-scale consistent reads, reduce the density of rows in the tables being read consistently, or increase DB_CACHE_SIZE.

If the waits are on data blocks, consider moving tables or data that are frequently accessed concurrently to different blocks or spreading them more widely (you can increase PCTFREE to spread the data out and reduce contention) to avoid such "hot" blocks, or consider increasing the freelists of the table, or using locally managed tablespaces.

If the waits are on index blocks, consider rebuilding the index, partitioning the index, or using a reverse-key index. To prevent buffer busy waits on blocks, you can also use a smaller block size: with fewer rows per block, the block is not as "busy"; or you can set a larger PCTFREE, which spreads out the physical distribution of the data and reduces hot contention between rows.

When executing DML (INSERT/UPDATE/DELETE), Oracle writes transaction information into the data block, and with many transactions concurrently accessing a table, contention and waits on the ITL can occur. To reduce this wait, you can increase INITRANS so that multiple ITL slots are available. In Oracle 9i a new feature was introduced: ASSM (Automatic Segment Space Management). With this feature Oracle uses bitmaps to manage space usage. ASSM combined with LMT fundamentally changes Oracle's storage mechanism, and bitmap freelists can alleviate buffer busy waits, which were a serious problem in releases before Oracle 9i.

Oracle claims that ASSM significantly improves DML concurrency, because different parts of the bitmap can be used simultaneously, eliminating the serialization of searching for free space. According to Oracle's test results, using bitmap freelists eliminates all contention on segment headers and allows very fast concurrent inserts. In Oracle 9i, buffer busy waits are no longer as common.
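A minimal sketch of the check described above: querying V$WAITSTAT shows which class of block (segment header, undo header, undo block, data block) the buffer busy waits are concentrated on.

-- Block classes with buffer busy waits recorded since instance startup.
SELECT class, count, time
  FROM v$waitstat
 WHERE count > 0
 ORDER BY count DESC;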
5. latch free

A latch is a low-level serialization mechanism that protects shared memory structures in the SGA. A latch is like a memory lock that is acquired and released very quickly, and it prevents a shared memory structure from being accessed concurrently by multiple users. If a latch is not available, a latch free miss is recorded. There are two latch request types: immediate and willing-to-wait. If a process requests a latch in immediate mode and the latch is already held by another process, the process does not wait for the latch to become available; it moves on and performs another operation.

Most latch problems are related to the following: failure to use bind variables well (library cache latch), redo generation issues (redo allocation latch), buffer cache contention (cache buffers lru chain), and "hot" blocks in the buffer cache (cache buffers chains).

We usually say that if you want to design a system that fails, ignoring bind variables is sufficient on its own; for systems with highly heterogeneous SQL, the consequences of not using bind variables are extremely serious.

Some latch waits are caused by bugs; you should follow the related bug publications and patch releases on Metalink. The problem should be investigated when the latch miss ratio is greater than 0.5%.

Oracle's latch mechanism is competitive, similar to CSMA/CD in networking: all user processes compete for the latch. For a willing-to-wait latch, if a process does not get the latch on the first attempt, it spins and tries again; if it still cannot get the latch after _SPIN_COUNT attempts, the process goes to sleep for a specified length of time, then wakes up and repeats the previous steps. In 8i/9i the default is _SPIN_COUNT = 2000.

If the SQL statements cannot be adjusted, then from release 8.1.6 onward Oracle provides a new initialization parameter: CURSOR_SHARING. Setting CURSOR_SHARING = FORCE can enforce binding on the server side. Setting this parameter may bring some side effects; there are related bugs for Java programs, so check the Metalink bug bulletins for your specific application.
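As a rough sketch of the 0.5% check mentioned above (columns as in V$LATCH; treat the query as illustrative rather than definitive):

-- Latches whose miss ratio exceeds 0.5% of gets, the threshold mentioned above.
SELECT name, gets, misses, sleeps,
       ROUND(misses / gets * 100, 3) AS miss_ratio_pct
  FROM v$latch
 WHERE gets > 0
   AND misses / gets > 0.005
 ORDER BY miss_ratio_pct DESC;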
6. log buffer space

This wait occurs when redo is generated into the log buffer faster than LGWR can write it out, or when a log switch is too slow. When this wait appears it usually indicates that the redo log buffer is too small; to solve the problem, consider increasing the size of the log files or increasing the size of the log buffer.

Another possible cause is a disk I/O bottleneck; consider using faster disks for writing. Where conditions allow, consider placing the log files on raw devices to improve write efficiency. In a typical system the minimum standard is not to put log files and data files on the same disks: redo logs are written sequentially, so storing them separately gives a performance gain.

7. log file switch

When this wait occurs, all commit requests have to wait for the log file switch to complete. log file switch consists of two sub-events:

- log file switch (archiving needed)
- log file switch (checkpoint incomplete)

log file switch (archiving needed): this wait usually occurs when the log groups have wrapped around but the archiving of the first (next to be reused) log file has not yet completed. It may indicate an I/O problem. Workarounds: consider adding log files or log groups and increasing their size; move the archive destination to faster disks; tune LOG_ARCHIVE_MAX_PROCESSES.

log file switch (checkpoint incomplete): when the log groups wrap around, LGWR tries to write into the first log file again; if the database has not yet finished writing out the dirty blocks protected by that first log file (i.e. the checkpoint has not completed), this wait event appears. It usually indicates that DBWR is writing too slowly or that there are I/O problems. To solve it, you may want to consider adding additional DBWR processes, or increasing the number or size of your log groups or log files.

8. log file sync

When a user commits or rolls back, LGWR writes the session's redo from the log buffer to the redo log files, and the session must wait for this to complete successfully. To reduce this wait event, try to commit more records at a time (frequent commits lead to more overhead), place the redo logs on faster disks, or alternate the redo logs across different physical disks to reduce the impact of archiving on LGWR.

For software RAID, generally do not use RAID 5: RAID 5 causes a large performance loss for frequently written systems. You can consider using file system direct I/O, or raw devices, to improve write performance.

9. log file single write

This event relates only to writing the header block of a log file; it typically occurs when a new group member is added and when sequence numbers are incremented. Header blocks are written individually because part of the header information is the file number, which differs for each file. Updating the log file header is done in the background and rarely causes waits, so it needs little attention.

10. log file parallel write

This is the writing of redo records from the log buffer to the redo log files, referring mainly to the regular write activity (as opposed to log file sync). If your log groups have more than one member, the write is issued in parallel when the log buffer is flushed, and this wait event can occur. Although the writes are issued in parallel, the operation is not complete until all of the I/Os have completed (if your disks support asynchronous I/O or you use I/O slaves, this wait may appear even if there is only one redo log member). This event is compared with the log file sync time to measure the cost of writing the log files; this is often referred to as the synchronization cost rate.
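A minimal sketch of that comparison (TIME_WAITED in V$SYSTEM_EVENT is in centiseconds; treat the query as illustrative):

-- Average wait of 'log file sync' versus 'log file parallel write' since startup.
SELECT event, total_waits, time_waited,
       ROUND(time_waited / total_waits, 2) AS avg_wait_cs
  FROM v$system_event
 WHERE event IN ('log file sync', 'log file parallel write')
   AND total_waits > 0;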
11. control file parallel write

This event may occur when the server process updates all control files. If the wait is very short, you can ignore it. If the wait time is long, check whether the physical disks holding the control files have an I/O bottleneck.

Multiple control files are identical copies, mirrored to increase safety. For a production system, the control files should be stored on different disks; generally speaking three are sufficient, and if there are only two physical disks, two control files are also acceptable. Keeping multiple control files on the same disk has little practical value. To reduce this wait, consider the following methods: reduce the number of control files (provided safety is still ensured); if the system supports it, use asynchronous I/O; move the control files to physical disks with a lighter I/O load.

12. control file sequential read / control file single write

These two events appear when there are I/O problems on a single control file. If the waits are obvious, check the storage location of that individual control file to see whether there is an I/O bottleneck.

13. direct path write

This wait occurs when the system is waiting to confirm that all outstanding asynchronous I/Os have been written to disk. For this write wait, we should find the data files with the most frequent I/O operations (if there are many sort operations, they are most likely temporary files), spread the load, and speed up their writes. If there is too much disk sorting on the system, causing frequent activity in the temporary tablespace, consider using locally managed tablespaces and splitting the temporary tablespace into several smaller files placed on different disks or raw devices.

14. idle events

Finally, let's look at the idle wait events. Generally speaking, an idle wait means that the system has nothing to do and is waiting, or is waiting for a user request or response, and we can usually ignore these wait events. Idle events can be queried through the STATS$IDLE_EVENT table.

You should have a rough idea of what the system's main idle wait events are; if your top 5 wait events are mostly idle events, then generally your system is relatively relaxed.
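As a minimal sketch of that top-5 check (it assumes Statspack is installed so that STATS$IDLE_EVENT exists and is accessible to your user):

-- Top 5 non-idle wait events since instance startup, excluding idle events
-- listed in the Statspack table STATS$IDLE_EVENT.
SELECT *
  FROM (SELECT event, total_waits, time_waited
          FROM v$system_event
         WHERE event NOT IN (SELECT event FROM stats$idle_event)
         ORDER BY time_waited DESC)
 WHERE ROWNUM <= 5;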

Ext: http://www.360doc.com/content/11/0309/08/5287961_99660319.shtml
