An enqueue is a locking mechanism that protects shared resources, such as rows of data, so that two sessions cannot update the same data at the same time.
It includes a queuing mechanism: sessions wait for the lock in FIFO (first-in, first-out) order.
Common enqueue waits include ST, HW, TX, TM, and others.
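When an enqueue wait appears, the lock type can be decoded from the wait event's P1 parameter, whose high-order bytes hold the two-character lock name. A diagnostic sketch for a live 8i/9i instance (the view and the bit arithmetic are the commonly used idiom; verify against your version):

```sql
-- Decode which enqueue type (ST, TX, TM, ...) each session is waiting on.
-- P1 of the 'enqueue' wait event packs the lock name in its high bytes.
select sid,
       chr(bitand(p1, -16777216) / 16777215) ||
       chr(bitand(p1,  16711680) / 65535)      as lock_type,
       bitand(p1, 65535)                       as lock_mode,
       seconds_in_wait
from   v$session_wait
where  event = 'enqueue';
```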
The ST enqueue is used for extent allocation in dictionary-managed tablespaces (DMT), where space is tracked in the data dictionary. With DMT, the typical contention is on the uet$ and fet$ data dictionary tables. On versions that support LMT, use locally managed tablespaces wherever possible, or manually pre-allocate a number of extents, to reduce the severe enqueue contention caused by dynamic extent allocation.
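A locally managed tablespace tracks free space in bitmaps inside the datafile itself, so extent operations never touch uet$/fet$ or the ST enqueue. A minimal sketch (the tablespace name, file path, and sizes are illustrative):

```sql
-- Locally managed tablespace with uniform extents: extent allocation
-- updates a bitmap in the datafile, not the uet$/fet$ dictionary tables.
create tablespace data_lmt
  datafile '/u01/oradata/db/data_lmt01.dbf' size 500m
  extent management local uniform size 1m;
```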
Let's take a look at an example:
DB Name      DB Id      Instance  Inst Num  Release    OPS  Host
-----------  ---------  --------  --------  ---------  ---  ------
DB           40757346   aaa       1         8.1.7.4.0  NO   server

              Snap Id  Snap Time           Sessions
              -------  ------------------  --------
 Begin Snap:  2845     -03 02:10:16        46
   End Snap:  2848     -03 03:40:05        46
    Elapsed:           89.82 (mins)

For a Statspack report, the sampling interval is a crucial dimension: without it, wait times have no frame of reference.

Cache Sizes
~~~~~~~~~~~
db_block_buffers:    51200    log_buffer:        2097152
db_block_size:       16384    shared_pool_size:  209715200
...........

Top 5 Wait Events
~~~~~~~~~~~~~~~~~
                                                   Wait     % Total
Event                                 Waits   Time (cs)     Wt Time
-------------------------------- ---------- ------------ ----------
enqueue                              53,793   16,192,686      67.86
rdbms ipc message                    19,999    5,927,350      24.84
pmon timer                            1,754      538,797       2.26
smon timer                               17      522,281       2.19
SQL*Net message from client          94,525      520,104       2.18
          -------------------------------------------------------------

In Statspack analysis, the Top 5 wait events are the most important section. In this system, apart from the enqueue wait, the other four are idle wait events, so they need no attention. Look at the enqueue wait: over the sampling interval of 89.82 (mins), the total enqueue wait time reached 16,192,686 cs, about 45 hours summed across sessions. This wait is far too significant; in fact, the system was in serious trouble at the time. Having observed the enqueue wait, we should next find out which resource the enqueue is waiting on. Looking further into the Statspack report:

Enqueue activity for DB: DB  Instance: aaa  Snaps: 2716 -2718
-> ordered by waits desc, gets desc

Enqueue            Gets      Waits
----------- ----------- ----------
ST                           1,554
          -------------------------------------------------------------

We can see that the main enqueue wait is on the ST lock. For DMT, as noted above, this wait is closely related to contention on fet$ and uet$.
Now let us look back at the SQL captured in the report:

-> End Buffer Gets Threshold:   10000
-> Note that resources reported for PL/SQL includes the resources used by
   all SQL statements called within the PL/SQL code.  As individual SQL
   statements are also reported, it is possible and valid for the summed
   total % to exceed 100

  Buffer Gets    Executions  Gets per Exec  % Total  Hash Value
--------------- ------------ -------------- ------- ------------
      4,800,073       10,268          467.5    51.0   2913840444
select length from fet$ where file#=:1 and block#=:2 and ts#=:3

        803,187       10,223           78.6     8.5
delete from uet$ where ts#=:1 and segfile#=:2 and segblock#=:3 and ext#=:4

        454,444       10,300           44.1     4.8   1839874543
select file#,block#,length from uet$ where ts#=:1 and segfile#=:2
and segblock#=:3 and ext#=:4

         23,110       10,230            2.3     0.2   3230982141
insert into fet$ (file#,block#,ts#,length) values (:1,:2,:3,:4)

         21,201          347           61.1     0.2   1705880752
select file# from file$ where ts#=:1
....
          9,505           12          792.1     0.1   1714733582
select f.file#,f.block#,f.ts#,f.length from fet$ f, ts$ t
where t.ts#=f.ts# and t.dflextpct!=0 and t.bitmapped=0

          6,426          235           27.3     0.1   1877781575
delete from fet$ where file#=:1 and block#=:2 and ts#=:3
As we can see, frequent operations on the uet$ and fet$ dictionary tables had become the main bottleneck of the system. At this point we can pinpoint the problem precisely, and the corresponding solution is easy to determine: on 8.1.7, replacing DMT with LMT is the fundamental fix. Of course, the migration must be planned comprehensively, and real situations are usually considerably more complicated.
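From 8.1.6 on, an existing dictionary-managed tablespace can be converted in place with the DBMS_SPACE_ADMIN package. A sketch (the tablespace name is illustrative; on 8i the SYSTEM tablespace itself cannot be migrated):

```sql
-- Convert a dictionary-managed tablespace to locally managed in place.
exec dbms_space_admin.tablespace_migrate_to_local('USERS');

-- Verify which tablespaces are now locally managed:
select tablespace_name, extent_management
from   dba_tablespaces;
```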
The HW enqueue relates to the high-water mark of a segment; manually allocating extents appropriately can avoid this wait.
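Pre-allocating space, for example before a large load, reduces how often concurrent sessions must raise the high-water mark and contend for the HW enqueue. A sketch (table name and size are illustrative):

```sql
-- Pre-allocate an extent so concurrent inserts need not move the
-- high-water mark (and take the HW enqueue) as often.
alter table orders allocate extent (size 100m);
```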
The TX lock (transaction lock) is the most common enqueue wait. A TX enqueue wait is usually the result of one of the following three problems.
The first problem is duplicate values in a unique index: a session inserting a duplicate key must wait until the session holding the enqueue executes a commit or rollback, which releases the enqueue.
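This case is easy to reproduce with two sessions (the table and values are illustrative):

```sql
create table t (id number primary key);

-- Session 1:
insert into t values (1);   -- no commit yet

-- Session 2:
insert into t values (1);   -- blocks on a TX enqueue until session 1
                            -- commits (then fails with ORA-00001)
                            -- or rolls back (then it succeeds)
```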
The second problem is multiple updates to the same bitmap index fragment. Because a single bitmap fragment may cover many rowids, when several users attempt to update the same fragment, one user locks the entries the others need, and they must wait. The enqueue is released only when the blocking user commits or rolls back.
The third, and most likely, problem is multiple users updating the same block at the same time. If there are not enough ITL slots in the block, block-level waits occur. This situation can easily be avoided by increasing initrans and/or maxtrans to allow more ITL slots, or by increasing the table's pctfree. For tables that undergo frequent concurrent DML, you should set reasonable values for these parameters at table-creation time and avoid online changes after the system is running; before 8i, freelists and certain other parameters could not be changed online, which makes this especially important at design time.
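The settings above can be sketched as follows (the table name and values are illustrative, not tuned recommendations):

```sql
-- Reserve ITL slots and free space at creation time for a table
-- expected to see heavy concurrent DML on the same blocks:
create table hot_table (
  id  number,
  val varchar2(100)
) initrans 10 maxtrans 255 pctfree 20;

-- For an existing table; note that only blocks formatted after the
-- change pick up the new initrans value:
alter table hot_table initrans 10;
```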
The TM enqueue is acquired before a DML operation to prevent any DDL against the table being modified (the structure of a table must not change while DML on it is in progress).
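The effect of the TM lock is visible with two sessions (table and column names are illustrative):

```sql
-- Session 1: uncommitted DML holds a TM lock on the table.
update emp set sal = sal * 1.1 where deptno = 10;

-- Session 2: the DDL needs an exclusive TM lock and fails immediately:
alter table emp add (note varchar2(30));
-- ORA-00054: resource busy and acquire with NOWAIT specified
```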
From: http://www.eygle.com/statspack/statspack13.htm