Buffer Cache Experiment 9: Parsing How a Data Block Is Read from the Buffer Cache, from Logical Read to Physical Read


Let's start with the big picture (overview diagram):

SQL statement used:
BYS@ocm1> select dbms_rowid.rowid_relative_fno(rowid) file#, dbms_rowid.rowid_block_number(rowid) block#, deptno from bys.test;

     FILE#     BLOCK#     DEPTNO
---------- ---------- ----------
         4        391         10
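As a side note, whether this particular block (file 4, block 391) is already cached, and its touch count (TCH), can be checked directly from X$BH as SYS. The query below is only an illustrative sketch, not part of the original listing:

SQL> select dbarfil, dbablk, tch, state, lru_flag
     from x$bh
     where dbarfil = 4 and dbablk = 391;   -- file# / block# from the listing above

If no row comes back, the block is not in the buffer cache and the query will need a physical read; if a row is returned, the read will be purely logical, and TCH grows as the buffer is touched again.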
This article walks through, in text form, how the foreground process obtains the data block it needs after issuing a query. Note: it does not cover SQL statement parsing or client-server interaction; only the buffer cache is involved.
The physical reads discussed here are neither direct path reads nor full scans of large tables; both of those cases are covered at the end.
If an UPDATE statement is issued, the lock taken on the buffer pin is an exclusive (X) lock; the other steps are essentially the same.
In this example, only one data block is read.
Reading a data block from the buffer cache typically takes on the order of 100 nanoseconds, while reading a data block from an ordinary hard disk takes about 10 ms, so reading from memory is nearly 100,000 times faster than reading from disk.
Therefore, when Oracle needs a data block it first searches the buffer cache. If the block is found, that is a logical read. If it is not found, a physical read occurs and the block is read from the data file into the buffer cache (direct path reads are ignored here).
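A simple way to confirm which kind of read happened is to compare the session's own statistics before and after running the query. The sketch below assumes nothing beyond the standard v$mystat and v$statname views:

SQL> select n.name, s.value
     from v$mystat s, v$statname n
     where s.statistic# = n.statistic#
       and n.name in ('consistent gets', 'physical reads');

Run select deptno from bys.test, then re-run the statement above: if only consistent gets increases, the block came from the buffer cache (a logical read); if physical reads also increases, a physical read was needed first.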
For background, see my earlier post on logical reads: a simple walk-through of logical reads and the BUFFER CACHE.
Now for the walkthrough proper. First, the logical read process:
1. The foreground process issues the query statement select deptno from bys.test;
2. A hash value is computed from the block's DBA (data block address), and the corresponding hash bucket is located from that hash value.
3. The CBC (cache buffers chains) latch is acquired. If it cannot be obtained, the wait event latch: cache buffers chains occurs.
4. Under the protection of the CBC latch, the server process scans the hash chain to check whether the required BH (buffer header) exists.
5. If the required BH is found, a buffer pin lock is placed on the buffer header. Since this is a read, a shared pin is used (the usual pin modes when a BH is found are current read, consistent read, or modify). If the pin cannot be obtained (for example, an S-mode pin is requested while the buffer is already pinned in X mode), the wait event buffer busy waits occurs. Once pinned, the buffer is read at the memory address recorded in the BH and the result is returned to the foreground process. After the read (a matter of nanoseconds), the CBC latch is acquired again, the buffer pin is released, and then the CBC latch is released.
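Latch activity for steps 3 to 5 can be observed from v$latch; this query is only an illustration and was not part of the original test:

SQL> select name, gets, misses, sleeps
     from v$latch
     where name = 'cache buffers chains';

Rising misses and sleeps here correspond to the latch: cache buffers chains wait from step 3; conflicts on the buffer pin from step 5 show up as buffer busy waits instead.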
----- The above is the logical read. If no matching buffer is found, the following physical read occurs:
6. If the required BH is not found, a physical read takes place: the server process reads the required block from the corresponding data file on disk into the buffer cache.
7. How is a usable buffer found when reading a block into the buffer cache? The following steps break this down.
8. First, look for an available buffer starting from the tail of the auxiliary LRU; a block with TCH < 2 can be reused.
9. If the block at the tail of the auxiliary LRU has TCH < 2, it is used directly and moved to the head of the cold end of the primary LRU.
At the same time, the DBA of the new block is hashed, the corresponding hash bucket is found, the block is linked onto the corresponding hash chain, and the relevant information in the BH is updated (for example, the LRU_FLAG, NXT_HASH, BA, and other fields visible in X$BH). This process also requires taking and releasing the corresponding CBC latch and buffer pin lock.
Then the value from the data block is returned to the foreground process, and the physical read is complete.
10. If the block at the tail of the auxiliary LRU has TCH >= 2, it is first moved to the head of the hot end of the primary LRU and its TCH is reset. The search then continues forward along the auxiliary LRU until a usable block (TCH < 2) is found; the rest of the process is the same as in step 9. (When SMON scans the idle buffers every three seconds, blocks on the auxiliary LRU with TCH >= 2 are also moved to the head of the hot end of the primary LRU.)
11. If no usable block is found after searching the auxiliary LRU, the search continues from the cold end of the primary LRU.
12. If the block at the tail of the primary LRU has TCH < 2, it is used directly and moved to the head of the cold end of the primary LRU with TCH set to 1. If it has TCH >= 2, it is moved to the head of the hot end and its TCH is reset. If it is a dirty block, it is moved to the primary LRUW. The search for a usable block proceeds by this rule. (Every three seconds SMON moves non-dirty blocks with TCH less than 2 from the cold end of the primary LRU to the auxiliary LRU, to ensure that the auxiliary LRU has buffers available.)
13. If 40% of the buffers (the hidden parameter _db_block_max_scan_pct) have been scanned without finding a usable block, DBWR is triggered to write the dirty blocks on the LRUW. (Writes driven by the checkpoint queue CKPTQ do not involve the LRUW; only DBWR writes the dirty blocks on the LRUW, and it writes all of them. Every three seconds it also wakes up, writes all the blocks on the LRUW, and goes back to sleep.) The steps for writing dirty blocks on the LRUW are as follows: when DBWR is triggered to write, or wakes up on its three-second timeout (the primary LRUW, unlike the auxiliary LRUW, can also be accessed by non-DBWR processes), a batch of dirty blocks is moved from the primary LRUW to the auxiliary LRUW, sorted there, and written to disk; then the next batch is moved over from the primary LRUW, and so on until the writing is finished and DBWR sleeps again. While DBWR is writing, free buffer waits is generated for the waiting processes. Buffers that have been written are relinked onto the auxiliary LRU and become available.
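Whether an instance is actually running into the situation described in step 13 can be checked from the system-wide wait statistics. A minimal sketch using v$system_event:

SQL> select event, total_waits, time_waited
     from v$system_event
     where event in ('free buffer waits', 'buffer busy waits');

A steadily growing free buffer waits count means server processes are waiting for DBWR to make buffers reusable, which is exactly the scenario above.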

Notes on full table scans of large tables and the _small_table_threshold parameter: the boundary between a small table and a large table is _small_table_threshold; the value of this parameter is a number of data blocks.
A full table scan of a large table uses only the auxiliary LRU, and the TCH of its blocks stays at 1. This avoids disturbing the blocks on the primary LRU and also makes the large table's blocks easy to reuse: if other sessions need to find available buffers on the auxiliary LRU, these blocks can be taken directly, which saves time.
A full table scan of a small table finds available buffers in the same way as ordinary data block reads.
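One way to see this effect is to look at the TCH distribution of a table's buffers after a full scan. The sketch below joins X$BH to dba_objects by data object id (run as SYS; BYS.TEST is just the example table from the top of this article):

SQL> select tch, count(*)
     from x$bh
     where obj = (select data_object_id from dba_objects
                  where owner = 'BYS' and object_name = 'TEST')
     group by tch;

For a table scanned through the auxiliary LRU, the blocks would be expected to sit at TCH = 1 and to disappear quickly as they are reused.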


P_NAME                   P_DESCRIPTION                                          P_VALUE  ISDEFAULT  ISMODIFIED  ISADJ
-----------------------  -----------------------------------------------------  -------  ---------  ----------  -----
_small_table_threshold   lower threshold level of table size for direct reads   89       TRUE       FALSE
Here 89 is a number of blocks; the threshold is expressed in data blocks, not directly in MB or KB.
The value of _small_table_threshold is automatically set to 2% of the number of buffers when the database starts. (Modify the buffer cache size and restart the database to verify this.)
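Output of this form comes from the usual hidden-parameter query against the fixed tables x$ksppi and x$ksppcv (run as SYS). This is a sketch; the column aliases were chosen to match the first four headings above, and the ISMODIFIED/ISADJ columns, which are decoded from other fields, are omitted:

SQL> select a.ksppinm p_name, a.ksppdesc p_description,
            b.ksppstvl p_value, b.ksppstdf isdefault
     from x$ksppi a, x$ksppcv b
     where a.indx = b.indx
       and a.ksppinm = '_small_table_threshold';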
SYS@bys3> select count(*)*0.02 from x$bh;   --- the number of buffers can be obtained from X$BH

COUNT(*)*0.02
-------------
        88.98


Description of direct path read (summarized from Baidu): a direct path read occurs when Oracle reads data directly into the PGA, bypassing the SGA. The three parameters of the direct path read wait event are file number (absolute file number), first dba, and block cnt. In Oracle 10g/11g this wait event is classified under User I/O. db file sequential read, db file scattered read, and direct path read are the common ways data blocks are read.
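The three wait parameters can be seen for any session currently waiting on this event, for example with a sketch like the following against v$session_wait:

SQL> select sid, event, p1text, p1, p2text, p2, p3text, p3
     from v$session_wait
     where event = 'direct path read';

The p1text/p2text/p3text columns report the file number, first dba, and block cnt described above.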

In a data warehouse environment, a large number of direct path reads is normal. In OLTP, a large number of direct path reads usually means the application has a problem that is causing a large number of disk sort reads.

direct path write occurs when Oracle writes data directly from the PGA to data files or temporary files; this write can bypass the SGA. The three parameters of the direct path write wait event are file number (absolute file number), first dba, and block cnt; in Oracle 10g/11g this wait event, like direct path read, is classified under User I/O. Writes of this type usually occur in the following situations: direct path loads; parallel DML operations; disk sorting; and writes to LOB segments that are not cached (recorded as the direct path write (lob) wait). The most common direct path writes are caused by disk sorting. For this kind of write wait, identify the data files with the most frequent I/O (if there is a lot of sorting, these are likely to be the temporary files), spread the load across them, and speed up the write operations.

With a direct path insert, no undo is generated for the table blocks; a rollback is implemented by resetting the high water mark.
However, if the table has indexes, undo is generated for the index changes, and the index blocks are read into the buffer cache.
The Oracle documentation suggests that if you use direct path insert to load a large amount of data into a table, you drop the table's indexes first and re-create them after the insert.
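A direct path insert is usually triggered with the APPEND hint. A minimal sketch using the example table (the source query here is purely illustrative):

SQL> insert /*+ append */ into bys.test select * from bys.test;   -- direct path insert above the high water mark
SQL> commit;

Until the commit, the same session cannot even query the table again (it gets ORA-12838), because the rows were written above the high water mark rather than through the buffer cache.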

In Oracle 11g, a serial full table scan may use direct path read instead of the db file scattered read used in earlier versions. direct path read clearly has advantages:

1. Reduced latch contention

2. The size of a physical I/O no longer depends on which blocks happen to be in the buffer cache.
Imagine an extent of eight blocks in which blocks 1, 3, 5, and 7 are in the cache while blocks 2, 4, 6, and 8 are not. When reading this extent, the traditional method performs four db file sequential read operations on blocks 2, 4, 6, and 8, which is usually far less efficient than reading all eight blocks of the extent in one pass.
direct path read avoids this problem entirely and tries to read as many physical blocks as possible in a single I/O.
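Whether a serial full scan in 11g actually used direct path read can be checked from the session statistics, using the same views as before (a sketch, not from the original test):

SQL> select n.name, s.value
     from v$mystat s, v$statname n
     where s.statistic# = n.statistic#
       and n.name in ('physical reads', 'physical reads direct');

If physical reads direct accounts for most of the physical reads during the scan, the blocks went straight into the PGA and bypassed the buffer cache.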

Of course, direct path read also introduces some disadvantages:
1. A segment-level checkpoint (object checkpoint) must be performed on the object before its segment can be read via direct path.
2. Repeated delayed block cleanout operations may occur.
Http://www.oracledatabase12g.com/archives/direct-read-impact-on-delayed-block-read.html

