Analysis of the complete process of Oracle transactions

How does an Oracle server process handle a user process request? The server process completes the following seven tasks while servicing the request:

0. SQL statement parsing

1. Read data blocks into the db buffer

2. Redo log recording

3. Create a rollback segment for the transaction

4. Modify data blocks in the transaction

5. Add to the dirty list

6. User commit or rollback

0. SQL statement parsing

When a user process submits a SQL statement, for example update temp set a = a * 2, the Oracle server process handles it as follows. After receiving the statement from the user process, the server process first allocates the memory it needs in the PGA to store related information, such as the login information kept in the session memory.

The server process then converts the characters of the SQL statement into their ASCII-equivalent numeric codes, passes them through a hash function, and obtains a hash value. It checks the library cache in the shared pool for an entry with the same hash value. If one exists, the server process executes the already-parsed version of the statement cached in the library cache. If it does not exist, the server process parses the statement in the CGA, using the UGA contents: it first checks the syntax for correctness, then resolves the tables, indexes, views, and other objects referenced by the statement, verifying their names and structures against the data dictionary. Based on the Oracle optimizer mode, the statistics for the corresponding objects in the data dictionary, and whether a stored outline is used, it generates an execution plan (or selects one from the stored outline). It then uses the data dictionary to check the user's privileges on the referenced objects, and finally produces the compiled code.

Oracle caches the actual text of the SQL statement, its hash value, the compiled code, any statistics associated with the statement, and its execution plan in the library cache of the shared pool. The server process acquires the shared pool latch to claim the shared PL/SQL areas (library cache areas) in which this content will be cached; blocks protected by the shared pool latch cannot be overwritten, because they may be in use by other processes. The library cache is also used during the parse phase itself: when the structures of tables and views are checked against the data dictionary, the dictionary data must be read from disk into the library cache, so before reading it the process must acquire a library cache pin (library cache lock) in order to cache the dictionary data.
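As an illustration of the caching described above, once a statement has been submitted, its parsed form can be observed in the library cache through the V$SQLAREA view. This is a minimal sketch; the literal text update temp set a = a * 2 is simply the example statement used in this article, and the table temp is assumed to exist.

    -- Look the example statement up in the shared pool library cache.
    -- PARSE_CALLS versus EXECUTIONS indicates how often the cached
    -- (already-parsed) version was reused instead of being hard parsed again.
    SELECT sql_text,
           hash_value,
           parse_calls,
           executions,
           loads
    FROM   v$sqlarea
    WHERE  upper(sql_text) LIKE 'UPDATE TEMP SET A = A * 2%';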

At this point the SQL statement has been compiled into executable code, but Oracle does not yet know which data it will operate on, so the server process must next prepare the data this statement needs.

1. Read data blocks into the db buffer

Oracle processes data in memory (the db buffer), so the required data must first be read there. The server process first determines whether the required data already exists in the db buffer. If it does, the data is used directly and its touch count is increased according to the LRU algorithm. If the buffer does not hold the required data, it must be read from the data file. Before reading, the server process requests a TM lock on the table (to ensure that other users cannot modify the table structure while the transaction executes). If the TM lock is acquired successfully, the necessary row-level locks (TX locks) are then requested. Once both the TM and TX locks are held, the process begins reading data from the data file. Before the read, buffer space must be prepared: the server process scans the LRU list looking for free db buffers, and during the scan it registers any modified db buffers it finds onto the dirty list.

Those dirty buffers are written to the data files once the DBWR trigger conditions are met. When enough free buffers have been found, the data blocks containing the requested rows are placed in the free areas of the db buffer, or they overwrite non-dirty buffers that have aged off the LRU list, and they are arranged at the head of the LRU list. In other words, before data blocks can be placed in the db buffer, the latches on the db buffer must be acquired; only then can the data be read into the db buffer.
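To see whether a table's blocks are currently cached in the db buffer and whether any of them are dirty, the buffer headers can be inspected through V$BH. This is a minimal sketch, assuming the example table is named temp and the user can read DBA_OBJECTS:

    -- Each row in V$BH describes one buffer in the db buffer cache.
    -- STATUS is the buffer state (free, xcur, cr, ...); DIRTY = 'Y' means
    -- the buffer has been modified and not yet written out by DBWR.
    SELECT b.file#,
           b.block#,
           b.status,
           b.dirty
    FROM   v$bh b
    JOIN   dba_objects o
           ON o.data_object_id = b.objd
    WHERE  o.object_name = 'TEMP';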

2. Redo log recording

Now that the data has been read into the db buffer, the server process writes, entry by entry, the rowid of each row affected by the statement, the original value to be updated, the new value, and SCN information from the PGA into the redo log buffer. Before writing to the redo log buffer it must first acquire the redo log buffer latches; only after the latches are obtained does the write begin. When the redo log buffer is one third full, or 1 MB of redo has been written, or 3 seconds have passed, or a checkpoint occurs, or before DBWR writes, the LGWR process is triggered to write the redo log buffer data to the redo log file on disk (this produces the log file sync wait event). The latches held on the portions of the redo log buffer that have been written to the redo file are released, and that space can be overwritten by subsequent redo entries; the redo log buffer is used cyclically. Redo log files are also used cyclically: when one redo file fills up, LGWR automatically switches to the next redo file (at this point the log file switch (checkpoint incomplete) wait event may occur). In archivelog mode, the archiver process also writes the contents of the previous, now-full redo file to an archived log file (this may cause the log file switch (archiving needed) wait event).
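A rough way to watch redo generation and log switching from SQL is to sample the relevant system statistics and the online log groups. This is a minimal sketch using V$SYSSTAT and V$LOG:

    -- Cumulative redo generated since instance startup.
    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN ('redo entries', 'redo size', 'redo log space requests');

    -- Online redo log groups: STATUS shows which group LGWR is writing
    -- (CURRENT) and ARCHIVED shows whether a filled group has been archived.
    SELECT group#, sequence#, status, archived
    FROM   v$log;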

3. Create a rollback segment for the transaction

After the redo log buffer entries for the transaction are written, the server process begins to rewrite the transaction list in the header of the db buffer block and writes the SCN into it; it then copies the data, together with the block's transaction list and SCN information, into the rollback segment. The information in the rollback segment is called the "before image" of the data block, and this before image is used for subsequent rollback, recovery, and consistent reads. (Rollback segments can be stored in a dedicated rollback tablespace, which consists of one or more physical files and is used only for rollback segments; rollback segments can also be created in data files of other tablespaces.)
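While such a transaction is still open, the undo it is consuming in the rollback segment can be observed from another session. This is a minimal sketch joining V$TRANSACTION to V$SESSION:

    -- USED_UBLK and USED_UREC show how many undo blocks and undo records
    -- the active transaction has generated for its before images.
    SELECT s.sid,
           s.username,
           t.xidusn   AS undo_segment_number,
           t.used_ublk,
           t.used_urec
    FROM   v$transaction t
    JOIN   v$session s
           ON s.taddr = t.addr;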

4. Modify data blocks in the transaction

With the preparation complete, the server process can now rewrite the data content of the db buffer block and write the rollback segment address into the block header.
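For illustration, the physical location of a modified row, and therefore of the db buffer block being rewritten, can be derived from its rowid. This is a minimal sketch using the DBMS_ROWID package, again assuming the example table temp:

    -- Map each row of TEMP to the datafile number and block number that
    -- hold it, i.e. the block that this step modifies in the db buffer.
    SELECT rowid,
           dbms_rowid.rowid_relative_fno(rowid) AS file_no,
           dbms_rowid.rowid_block_number(rowid) AS block_no
    FROM   temp;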

5. Add to the dirty list

If a row of data is updated multiple times without a commit, there will be multiple before images in the rollback segment. Apart from the first before image, which contains the SCN information, each subsequent before image's header contains both the SCN information and the rollback segment address of the previous before image; each update corresponds to exactly one SCN. The server process then creates a pointer to the db buffer block in the dirty list (so that the DBWR process can find the db buffer data blocks on the dirty list and write them into the data files).

Next, the server process reads the second data block from the data file and repeats the actions performed on the previous block: reading the block, recording redo, creating the rollback segment entry, modifying the block, and adding it to the dirty list. When the length of the dirty queue reaches a threshold (usually 25%), the server process notifies DBWR to write out the dirty data, which releases the latches on those db buffers and frees up more free db buffers. The description so far reads as if Oracle fetches one data block at a time; in fact, Oracle can read several blocks per I/O (db_file_multiblock_read_count sets the number of blocks read at a time).
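A minimal sketch of how to check this multiblock read setting, and adjust it for the current session:

    -- Current value of the multiblock read parameter.
    SELECT name, value
    FROM   v$parameter
    WHERE  name = 'db_file_multiblock_read_count';

    -- The setting can also be changed at the session level, for example:
    ALTER SESSION SET db_file_multiblock_read_count = 16;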

Note:

When the required data is already cached in the db buffer, or has just been read from the data file into the db buffer, the next operation depends on the type of SQL statement.

1>. For a select statement, check whether there is a transaction in the header of the db buffer block. If there is a transaction, the data is read from the rollback segment. If there is no transaction, compare the SCN of the select with the SCN in the db buffer block header: if the former is smaller than the latter, the data must still be read from the rollback segment; if the former is greater than the latter, the block is a non-dirty, usable buffer and its contents can be read directly from the db buffer block.

2>. For a DML operation, even if a non-dirty, transaction-free data block is found in the db buffer and its SCN is smaller than the statement's own, the server process must still request a lock on this record. Only after the lock is acquired can the subsequent actions be performed; if the lock cannot be acquired, the process waits until the earlier process releases its lock before continuing (the TX lock is blocked at this point). A minimal two-session sketch of this blocking behavior follows the list.
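This sketch assumes the example table temp contains a row with a = 1; the session labels are only for illustration:

    -- Session 1: update a row and leave the transaction open (no commit).
    UPDATE temp SET a = a * 2 WHERE a = 1;

    -- Session 2: touch the same row; this statement hangs, blocked on the
    -- TX lock held by session 1, until session 1 commits or rolls back.
    UPDATE temp SET a = a + 1 WHERE a = 1;

    -- Any other session: identify who is blocked and by whom.
    SELECT sid, blocking_session, event
    FROM   v$session
    WHERE  blocking_session IS NOT NULL;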

6. User commit or rollback

So far, the data has been modified in the db buffer (or in the data file), but whether the change is permanently written to the data file is determined by the user through commit (save the change to the data file) or rollback (undo the data change). Let's look at what Oracle does during a commit and a rollback.
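A minimal end-to-end sketch of the two outcomes, using the example statement from earlier:

    -- Make the change and keep it: LGWR flushes the redo at commit time,
    -- and DBWR writes the dirty buffers to the data files later.
    UPDATE temp SET a = a * 2;
    COMMIT;

    -- Or make the change and discard it: the before images stored in the
    -- rollback segment are applied to undo the modification.
    UPDATE temp SET a = a * 2;
    ROLLBACK;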
