How does the Oracle server process handle user process requests?

Source: Internet
Author: User

In the course of completing a user process request, the Oracle server process performs the following seven tasks: 0. SQL statement parsing; 1. reading the data blocks into the database buffer cache; 2. recording redo logs; 3. creating rollback segment entries for the transaction; 4. modifying the data blocks in this transaction; 5. adding the blocks to the dirty list; 6. user commit or rollback. Each of these seven tasks is introduced below.

0. SQL statement parsing

Suppose a user process submits an SQL statement such as update temp set a = a * 2. First, after the Oracle server process receives the statement from the user process, it must allocate the required memory in the PGA to store related information, such as the login information kept in the session memory.

The server process converts the characters of the SQL statement into their ASCII numeric equivalents, passes them to a hash function, and obtains a hash value. The server process then checks the library cache in the shared pool for the same hash value. If it exists, the server process executes the already parsed version of the statement cached in the library cache. If it does not exist, the server process allocates work space for the statement (in the PGA/UGA) and parses it: it first checks the syntax for correctness, then resolves the tables, indexes, views, and other objects referenced by the statement, checking their names and structures against the data dictionary. Based on the optimizer mode chosen by Oracle, the statistics for the corresponding objects in the data dictionary, and whether a stored outline is used, it generates an execution plan or selects one from the stored outline. It then uses the data dictionary to check the user's privileges on the referenced objects and finally produces the compiled (executable) form of the statement.
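
As an illustration of this shared-cursor check, you can see whether a statement is already cached in the library cache, and how often it has been parsed versus executed, by querying V$SQL. A minimal sketch, assuming the hypothetical update temp statement above:

    -- Look for a cached cursor matching the example statement;
    -- parse_calls vs. executions shows how often the cached version is reused
    SELECT sql_id, parse_calls, loads, executions, sql_text
      FROM v$sql
     WHERE sql_text LIKE 'update temp set a = a * 2%';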

ORACLE caches the actual text of the SQL statement, its hash value, its compiled code, any statistics associated with it, and its execution plan in the library cache of the SHARED POOL. The server process uses the shared pool latches to claim the shared SQL and PL/SQL areas that can be cached; that is, blocks in areas held under the shared pool latches cannot be overwritten, because they may be in use by other processes. The library cache is used during the SQL parsing phase: when the structures of tables and views are checked against the data dictionary, the dictionary information must be read from disk into the shared pool, so before that data is read the process must also acquire library cache locks and library cache pins on the dictionary objects being cached.
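
How effectively parsed statements are being reused can be checked through the library cache statistics. A minimal sketch using the standard V$LIBRARYCACHE view:

    -- gethitratio and pinhitratio close to 1 mean most statements are found
    -- already parsed in the shared pool; reloads indicate aged-out objects
    SELECT namespace, gets, gethitratio, pins, pinhitratio, reloads
      FROM v$librarycache
     ORDER BY namespace;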

So far, the SQL statement has been compiled into executable code, but Oracle does not yet know which data it will operate on. Therefore, the server process must now prepare the data for this SQL statement.

1. Read data blocks into db buffer

Oracle must read data into memory (the db buffer cache) before it can process it. First, the server process determines whether the required data already exists in the db buffer. If it does, the data is used directly and its touch count is increased in line with the LRU algorithm. If the buffer does not hold the required data, it must be read from the data file. Before that, the server process requests a TM lock on the table (to ensure that other users cannot modify the table structure while the transaction is executing). If the TM lock is granted, the server process then requests the row-level TX lock. Once both the TM and TX locks are obtained, reading from the data file can begin. Before the data is read, buffer space must be prepared for it: the server process scans the LRU list looking for free db buffers, and while scanning it moves any modified db buffers it finds onto the dirty list.

Those dirty buffers are written to the data files by DBWR once its trigger conditions are met. When enough free buffers have been found, the data blocks containing the requested rows are placed in the free areas of the db buffer, or they overwrite non-dirty blocks that have aged off the LRU list, and they are arranged at the head of the LRU list. In other words, before data blocks can be placed in the db buffer, the latches protecting the db buffer must be acquired; only then can the data be read into the db buffer.
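
The TM and TX locks mentioned above can be observed from another session while a DML statement is still open. A minimal sketch using the standard V$LOCK view:

    -- TM = table-level lock protecting the object definition,
    -- TX = row-level transaction lock; block = 1 marks a blocking session
    SELECT sid, type, id1, id2, lmode, request, block
      FROM v$lock
     WHERE type IN ('TM', 'TX');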

2. Log recording

Now that the data has been read into the db buffer, the server process writes, one by one, the rowids of the rows affected by the statement, the original values to be updated, the new values, and the SCN information from the PGA into the redo log buffer. Before writing into the redo log buffer, the process must first acquire the redo log buffer latches; once they are granted, the writes begin. When the amount written reaches one third of the redo log buffer size, or reaches 1 MB, or more than 3 seconds have elapsed, or when a checkpoint occurs, or before DBWR writes, the LGWR process is triggered to write the redo log buffer data to the redo log file on disk (at this point a log file sync wait event may be observed). The latches held on the portion of the redo log buffer that has been written to the redo file are then released, and that space can be overwritten by subsequent redo entries; the redo log buffer is used circularly. Redo log files are also used circularly: when a redo file fills up, the LGWR process automatically switches to the next redo file, which may produce a log file switch (checkpoint incomplete) wait event. If the database is in archive log mode, the archiver process also writes the contents of the previous, full redo file to an archived log file; this may produce a log file switch (archiving needed) wait event.
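
The redo activity described here can be observed through the standard statistics and wait-event views. A minimal sketch:

    -- Waits related to LGWR and to log switches
    SELECT event, total_waits, time_waited
      FROM v$system_event
     WHERE event IN ('log file sync',
                     'log file parallel write',
                     'log file switch (checkpoint incomplete)',
                     'log file switch (archiving needed)');

    -- Total redo generated since instance startup
    SELECT name, value
      FROM v$sysstat
     WHERE name = 'redo size';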

3. Create a rollback segment for the transaction

After the redo related to the transaction has been written to the redo log buffer, the server process begins to update the transaction list in the header of the db buffer block and writes the SCN into it. A copy of the data, including the block's transaction list and SCN information, is then copied into the rollback segment. The information in the rollback segment is called the "before image" of the data block, and it is used later for rollback, recovery, and consistent reads. (Rollback segments can be stored in a dedicated rollback tablespace, which consists of one or more physical files and is used only for rollback segments; rollback segments can also be created in the data files of other tablespaces.)
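
The rollback (undo) segments and the tablespace that holds them can be listed as follows. A minimal sketch (the SHOW PARAMETER line is a SQL*Plus command and applies when automatic undo management is in use):

    -- Which rollback/undo segments exist and where they live
    SELECT segment_name, tablespace_name, status
      FROM dba_rollback_segs;

    -- The active undo tablespace under automatic undo management
    SHOW PARAMETER undo_tablespace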

4. Modify data blocks in this transaction

The preparation is now complete: the data content of the db buffer block can be modified, and the rollback segment address is written into the block header.

5. Add to the dirty list

If a row of data is updated multiple times without a commit, there will be multiple "before images" in the rollback segment. The first "before image" contains only the SCN information, while the header of every subsequent "before image" contains both the SCN information and the rollback segment address of the previous "before image"; each update corresponds to exactly one SCN. The server process then places a pointer to the db buffer block on the dirty list, so that the DBWR process can locate the dirty db buffer blocks through the dirty list and write them to the data files.

The server process then reads the second data block from the data file and repeats the actions performed on the previous block: reading the block, recording redo, creating rollback segment entries, modifying the block, and adding it to the dirty list. When the length of the dirty queue reaches the threshold (25%), the server process notifies DBWR to write out the dirty data, which releases the latches on the db buffer and frees up more db buffers. The description so far reads as if Oracle reads one data block at a time; in fact, Oracle can read several data blocks in a single I/O (db_file_multiblock_read_count sets the number of blocks read at a time).
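
A minimal sketch of how this multiblock read setting can be inspected and changed from SQL*Plus (the value 16 is only an illustrative choice):

    -- How many blocks a single multiblock read may fetch
    SHOW PARAMETER db_file_multiblock_read_count

    -- Illustrative change, for the current session only
    ALTER SESSION SET db_file_multiblock_read_count = 16;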

Note: 

Whether the required data was already cached in the db buffer or has just been read from the data file into the db buffer, the next operation is determined by the type of SQL statement.

1> If it is a select statement, check whether there is an open transaction in the header of the db buffer block. If there is, the data is read from the rollback segment. If there is no transaction, compare the SCN of the select with the SCN in the db buffer block header: if the former is smaller than the latter, the data still has to be read from the rollback segment; if the former is greater than the latter, the block is a non-dirty cached block and its contents can be read directly from the db buffer.

2> For a DML operation, even if a non-dirty cached data block with no open transaction and an SCN smaller than its own is found in the db buffer, the server process must still request a lock on this record. Only after the lock is granted can the subsequent actions be performed; if the lock cannot be obtained, the process must wait until the earlier process releases its lock before it can proceed (the blocking at this point is TX lock blocking).
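
A minimal sketch of the difference between the two cases, using two sessions and the hypothetical temp table from the earlier example:

    -- Session 1: modify the table but do not commit
    UPDATE temp SET a = a * 2;

    -- Session 2: a query is never blocked; it reads the pre-update values
    -- through the before images in the rollback segment (consistent read)
    SELECT * FROM temp;

    -- Session 2: a DML on the same rows blocks on the TX lock held by
    -- session 1 until session 1 commits or rolls back
    UPDATE temp SET a = a + 1;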

6. User commit or rollback

So far, the data has been modified in the db buffer or in the data files, but whether the changes are permanently written to the data files is decided by the user: commit (save the changes to the data files) or rollback (undo the data changes). Let us look at what Oracle does during a commit and a rollback.

The user executes the commit command.

A commit can be issued only after the last block containing rows affected by the SQL statement has been read into the db buffer and the redo information has been written into the redo log buffer (the log buffer only, not the log file). The commit triggers the LGWR process, but it does not force DBWR to write immediately and release the locks on all of the corresponding db buffer blocks; this is the no-force-at-commit behavior. In other words, although the SQL statement has been committed, DBWR may still be writing the data blocks involved in the statement for some time afterwards. The row locks recorded in the block headers are not released immediately at commit but only after DBWR has finished, which can cause one user to fail to obtain a resource that another user has in fact already committed.
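
A minimal sketch of how an open transaction and its commit can be observed, again using the hypothetical temp table:

    -- Start a transaction; an entry appears in v$transaction
    UPDATE temp SET a = a * 2;

    SELECT xidusn, xidslot, xidsqn, status, used_urec
      FROM v$transaction;

    -- Commit: LGWR flushes the redo and the entry disappears, even though
    -- DBWR may write the dirty blocks later (no-force-at-commit)
    COMMIT;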

A. The time between the commit and the end of DBWR's writing is very short. If a power failure occurs after the commit but before DBWR finishes, the committed data already logically belongs in the data files, yet it has not been completely written to them, so roll forward is required. Because the commit has already triggered LGWR, all changes that have not yet been written to the data files will be rolled forward by the SMON process after the instance restarts, based on the redo log files, and written into the data files.

B. If no commit has been issued and the power fails, the data has been changed in the db buffer but not committed, so this data does not belong in the data files. Because LGWR is triggered before DBWR (whenever data is changed, the log must come first), all modifications that DBWR makes to the data files are recorded in the redo log file beforehand. After the instance restarts, the SMON process rolls forward and back based on the redo log file. In fact, SMON's recovery is organized around checkpoints: when a checkpoint occurs, the LGWR process first writes all buffers in the redo log buffer, including uncommitted redo information, to the redo log file, and then the DBWR process writes the dirty buffers from the db buffer into the data files. The SCN in the control file and in the data file headers is then updated, indicating that the database is consistent as of that point. Between two adjacent checkpoints there are many transactions, both committed and uncommitted. The following describes the complete concepts of roll forward and rollback:

A. A power failure occurs before the next checkpoint while an uncommitted change is in progress. After the instance restarts, the SMON process works forward from the previous checkpoint; the redo log file from that point records both committed and uncommitted changes, and because LGWR is triggered before DBWR, every modification DBWR makes to the data files is recorded in the redo log file first. Therefore, uncommitted changes that DBWR had already written into the data files before the power failure are undone once the redo records have been replayed; this is the rollback part of recovery.

B. If a change has been committed but DBWR has not yet finished writing it, the commit has already triggered the LGWR process, so regardless of whether DBWR has finished, the rows affected by the statement and their new values are guaranteed to have been recorded in the redo log file. After the instance restarts, the SMON process rolls these changes forward based on the redo log file.

The recovery time after an instance failure is determined by the interval between two checkpoints. You can control the checkpoint execution frequency through four parameters:

Log_checkpoint_interval: determines the number of operating system blocks (redo blocks) that can be written to the redo log file between two checkpoints. The default value is 0, meaning no limit.

Log_checkpoint_timeout: determines the length of time (in seconds) between two checkpoints. The default value is 1800 seconds.

Fast_start_io_target: determines the number of blocks that need to be processed during recovery. The default value is 0, meaning no limit.

Fast_start_mttr_target: determines the target length of time for recovery. The default value is 0, meaning no limit.

The roll forward and rollback performed by the SMON process are different from a user-issued rollback: SMON rolls forward and back based on the redo log files, whereas a user rollback must be based on the contents of the rollback segment. It is worth noting what the rollback segment actually stores. For a delete operation, the rollback segment records the entire deleted row; for an update, it records only the before image of the changed columns (columns that were not modified are not recorded); for an insert, it records only the rowid of the inserted record.

Consequently, if the transaction is committed, the rollback segment entry simply marks the transaction as committed. If the transaction is rolled back, then for a delete the data saved in the rollback segment is written back into the data block, for an update the before image is used to restore the original values, and for an insert the record is deleted based on its rowid.
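
The four checkpoint-related parameters described above can be inspected and adjusted from SQL*Plus. A minimal sketch (the 60-second target is only an illustrative value):

    -- Current checkpoint/recovery-related settings
    SHOW PARAMETER log_checkpoint_interval
    SHOW PARAMETER log_checkpoint_timeout
    SHOW PARAMETER fast_start_mttr_target

    -- Ask the instance to aim for roughly 60 seconds of crash recovery
    ALTER SYSTEM SET fast_start_mttr_target = 60;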

The user executes rollback.

If the user issues a rollback, the server process uses the transaction list and SCN in the block headers (in the data file blocks and in the db buffer) and the rollback segment address to find the corresponding pre-modification copies in the rollback segment, and uses those original values to undo the modified but uncommitted changes in the current data files. If there are multiple "before images", the server process reads the rollback segment address stored in the header of each "before image" and follows it back until the earliest "before image" of the same transaction is found. Once a COMMIT has been issued, the user can no longer roll back; this guarantees that the committed changes stand even though the DBWR work that follows the COMMIT may not yet be complete. With this, the example transaction has ended.
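
A minimal sketch of a user-issued rollback, again using the hypothetical temp table; the savepoint shows that a partial rollback is also possible before the transaction ends:

    UPDATE temp SET a = a * 2;
    SAVEPOINT before_second_change;

    UPDATE temp SET a = a + 1;

    -- Undo only the second change, using the before images in the rollback segment
    ROLLBACK TO SAVEPOINT before_second_change;

    -- Undo everything back to the start of the transaction
    ROLLBACK;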

Note:

TM lock: follows the enqueue mechanism and is used to protect the definition of an object from being modified during the transaction.

TX lock: this lock represents a transaction and is a row-level lock. It is represented by fields in the data block header and in the row header. It also follows the enqueue mechanism, with a resource structure, a lock structure, and enqueue algorithms.

This article has walked through the complete processing flow of an Oracle transaction.
