The complete execution process of an Oracle SQL statement:

Source: Internet
Author: User

A SQL statement goes through the following complete process:

1. The user process submits a SQL statement, UPDATE temp SET a = a*2, to the server process.

2. After the server process receives the statement from the user process, it allocates the required memory in the PGA and stores the relevant session information there, such as the login details associated with the session.

3. The server process converts the characters of the SQL statement into their numeric (ASCII) equivalents and passes them to a hash function, which returns a hash value. The server process then searches the library cache in the shared pool for an entry with the same hash value. If one exists, the server process executes the statement using the parsed version of it already cached in the library cache of the shared pool.
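The lookup in step 3 can be sketched as follows. This is a minimal illustration, not Oracle's actual hash algorithm or shared-pool structure: the library cache is modeled as a plain dictionary keyed by a hash of the statement text.

```python
import hashlib

# Toy library cache: hash of SQL text -> parsed/compiled representation.
library_cache = {}

def sql_hash(sql_text):
    # Oracle hashes the statement text; the exact algorithm differs, this
    # is illustrative only. The text must match exactly: a difference in
    # case or whitespace produces a different hash and forces a hard parse.
    return hashlib.sha256(sql_text.encode("utf-8")).hexdigest()

def get_cursor(sql_text, parse_fn):
    key = sql_hash(sql_text)
    if key in library_cache:              # soft parse: reuse cached version
        return library_cache[key], "soft parse"
    parsed = parse_fn(sql_text)           # hard parse: full parse + plan
    library_cache[key] = parsed
    return parsed, "hard parse"

cursor, kind = get_cursor("UPDATE temp SET a = a*2", lambda s: {"plan": "update temp"})
print(kind)   # first execution: hard parse
cursor, kind = get_cursor("UPDATE temp SET a = a*2", lambda s: {"plan": "update temp"})
print(kind)   # identical text found in the cache: soft parse
```

This also illustrates why identical statement text matters for shared-pool reuse: `update temp set a=a*2` and `UPDATE temp SET a = a*2` hash differently and would each be parsed from scratch.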

4. If no match exists, the server process parses the SQL statement. It first checks the syntax for correctness, then resolves the tables, indexes, views, and other objects the statement references, checking their names and structures against the data dictionary. Next, based on the optimizer mode Oracle has selected, the object statistics available in the data dictionary, and whether a stored outline is to be used, it either generates an execution plan or selects one from the stored outline. It then uses the data dictionary to verify the user's execution privileges on the objects involved, and finally produces compiled code.
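The hard-parse pipeline in step 4 can be sketched like this. Everything here is an assumed simplification: the toy data dictionary, the user name scott, and the one-table grammar (statements shaped like "VERB table ...") are illustrative; only the ORA error codes are real Oracle codes.

```python
# Toy data dictionary: object definitions plus privilege grants.
data_dictionary = {
    "tables": {"temp": {"columns": ["a"]}},
    "grants": {("scott", "temp"): {"UPDATE"}},
}

def hard_parse(sql, user):
    tokens = sql.split()
    # 1. Syntax check (grossly simplified: verb must be a known keyword).
    if tokens[0].upper() not in ("SELECT", "UPDATE", "INSERT", "DELETE"):
        raise ValueError("ORA-00900: invalid SQL statement")
    # 2. Resolve the referenced object against the data dictionary.
    table = tokens[1].lower()
    if table not in data_dictionary["tables"]:
        raise ValueError("ORA-00942: table or view does not exist")
    # 3. Check the user's privileges on that object.
    if tokens[0].upper() not in data_dictionary["grants"].get((user, table), set()):
        raise ValueError("ORA-01031: insufficient privileges")
    # 4. Produce the compiled form with an execution plan.
    return {"sql": sql, "plan": f"{tokens[0].upper()} {table} (full scan)"}

plan = hard_parse("UPDATE temp SET a = a*2", "scott")
```

A failed check aborts the parse at that stage, which is why a missing table or privilege is reported before any data is touched.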

5. Oracle caches the actual text of the SQL statement, its hash value, the compiled code, the associated statistics, and the statement's execution plan in the library cache of the shared pool. The server process requests this space through the shared pool latch; the areas of the shared pool covered by the latch will not be overwritten, because they may be in use by other processes.

6. During the parsing phase, the library cache is used to check the structure of tables and views against the data dictionary, and the dictionary data must first be read from disk into the library cache. Before that read, library cache latches (library cache pin, library cache lock) are requested on the dictionary data to be cached. At this point the SQL statement has been compiled into executable code, but it is not yet known which data it will operate on, so the server process next prepares the data for the statement.

7. First, the server process determines whether the required data is already in the DB buffer cache. If it is present and usable, the data is read directly and its touch count is incremented under the LRU algorithm. If the required data is not in the buffer cache, the server process first requests a TM lock on the table (to ensure that other users cannot modify the table's structure). If the TM lock is granted, it then requests the necessary row-level locks (TX locks). Once both the TM and TX locks are obtained, it begins reading data from the data files.

Before reading, buffer space must be prepared for the blocks being read. The server process scans the LRU list for free DB buffers; any modified (dirty) buffers it finds during the scan are registered on the dirty list, and those dirty buffers are written out to the data files once DBWR's trigger conditions are met. When enough free buffers have been found, the data blocks containing the requested rows are placed into the free DB buffers (or overwrite non-dirty buffers that have aged off the LRU list) and are arranged at the head of the LRU list. Before a data block can be read into the DB buffer cache, the corresponding buffer cache latch must be acquired; only after the latch is obtained can the block be read in.
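The buffer-management part of step 7 can be sketched as below. This is a deliberately simplified model (Oracle actually maintains multiple lists and touch counts, not a single LRU queue): an LRU-ordered cache plus a dirty list that a DBWR-like process would drain.

```python
from collections import OrderedDict

class BufferCache:
    """Toy DB buffer cache: LRU order plus a dirty list (assumed model)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffers = OrderedDict()   # block_id -> (data, is_dirty)
        self.dirty_list = []           # blocks DBWR must write before reuse

    def read(self, block_id, read_from_disk):
        if block_id in self.buffers:                   # cache hit
            self.buffers.move_to_end(block_id)         # promote in LRU order
            return self.buffers[block_id][0], "hit"
        # Cache miss: evict from the cold (LRU) end to make room.
        while len(self.buffers) >= self.capacity:
            victim, (vdata, vdirty) = next(iter(self.buffers.items()))
            if vdirty:
                self.dirty_list.append(victim)         # register for DBWR
            del self.buffers[victim]
        self.buffers[block_id] = (read_from_disk(block_id), False)
        return self.buffers[block_id][0], "miss"

    def modify(self, block_id, data):
        self.buffers[block_id] = (data, True)          # mark buffer dirty
        self.buffers.move_to_end(block_id)
```

For example, with a capacity of 2, reading a third block evicts the coldest buffer, and if that buffer was dirty it lands on the dirty list instead of being silently discarded.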

8. Logging. Now that the data has been read into the DB buffer cache, the server process writes the rowids of the rows affected by the statement, the original and new values, the SCN, and related information from the PGA into the redo log buffer. The redo log buffer latch must be acquired before writing, and writing begins once the latch is obtained. When the redo log buffer is one-third full, or the buffered redo reaches 1 MB, or three seconds have elapsed, or a checkpoint occurs, or before DBWR writes, the LGWR process is triggered to write the contents of the redo log buffer to the redo log file on disk (a log file sync wait event may occur at this point). Once the redo protected by the latch has been written to the redo file, the latch is released, the written space can be overwritten by subsequent redo, and the redo log buffer is thus reused cyclically. Redo files are also reused in a cycle: when one redo file fills up, the LGWR process automatically switches to the next one (a log file switch (checkpoint incomplete) wait event may occur at this point). In archivelog mode, the archiver process also writes the contents of the previous redo file to an archived log file (a log file switch (archiving needed) wait event may appear).
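The LGWR trigger conditions listed in step 8 can be expressed as a single predicate. This is a hypothetical sketch: the 4 MB buffer size is an assumed example, and real LGWR scheduling is more involved.

```python
REDO_BUFFER_SIZE = 4 * 1024 * 1024   # assumed 4 MB redo log buffer

def lgwr_should_flush(buffered_bytes, seconds_since_flush,
                      commit=False, checkpoint=False, before_dbwr=False):
    """Return True if any of the step-8 trigger conditions is met:
    a commit, a checkpoint, an imminent DBWR write, one-third of the
    buffer filled, 1 MB of buffered redo, or a 3-second timeout."""
    return (commit
            or checkpoint
            or before_dbwr
            or buffered_bytes >= REDO_BUFFER_SIZE // 3
            or buffered_bytes >= 1 * 1024 * 1024
            or seconds_since_flush >= 3)
```

Note that with a 4 MB buffer the 1 MB threshold fires before the one-third threshold; with a small buffer the one-third rule dominates, which is why both conditions exist.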

9. A rollback segment is created for the transaction. After the redo log buffer work described above is complete, the server process writes the transaction list and SCN into the header of the DB buffer block, then copies the data, including the block header's transaction list and SCN, into the rollback segment. The information in the rollback segment is called the pre-image (before-image) of the block, and it is used later for rollback, recovery, and consistent reads. (Rollback segments can be stored in a dedicated rollback tablespace, which consists of one or more physical files and is used only for rollback; rollback segments can also be created in data files of other tablespaces.)
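The pre-image idea in step 9 can be sketched minimally: before a buffer is modified, a copy of the block (with its header SCN and transaction list) is appended to a rollback-segment list, and the block header records where that undo entry lives. The dictionary layout is an assumption for illustration, not Oracle's block format.

```python
import copy

rollback_segment = []   # toy undo storage: a list of saved block copies

def save_pre_image(block):
    """Save a before-image of the block and return its undo 'address'."""
    rollback_segment.append(copy.deepcopy(block))
    return len(rollback_segment) - 1

# A toy block: header SCN, interested-transaction list, row data.
block = {"scn": 100, "itl": [], "rows": {"r1": 10}}
undo_addr = save_pre_image(block)   # step 9: record the pre-image first
block["rows"]["r1"] = 20            # step 10: now safe to modify the buffer
block["undo_addr"] = undo_addr      # block header points at its undo entry
```

The saved copy is what later rollback, recovery, and consistent reads rely on; the modified buffer keeps only a pointer back to it.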

10. With the transaction's rollback segment prepared, the data contents of the DB buffer block can now be modified, and the rollback segment's address is written into the block header.

11. The block is placed on the dirty list. If a row is updated multiple times without a commit, there will be multiple pre-images in the rollback segment: the first pre-image contains the SCN information, and the header of each subsequent pre-image contains the SCN and the rollback segment address of the preceding pre-image. Each update corresponds to exactly one SCN. The server process then creates a pointer to this DB buffer block in the dirty list (so that the DBWR process can easily find the dirty DB buffer blocks and write them to the data files). Next, the server process reads the second data block from the data file and repeats the actions performed on the previous block: reading the block, logging redo, setting up the rollback segment, modifying the block, and placing it on the dirty list. When the length of the dirty queue reaches its threshold (typically 25%), the server process notifies DBWR to write out the dirty data, releasing the latches on those DB buffers and freeing up more DB buffers.

The description so far has read as if Oracle reads one data block at a time; in fact, Oracle can read multiple blocks at once (DB_FILE_MULTIBLOCK_READ_COUNT sets the number of blocks read per I/O).

Description: once the required data has been cached in the DB buffer cache, or has just been read from the data files into it, what happens next depends on the type of SQL statement.

1> For a SELECT statement, the server process checks whether there is an open transaction in the header of the DB buffer block. If there is, it reads the data from the rollback segment. If there is no transaction, it compares the SCN of the SELECT with the SCN in the block header: if the former is smaller, it still reads the data from the rollback segment; if the former is greater, this is a usable, non-dirty buffer whose contents can be read directly.
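The SELECT decision just described can be sketched as a small function. This is an assumed simplification of read consistency: a block is read directly only when it has no open transaction and its SCN is not newer than the query's snapshot SCN; otherwise the row is reconstructed from the rollback segment.

```python
def consistent_read(block, query_scn, read_undo):
    """Decide whether a SELECT reads the buffer directly or goes to undo.

    block:      toy dict with 'open_transaction', 'scn', and 'rows'
    query_scn:  the SCN (snapshot) of the SELECT statement
    read_undo:  callback that reconstructs the rows from the pre-image
    """
    if block["open_transaction"]:
        return read_undo(block), "from undo"   # uncommitted change present
    if query_scn < block["scn"]:
        return read_undo(block), "from undo"   # block changed after snapshot
    return block["rows"], "from buffer"        # safe to read directly

clean = {"open_transaction": False, "scn": 90, "rows": {"r1": 10}}
rows, source = consistent_read(clean, 100, lambda b: {"r1": 5})
print(source)   # block SCN 90 <= snapshot SCN 100: read the buffer directly
```

The same function shows why a long-running SELECT may repeatedly visit undo: any block committed after the query's snapshot SCN fails the comparison and must be rolled back to its pre-image.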
2> For a DML operation, even if there is no open transaction in the DB buffer block and its SCN is lower than the statement's own (i.e. the block is a usable, non-dirty buffer), the server process must still request a lock on the row's entry in the block header. If the lock is granted, the subsequent actions proceed; if not, it must wait for the earlier process to release its lock before continuing (a TX lock enqueue wait occurs at this point).

The user commits or rolls back. Up to this point, the data has been modified in the DB buffer cache or the data files, but whether the change becomes permanent is decided by the user: COMMIT saves the changes to the data files, ROLLBACK undoes them.

1. The user issues COMMIT. A COMMIT can only be issued once the last block of all rows affected by the SQL statement has been read into the DB buffer cache and the redo information has been written to the redo log buffer (only the log buffer, not yet the log files). COMMIT triggers the LGWR process, but it does not force DBWR to immediately write out and release all the corresponding DB buffer blocks (this is no-force-at-commit: a commit does not force the data blocks to be written). This means that even after a commit, DBWR may still be writing out the data blocks involved in the statement for some time. The row locks in the block headers are not released immediately at commit, but only after the DBWR process finishes, which is why one user may fail to acquire a resource that another user has already committed.

A. The interval between the end of the COMMIT and the end of DBWR's work is very short. If power is lost after the commit but before DBWR finishes, the committed data already logically belongs to the data files, but part of it has not yet been physically written to them, so a roll-forward is needed.
Since COMMIT has already triggered LGWR, the changes not yet written to the data files will be rolled forward by the SMON process, based on the redo log files, after the instance restarts, completing the unfinished work of the commit (i.e. writing the changes into the data files).

B. If power is lost without a commit, the data has been changed in the DB buffer cache but not committed, so this part of the data does not belong in the data files. Because LGWR is triggered before DBWR (the log must exist first), every change DBWR makes to the data files is recorded in the redo log files beforehand, so after the instance restarts the SMON process rolls those changes back according to the redo log files.

In fact, SMON's roll-forward and rollback are based on checkpoints. When a full checkpoint occurs, the LGWR process first writes all buffers in the log buffer (including uncommitted redo information) to the redo log files, then the DBWR process writes the committed buffers in the DB buffer cache to the data files (uncommitted buffers are not forced out). The SCNs in the headers of the control file and the data files are then updated to indicate that the database is now consistent. Between two adjacent checkpoints there are many transactions, both committed and uncommitted. A more complete statement of roll-forward and rollback is the following:

A. If power is lost before a checkpoint occurs, while an uncommitted change is in progress: after the instance restarts, the SMON process examines the redo log files from the last checkpoint onward, which record both committed and uncommitted changes. Because LGWR is triggered before DBWR, any change DBWR made to the data files was recorded in the redo log files first. Therefore, the changes written to the data files before the power loss can be reversed using the records in the redo log files; this is the rollback.
B. If power is lost after a commit, but before DBWR's writes have fully completed: because the commit occurred, and the commit triggered the LGWR process, the rows affected by the statement and the resulting changes are already recorded in the redo log files regardless of whether DBWR finished. After the instance restarts, the SMON process rolls forward according to the redo log files.

The time needed for recovery after an instance failure is determined by the size of the interval between two checkpoints. The checkpoint frequency can be set with four parameters:

LOG_CHECKPOINT_INTERVAL: the amount of redo, in operating system (redo) blocks, written between two checkpoints. Default 0, meaning unlimited.
LOG_CHECKPOINT_TIMEOUT: the length of time, in seconds, between two checkpoints. Default 1800 s.
FAST_START_IO_TARGET: the number of blocks that would need to be processed during recovery. Default 0, meaning unrestricted.
FAST_START_MTTR_TARGET: directly sets the target length of recovery time. Default 0, meaning no limit.

(SMON's roll-forward and rollback are different from a user's rollback: SMON rolls forward or back according to the redo log files, while a user's rollback must be performed according to the contents of the rollback segment.)

A word on what the rollback segment stores: for a DELETE, the rollback segment records the entire deleted row; for an UPDATE, it records only the before-images of the modified fields (unmodified fields are not recorded); for an INSERT, it records only the rowid of the inserted record.
So if the transaction commits, the rollback segment simply marks the transaction as committed. If it rolls back: for a DELETE, the data saved in the rollback segment is written back into the data block; for an UPDATE, the before-images are used to change the data back; for an INSERT, the record is deleted according to the recorded rowid.

2. The user issues ROLLBACK. The server process uses the transaction list, the SCN, and the rollback segment address found in the header of the data file block and the DB buffer block to locate the corresponding pre-modification copy in the rollback segment, and uses those values to undo the modified but uncommitted changes in the current data files and buffers. If there is more than one pre-image, the server process follows the rollback segment address in the header of each pre-image to the previous one, until it finds the oldest pre-image of the same transaction. Once COMMIT has been issued, the user can no longer roll back; this guarantees that DBWR's outstanding post-commit work will complete. At this point, the example transaction has ended.

Description:
TM lock: conforms to the lock (enqueue) mechanism and protects the definition of an object from being modified while it is in use.
TX lock: represents a transaction; it is a row-level lock, indicated by fields in the data block and row headers. It also conforms to the enqueue mechanism, with resource structures, lock structures, and an enqueue algorithm.
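The per-operation undo rules above (DELETE stores the whole row, UPDATE stores only the changed columns' old values, INSERT stores only the rowid) can be sketched as follows. This is an assumed simplification; a table is modeled as a dict of rowid to row.

```python
def make_undo(op, rowid, old_row=None, changed=None):
    """Build a toy undo record per the rules in the text."""
    if op == "DELETE":
        return {"op": op, "rowid": rowid, "row": dict(old_row)}  # whole row
    if op == "UPDATE":
        return {"op": op, "rowid": rowid,
                "old_values": {c: old_row[c] for c in changed}}  # changed cols
    if op == "INSERT":
        return {"op": op, "rowid": rowid}                        # rowid only

def apply_undo(table, undo):
    """Roll back one operation using its undo record."""
    if undo["op"] == "DELETE":
        table[undo["rowid"]] = undo["row"]        # re-insert the saved row
    elif undo["op"] == "UPDATE":
        table[undo["rowid"]].update(undo["old_values"])  # restore old values
    elif undo["op"] == "INSERT":
        del table[undo["rowid"]]                  # delete by recorded rowid

table = {"r1": {"a": 10, "b": "x"}}
undo = make_undo("UPDATE", "r1", old_row=table["r1"], changed=["a"])
table["r1"]["a"] = 20            # the uncommitted change
apply_undo(table, undo)          # rollback restores a to 10; b is untouched
```

Note how little each record stores: the UPDATE undo above never mentions column b, and an INSERT undo carries no row data at all, which is why undo volume depends heavily on the operation mix.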

A single SQL statement involves a process this complex. Add in the hardware interaction (keyboard and mouse input), operating system handling, network transmission, and so on, and a single click on a query button involves an extraordinarily complex chain of processing behind the scenes.

