How SQL statements are executed

Source: Internet
Author: User

An explanation of the SQL statement execution process (2014-07-17 01:15:43)

Category: Oracle

How is a SQL or PL/SQL statement actually executed?

Part One: The principles of SQL statement execution
Step 1: The client sends the statement to the server

When we execute a SELECT statement on the client, the client sends the SQL statement to the server so that a server process can handle it. In other words, the Oracle client does very little itself; its main task is to send the SQL statements generated on the client to the server. Although there is also a database process on the client, its role is different from that of the process on the server: only the server process actually processes the SQL statement. One point needs clarifying: client processes correspond one-to-one with server processes. That is, after the client connects to the server, a pair of processes is formed; the one on the client is called the client process, and the one on the server is called the server process.
Step 2: Statement parsing

After the client delivers the SQL statement to the server, the server process parses the statement. Like execution, this parsing work is done on the server side. Although parsing is nominally a single action, it involves a good deal of work behind the scenes.

1. Query the cache (library cache). When the server process receives the SQL statement sent by the client, it does not go straight to the database to run the query. Instead, it first searches the library cache for an execution plan belonging to an identical statement. If one is found there, the server process executes the SQL statement using that cached plan directly, skipping the subsequent work. Using this cache therefore improves the efficiency of SQL statements: for one thing, reading data from memory is far more efficient than reading it from data files on disk; for another, repeated statement parsing is avoided.

One caveat: this server-side cache is not the same as the data cache maintained by some client software. To improve query performance, some applications set up a data cache on the client side. Such a cache can raise the application's query efficiency, but if someone else modifies the relevant data on the server, the application's cache can prevent the modified data from being reflected on the client in a timely manner. As this shows, the application's data cache and the database server's cache are unrelated.
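The library cache lookup described above can be sketched as a tiny plan cache keyed by a hash of the statement's exact text: identical text reuses the cached plan (a soft parse), while any textual difference forces a fresh parse (a hard parse). This is a minimal sketch; the cache dictionary and the "plan" strings are invented for illustration, not Oracle internals.

```python
import hashlib

plan_cache = {}   # toy library cache: hash of SQL text -> "plan"
hard_parses = 0   # how many statements had to be parsed from scratch

def execute(sql):
    """Look up the statement by the hash of its exact text; parse on a miss."""
    global hard_parses
    key = hashlib.sha256(sql.encode()).hexdigest()
    if key not in plan_cache:              # miss: full parse + optimize
        hard_parses += 1
        plan_cache[key] = f"plan for: {sql}"
    return plan_cache[key]                 # hit: reuse the cached plan

execute("SELECT * FROM emp WHERE deptno = 10")
execute("SELECT * FROM emp WHERE deptno = 10")   # identical text: soft parse
execute("select * from emp where deptno = 10")   # different text: hard parse
print(hard_parses)  # 2
```

Note that even a case difference produces a different hash, which is why consistent SQL formatting matters for cache hit rates.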
2. Check the validity of the statement (syntax check). When no matching SQL statement is found in the cache, the server process begins to check the validity of the statement. This mainly means checking the syntax of the SQL statement to see whether it conforms to the grammar rules. If the server process decides that the statement does not conform to the grammar, it feeds an error message back to the client. During this grammar check, the table names, column names, and so on contained in the SQL statement are not validated; it is purely a check of the grammar.
3. Check the meaning of the statement (data dictionary cache). If the SQL statement conforms to the syntax definition, the server process next checks the fields, tables, and other objects in the statement to see whether they exist in the database. If a table name or column name is not correct, the database feeds an error message back to the client. This is why, when we write a SELECT statement that has both a syntax error and a wrong table or column name, the system first reports the syntax error; only once the syntax is correct does it report that the column name or table name is wrong.
4. Acquire an object parse lock (control structure). When the syntax and semantics are correct, the system places a parse lock on the objects to be queried. This is primarily to ensure data consistency, preventing other users from changing the structure of these objects while our query is in progress.
5. Check data access permissions (dictionary cache). Even when the syntax and semantics pass their checks, the client cannot necessarily obtain the data. The server process also checks whether the connected user has permission to access this data. If the connected user has no access rights, the client cannot obtain the data. Sometimes, after painstakingly writing and debugging a SQL statement, the system finally returns an "insufficient privileges" error, which is infuriating. This can come up while developing and debugging front-end applications, so pay attention to the issue: the database server process checks syntax and semantics before it checks access permissions.
6. Determine the best execution plan. When neither the syntax nor the permissions present a problem, the server process still does not query the database files directly. It first optimizes the statement according to certain rules. Note, however, that this optimization is limited. SQL statements generally still need tuning during application development, and the effect of that tuning can be greater than the server process's self-optimization; so in application development, database tuning remains indispensable. After the server process's optimizer determines the best execution plan for the query, the SQL statement and its execution plan are saved to the library cache. Later, when the same query statement is issued again, the syntax, semantic, and permission check steps are omitted and the SQL statement is executed directly, improving processing efficiency.
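The ordering of the checks above (syntax, then semantics, then permissions) can be sketched as a pipeline that stops at the first failing stage, which explains why a wrong column name is only reported once the syntax is fixed, and a privilege error only after both. The tiny grammar, catalog, and grant table below are invented for illustration.

```python
# Hypothetical one-table "database" for the sketch
CATALOG = {"emp": {"empno", "ename"}}   # known tables and their columns
GRANTS = {("alice", "emp")}             # (user, table) pairs with access

def parse(user, sql):
    """Run the checks in the order described: syntax -> semantics -> privilege."""
    words = sql.lower().split()
    # 1. syntax: only the shape SELECT <col> FROM <table> is accepted here
    if len(words) != 4 or words[0] != "select" or words[2] != "from":
        return "syntax error"
    col, table = words[1], words[3]
    # 2. semantics: do the table and column exist in the data dictionary?
    if table not in CATALOG or col not in CATALOG[table]:
        return "invalid identifier"
    # 3. privilege: may this user read the table?
    if (user, table) not in GRANTS:
        return "insufficient privileges"
    return "ok"
```

A statement with both a typo in SELECT and a bad column name reports only the syntax error on the first pass, matching the behavior the article describes.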
Step 3: Statement execution

Statement parsing only determines the meaning of the SQL statement, ensuring that the server knows exactly what the statement is asking for. Once statement parsing is complete, the database server process actually executes the statement. Execution divides into two cases:

If the data blocks containing the selected rows have already been read into the buffer cache, the server process passes the data directly to the client instead of querying it from the data files.

If the data is not in the buffer cache, the server process queries the data from the data files and places it into the buffer cache.
Step 4: Fetching the data

When statement execution completes, the queried data is still held in the server process and has not yet been sent to the client's user process. So the server-side process contains a section of code dedicated to fetching data; its role is to return the queried results to the user-side process, which completes the entire query. From this whole query-processing flow, there are a few points to note in database or application development:

First, understand that the database cache is not the same as an application cache. The database cache exists only on the database server side; it does not exist on the client. Only in this way can the contents of the database cache be kept consistent with the contents of the data files, and the relevant rules be applied to prevent dirty reads and wrong reads. The data caches involved in applications have nothing to do with the database cache. The data cache of application software can improve the efficiency of data queries, but it breaks the requirement of data consistency, sometimes producing dirty reads, wrong reads, and so on. For this reason, applications sometimes include a special feature for purging the data cache when necessary. But such a purge only clears the data cache on that computer, or only that application's data cache; the database's cache is not purged.
Second, the vast majority of SQL statements are handled according to this flow. For DBAs and Oracle-based developers, understanding how these statements are processed is very helpful for developing and debugging the SQL statements involved; at times, mastering these processing principles can reduce the time we spend troubleshooting. In particular, note that the database checks data query permissions after checking the syntax and semantics. So the database's permission-control model may not always meet an application's permission-control needs; in those cases, permission management must be implemented in the application's front end. Moreover, managing permissions in the database can seem cumbersome and increases the server's processing workload. Therefore, for query-permission control over records, fields, and the like, most people prefer to implement it in the application rather than in the database.
DBCC DROPCLEANBUFFERS
Removes all clean buffers from the buffer pool.
DBCC FREEPROCCACHE
Removes all elements from the plan cache.
DBCC FREESYSTEMCACHE
Removes all unused cache entries from all caches.
(These DBCC commands are SQL Server's equivalents for clearing the corresponding caches.)
The logical execution order of functions, keywords, sorting, and so on in a SQL statement:
1. The FROM clause returns the initial result set.
2. The WHERE clause excludes rows that do not meet the search criteria.
3. The GROUP BY clause collects the selected rows into groups, one for each unique value of the GROUP BY expressions.
4. The aggregate functions specified in the select list are computed, producing summary values for each group.
5. The HAVING clause then excludes groups that do not meet its search criteria.
6. All remaining expressions are computed.
7. ORDER BY sorts the result set.
8. The columns in the select list are located and returned.
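The eight steps above can be acted out on plain Python data: this toy query computes SUM(amount) per region with a WHERE, HAVING, and ORDER BY applied in the listed order. The table and column names (orders, region, amount) are made up for illustration.

```python
from itertools import groupby

# FROM: the initial result set
orders = [
    {"region": "east", "amount": 100},
    {"region": "west", "amount": 40},
    {"region": "east", "amount": 60},
    {"region": "west", "amount": 10},
]

# WHERE: exclude rows that do not meet the predicate (amount >= 20)
rows = [r for r in orders if r["amount"] >= 20]

# GROUP BY region: collect rows into groups of unique key values
rows.sort(key=lambda r: r["region"])
groups = {k: list(g) for k, g in groupby(rows, key=lambda r: r["region"])}

# Aggregate: SUM(amount) per group
summary = {region: sum(r["amount"] for r in g) for region, g in groups.items()}

# HAVING: exclude groups whose aggregate fails the predicate (SUM(amount) > 50)
summary = {region: total for region, total in summary.items() if total > 50}

# ORDER BY total descending, then return the select list
result = sorted(summary.items(), key=lambda kv: kv[1], reverse=True)
print(result)  # [('east', 160)] -- west sums to 40 and is dropped by HAVING
```

The west group survives the WHERE (one row of 40) but is eliminated by HAVING, which is exactly the WHERE-before-HAVING distinction the list describes.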
Part Two: The complete flow of SQL statement execution:

1. The user process submits a SQL statement, for example:
Update temp Set a=a*2
to the server process.

2. After the server process receives the information from the user process, it allocates the required memory in the PGA and stores the relevant information there, such as the session's login information.
3. The server process converts the characters of the SQL statement into their ASCII-equivalent numeric codes, passes those codes to a hash function, and obtains a hash value. The server process then goes to the library cache in the shared pool to look for an entry with the same hash value. If one exists, the server process executes using the already-parsed version of this statement cached in the SHARED POOL's library cache.
4. If no such entry exists, the server process performs syntax analysis of the SQL in the UGA: it first checks the correctness of the syntax, then parses the tables, indexes, views, and other objects involved in the statement, checking the names and related structures of those objects against the data dictionary. Depending on the optimization mode ORACLE has chosen, whether the data dictionary holds statistics for the corresponding objects, and whether a stored outline is used, it either generates an execution plan or selects one from the stored outline. It then uses the data dictionary to check the user's execution privileges on the objects, and finally generates the compiled code.
5. ORACLE caches the actual text of the SQL statement, its hash value, the compiled code, any statistics associated with it, and the statement's execution plan in the library cache of the SHARED POOL. The server process requests, through the shared pool latch, which shared PL/SQL areas can be cached; blocks in the PL/SQL area locked by the shared pool latch cannot be overwritten, because those blocks may be in use by other processes.
6. The LIBRARY CACHE is used during the SQL analysis phase. When the structures of tables, views, and so on are checked against the data dictionary, the dictionary must be read from disk into the LIBRARY CACHE; so before reading, the library cache latches (library cache pin, library cache lock) are used to request caching of the data dictionary. At this point, the SQL statement has been compiled into executable code, but the server does not yet know which data to operate on, so the server process also prepares the preprocessing data for this SQL.
7. First, the server process determines whether the required data exists in the DB buffer cache. If it is present and usable, the data is obtained directly and its touch count is increased according to the LRU algorithm. If the required data is not present in the buffer, the server process first requests a TM lock on the table header in the data file (which ensures that other users cannot modify the table's structure in the meantime). If the TM lock is successfully acquired, it then requests the needed row-level locks (TX locks). If the TM and TX locks are both acquired, it starts reading data from the data file. Before reading the data, buffer space to receive it must first be prepared: the server process scans the LRU list looking for free DB buffers, and during the scan it registers all modified DB buffers it finds on the dirty list. When those dirty buffers meet the DBWR trigger conditions, they are written to the data files. If enough free buffers are found, the data blocks holding the requested rows are placed into free areas of the DB buffer cache, or overwrite non-dirty buffers that have been squeezed out of the LRU list, and are arranged at the head of the LRU list. Before a data block is placed into a DB buffer, the latch on the DB buffer must also be requested; only after the latch is acquired can the data be read into the DB buffer.
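Steps 7 and 11 can be sketched as a buffer cache with an LRU order and a dirty list: dirty buffers are protected from eviction until a DBWR-style write-out. This is a minimal sketch; the capacity, block ids, and the disk dictionary are invented for illustration.

```python
from collections import OrderedDict

class BufferCache:
    """Toy DB buffer cache with an LRU list and a dirty list."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lru = OrderedDict()   # block_id -> data; last item = most recent
        self.dirty = set()         # block ids modified but not yet written out

    def get(self, block_id, read_from_disk):
        if block_id in self.lru:               # cache hit: bump in LRU order
            self.lru.move_to_end(block_id)
            return self.lru[block_id]
        # cache miss: evict only clean (non-dirty) blocks, oldest first
        while len(self.lru) >= self.capacity:
            victim = next(b for b in self.lru if b not in self.dirty)
            del self.lru[victim]
        self.lru[block_id] = read_from_disk(block_id)
        return self.lru[block_id]

    def modify(self, block_id, value):
        self.lru[block_id] = value
        self.dirty.add(block_id)               # register on the dirty list

    def checkpoint(self, write_to_disk):
        for block_id in sorted(self.dirty):    # DBWR writes dirty buffers out
            write_to_disk(block_id, self.lru[block_id])
        self.dirty.clear()
```

With capacity 2, reading a third block after dirtying block 1 evicts the clean block and leaves the dirty one in place until checkpoint() writes it out, mirroring the protection of dirty buffers described above.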
8. Logging. Now that the data blocks have been read into the DB buffer, the server process writes, from the PGA into the redo log buffer, the rowids of the rows affected by the statement and read into the DB buffer, the original values, the new values to be written, and the SCN. The latch on the redo log buffer must be requested before writing into it, and writing begins only after the latch is acquired. When writing reaches one-third of the redo log buffer's size, or the amount written reaches 1 MB, or more than three seconds have passed, or before a checkpoint or DBWR occurs, the LGWR process is triggered to write the data from the redo log buffer to the redo log file on disk (at this point a "log file sync" wait event occurs). The latches held on the parts of the redo log buffer already written to the redo file are released, and that space can be overwritten by subsequent writes; the redo log buffer is used cyclically. Redo files are also used in rotation: when one redo file is full, the LGWR process automatically switches to the next redo file (at this point a "log file switch (checkpoint incomplete)" wait event may occur). In archive mode, the contents of the filled redo file are additionally written to the archived log file (at this point a "log file switch (archiving needed)" wait event may occur).
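The LGWR trigger conditions listed above (buffer one-third full, 1 MB of pending redo, three seconds elapsed, or a commit) can be captured in one small predicate. The buffer size and the simulated clock are invented for illustration.

```python
BUFFER_SIZE = 6 * 1024 * 1024      # pretend redo log buffer: 6 MB
ONE_MB = 1024 * 1024

def should_flush(bytes_pending, seconds_since_flush, commit_requested):
    """Return True when any LGWR trigger condition is met."""
    return (
        bytes_pending >= BUFFER_SIZE // 3   # buffer one-third full
        or bytes_pending >= ONE_MB          # 1 MB of pending redo
        or seconds_since_flush > 3          # more than three seconds elapsed
        or commit_requested                 # a commit triggers LGWR
    )
```

Because any one condition suffices, a tiny commit flushes immediately even though the buffer is nearly empty, which is exactly why commit latency is tied to "log file sync" waits.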
9. Creating a rollback segment entry for the transaction. After the redo log buffer write completes, the server process begins rewriting the transaction list in the header of this DB buffer block and writes the SCN into it; it then copies the data containing this block's header transaction list and SCN information into the rollback segment. The information in the rollback segment is called the data block's "before image"; this "before image" is used for later rollback, recovery, and consistent reads. (Rollback segments can be stored in a dedicated rollback tablespace, which consists of one or more physical files and is used only for rollback; rollback segments can also be opened in the data files of other tablespaces.)

10. The preparation for this transaction's modification of the data block is now done, so the data content of the DB buffer block can be rewritten, and the address of the rollback segment is written into the block header.
11. Adding to the dirty list. If a row of data is updated multiple times without a commit, there will be multiple "before images" in the rollback segment. Besides the first "before image", which contains the SCN information, the header of every other "before image" holds the SCN information plus the rollback segment address of the preceding "before image". Each update corresponds to exactly one SCN. The server process then creates an entry in the dirty list pointing to this DB buffer block (so that the DBWR process can conveniently find the dirty list's DB buffer data blocks and write them to the data files). Afterwards, the server process continues reading the second data block from the data file, repeating the actions taken on the previous block: reading the block, logging, building the rollback segment entry, modifying the data block, and putting it on the dirty list. When the length of the dirty queue reaches a threshold (typically 25%), the server process notifies DBWR to write out the dirty data, which releases the latches on those DB buffers and frees up more free DB buffers. The description so far has had Oracle reading one block at a time; in fact Oracle can read multiple data blocks at once (Db_file_multiblock_read_count sets the number of blocks read in one I/O).
Description:

The preprocessed data is by now either already cached in the DB buffer or has just been read from the data file into the DB buffer; what happens next is determined by the type of the SQL statement.

1> If it is a SELECT statement, check whether there is a transaction in the header of the DB buffer block. If there is a transaction, the data is read from the rollback segment. If there is no transaction, the SELECT's SCN is compared with the SCN in the DB buffer block's header: if the former is smaller than the latter, the data is still read from the rollback segment; if the former is greater than the latter, this is a usable non-dirty cache block whose contents can be read directly.

2> If it is a DML operation, then even if a non-dirty cached data block with no transaction and an SCN smaller than its own is found in the DB buffer, the server process still has to request a lock on the record's header in the table. If the lock is acquired, the subsequent actions proceed; if not, it must wait for the earlier process to release its lock before it can continue (at this point the block generates a TX lock wait).
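The consistent-read decision in case 1> can be sketched as one small function: a SELECT reads the current block only when the block carries no open transaction and its SCN is not newer than the query's SCN; otherwise it must read the "before image" from the rollback segment. The field names are invented for illustration.

```python
def read_source(query_scn, block_scn, block_has_open_txn):
    """Return which copy of the block a SELECT should read."""
    if block_has_open_txn:
        return "rollback segment"   # uncommitted change: use the before image
    if query_scn < block_scn:
        return "rollback segment"   # block is newer than the query's snapshot
    return "db buffer"              # clean and old enough: read directly
```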
The user commits or rolls back. Up to this point, the data has been modified in the DB buffer or the data files, but whether it is written permanently is up to the user to decide: commit (save the changes to the data files) or rollback (undo the data changes).

1. The user executes the COMMIT command.

Only once the last block containing rows affected by the SQL statement has been read into the DB buffer and the redo information has been written into the redo log buffer (only into the log buffer, not yet into the log file) can the user issue a COMMIT. A COMMIT triggers the LGWR process, but it does not force DBWR to immediately release all the corresponding DB buffer blocks (no-force-at-commit: commits do not force writes). That means that even after a COMMIT, DBWR may for some time still be writing out the data blocks involved in the SQL statement. The row locks in the table headers are not released immediately at COMMIT, but only once the DBWR process completes, which is why one user may fail to obtain a resource that another user has already committed.
A. The interval between the end of the COMMIT and the end of the DBWR process is short. If a power failure occurs after the COMMIT but before DBWR finishes, then because the committed data already logically belongs to the data files but has not been fully written into them, a roll forward is needed. Since COMMIT has already triggered LGWR, after the instance restarts the SMON process rolls all the changes that had not yet reached the data files forward, based on the redo log files, completing the unfinished work of the earlier commit (that is, writing the changes into the data files).

B. If power is lost without a COMMIT, then because the data has already been changed in the DB buffer but there was no COMMIT, this portion of the data does not belong to the data files. Since logging must come first (LGWR is triggered before any DBWR write), whatever modifications DBWR made to the data files were recorded in the redo log files one step ahead. After the instance restarts, the SMON process rolls those changes back, based on the redo log files.

In fact, SMON's roll forward and rollback are done based on checkpoints. When a full checkpoint occurs, the LGWR process first writes all the buffers in the redo log buffer (including uncommitted redo information) to the redo log files, and then the DBWR process writes the DB buffer's committed buffers into the data files (uncommitted ones are not forced out). The SCNs in the headers of the control file and the data files are then updated to indicate that the database is currently consistent. Between two adjacent checkpoints there are many transactions, both committed and uncommitted.

A more complete statement of the roll forward and rollback above is the following description:

A. A power failure occurred before a checkpoint while uncommitted changes were in progress. After the instance restarts, the SMON process inspects the redo log files from the previous checkpoint onward, where both committed and uncommitted changes are recorded. Because LGWR is triggered before DBWR, the changes DBWR makes to the data files are always recorded in the redo log files first. Therefore, the uncommitted changes that DBWR had written into the data files before the power loss are restored from the records in the redo log files; this is called rollback.

B. A power failure occurred after a COMMIT, but before the DBWR action had fully completed the changes. Because the COMMIT was issued, and a COMMIT triggers the LGWR process, the rows the statement affects and the resulting values are already recorded in the redo log files regardless of whether DBWR finished. After the instance restarts, the SMON process rolls forward based on the redo log files.
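Cases A and B can be combined into one toy recovery routine: replay every redo record since the last checkpoint (roll forward), then undo the changes of transactions that never committed (rollback). The log record format (transaction, block, old value, new value) is invented for illustration.

```python
def recover(datafile, redo_log, committed_txns):
    """redo_log: list of (txn, block, old_value, new_value) records."""
    # Roll forward: reapply every logged change, committed or not
    for txn, block, old, new in redo_log:
        datafile[block] = new
    # Rollback: restore before images for uncommitted transactions,
    # newest change first
    for txn, block, old, new in reversed(redo_log):
        if txn not in committed_txns:
            datafile[block] = old
    return datafile
```

After recovery, only the committed transaction's change survives; the uncommitted one is rolled back even though it was replayed first.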
The time needed for recovery after an instance failure is determined by the size of the interval between two checkpoints. The frequency of checkpoint execution can be set through four parameters:

Log_checkpoint_interval:
Determines the number of redo log file blocks (redo blocks) written between two checkpoints. The default value is 0, meaning unlimited.

Log_checkpoint_timeout:
The length of time (in seconds) between two checkpoints. The default value is 1800 s.

Fast_start_io_target:
Determines the number of blocks that need to be processed during recovery. The default value is 0, meaning unlimited.

Fast_start_mttr_target:
Directly determines the length of time recovery may take. The default value is 0, meaning unlimited. (The roll forward and rollback performed by the SMON process differ from a user's rollback: SMON rolls forward or back based on the redo log files, while a user's rollback must be performed based on the contents of the rollback segment.)
Now for what is stored in the rollback segment. If the operation is a delete, the rollback segment records the entire row of data; if it is an update, the rollback segment records only the pre-change data of the fields that were modified (the before image), that is, unmodified fields are not recorded; if it is an insert, the rollback segment records only the rowid of the inserted record. Accordingly, if the transaction commits, the rollback segment simply marks the transaction as committed; if it rolls back, then: if the operation was a delete, the data in the rollback segment is written back into the data block; if it was an update, the pre-change data is restored; and if it was an insert, the record is deleted according to its rowid.
2. If the user issues a ROLLBACK.

The server process uses the transaction list and SCN in the headers of the blocks in the data files and the DB BUFFER, together with the rollback segment address in the block header, to find the corresponding pre-modification copies in the rollback segment, and uses those original values to restore the modified but uncommitted changes in the current data files. If there are multiple "before images", the server process follows the rollback segment address of the earlier "before image" recorded in the header of each "before image" until it finds the oldest "before image" of the same transaction. Once a COMMIT has been issued, the user can no longer roll back; this is what guarantees that the DBWR process's follow-up actions after a commit will not be undone. At this point, the example transaction has ended.
Description:

TM lock: part of the lock mechanism, used to protect an object's definition from being modified.

TX lock: this lock represents a transaction. It is a row-level lock that works together with the data block and certain fields in the data record's header; it likewise follows the lock mechanism, with a resource structure, a lock structure, and an enqueue algorithm.
