Oracle material written by a senior engineer

Source: Internet
Author: User
Tags: lock, queue, dedicated server

I. Oracle connection methods

Starting from a user request, what is the complete working mechanism of Oracle? First, a user process sends a connection request. If a host name (or a net service name resolved by local naming) is used, the request is resolved through the DNS server or the hosts file and then passed to the Oracle listener process. After receiving the user request, the listener handles it in one of two ways; here we describe how dedicated server mode and shared server mode each work:

In dedicated server mode there are two ways to establish the connection. In the first, the listener receives the user process request, spawns a new dedicated server process, and hands all of the control information for the user connection over to that server process; in other words, the new server process inherits the connection from the listener. The server process then sends a resend packet to the user process to tell it to start talking to the server process directly; the user process sends a connect packet to the newly created server process, and the server process answers with an accept packet. At this point the user process is formally connected to the server process. We call this a hand-off connection, also known as a conversion connection.

In the second method, after receiving the user process request the listener spawns a new dedicated server process, and this server process selects a TCP/IP port for its interaction with the user process. It sends that information back to the listener, and the listener passes it on to the user process. The user process then sends a connect packet to the server process on that port, the server process replies with an accept packet, and the user process can start sending information to the server process. This is called a redirect connection. A hand-off connection requires the operating system platform to support passing a connection between processes; to enable hand-off support on Windows NT/2000, you must set USE_SHARED_SOCKET under HKEY_LOCAL_MACHINE > Software > Oracle > HOMEx.
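Whichever method the listener uses, you can verify from inside the database whether a session ended up on a dedicated server process. A minimal sketch, assuming you can query the V$SESSION dynamic performance view:

-- SERVER is DEDICATED for sessions served by a dedicated server process,
-- and SHARED or NONE for sessions going through shared server dispatchers.
SELECT sid, username, server, program
  FROM v$session
 WHERE type = 'USER';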

In shared server mode only redirect connections are available. After receiving the user process request, the listener spawns a new dispatcher process. The dispatcher selects a TCP/IP port for its interaction with the user process and sends that information back to the listener, which passes it on to the user process. The user process sends a connect packet to the dispatcher on that port, the dispatcher replies with an accept packet, and the user process can start sending information to the dispatcher. The MAX_DISPATCHERS parameter determines the maximum number of dispatcher processes. If that maximum has already been reached, or the existing dispatchers are not yet fully loaded, the listener does not create a new dispatcher but instead directs the user process to the TCP/IP port of one of the existing dispatchers. Each time a dispatcher takes on a user process, it registers with the listener so that the listener can balance the load across dispatchers. All user process requests are therefore queued by a limited number of dispatchers, and every dispatcher places the requests from its own queue, in order, into one common request queue, where they wait to be processed by multiple Oracle shared server processes (the number of shared server processes is set by the SHARED_SERVERS parameter). In other words, all dispatchers share the same request queue; in shared server mode there is only one request queue per instance. After a shared server process finishes a request taken from a dispatcher, it places the result into that dispatcher's response queue. That is, the number of response queues equals the number of dispatchers, and each dispatcher pulls results from its own response queue and returns them to its user processes.
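The number of dispatchers and shared server processes is controlled by initialization parameters, and the request and response queues described above are visible through dynamic performance views. A rough sketch, assuming DBA privileges; the dispatcher configuration string and counts are only illustrative:

-- Three TCP dispatchers, five shared server processes, and an upper bound on dispatchers.
ALTER SYSTEM SET dispatchers = '(PROTOCOL=TCP)(DISPATCHERS=3)';
ALTER SYSTEM SET shared_servers = 5;
ALTER SYSTEM SET max_dispatchers = 10;

-- One COMMON request queue per instance, plus one response queue per dispatcher.
SELECT type, queued, wait, totalq FROM v$queue;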

II. How an Oracle server process handles user process requests

We have finished describing the connection between the user process and Oracle. Next we will look at the Oracle server process itself. Suppose, for example, that a user process sends the following SQL statement:

update tablea set salary = salary * 2;

First, the server process converts the characters of this statement into their ASCII-equivalent numeric values, and these are passed to a hash function, which returns a hash value. The server process then checks the shared SQL and PL/SQL area of the shared pool to see whether the same hash value already exists. If it does, the server process executes the already parsed version of the statement cached in the shared pool. If it does not, the server process parses the statement: it first checks the syntax of the statement, then resolves the tables, indexes, views, and other objects involved, checking their names and structures against the data dictionary. Based on the optimizer mode chosen by Oracle, the statistics for the corresponding objects in the data dictionary, and whether stored outlines are in use, it generates an execution plan or selects one from a stored outline. It then uses the data dictionary to check the user's privileges on the objects involved, and finally generates the compiled code. Oracle caches the actual text of the statement, its hash value, the compiled code, any statistics associated with the statement, and its execution plan in the shared SQL and PL/SQL area of the shared pool. The server process uses shared pool latches to claim the shared SQL areas that will cache this content; blocks in the shared SQL area protected by shared pool latches cannot be overwritten, because those blocks may be in use by other processes. The library cache is used during the SQL parsing phase: when the structures of tables and views are checked against the data dictionary, the dictionary data must be read from disk into the library cache, so before reading it you must acquire library cache latches to request caching of the data dictionary.
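Shared SQL reuse can be observed directly: once a statement has been parsed, later executions with the same text reuse the cached version instead of being parsed again. A minimal sketch, assuming access to V$SQLAREA; the text filter simply matches the example statement above:

-- One row per shared SQL area entry: the statement text, its hash value,
-- and how many parse calls versus executions it has accumulated.
SELECT sql_text, hash_value, parse_calls, executions
  FROM v$sqlarea
 WHERE UPPER(sql_text) LIKE 'UPDATE TABLEA%';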

After the compiled code has been generated, the server process is ready to start updating the data. It goes to the DB buffer cache to check whether the data blocks of the related objects are already cached. There are two possible cases:

If they are not cached, the server process requests row locks in the table header. If the locks are granted, the server process reads the data blocks containing those rows from the data files into free buffers in the DB buffer cache, or overwrites non-dirty buffers that have aged off the LRU list, and places them at the head of the LRU list. If the non-dirty buffers cannot satisfy the demand for new data, the DBWn process is triggered immediately to write the buffers pointed to by the dirty list out to the data files and clean those buffers, freeing space for the newly read data. In other words, before data can be placed in the DB buffer cache, the DB buffer latches must first be acquired, and only then is the data written into the buffers.

The server process then writes, one entry at a time, into the redo log buffer: the rowid of each row affected by the statement, the original value to be updated, the new value, and the SCN. Before writing to the redo log buffer it must likewise acquire the redo log buffer latches; once they are held, the writing begins. When the written content reaches one third of the redo log buffer, or the amount written reaches about 1 MB, or more than three seconds have passed, or a checkpoint occurs, or DBWn is about to write, LGWR writes the contents of the redo log buffer to the redo log files on disk. Redo log buffer blocks that have been written to the redo log files are freed and can be overwritten by later entries; the redo log buffer works in a circular fashion. When one redo log file fills up, LGWR switches to the next redo log file; in archive mode, the archiver process also copies the just-filled redo log file to an archived log file. The redo log files likewise work in a circular fashion.

After writing the redo log buffer, the server process rewrites the transaction list in the header of the DB buffer block and writes the SCN into it, then copies the data, including the block's transaction list and SCN information, into the rollback segment; we call this copy in the rollback segment the "before image" of the data block. (Rollback segments can be stored in a dedicated rollback tablespace, consisting of one or more physical data files and used only for rollback segments, or they can be created in data files of other tablespaces.) The server process then modifies the data in this DB buffer block and writes the address of the corresponding rollback segment entry into its header. If a row is updated several times without a commit, there will be several before images in the rollback segment; apart from the first before image, the header of every other before image also contains the SCN and the rollback segment address of the previous before image. Each update operation corresponds to exactly one SCN. The server process then adds a pointer to this buffer block to the dirty list. Next, the server process reads the second block from the data file and repeats the cycle: read, write redo, create the rollback entry, modify the block, and place it on the dirty list. When the dirty list reaches a certain length, the DBWn process writes all the buffers on the dirty list to the data files and releases the latches held on those DB buffer blocks. In fact, Oracle can read several blocks from a data file into the DB buffer cache in a single read; the DB_FILE_MULTIBLOCK_READ_COUNT parameter sets how many blocks are read at a time.

If the data to be accessed is already cached, the action depends on the type of SQL operation. For a select, the server process checks whether the header of the DB buffer block contains transaction information. If it does, the data is read from the rollback segment. If it does not, the SCN of the select is compared with the SCN in the DB buffer block header: if the block's SCN is greater, the data is still read from the rollback segment; if it is smaller, the buffer is considered a usable, non-dirty cache and can be read directly from this DB buffer block. For an update, even if a non-dirty, transaction-free cached block whose SCN is smaller than its own is found, the server process still has to request a lock on the record in the table header; if the lock is granted, the subsequent actions are carried out, and if not, the actions can only proceed after the earlier process releases its lock.
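Both of the points above can be inspected from the parameter settings and the buffer cache views. A rough sketch, assuming DBA privileges; the table name is the illustrative tablea from the earlier example:

-- How many blocks one read from a data file may bring into the DB buffer cache.
SHOW PARAMETER db_file_multiblock_read_count

-- Buffer states for the example table's blocks: 'xcur' is the current version,
-- 'cr' is a consistent-read copy built with the help of rollback (undo) data.
SELECT status, dirty, COUNT(*) AS buffers
  FROM v$bh
 WHERE objd = (SELECT data_object_id FROM user_objects WHERE object_name = 'TABLEA')
 GROUP BY status, dirty;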

Only when the last block of all the rows affected by the SQL statement has been read into the DB buffer cache, and the corresponding redo has been written to the redo log buffer (only the redo log buffer, not yet the redo log file), can the user issue a commit. The commit triggers LGWR, but it does not force DBWn to write immediately or to release the latches on the corresponding DB buffer blocks at once; that is, there can be a period after the commit during which DBWn is still writing out the data blocks involved in the statement. The row locks in the table header are not released the instant the commit is issued either; in fact, they are released only when the corresponding DBWn work finishes. So there is a chance that a user requesting locks on resources already committed by another user will fail, although the period from the commit to the end of the DBWn work is very short. If a power failure happens during that period, then because the commit has already triggered LGWR, the changes that had not yet reached the data files will be rolled forward by the SMON process, based on the redo log files, after the instance restarts. If a power failure happens when no commit has been issued, then because LGWR is triggered before DBWn, every modification DBWn makes to the data files has already been recorded in the redo log files first; after the instance restarts, the SMON process rolls those uncommitted changes back, based on the redo log files.
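The amount of redo generated and the number of times LGWR has flushed it can be read from the system statistics. A minimal sketch, assuming access to V$SYSSTAT; the statistic names are standard ones:

-- Total redo generated, LGWR write count, and the number of user commits so far.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('redo size', 'redo writes', 'user commits');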

If a rollback is issued instead, the server process uses the transaction list and SCN in the block header (in the data file block and in the DB buffer block) and the rollback segment address to find the corresponding before image in the rollback segment, and uses those original values to undo the changes that have been made but not committed in the data files. If there are several before images, the server process follows the rollback segment address recorded in the header of one before image to reach the previous one, until it finds the earliest before image of that transaction. Once a commit has been issued, the user can no longer roll back; this guarantees that the follow-up work DBWn had not yet finished at the time of the commit will still be completed.
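While a transaction is still open, its use of the rollback segment can be seen in V$TRANSACTION. A minimal sketch, assuming an uncommitted transaction exists in the current instance:

-- One row per active transaction: the rollback (undo) segment number it uses,
-- its status, and how many undo blocks and undo records it has consumed.
SELECT xidusn, status, used_ublk, used_urec
  FROM v$transaction;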

The following describes the purpose of the checkpoint. When a checkpoint occurs, LGWR first writes everything in the redo log buffer, including uncommitted redo information, into the redo log files; then DBWn writes the committed dirty buffers in the DB buffer cache to the data files (uncommitted changes are not forced out). The SCNs in the control file and in the data file headers are then updated to indicate that the database is consistent at this point. If the power fails before a checkpoint occurs while uncommitted changes are in progress, then after the instance restarts the SMON process starts from the last checkpoint and works through the redo log files, which record both committed and uncommitted changes. Because LGWR is triggered before DBWn, DBWn's modifications to the data files are always recorded in the redo log files first, so the changes that DBWn had written to the data files before the power failure can be undone using the records in the redo log files; this is called rolling back. If, at the moment of the power failure, there was a committed change whose DBWn write had not yet fully completed, then because the commit triggered LGWR, the rows affected by the statement and the results are certain to be in the redo log files regardless of whether DBWn finished; after the instance restarts, SMON rolls forward based on the redo log files.

It follows that the recovery time after an instance failure is determined by the interval between two checkpoints. We can influence checkpoint frequency through four parameters: LOG_CHECKPOINT_INTERVAL sets the number of operating-system (physical) blocks of redo written to the redo log files between two checkpoints; LOG_CHECKPOINT_TIMEOUT sets the maximum length of time between two checkpoints; FAST_START_IO_TARGET sets the number of blocks to be processed during recovery; and FAST_START_MTTR_TARGET directly sets the target recovery time.

The roll forward and roll back performed by SMON are different from a user rollback: SMON rolls forward or back based on the redo log files, whereas a user rollback must be carried out based on the contents of the rollback segment. On the subject of what is stored in the rollback segment: for a delete, the rollback segment records the data of the entire row; for an update, it records only the before image of the changed columns (columns that were not modified are not recorded); for an insert, it records only the rowid of the inserted row. Accordingly, if the transaction commits, the rollback segment simply marks the transaction as committed; if it rolls back, then for a delete the data in the rollback segment is written back into the data block, for an update the before-change data is written back, and for an insert the row is deleted according to its rowid.
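The four checkpoint-related parameters can be inspected and, where appropriate, adjusted with ordinary ALTER SYSTEM statements. A rough sketch, assuming DBA privileges; the target value is only illustrative:

-- Aim for roughly 60 seconds of instance recovery time; Oracle then paces
-- incremental checkpoints (and hence DBWn writes) to keep to that target.
ALTER SYSTEM SET fast_start_mttr_target = 60;

-- Look at the other checkpoint controls mentioned above.
SHOW PARAMETER log_checkpoint_interval
SHOW PARAMETER log_checkpoint_timeout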

The following describes how writable free data blocks are found before data is written. Free data blocks in Oracle can be tracked either through a free list or through a bitmap; both live in the segment header and identify which data blocks in the current segment can accept inserts. In a locally managed tablespace, Oracle automatically manages the size of the extents allocated to a segment, and the extent allocation information is stored in the headers of the data files that make up the tablespace. In a dictionary-managed tablespace, the user can set the extent size at creation time, and the extent allocation information is stored in the data dictionary. Automatic segment space management is possible only in locally managed tablespaces; in such tablespaces the information about free blocks within a segment is kept in the segment header and managed with a bitmap. With manual segment space management, whether the tablespace is locally managed or dictionary managed, free blocks within a segment are managed with a free list located in the segment header.

The free list works as follows: at first, an empty data block sits on the free list; when the free space in the block falls below the value set by PCTFREE, the block is removed from the free list; when the used space in the block later falls below the value set by PCTUSED, the block is put back on the free list. Blocks on the free list are the blocks into which rows can be inserted; a block that has been removed from the free list can still be updated as long as it has space left. If an updated row no longer fits entirely in its current block, the whole row is migrated to a new data block and a pointer to the new block is left in the original location; this is called row migration. If an inserted row is too large to fit entirely in one block, the row spills over into two or more blocks; this is called row chaining. If the user's action is an insert, the server process first locks the free list, finds the address of a free block, and then releases the free list; when several server processes try to lock the free list at the same time, free list contention occurs. You can specify the number of free lists when creating a table in a tablespace that does not use automatic segment space management; the default is 1. If the table is created in a tablespace that does use automatic segment space management, any FREELISTS setting is ignored, because a bitmap rather than a free list is used to manage the free space in the segment. If the user's action is an update, neither the free list nor the bitmap is used, because no free block has to be located; the lock queue is used instead.
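The two space-management styles described above correspond directly to options on CREATE TABLESPACE and CREATE TABLE. A rough sketch; the file path, sizes, and names are only illustrative, and ts_manual is assumed to be an existing tablespace with manual segment space management:

-- Locally managed tablespace whose segments are bitmap-managed (automatic segment space management).
CREATE TABLESPACE ts_assm
  DATAFILE '/u01/oradata/orcl/ts_assm01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

-- In a manually managed tablespace, PCTFREE/PCTUSED and FREELISTS govern the free list;
-- if this table were created in ts_assm instead, the FREELISTS clause would be ignored.
CREATE TABLE test_manual (id NUMBER(10))
  PCTFREE 10 PCTUSED 40
  STORAGE (FREELISTS 2)
  TABLESPACE ts_manual;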

III. Oracle lock mechanism

The Oracle lock mechanism is divided into two kinds: latches and locks. Latches protect access to memory structures; for example, the latches requested on DB buffer blocks are released only after DBWn has finished with those blocks, and only then can the buffers be used for other requests. Latches cannot be shared between processes; a latch request either succeeds or fails, and there is no queue of latch requests. The main latches include the shared pool latch, library cache latches, cache buffers LRU chain latches, cache buffers chains latches, redo allocation latches, and redo copy latches. Oracle locks, on the other hand, protect access to data, and their restrictions are looser than those of latches; for example, several users modifying different rows of the same table can share a lock on that table. Lock requests can be queued and are granted in the order in which they were made; this queuing mechanism is called an enqueue. If two server processes try to lock the same row of the same table, both enter the lock request queue: the first lock succeeds, and the later process waits until the earlier one releases its lock. This is called lock contention. Once a lock has been acquired, it is held until the user issues a commit or rollback. If two users each lock their own row and then request the row the other has locked, they would wait for each other indefinitely; this is a deadlock, and it arises from lock contention. When a deadlock occurs, Oracle automatically releases the lock of one of the users and rolls back that user's change. In normal cases, which process's change is finally stored is determined by the SCN. When the server processes of two users request locks on several rows of the same table, they can enter the lock request queue in an interleaved manner and wait only when contention actually arises. The MAXTRANS parameter specified when a table is created determines how many transactions can lock a single data block of the table at the same time.
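Row-level locking and the enqueue wait can be demonstrated with two ordinary sessions against the test table used in the examples below. A minimal sketch; the NOWAIT clause makes the contention visible immediately instead of queueing:

-- Session 1: take a row lock (a TX enqueue) that is held until commit or rollback.
UPDATE test SET id = id WHERE id = 1;

-- Session 2: either queue behind session 1, or fail at once with ORA-00054 because of NOWAIT.
SELECT id FROM test WHERE id = 1 FOR UPDATE NOWAIT;

-- Either session: list the TX (transaction) and TM (table) enqueues held or requested.
SELECT sid, type, lmode, request, block
  FROM v$lock
 WHERE type IN ('TX', 'TM');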

The following are examples of rollback segments and deadlocks:

Table: test (id NUMBER(10)), containing 1,000,000 records

1. Large select and small update
Session A ---- select * from test; ---- assume SCN = 101 ---- execution time 09:10:11
Session B ---- update test set id = 9999999 where id = 1000000; ---- assume SCN = 102 ---- execution time 09:10:12

We will find that session B completes before session A. The id = 1000000 displayed by session A is read from the rollback segment, because when session A reads the block containing id = 1000000 it finds transaction information on the block, so it must read from the rollback segment. If the update had already been committed before the select read this block, then when the select reads the block it finds no transaction information on it, but it does find that the block's SCN is greater than the select's own SCN, so it still reads from the rollback segment. Therefore, whether data is read from the rollback segment depends both on whether transaction information exists on the block and on comparing the SCNs. If session B updates the same record several times and commits before session A finishes, multiple "before images" are recorded in the rollback segment; each before image contains not only the original block data and SCN but also the rollback segment address of the previous before image. So when session A queries the updated block, it follows the rollback segment address recorded in the block to the before image in the rollback segment. If it finds that this before image's SCN is greater than its own, it follows the rollback segment address recorded in that before image to the previous before image and compares SCNs again, repeating the process until it finds a before image whose SCN is smaller than its own, which it then reads. If no such before image can be found, an ORA-01555 "snapshot too old" error is raised.
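How long those before images remain available depends on how much undo history is kept. A rough sketch, assuming the instance uses automatic undo management; the value is only illustrative:

-- Ask Oracle to retain committed undo (before images) for at least 15 minutes,
-- which reduces the chance of ORA-01555 for long-running queries.
ALTER SYSTEM SET undo_retention = 900;

SHOW PARAMETER undo_retention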

2. Large update and small select

Session A ---- update test set id = 1; ---- assume SCN = 101 ---- execution time 09:10:11
Session B ---- select * from test where id = 1000000; ---- assume SCN = 102 ---- execution time 09:10:12

We will find that session B completes before session A. The id = 1000000 displayed by session B is read directly from the block: when session B reads the block containing id = 1000000, session A has not yet managed to lock it, so session B finds no transaction information on the block, nor does it find that the block's SCN is greater than the select's own SCN, so it reads directly from the block. If the select were issued only after the update had locked this block, then when session B read the block it would find transaction information on it, and it would read from the rollback segment instead.

3. Large update and small update

Session A ---- update test set id = 1; ---- assume SCN = 101 ---- execution time 09:10:11
Session B, statement 1 ---- update test set id = 999999 where id = 1000000; ---- assume SCN = 102 ---- execution time 09:10:12
Session B, statement 2 ---- select * from test where id = 2; ---- assume SCN = 103 ---- execution time 09:10:14
Session B, statement 3 ---- update test set id = 3 where id = 2; ---- assume SCN = 104 ---- execution time 09:10:15

We will find that statement 1 of session B completes while session A keeps waiting, because session B locks the block containing id = 1000000 before session A reaches it and rewrites the transaction information in the block header; when session A tries to lock this block and finds transaction information on it, it must wait until session B ends its transaction before it can take the lock. The id = 2 returned by statement 2 of session B is read from the rollback segment rather than from the block, because session A has already locked the block containing id = 2 and written its before image into the rollback segment. This is proved by statement 3 of session B: after it is issued, session B receives a deadlock message, because session A is waiting for session B to release the block containing id = 1000000 while session B is now waiting for session A to release the block containing id = 2, so a deadlock is formed. This proves that the block containing id = 2 had indeed been locked by session A. Session A also receives the deadlock information.
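For completeness, the same kind of deadlock can be reproduced with a compact two-session script. A rough sketch; the interleaving below is only one possible ordering, and Oracle will raise ORA-00060 in one of the two sessions:

-- Step 1, session A: lock the row with id = 2.
UPDATE test SET id = 2 WHERE id = 2;

-- Step 2, session B: lock the row with id = 1000000.
UPDATE test SET id = 999999 WHERE id = 1000000;

-- Step 3, session A: now wait for session B's row.
UPDATE test SET id = 0 WHERE id = 1000000;

-- Step 4, session B: wait for session A's row; the deadlock is detected
-- and ORA-00060 is raised in one of the sessions.
UPDATE test SET id = 3 WHERE id = 2;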
