How Oracle Database Works: Connections, Server Processes, and Data Management

Tags: lock, queue, dedicated server

This article starts from questions raised by users: how does the complete working mechanism of Oracle Database operate? First, a user process issues a connection request. If the request uses a host name, or a local service name that resolves to a machine name rather than an IP address,

the name is resolved through the DNS server or the HOSTS file, and the request is then passed to the Oracle listener process. After receiving the user's request, the listener handles it in one of two ways. Here we describe the methods used in dedicated server mode and shared server mode respectively:

In dedicated server mode:

In the first method, the listener receives the user process's request, spawns a new dedicated server process, and hands all of the user process's control information over to that server process; in other words, the new server process inherits the connection from the listener. The server process then sends a RESEND packet to the user process, telling it to start sending directly; the user process sends a CONNECT packet to the newly created server process, and the server process replies with an ACCEPT packet. The user process is now formally connected to the server process.

We call this a HAND-OFF connection. In the second method, the listener spawns a new dedicated server process after receiving the user process's request. This server process selects a TCP/IP port for its interaction with the user process and sends that information back to the listener, which forwards it to the user process.

The user process uses that port to send a CONNECT packet to the server process, and the server process replies with an ACCEPT packet. The user process can now send its traffic directly to the server process.

This method is called a redirect connection. A HAND-OFF connection requires that the platform support process inheritance of the socket. To enable HAND-OFF support on Windows NT/2000, you must set USE_SHARED_SOCKET under HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\HOMEx.
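On Windows this is a string value under the Oracle home's registry key. A sketch only, assuming the relevant home key is HOME0:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\HOME0]
"USE_SHARED_SOCKET"="TRUE"
```

The listener and any dedicated server processes must be restarted for the setting to take effect.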

In shared server mode:

Only the redirect connection method is available. The listener spawns a new dispatcher process after receiving the user process's request; the dispatcher selects a TCP/IP port for its interaction with the user process and passes that information to the listener, which forwards it to the user process. The user process uses the port to send a CONNECT packet to the dispatcher, and the dispatcher replies with an ACCEPT packet. The user process can now officially send requests to the dispatcher.

The MAX_DISPATCHERS parameter sets the maximum number of dispatcher processes. If that maximum has already been reached, or the existing dispatchers are not yet fully loaded, the listener does not create a new dispatcher; instead it directs the user process to a TCP/IP port of one of the existing dispatchers.

Each time a dispatcher accepts a user process, it registers with the listener so that the listener can balance the load across dispatchers. User process requests queue up at the limited number of dispatchers, and every dispatcher places the requests from its own queue, in order, into one common request queue,

where they wait to be processed by the shared server processes (the SHARED_SERVERS parameter sets how many shared server processes there are). In other words, all dispatchers share a single request queue: an instance in shared server mode has exactly one request queue. After a shared server process finishes a user request, it places the result in the response queue of the dispatcher the request came from. There are as many response queues as dispatchers, and each dispatcher pulls results from its own response queue and returns them to its user processes.
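The parameters above can be set dynamically. A minimal sketch of a shared server configuration, with illustrative values:

```sql
-- Start 3 TCP dispatchers, allow up to 5, and run 20 shared servers.
-- Values are examples only; tune them to the connection load.
ALTER SYSTEM SET DISPATCHERS = '(PROTOCOL=TCP)(DISPATCHERS=3)';
ALTER SYSTEM SET MAX_DISPATCHERS = 5;
ALTER SYSTEM SET SHARED_SERVERS = 20;
```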

That completes the connection between the user and Oracle; next we turn to the Oracle server process. Suppose a user process sends the SQL statement UPDATE tablea SET salary = salary * 2;. First, the server process converts the statement's characters into their ASCII numeric equivalents and passes them to a hash function, which returns a hash value. The server process then checks the shared SQL area of the SHARED POOL for an entry with the same hash value.

If one exists, the server process executes the statement using the parsed version already cached in the shared pool. If not, the server process parses the statement: it first checks the statement's syntax; then it resolves the tables, indexes, views, and other objects the statement references, checking their names and structures against the data dictionary; then it generates an execution plan, or selects one from a stored outline, depending on the optimizer mode Oracle has chosen, the statistics for the objects in the data dictionary, and whether stored outlines are in use; then it uses the data dictionary to check the user's privileges on the objects involved; and finally it generates the compiled code.
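You can observe this cursor sharing from the V$SQL view (privileges on the view are assumed; the table name matches the hypothetical example above). Re-running a textually identical statement increments EXECUTIONS on the existing row instead of adding a new one:

```sql
-- One row per shared cursor: identical text hashes to the same value.
SELECT sql_text, hash_value, executions
FROM   v$sql
WHERE  sql_text LIKE 'UPDATE tablea%';
```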

Oracle caches the statement's actual text, its hash value, the compiled code, any statistics associated with the statement, and its execution plan in the shared SQL area of the SHARED POOL. The server process acquires shared pool latches to reserve the shared SQL areas that will cache this content; that is, areas pinned by shared pool latches cannot be overwritten, because other processes may be using them.

The library cache is used during the SQL parse phase. When the structures of tables and views are checked against the data dictionary, the dictionary data must be read from disk into the LIBRARY CACHE, so before reading, the process must acquire library cache latches to reserve space for the dictionary data. Once the compiled code has been generated, the server process is ready to start updating the data. It checks whether the db buffer cache already holds the relevant objects' blocks;

there are two possible cases:

If not, the server process requests row locks in the table header. If the locks are granted, the server process reads the data blocks containing the affected rows from the data files into free areas of the db buffer cache, or overwrites non-dirty buffers that have aged off the LRU list, and places the buffers at the head of the LRU list. If the available non-dirty buffers cannot hold the new data, the DBWn process is triggered immediately to write the buffers on the dirty list out to the data files and free them up for the newly read data. That is, before a block can be placed in the db buffer cache, the process must first acquire the relevant db buffer cache latches.

Once the latches are held, the buffers are written. The server process then records, one by one, the ROWID of each row affected by the statement, the original value being updated, the new value, and the SCN into the redo log buffer; before writing, it must acquire redo log buffer latches as well. LGWR writes the contents of the redo log buffer to the online redo log files on disk when the buffer becomes one-third full, when roughly 1 MB of redo has accumulated, every 3 seconds, when a checkpoint occurs, and before DBWn writes.

Blocks of the redo log buffer that have been written to the redo log file are freed and can be overwritten by later entries; the redo log buffer works cyclically. When an online redo log file fills up, LGWR switches to the next one; in archive mode, the archiver process also copies the just-filled redo log file to an archived log file. The online redo log files are likewise used cyclically. After the redo entries are written, the server process updates the transaction list in the header of the db buffer block, records the SCN, and copies the block's data, including its transaction list and SCN, into a rollback segment. We call the copy in the rollback segment the "before image" of the data block.
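The log groups and their cyclic reuse can be inspected, and a switch forced, from SQL. A sketch, assuming ALTER SYSTEM privileges:

```sql
-- Show each online redo log group's status (CURRENT, ACTIVE, INACTIVE)
-- and whether it has been archived yet.
SELECT group#, status, archived FROM v$log;

-- Force LGWR to switch to the next group in the cycle.
ALTER SYSTEM SWITCH LOGFILE;
```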

(Rollback segments can be stored in a dedicated rollback tablespace, made up of one or more physical files reserved for rollback data; they can also be created in data files of other tablespaces.) The server process then modifies the data in the db buffer block and writes the address of the corresponding rollback segment entry into the block header. If a row is updated several times without a COMMIT, the rollback segment holds several "before images": the first "before image" carries the SCN, and the header of each subsequent "before image" carries both its SCN and the rollback segment address of the previous "before image".

Each UPDATE operation corresponds to exactly one SCN. The server process then places a pointer to this buffer block on the dirty list. Next, the server process reads the next block from the data file and repeats the cycle: read, write the redo, create the rollback entry, modify the block, and place it on the dirty list. When the dirty list reaches a certain length, the DBWn process writes all the buffers on the dirty list to the data files and releases the latches held on those db buffer blocks. In fact, Oracle can read several blocks at a time from a data file into the db buffer cache; the DB_FILE_MULTIBLOCK_READ_COUNT parameter sets the number of blocks read per I/O.
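The parameter is dynamic; a sketch with an illustrative value (the effective ceiling also depends on the block size and the operating system's maximum I/O size):

```sql
-- Read up to 16 blocks per multiblock I/O during full scans.
ALTER SYSTEM SET DB_FILE_MULTIBLOCK_READ_COUNT = 16;
```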

If the data being sought is already cached, what happens next depends on the SQL operation. For a SELECT, the server process checks the header of the db buffer block for an active transaction. If there is one, it reads from the rollback segment. If not, it compares the SELECT's SCN with the SCN in the block header: if the block's SCN is the newer of the two, it still reads from the rollback segment; if older, the buffer is considered clean and can be read directly from the db buffer block. For an UPDATE, even if a clean, transaction-free buffer with an older SCN is found, the server process must still request a lock on the record in the table header; if the lock is granted, it proceeds, and otherwise it can act only after the holding process releases the lock.

Only when the last block of every row affected by the SQL statement has been read into the db buffer cache and its redo written into the REDO LOG BUFFER (the redo log buffer only, not necessarily the redo log file) can the user issue a COMMIT. A COMMIT triggers LGWR, but it does not force DBWn to write immediately or to release the locks on all the corresponding db buffer blocks at once. That is, after a COMMIT has occurred, DBWn may still be writing the data blocks involved in the statement for some time, and the row locks in the table header are not released the instant the COMMIT is issued; in fact, they are released only when the corresponding DBWn writes complete.

So a request to lock a resource that another user has already committed can still fail, although the window between COMMIT and the end of the DBWn write is very short. If a power failure occurs in that window, then because the COMMIT already triggered the LGWR process, the changes not yet written to the data files will be rolled forward by the SMON process, based on the redo log files, after the instance restarts. If a power failure occurs with no COMMIT issued, then because LGWR is triggered before DBWn, every modification DBWn made to the data files was first recorded in the redo log files; after the instance restarts, the SMON process rolls those changes back based on the redo log files.

If the user issues a ROLLBACK, the server process uses the transaction list and SCN in the block headers (in the data file blocks and the db buffer) together with the rollback segment address to find the corresponding pre-modification copy in the rollback segment, and uses those original values to undo the changes that were modified but not committed in the current data file. If there are multiple "before images", the server process follows the rollback segment address stored in each "before image" header until it reaches the earliest "before image" of the same transaction. Once a COMMIT has been issued, the user can no longer ROLLBACK; this guarantees that the DBWn work following the COMMIT will not be undone even when it has not yet completely finished.

The purpose of the checkpoint is described below. When a checkpoint occurs, the LGWR process first writes everything in the redo log buffer, including uncommitted redo information, to the redo log files; then the DBWn process writes all committed buffers in the db buffer cache to the data files (it is not forced to write uncommitted ones).

Then the SCN in the control file and in the data file headers is updated, indicating that the database is consistent at that point. If the power fails before a checkpoint occurs while an uncommitted change is in progress, then after the instance restarts the SMON process scans the redo log files from the previous checkpoint onward for both committed and uncommitted changes. Because LGWR is triggered before DBWn, every modification DBWn made to the data files was first recorded in the redo log files.

Therefore, the uncommitted changes that DBWn had written to the data files before the power failure can be undone with the help of the records in the redo log files; this is rollback. If, on the other hand, a change had been committed when the power failed but the corresponding DBWn writes were not yet complete, then because the COMMIT triggered the LGWR process, the rows affected by the statement and their results must already be in the redo log files whether or not DBWn finished. After the instance restarts, the SMON process rolls forward based on the redo log files.

As you can see, recovery time after an instance failure is determined by the interval between two checkpoints. Four parameters control checkpoint frequency: LOG_CHECKPOINT_INTERVAL sets the number of operating system blocks of redo written to the redo log files between two checkpoints; LOG_CHECKPOINT_TIMEOUT sets the maximum time between two checkpoints; FAST_START_IO_TARGET bounds the number of blocks to be processed during recovery; and FAST_START_MTTR_TARGET directly bounds the expected recovery time.
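In practice, setting the MTTR target is the simplest of the four and generally supersedes the older parameters. A sketch with an illustrative value:

```sql
-- Target roughly 300 seconds of crash recovery; Oracle adjusts
-- checkpointing to keep the dirty-buffer backlog small enough.
ALTER SYSTEM SET FAST_START_MTTR_TARGET = 300;
```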

SMON's roll-forward and rollback differ from a user's rollback: SMON rolls forward or back based on the redo log files, while a user rollback works from the contents of the rollback segment. So what does the rollback segment store? For a DELETE, it records the data of the entire row. For an UPDATE, it records only the before image of the changed columns; unmodified columns are not recorded. For an INSERT, it records only the ROWID of the inserted row.

Thus, if the transaction is committed, the rollback segment simply marks it as committed. If the transaction is rolled back: for a DELETE, the row data in the rollback segment is written back into the data block; for an UPDATE, the before-image values are restored; for an INSERT, the row is deleted using its ROWID.
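The behavior above can be seen from an ordinary session. A sketch, reusing the hypothetical tablea from earlier:

```sql
-- The UPDATE's before images go to the rollback segment;
-- ROLLBACK restores them.
UPDATE tablea SET salary = salary * 2;
ROLLBACK;   -- original values written back from the rollback segment

-- After COMMIT the entry is only marked committed;
-- a subsequent ROLLBACK undoes nothing.
UPDATE tablea SET salary = salary * 2;
COMMIT;
```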

The following describes how DBWn writes data files. Before writing, writable free data blocks must be found. Oracle maintains free data blocks either through a FREELIST or through a bitmap; both are located in the segment header and identify which blocks of the segment can accept inserts.

In a locally managed tablespace, Oracle automatically manages the size of the extents allocated to a segment, and the extent allocation information is stored in the headers of the data files that make up the tablespace. In a dictionary-managed tablespace, the user can set the extent size at creation time, and the extent allocation information is stored in the data dictionary. Automatic segment space management is possible only in locally managed tablespaces: free-block information for the segments is stored in the segment header and managed with a bitmap. In manually managed locally managed tablespaces, and in dictionary-managed tablespaces, the free blocks in a segment are managed through the free list in the segment header.
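A minimal sketch of creating a locally managed tablespace with automatic segment space management (names, path, and size are illustrative):

```sql
-- Extents tracked in the data file header (LOCAL);
-- free space within segments tracked by bitmap (AUTO).
CREATE TABLESPACE users_ts
  DATAFILE '/u01/oradata/users01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;
```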

How the free list works:

First, an empty data block is placed on the free list. When the block's free space drops below the value set by PCTFREE, the block is removed from the free list; when the used space in the block later falls below the value set by PCTUSED, the block is added back to the free list. Blocks on the free list are exactly those that accept inserts. A block that has been removed from the free list may still have room for UPDATEs. If an update enlarges a row so much that the current block can no longer hold the entire row, the whole row is migrated to a new data block, and a pointer to the new block is left at the original location; this is called row migration.

If a row being inserted cannot fit into a single block even though the block accepts inserts, the row overflows into two or more blocks; this is called row chaining. If the user's action is an INSERT, the server process first latches the FREELIST, finds the address of a free block, and then releases the FREELIST; when several server processes try to latch the FREELIST at the same time, FREELIST contention occurs. In a tablespace that does not use automatic segment space management, you can specify the number of FREELISTs when creating a table.

The default is 1. If the table is created in a tablespace with automatic segment space management, any FREELISTS setting is ignored; in that case a bitmap, not a FREELIST, manages the free space in the segment. If the user's action is an UPDATE, the server process uses neither the FREELIST nor the bitmap, because it does not need to look for a free block; it uses the lock queue instead.
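A sketch of a table created with manual space management settings; the table and tablespace names are illustrative, and the tablespace is assumed not to use automatic segment space management (otherwise PCTUSED and FREELISTS are ignored):

```sql
-- PCTFREE 10: block leaves the free list at 90% full.
-- PCTUSED 40: block rejoins the free list below 40% used.
-- FREELISTS 4: four free lists to reduce insert contention.
CREATE TABLE emp_manual (
  empno  NUMBER,
  salary NUMBER
)
PCTFREE 10
PCTUSED 40
STORAGE (FREELISTS 4)
TABLESPACE manual_ts;
```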
