The Story of Oracle, Part 1: UPDATE

Source: Internet
Author: User
Tags: reserved, rollback, truncated

The client (SQL*Plus) requests a connection. The listener accepts the client's TCP connection and receives the TNS packet the client sent. The listener opens the pipe it will use to communicate with its child process and forks a child, call it "listener child 1", then waits until listener child 1 ends. Listener child 1 forks child 2, and as soon as it has done so, child 1 exits. Child 2 collects its own host name, IP address, and process number; child 2 is the server process (also called the foreground process or shadow process), and it requests a small block of PGA memory.

The foreground process sends its host name, IP address, and process number to the listener. The listener receives this information about the foreground process and returns information about the client (user, password, environment variables, and so on) to the foreground process. The foreground process queries data dictionary tables such as user$ and profile$ to verify that the user name and password are legitimate. If they are wrong, it returns an invalid-username/password error; otherwise it begins interacting with the client. Once the client receives information from the foreground process, the connection is fully established.

In short: the client initiates a connection; the server-side listener creates a shadow process and assigns it to the user; all subsequent user actions are submitted to the shadow process, which completes the operations inside the database on the user's behalf and returns the results (middleware works a little differently). The client then issues an UPDATE statement, which is converted to the corresponding encoding according to the client's character set and sent to the server side, where the server process and the user process form a session.
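The double-fork handoff above can be sketched in a few lines of Unix process code. This is an illustrative Python sketch of the pattern only, not Oracle's actual spawning code: child 1 exists only to fork the long-lived "server process" and exit, so the listener waits only on the short-lived child 1 and never has to reap the server process, which reports its identity back through a pipe.

```python
import os

# Pipe stands in for the channel the foreground process uses to send
# its host name / IP / pid back to the listener.
r, w = os.pipe()

pid1 = os.fork()
if pid1 == 0:                       # "listener child 1": only forks child 2
    pid2 = os.fork()
    if pid2 == 0:                   # child 2: the long-lived "server process"
        os.close(r)
        os.write(w, str(os.getpid()).encode())  # report its own pid
        os._exit(0)
    os._exit(0)                     # child 1 exits immediately after forking

os.close(w)
os.waitpid(pid1, 0)                 # listener waits only for child 1
server_pid = int(os.read(r, 64))    # identity sent by the server process
print("server process pid:", server_pid)
```

Because child 1 has exited, the server process is re-parented away from the listener, which is the point of the double fork: the listener never blocks on, or has to clean up after, the session's shadow process.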
The SQL request is then processed on the server side in the PGA. The statement is converted according to the server-side character set. The server process accepts the statement and places it, along with the pre-modification values, in the user's private SQL area in the PGA. Every character, including spaces, is converted to its ASCII code, and that byte sequence is run through a hash function to produce the SQL hash value, which determines where the execution plan for this statement is looked up.

Oracle first searches the current session's cursor cache (in the UGA, inside the PGA) for a version with the same hash value. If one is present, the cursor links directly to the private SQL area in the PGA; this is the "softer soft parse". If not, Oracle checks whether the initialization parameter SESSION_CACHED_CURSORS is set; if it is, and the same SQL is cached there, the cursor can likewise point to the private SQL area in the PGA. Otherwise the softer-soft-parse path ends and a soft parse is attempted. If no cursor exists, one is created.

Oracle takes the hash value to the shared pool to look for the execution plan; the hash value determines which bucket the statement belongs to. The process requests a lock structure from memory, records "lock mode, process ID" and other important information in it, and then checks whether it can get access to the resource immediately. If not, the lock structure is put on the waiter linked list of the resource structure; if so, the lock structure is attached to the owner list of the resource structure. The process latches the bucket's chain, traverses the entries on it, and looks for a hash value identical to this statement's. If a statement with the same hash value exists in the library cache, the database performs semantic and environment checks to determine whether the meaning is truly the same. Even if two statements are semantically identical, an environment difference can force a hard parse.
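The lookup order above (session cursor cache first, then the library cache in the shared pool, else hard parse) can be sketched as follows. This is a toy model: the hash function, dictionary structures, and return labels are illustrative, not Oracle's internals.

```python
import hashlib

session_cursor_cache = {}   # per-session cache (UGA), keyed by SQL hash
shared_pool = {}            # library cache: hash -> parsed plan

def sql_hash(text):
    # The statement text is hashed as-is, spaces included: even one
    # extra space yields a different hash and a different cursor.
    return hashlib.sha256(text.encode()).hexdigest()

def parse(text):
    h = sql_hash(text)
    if h in session_cursor_cache:
        return "softer soft parse"      # cursor already cached in the UGA
    if h in shared_pool:
        session_cursor_cache[h] = shared_pool[h]
        return "soft parse"             # plan found in the library cache
    plan = ("plan for", text)           # syntax/semantics/optimizer work
    shared_pool[h] = plan
    session_cursor_cache[h] = plan
    return "hard parse"

print(parse("UPDATE emp SET sal = sal + 100"))    # hard parse
print(parse("UPDATE emp SET sal = sal + 100"))    # softer soft parse
print(parse("UPDATE emp SET sal = sal + 100 "))   # trailing space: hard parse
```

In this single-session toy, a plan always lands in the session cache too, so "soft parse" would only show up for a second session sharing `shared_pool`; the third call demonstrates why textually different but equivalent SQL hard-parses again.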
In this context, the "environment" covers all session settings that affect execution plan generation, such as the workspace size or optimizer settings. If the checks pass, the plan is taken out and a soft parse is performed. Otherwise a hard parse follows: check SQL syntax, semantics, and permissions; query the relevant data dictionary; and generate an execution plan with the CBO or RBO (the CBO using histograms and dynamic sampling).

Is there enough space on the free list for Oracle's allocation? Oracle first scans the shared pool. If there is not enough space, it determines whether the memory request is a large request or a small request. For a large request, the reserved pool is searched for available memory; if a free chunk is found, a size check is done, the chunk is truncated, and the required memory size is used. If the reserved pool still has no available chunk, the previous step is repeated; if nothing is found, objects in the reserved pool are aged out by the LRU algorithm to satisfy the chunk request; if available memory is still not found, the LRU pass is repeated until a usable chunk appears. For a small request, Oracle looks for an available chunk on the free list in the shared pool; if one is found, the size check and truncation happen as above; if not, objects in the shared pool are aged out by the LRU algorithm and the search repeats until an available chunk is found. When scanning the shared pool for free memory, if no idle chunk of exactly the right size is found, Oracle looks for a larger chunk; if an idle chunk larger than the request is found, it is split, the execution plan is written and hung on the library cache chain, and the leftover piece goes back on the free list.
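The allocation order described above (free list, reserved pool for large requests, LRU age-out, chunk splitting, and finally ORA-04031) can be summarized in a toy allocator. All sizes and the large-request threshold here are invented for illustration; the real threshold is an internal parameter.

```python
LARGE_REQUEST = 4400                 # bytes; illustrative threshold only

free_list = [1000, 2000, 8000]       # free chunks in the shared pool
reserved_pool = [6000]               # chunks kept back for large requests
lru_victims = [3000, 5000]           # recreatable objects that can be aged out

def take(pool, size):
    """Find a chunk >= size; split it and keep the remainder free."""
    for i, chunk in enumerate(pool):
        if chunk >= size:
            pool.pop(i)
            if chunk > size:
                pool.append(chunk - size)   # leftover stays on the list
            return size
    return None

def allocate(size):
    pools = [reserved_pool, free_list] if size >= LARGE_REQUEST else [free_list]
    for pool in pools:
        got = take(pool, size)
        if got is not None:
            return got
    while lru_victims:                      # age out objects via LRU
        free_list.append(lru_victims.pop(0))
        got = take(free_list, size)
        if got is not None:
            return got
    raise MemoryError("ORA-04031: unable to allocate %d bytes" % size)

print(allocate(5000))   # large request, served from the reserved pool
```

Note how `allocate(5000)` splits the 6000-byte reserved chunk; the fragmentation problem of the next paragraph arises because repeated splitting leaves many small leftovers that can never be recombined into one large contiguous chunk.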
Because excessive hard parsing intensifies the demand for chunk allocation, it creates fragmentation. Once the system has been running for a long time, many small memory fragments accumulate; when a request for a large contiguous chunk arrives, the total free space in the shared pool may still be large, yet no single contiguous free chunk can satisfy the need. At that point ORA-04031 errors can be raised. This completes the hard parse. If bind variables are used, the actual variable values are substituted into the SQL statement.

Next, the block to be modified is read so that it can be changed in the DB buffer cache: query data dictionary objects such as seg$, find the segment header of the object to modify, read the extent map out of the segment header, and begin scanning data blocks according to the execution plan. The block's (file#, block#) pair is hashed to find the corresponding hash bucket, and the CBC latch protecting that bucket is taken so that all CBC chains in the bucket can be traversed. Holding the cache buffers chains latch, the process walks the buffer chain until it finds the buffer header it needs. Note that if the execution plan is a full table scan, or an index access for an UPDATE with a unique constraint, the CBC latch is acquired in exclusive mode, which makes the CBC chain busy. The process compares the data block address (RDBA) recorded on each buffer header and skips the header if it does not match; it also skips buffer headers whose status is CR.
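The bucket lookup just described, hashing (file#, block#), walking the chain of buffer headers, skipping mismatched addresses and CR copies, can be sketched like this. The bucket count, hash, and header fields are illustrative, not Oracle's real layout.

```python
N_BUCKETS = 1024
hash_buckets = [[] for _ in range(N_BUCKETS)]   # each bucket: one CBC chain

def bucket_for(file_no, block_no):
    return hash((file_no, block_no)) % N_BUCKETS

def find_buffer(file_no, block_no):
    # Real code would take the bucket's cache buffers chains latch here.
    chain = hash_buckets[bucket_for(file_no, block_no)]
    for header in chain:                    # walk the buffer headers
        if header["rdba"] != (file_no, block_no):
            continue                        # address mismatch: skip
        if header["state"] == "CR":
            continue                        # skip consistent-read copies
        return header
    return None                             # chain exhausted: read from disk

hash_buckets[bucket_for(4, 123)].append({"rdba": (4, 123), "state": "XCUR"})
print(find_buffer(4, 123))
print(find_buffer(4, 999))   # miss: a physical read would be needed
```

A miss after the whole chain has been walked is exactly the case, described further below, where the session must go to the data file on disk.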
(The explanation: another process doing a consistent read constructed that CR block. If I also need a read-consistent image of the original block, I must construct a new CR block rather than reuse the old one; if I am not after a consistent image of that block at all, I do not need to construct one; so in both cases the CR buffer is skipped.) If the process encounters a buffer header in the "reading" state (the block is being read from disk), it waits until the state changes and then compares the recorded block address. (Perhaps an earlier query, this same SQL statement, or a previous statement from this or another user already needed the data in this block and is reading it into memory; I can simply build on that prior work.)

Once the block is in memory and a buffer header with a matching block address is found, the process checks which list the buffer header is on and whether the existing lock mode is compatible with the current request. If so, the block address recorded by the buffer header is returned, the current process number is placed on the buffer header's user list, and, depending on the type of operation needed (read or write), a shared or exclusive buffer pin (buffer lock) is taken on the buffer header. If the process obtains the buffer header pin, it releases the cache buffers chains latch and then operates on the buffer block; if it cannot obtain the pin, it waits on the buffer busy waits event. (The pin is needed because a block can be pinned by only one process at a time in incompatible modes, to guarantee data consistency; so when a process needs a block that another process is using in the buffer cache, it posts a buffer busy waits event for that block, whether on the segment header block or the target block.)
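The pin-compatibility rule above (readers can share a pin, a writer cannot coexist with anyone) amounts to a tiny compatibility check. This sketch is illustrative only; the class name and modes are not Oracle structures.

```python
class BufferHeader:
    def __init__(self):
        self.pins = []                  # current holders: (pid, mode)

    def try_pin(self, pid, mode):
        """Grant the pin if compatible; False means 'wait on buffer busy waits'."""
        if mode == "shared" and all(m == "shared" for _, m in self.pins):
            self.pins.append((pid, mode))
            return True                 # readers share the pin
        if not self.pins:
            self.pins.append((pid, mode))
            return True                 # exclusive needs the buffer free
        return False                    # incompatible: caller must wait

bh = BufferHeader()
print(bh.try_pin(100, "shared"))        # True
print(bh.try_pin(101, "shared"))        # True: a second reader is fine
print(bh.try_pin(102, "exclusive"))     # False: would wait
```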
TM and TX locks are added, the latch of the target block is obtained, and the block's memory address is found; if there is a lock conflict, blocking occurs. Latch wait modes: Spin: when a session fails to get a required latch, it keeps using the CPU (spinning) and, after an interval, tries to acquire the latch again, repeating until the maximum number of retries is reached. Sleep: when a session fails to get a required latch, it waits for a period of time (sleeps), then after the interval tries to acquire the latch again, repeating until the maximum number of retries is reached. In no-wait mode, neither sleep nor spin occurs; the session goes on to get another available latch.

If the buffer is a clean block, then after the modification it is hung on the checkpoint queue, its touch count on the LRU chain is incremented, and (once certain conditions are met) it is moved to the LRUW list and queued on the checkpoint chain to be written to disk. If the buffer header being sought has not been found after walking a complete hash chain, the data file must be read from disk. The session's shadow process reads the block into memory. Before reading the data, the server process must scan the auxiliary LRU list for free buffers, and during the scan it registers any modified buffers it finds onto the LRUW list. If the server process scans past a threshold of the LRU and still cannot find enough free buffers, it stops looking and instead notifies DBWn to write dirty data out, freeing memory; while this happens the process waits on free buffer waits. After enough free buffers have been found (in the non-full-table-scan case), the server process reads the buffers from the data file into the buffer cache and moves them to the main LRU. If non-full-table scans predominate, blocks on the auxiliary LRU become scarce; to maintain the proportion (the auxiliary LRU holds roughly 20% to 25% of all LRU blocks), the SMON process holds an LRU latch every 3 seconds and moves blocks from the cold end of the main LRU to the auxiliary LRU.
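The spin-then-sleep acquisition loop described above can be written out directly. The spin count, sleep interval, and retry limit here are illustrative defaults, not Oracle's internal `_spin_count` settings.

```python
import time

def get_latch(latch_is_free, spin_count=2000, max_sleeps=3):
    """Willing-to-wait latch get: spin on the CPU, then sleep and retry."""
    for _ in range(max_sleeps + 1):
        for _ in range(spin_count):     # spin: keep testing on the CPU
            if latch_is_free():
                return True
        time.sleep(0.001)               # back off, then try again
    return False                        # gave up after the retry limit

attempts = {"n": 0}
def free_after_a_while():
    attempts["n"] += 1
    return attempts["n"] > 2500         # latch freed partway into retry 2

print(get_latch(free_after_a_while))    # True
```

The point of spinning first is that a latch is usually held for a very short time, so burning a few microseconds of CPU is cheaper than a context switch; only when spinning fails repeatedly does the session pay the cost of sleeping.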
A full table scan also looks first for available blocks in the auxiliary LRU, but blocks read by a full table scan remain on the auxiliary LRU and are not moved to the head of the main LRU's cold end. As a result, blocks read by full table scans are overwritten quickly. A full table scan uses essentially only the auxiliary LRU (it also touches the main LRU, but only a very small part of it); a large full scan can cycle through all the blocks of the auxiliary LRU one or more times. When the database has just started, or right after a flush of the buffer cache, all blocks sit on the auxiliary LRU. When the foreground (server) process scans the main and auxiliary LRUs, any dirty block it encounters with a touch count below 2 is moved to the LRUW list.

After computing the (file#, block#) hash of the block, the process finds the corresponding hash bucket, takes the CBC latch protecting the bucket, and links the buffer header of the block being read onto the CBC (cache buffers chains) chain; the block is placed in a buffer on the corresponding chain, and the buffer header points to that buffer. A physical/logical consistency check follows: the checksum of the block is computed and compared with the checksum field stored in the block. If they differ, Oracle knows the block has errors and reports an ORA-01578 error; cache recovery is attempted on the block, and if the block cannot be restored to a consistent state, Oracle sets the block's flag to software corrupt. For other kinds of error, the DBMS_REPAIR package must be used to mark the block as "software corrupt", and the header checksum field is recalculated before the block is written back to disk.
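The checksum verification just described, a checksum written into the block on write and recomputed on read, is easy to model. Here `zlib.crc32` stands in for Oracle's checksum algorithm, and the dictionary is a toy block, not the real block layout.

```python
import zlib

def write_block(payload: bytes):
    """Store the payload with a header checksum, as DBWR would on write."""
    return {"payload": payload, "checksum": zlib.crc32(payload)}

def read_block(block):
    """Recompute the checksum on read; a mismatch means corruption."""
    if zlib.crc32(block["payload"]) != block["checksum"]:
        raise IOError("ORA-01578: ORACLE data block corrupted")
    return block["payload"]

blk = write_block(b"row data")
print(read_block(blk))            # reads back cleanly
blk["payload"] = b"row dataX"     # simulate on-disk corruption
try:
    read_block(blk)
except IOError as e:
    print(e)                      # ORA-01578 is raised on the mismatch
```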
(This checking is controlled by the parameter DB_BLOCK_CHECKSUM.) DBV checks only the header/footer of the data block and performs logical verification. DB_BLOCK_CHECKING replaces the 10210/10211/10212 events for block integrity checks such as the free slot list, row positions, and lock counts; the check copies the block, and if an error is found the block is marked soft corrupt. With DB_BLOCK_CHECKSUM, DBWR and the direct loader compute a checksum and store it in the cache-layer field CHKVAL when writing a data block; the checksum is recalculated on re-read and compared with the stored value. DBMS_REPAIR fixes errors in the cache/transaction layers by marking the block as soft corrupt, alongside the logical consistency check of data blocks. A failed check throws an ORA-01578 internal error. When Oracle checks the logical consistency of a block, cache recovery is attempted; if the block cannot be restored to a consistent state, Oracle sets its flag to software corrupt, and any query that later accesses the block also throws ORA-01578. If ORA-01578 is not thrown automatically, the DBMS_REPAIR package must be used first.

Before modifying, the process reads: it finds the row to be modified, and at the cold end of the LRU chain finds a clean block to hold the data block. If the search passes a certain number of buffers without finding a clean block, the dirty blocks at the LRU cold end are migrated to the LRUW chain. The CBC latch is released. The process then checks whether the row to be modified carries a lock mark. If it does, the process follows the corresponding ITL slot to the corresponding rollback segment and checks whether the transaction table records the transaction as committed. If committed, the lock mark is cleaned out, the ITL slot's transaction status is changed to U, and processing continues with the next step.
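The lock-mark check above, and the state legend and overwrite rule in the next paragraph, reduce to a small decision function. The transaction-table representation here is a toy; the state letters follow the dump notation quoted in the text.

```python
# Toy undo-segment transaction table: (segment, slot) -> state.
# C = committed; U = committed upper bound; ---- = active/uncommitted.
transaction_table = {
    ("usn1", 3): "C",
    ("usn1", 4): "----",
}

def can_modify(row):
    """Decide, from the row's lock mark, whether the update may proceed."""
    if row["lock_itl"] is None:
        return True                       # no lock mark: free to go
    # A slot no longer present was overwritten in the transaction table,
    # so the transaction is assumed committed (state U), per the text.
    state = transaction_table.get(row["lock_itl"], "U")
    if state in ("C", "U"):
        return True                       # clean out the lock mark, proceed
    return False                          # still active: must wait

print(can_modify({"lock_itl": None}))          # True
print(can_modify({"lock_itl": ("usn1", 3)}))   # True: committed
print(can_modify({"lock_itl": ("usn1", 4)}))   # False: wait for commit
```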
(In ITL dumps: C = committed; U = committed upper bound; T = active at CSC; ---- = uncommitted.) If the transaction information in the rollback segment's transaction table has been overwritten, the transaction is assumed committed, the lock mark is cleaned, and the ITL slot's transaction state is set to U before continuing. If the transaction is not committed, the process waits for its commit, producing a wait event; if there is no lock mark, processing simply continues with the next step. The buffer header's tag is set to pinned. The process checks for BEFORE-type triggers and executes them if present (during this step, the :new values may be reassigned). Oracle then reads the block in current mode; if a column in the WHERE condition of the located row has been modified, the database restarts the query, because the condition column is what located the record and it has changed.

A chunk of memory allocated from the shared pool is partitioned into private strands, each protected by its own redo allocation latch. Redo generated by the transaction is stored in the private strand, and on flush or commit the private strand is written to the log file in bulk. For transactions using a private strand, there is no need to request the redo copy latch first, nor the shared strand's redo allocation latch; since the strand is bulk-written to disk at flush or commit time, this reduces the number of requests and releases of the redo copy latch and redo allocation latch, reduces the waits on those latches, and thus reduces CPU load. (Redo was traditionally first generated in the PGA/UGA, then copied by Oracle into the log buffer in the SGA. If the log buffer is too small, or LGWR cannot write redo to the log file fast enough, log buffer space wait events appear; faced with such problems, you can increase the log buffer size, or move the log files to raw devices or faster I/O disks.) A post-image of the data block is generated in the PGA (opcode 11.9).
A post-image of the undo segment header transaction table is generated in the PGA (opcode 5.2), and a post-image of the undo block is generated in the PGA (opcode 5.1). These first three change vectors are combined into one redo record written to the private strand in the shared pool. The pre-image value of the data block is written to the IMU pool in the shared pool. The undo segment header's transaction table is modified. The undo block is modified to hold the pre-image of the data block. Each undo change vector is written to the corresponding IMU pool; otherwise the old mechanism is followed and each redo vector is written to the private redo. If a new transaction cannot obtain a private strand's redo allocation latch, it continues under the old redo buffer mechanism and applies to write into a shared strand.

Under the old mechanism: build the change vectors in the PGA and combine them into a redo record; generate the post-image of the undo segment header transaction table in the PGA (5.2), the post-image of the undo block (5.1), and the post-image of the data block (11.9); make these first three change vectors one redo record and write it to the log buffer; modify the undo segment header's transaction table, at which point the transaction formally begins; modify the undo block to write the pre-image of the data block; then modify the data block itself, remove the block's pin mark, and write the new value into the buffer cache.
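The ordering emphasized above, three change vectors combined into one redo record before any buffer is touched, can be made concrete in a sketch. The opcodes (5.2, 5.1, 11.9) follow the numbering used in the text; the structures are illustrative, not real redo formats.

```python
def build_redo_record(xid, old_value, new_value):
    """Combine the three change vectors of one UPDATE into one redo record."""
    vectors = [
        {"op": "5.2",  "what": "undo seg header txn table", "xid": xid},
        {"op": "5.1",  "what": "undo block (pre-image)",    "data": old_value},
        {"op": "11.9", "what": "data block (post-image)",   "data": new_value},
    ]
    # The whole record is written (to private strand or log buffer)
    # before the buffers themselves are modified: log ahead of data.
    return {"record": vectors}

rec = build_redo_record(xid="8.12.345", old_value=1000, new_value=1100)
print([v["op"] for v in rec["record"]])   # ['5.2', '5.1', '11.9']
```

Packaging all three vectors into a single record is what makes the change atomic in the log: either the whole UPDATE (undo bookkeeping plus data change) is replayable, or none of it is.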
KCRFWR() is called to write the record to the log buffer: compute the space the record occupies, allocate the SCN, get the redo copy latch, verify the SCN, get the redo allocation latch, and verify that the log buffer/file has enough space. If there is space, the allocation latch is released and the redo is written to the log buffer; otherwise both the allocation and copy latches are released and LGWR is notified to flush the log or switch log files. (To prevent multiple processes from simultaneously notifying LGWR to flush redo or switch log files, a write latch is introduced, of which there is only one; the next operation can be performed only after obtaining this latch.) After the redo record is written to the log buffer, the copy latch is released, the process checks whether an LGWR trigger threshold has been reached, and the contents of the memory data block corresponding to the buffer header are updated.

Undo records are first built in the PGA and then written back to undo. On commit: an SCN is generated for the transaction; LGWR writes all remaining cached redo log entries to disk and records the SCN in the online redo log file. The transaction entry is removed from V$TRANSACTION; the locks held by the session, visible in V$LOCK, are all released, and every transaction queued waiting for those locks is woken up to continue its work. If some of the blocks modified by the transaction are still in the buffer cache, they are accessed in fast mode and "cleaned out" (that is, the lock-related information stored in the block header is cleared).

Single-block reads during a full table scan: when some of a table's blocks are already in the cache, the full table scan skips those blocks, which is one reason single-block reads appear during a full table scan; single-block reads can also occur at extent boundaries. An XID identifies the transaction by rollback segment number, transaction slot number, and sequence number.
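The commit fast path described above, assign an SCN, flush the remaining redo, release the enqueues, and leave block cleanout to later readers, can be condensed into a toy routine. All names and the starting SCN are illustrative.

```python
system_scn = 1000   # toy system SCN counter

def commit(txn):
    """Toy commit: SCN, redo flush, lock release, mark committed."""
    global system_scn
    system_scn += 1
    txn["commit_scn"] = system_scn     # SCN generated for the transaction
    txn["redo_flushed"] = True         # LGWR writes remaining redo + the SCN
    released = txn.pop("locks")        # enqueues freed; waiters are woken
    txn["status"] = "committed"        # entry leaves v$transaction
    return released

t = {"locks": ["TM:emp", "TX:8.12.345"], "status": "active"}
print(commit(t))                       # ['TM:emp', 'TX:8.12.345']
print(t["commit_scn"])                 # 1001
```

Note what the toy deliberately leaves out: the modified buffers are not rewritten at commit time. That matches the text, commit is cheap because cleanout of still-cached blocks is deferred to whoever touches them next.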

Factors that slow UPDATE: frequently updated columns should not be indexed (Architecture, p. 237). Try not to use triggers, which slow updates; inside a trigger, avoid BEFORE FOR EACH ROW, which is less efficient than AFTER FOR EACH ROW (Architecture, p. 238).
