Principles of transactions and database recovery


The logical unit of data storage is the data block; the logical unit of data operations is the transaction. A transaction is a user-defined sequence of operations consisting of one or more SQL statements, and it is the basic logical unit of a database application. Transaction management technology mainly comprises database recovery technology and concurrency control technology.

Transaction processing flow:
1. The server process receives the transaction's SQL statement, then checks the shared pool to see whether the shared SQL area already contains that statement.
2. If it does, the server checks whether the user has permission to access the data involved; if so, the existing shared SQL area is used to process the statement. If the statement is not found in the shared pool, a new SQL area is allocated to parse and process it. If the user lacks permission, an "insufficient privileges" message is returned.
3. The operation objects involved in the SQL statement are locked.
4. The server process operates on the data in the SGA, or first reads the required data blocks from the data files into the SGA and then operates on them.
5. At an appropriate time, the LGWR background process writes the buffered redo logs to the online redo log files, and the DBWR background process writes modified data blocks from the data buffer cache to the data files.
6. An incremental SCN (system change number) is generated for the transaction. The SCN is written to the control file, the data file headers, the data block headers, and the redo records. The SCN establishes the consistent state of the database at a point in time and is used for concurrency control and database recovery.
7. The LGWR background process writes all remaining buffered redo logs, together with the current SCN, to the online redo log files.
8. The system resources occupied by the SQL statements in the transaction are released, and the operation objects involved are unlocked.
9. If the transaction succeeds, a success message is returned; otherwise, an error message is returned.
10. At an appropriate time, the DBWR background process writes any changed data blocks (dirty blocks) still in the data buffer cache to the data files.

Basic transaction control statements:
1. COMMIT: commit the transaction.
2. ROLLBACK: roll back the transaction.
3. SAVEPOINT: set a savepoint.
4. ROLLBACK TO SAVEPOINT: roll back to a savepoint.
5. SET TRANSACTION: set transaction properties.
6. SET CONSTRAINTS: set constraint checking mode.

Tasks performed at commit:
1. Automatically generate an incremental SCN for the transaction.
2. The LGWR background process writes all remaining buffered redo logs and the current SCN to the online redo log files.
3. Release the system resources occupied by each SQL statement in the transaction and unlock the objects involved.
4. Return the corresponding prompt code to the user.
5. Mark the transaction as completed.
6. At an appropriate time, the DBWR background process writes the changed data blocks still in the data buffer cache to the data files.

Tasks performed at rollback:
1. Undo all changes that were made, reading from the generated undo information: an insert is undone by a delete, a delete by an insert, and an update by restoring the original data.
2. Release the system resources occupied by the SQL statements in the transaction and unlock the operation objects involved.
3. Return the prompt code to the user.
4. Mark the transaction as completed.

A savepoint is an intermediate marker in a transaction. It divides a large transaction into several shorter parts, so that the transaction can be partially rolled back: work after the savepoint is undone, while work before it is retained. Syntax: ROLLBACK TO SAVEPOINT spname, or ROLLBACK TO spname. Tasks performed when rolling back to a savepoint:
1. Undo all changes made after the savepoint.
2. Release the system resources occupied by each SQL statement executed after the savepoint and unlock the operation objects involved; the system resources and locks held by the statements before the savepoint are retained.
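The savepoint semantics above can be illustrated with SQLite through Python's sqlite3 module (a sketch of generic SQL savepoint behavior, not Oracle internals; the table and savepoint names are invented for the example):

```python
import sqlite3

# isolation_level=None gives autocommit mode, so transactions are controlled explicitly.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")

conn.execute("BEGIN")
conn.execute("INSERT INTO accounts VALUES ('alice', 100)")
conn.execute("SAVEPOINT sp1")              # intermediate marker in the transaction
conn.execute("INSERT INTO accounts VALUES ('bob', 200)")
conn.execute("ROLLBACK TO SAVEPOINT sp1")  # undo only the work after sp1
conn.execute("COMMIT")                     # alice's row survives, bob's is gone

rows = conn.execute("SELECT name FROM accounts").fetchall()
print(rows)  # [('alice',)]
```

Note that ROLLBACK TO SAVEPOINT leaves the transaction open; only COMMIT or a full ROLLBACK ends it.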
3. Return a success prompt code indicating the rollback to the savepoint.
4. The current transaction can continue to execute.

Transaction properties:
1. READ WRITE: SET TRANSACTION READ WRITE; this is the default transaction setting.
2. READ ONLY: SET TRANSACTION READ ONLY; in this state only SELECT, LOCK TABLE, SET ROLE, and ALTER SYSTEM/SESSION can be executed.
3. Isolation level:
   A. READ COMMITTED: SET TRANSACTION ISOLATION LEVEL READ COMMITTED; each DML statement in the transaction sees only data committed before that statement started, which provides statement-level consistency. Dirty reads cannot occur at this level, but non-repeatable reads and phantom reads can, because Oracle does not prevent other transactions from inserting, deleting, or modifying the data the current transaction operates on. This is the default isolation level.
   B. SERIALIZABLE: SET TRANSACTION ISOLATION LEVEL SERIALIZABLE; each DML statement in the transaction sees only data committed before the transaction started, plus the transaction's own changes, which it can make with DML statements and then observe. At this level there are no dirty reads, non-repeatable reads, or phantom reads (similar to READ ONLY), which provides transaction-level consistency.

The SET CONSTRAINTS statement specifies when the deferrable constraints involved in a transaction are checked. Syntax: SET CONSTRAINTS {constraint_name1 [, constraint_name2] … | ALL} {IMMEDIATE | DEFERRED}; IMMEDIATE means the check is performed after each DML statement; DEFERRED means the check is performed only when the transaction commits.

Transaction states: from beginning to end, a transaction passes through the initial state, the active state, the failed state, the aborted state, and the committed state.
The four major properties (ACID):
1. Atomicity: a transaction is an indivisible logical unit; the operations in a transaction either all succeed or all fail.
2. Consistency: the result of a statement or a transaction must leave all data in the database in a logically consistent state.
3. Isolation: the execution of a transaction cannot be interfered with by other transactions; the operations and data used inside a transaction are isolated from other concurrently executing transactions, which must not interfere with each other. Before a transaction commits, only that transaction's user can see the data being modified; users of other transactions see only the data as it was before the modification. Isolation addresses:
   - Dirty reads: one transaction reads data that another transaction has changed but not yet committed.
   - Non-repeatable reads: after a transaction reads some data, another transaction modifies and commits it, so a second read finds the data changed.
   - Phantom reads: after a transaction reads some data, another transaction inserts or deletes rows matching the query conditions, so a second query finds more or fewer matching records.
4. Durability: once a transaction commits successfully, its modifications to the data in the database are saved permanently.

Oracle uses the information stored in undo segments to provide a consistent view for statements and transactions. An undo segment stores, for all uncommitted transactions, the data as it was before the latest modification, that is, the "before image".

Read consistency mechanism (execution principle): when a query statement enters the execution state, Oracle assigns it the current SCN.
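Atomicity can be demonstrated with a small sqlite3 sketch (generic SQL behavior, invented table; when a statement in the transaction fails and the transaction is rolled back, its earlier work disappears too):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")

conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1, 'kept')")
conn.execute("COMMIT")                  # durable once committed

conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (2, 'lost')")
try:
    conn.execute("INSERT INTO t VALUES (1, 'dup')")  # primary key violation
    conn.execute("COMMIT")
except sqlite3.IntegrityError:
    conn.execute("ROLLBACK")            # all-or-nothing: row 2 is undone as well

rows = [r[0] for r in conn.execute("SELECT id FROM t")]
print(rows)  # [1]
```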
While searching the data buffer for the required data blocks, the query reads only blocks whose SCN is less than or equal to the query's SCN. If a required block's SCN is greater, the query obtains an earlier version of the block from the undo segment, and that version's SCN must in turn be less than or equal to the query's SCN. By comparing SCNs in this way, the query returns only data committed before it started: it never reads dirty blocks produced by uncommitted transactions, nor blocks committed after the query began executing.

Principles of database recovery: the basic unit of database recovery is the transaction. The recovery mechanism consists of a recovery subsystem and specific data structures, and its basic principle is to store data repeatedly, that is, data redundancy.

A data dump is the process of exporting or copying part or all of the database to a specified volume or medium, either manually or with a utility; the files produced are called backups or copies. By the state of the database, dumps are divided into:
- Static dump: performed while no transaction is running, i.e. while the database is in a consistent state, so the resulting copy is guaranteed to be consistent. However, availability is reduced, because no transaction can run during the dump.
- Dynamic dump: performed without waiting for transactions to stop. The resulting copy alone is not guaranteed to be consistent; only the copy together with the transaction log files written after the dump can restore the data to a consistent state at some point in time.

By scope, dumps are divided into:
- Massive (full) dump: the whole database is dumped each time.
- Incremental dump: only the data modified since the last dump is dumped each time.
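The SCN comparison described above can be modeled as a toy version chain in plain Python (a conceptual sketch, not Oracle internals; the data structures and names are invented). Each block keeps its versions newest-first, with the older versions playing the role of the undo segment's before-images; a query with a given SCN walks back until it finds a version no newer than its own SCN:

```python
# Toy model of SCN-based read consistency: a "block" is a list of
# (scn, value) versions, newest first.
def read_consistent(block_versions, query_scn):
    for scn, value in block_versions:     # scan newest -> oldest
        if scn <= query_scn:              # committed at or before our SCN
            return value
    raise LookupError("snapshot too old") # no sufficiently old version survives

block = [(90, "v3"), (50, "v2"), (10, "v1")]

print(read_consistent(block, 100))  # 'v3'
print(read_consistent(block, 60))   # 'v2' (the SCN-90 version is too new)
print(read_consistent(block, 15))   # 'v1'
```

The "snapshot too old" case mirrors what happens when the undo information needed for an old snapshot has already been overwritten.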
A log file is a file, separate from the data files, that records transactions' modification operations on the database: the data files record the results of operations, while the log files record the transactions and their operations. A log record generally contains:
1. The transaction identifier (a start marker, an end marker, or an automatically generated unique ID);
2. The operation type (insert, delete, or update);
3. The operation object;
4. The data (block) before modification (blank for an insert);
5. The data (block) after modification (blank for a delete).

Fault types and recovery strategies: faults are classified into transaction faults, system faults, and media faults. The basic strategies for recovering a database are: dump the database regularly, register log files routinely, adopt different methods for different fault types, and consider the impact of data loss on the business.

Transaction faults and recovery: a transaction fault is one in which a transaction is terminated before its operations complete normally. Recovery is performed automatically by the Oracle system: the log file is scanned in reverse to find the modification operations of the faulty transaction, and an inverse operation is applied to each of them until the transaction's start marker is reached.

System faults and recovery: a system fault is any event that causes the DBMS to stop running, so that the DBMS must be restarted. Recovery is performed automatically by the Oracle system. The log file is scanned to find the transactions committed before the fault, which are placed in a redo queue, and the transactions not yet completed when the fault occurred, which are placed in an undo queue. The transactions in the redo queue are then redone (the log file is scanned forward, the registered operation of each transaction to be redone is re-executed, and the result is written to the data files). Finally, the transactions in the undo queue are undone (the log file is scanned in reverse and each transaction's modification operations are inverted).

Media faults and recovery: a media fault is a hardware fault (disk damage, circuit fault, etc.). Recovery method: first reinstall the database, then redo all completed transactions. That is:
1. Load the dumped copy closest to the time of the fault. In the case of a dynamic dump, also load the log file copy from the start of the dump and, using the system-fault recovery procedure, restore the database to the consistent state it had at the start of the dump.
2. Load the log file copy closest to the time of the fault and, using the system-fault recovery procedure, restore the database to the consistent state it had when the log file was dumped.
3. Re-run all transactions that modified data between the most recent log file dump and the time of the fault. The database is thereby restored to the consistent state it had when the fault occurred.

Database recovery with checkpoints: checkpoint-based recovery adds a new type of record (the checkpoint record) to the log file and adds a new restart file, with the DBMS maintaining the log file dynamically. The main tasks when a checkpoint occurs are:
1. Temporarily suspend the execution of current transactions;
2. Write the logs in the current log buffer in memory to the log file;
3. Write the data in the current data buffer in memory to the data files;
4. Write a checkpoint record to the log file (its content: the list of all transactions in progress when the checkpoint event occurred, and the address of the last log record of each of those transactions), and write the address of this checkpoint record to the restart file;
5. Resume the execution of the suspended transactions.

Recovery steps with checkpoints:
1. From the restart file, find the address of the last checkpoint record in the log file, then locate that checkpoint record;
2. The checkpoint record gives the list of all transactions in progress at checkpoint time;
3. Scan the log file forward from the checkpoint: any new transaction (one still uncommitted at the checkpoint, or one started after it) is placed in the undo queue; if such a transaction committed before the fault, it is moved from the undo queue into the redo queue;
4. From the checkpoint record, obtain the address of each transaction's latest log record and read the log record content;
5. According to the log record content, perform UNDO on the transactions in the undo queue and REDO on the transactions in the redo queue.

Overview of Oracle database backup and recovery. Physical backups copy the files to be backed up, such as data files, redo log files, initialization parameter files, and control files, from one location to another. Logical backups read data from the database with tools or commands such as Data Pump export (EXPDP) and write it to a file. Database recovery comprises database restore and database recovery proper: restore replaces damaged files with backup files; recovery uses redo log files together with the data files to re-create the lost data. Apart from media faults, whose repair and recovery require DBA involvement, Oracle handles recovery automatically.
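The redo/undo queue construction in the checkpoint recovery steps above can be sketched in Python over a toy log (a conceptual illustration; the log format and transaction names are invented):

```python
# Toy checkpoint recovery: build redo and undo queues from a log.
# Entries are (txn_id, event); the checkpoint entry carries the list of
# transactions in progress at checkpoint time.
log = [
    ("T1", "begin"), ("T2", "begin"),
    ("CHECKPOINT", ["T1", "T2"]),   # T1, T2 in progress at the checkpoint
    ("T1", "commit"),
    ("T3", "begin"), ("T3", "commit"),
    ("T4", "begin"),
]                                   # the fault occurs here

def build_queues(log):
    cp_index = next(i for i, e in enumerate(log) if e[0] == "CHECKPOINT")
    undo = set(log[cp_index][1])    # active at checkpoint time: undo for now
    redo = set()
    for txn, event in log[cp_index + 1:]:   # forward scan from the checkpoint
        if event == "begin":
            undo.add(txn)           # new transaction: assume it needs undo
        elif event == "commit":
            undo.discard(txn)       # committed before the fault:
            redo.add(txn)           # move it to the redo queue
    return redo, undo

redo, undo = build_queues(log)
print(sorted(redo))  # ['T1', 'T3']
print(sorted(undo))  # ['T2', 'T4']
```

Recovery would then REDO T1 and T3 with a forward scan and UNDO T2 and T4 with a reverse scan, as described above.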
When a transaction fault or a system fault occurs, Oracle automatically performs crash recovery and instance recovery.
