How to Use REDO in Oracle for Fault Recovery

Source: Internet
Author: User

In Oracle, the Redo function is implemented mainly through three components: the Redo Log Buffer, the LGWR background process, and the Redo Log File (in archive mode, the contents of a Redo Log File are eventually written out to an archived log file).

There is a shared memory area in the Oracle SGA called the Redo Log Buffer, as shown in Figure 1.

(Figure 1: Oracle Instance)

The Redo Log Buffer resides in the SGA and is a memory area that is reused cyclically. It stores information about database changes in the form of redo entries (also called redo records). Redo entries contain the information needed to reconstruct, or redo, changes made by INSERT, UPDATE, DELETE, CREATE, ALTER, or DROP operations. When necessary, redo entries are used for database recovery.

Redo entries are copied by the Oracle server process from the user's memory space into the Redo Log Buffer in the SGA, where they occupy contiguous, sequential space. Because the Redo Log Buffer is reused cyclically, Oracle continuously writes its contents out to the Redo Log File through the LGWR background process.
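
The size of the Redo Log Buffer can be inspected from SQL*Plus. The following is a minimal sketch using standard parameter and view names; the values returned will of course differ on your instance.

SHOW PARAMETER log_buffer;

-- Memory actually allocated for redo buffers in the SGA
SELECT name, bytes FROM v$sgainfo WHERE name = 'Redo Buffers';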

When you modify data in the Buffer Cache, Oracle does not immediately write the modified blocks to the data files, because doing so would be very inefficient: disk I/O is still the slowest, busiest part of a computer system. Instead, Oracle reduces the number of I/O operations by accumulating a certain amount of modified data and then writing it back efficiently in batches.

Most traditional databases, including Oracle, follow a no-force-at-commit policy when processing data modifications; that is, committing a transaction does not force the modified data blocks to be written to the data files. To ensure that data can still be recovered after a failure such as a power outage, Oracle introduces the Redo mechanism: random, scattered block changes are protected by sequentially written log entries, and the actual block writes are deferred, which improves performance by allowing them to be done in batches.
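
The effect of this policy can be observed through system statistics. The sketch below uses only the standard v$sysstat view; the statistic names are standard, but the values depend on your workload.

-- After a COMMIT, LGWR flushes the redo immediately, while DBWR may defer the
-- corresponding data block writes: 'redo size' and 'redo writes' grow right away,
-- while 'physical writes' catches up later.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('redo size', 'redo writes', 'physical writes');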

Like the Redo Log Buffer, Redo Log Files are used cyclically. Oracle requires at least two log groups; by default, three log groups are created when the database is created.

SQL> select group#, members, status from v$log;

    GROUP#    MEMBERS STATUS
---------- ---------- ----------------
         1          1 INACTIVE
         2          1 CURRENT
         3          1 INACTIVE

When a log file is full, Oracle switches to another log file; this is called a log switch. A log switch triggers a checkpoint, which prompts the DBWR process to write the changed data protected by the full log file back to the data files. A log file cannot be reused until that checkpoint has completed.
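
A log switch and a checkpoint can also be requested manually, which is a convenient way to observe the behavior described above. Both statements are standard, although forcing them on a busy production system is rarely necessary.

ALTER SYSTEM SWITCH LOGFILE;   -- forces a switch to the next log group
ALTER SYSTEM CHECKPOINT;       -- forces a full checkpoint immediately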

Because the Redo mechanism protects every change, Oracle can replay the redo to recover data when the database fails. A very important question, then, is: where should recovery start?

If too much redo has to be read, recovery takes too long, and in a production environment recovery time must be kept as short as possible. Oracle uses checkpoints to reduce recovery time. Recall what was said in Chapter 1: a checkpoint is simply a database event, and its fundamental purpose is to reduce recovery time.
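
In recent Oracle versions the expected crash-recovery time is driven mainly by the FAST_START_MTTR_TARGET parameter, which controls incremental checkpointing. A minimal sketch follows; the 60-second target is only an illustrative value.

SHOW PARAMETER fast_start_mttr_target;

-- Ask Oracle to keep the estimated instance-recovery time around 60 seconds
ALTER SYSTEM SET fast_start_mttr_target = 60;

-- Estimated recovery effort based on the current checkpoint position
SELECT recovery_estimated_ios, target_mttr, estimated_mttr
  FROM v$instance_recovery;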

When a checkpoint occurs, the SCN at that moment is called the checkpoint SCN. Oracle notifies the DBWR process to write the data modified before the checkpoint SCN, that is, the dirty buffers, from the Buffer Cache to disk. After the checkpoint completes, the CKPT process updates the control file and the data file headers accordingly, recording the checkpoint information and marking the change.
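
The checkpoint SCN recorded in the control file and in each data file header can be queried directly. The views and columns below are standard; the numbers returned depend on your instance.

-- Checkpoint SCN recorded in the control file
SELECT checkpoint_change# FROM v$database;

-- Checkpoint SCN recorded in each data file header
SELECT file#, checkpoint_change# FROM v$datafile_header;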

After a checkpoint completes, all data modified before the checkpoint has been written back to disk, and the corresponding redo records in the redo log files are no longer needed for crash/instance recovery. If the database crashes afterwards, recovery only has to start from the last completed checkpoint. If the database runs in archive mode (recommended for all production databases), a log file must be written to an archived log file before it can be reused, and the archived logs can then be used for media recovery.
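
Whether the database is running in archive mode can be checked as follows (both forms are standard; ARCHIVE LOG LIST is a SQL*Plus command and requires the appropriate privilege):

SELECT log_mode FROM v$database;   -- ARCHIVELOG or NOARCHIVELOG
ARCHIVE LOG LIST                    -- SQL*Plus summary of archiving status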

I. UNDO Overview

II. Undo Segments

An undo segment stores the old data for changes made by a process, including the location of the data and its value before modification. The header of an undo segment contains a transaction table that records information about the transactions currently using that segment. A single transaction stores its undo data in only one undo segment, but multiple concurrent transactions can write into the same undo segment at the same time.
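
The undo configuration of the instance and the undo segments currently defined can be inspected with standard parameters and views; a minimal sketch:

SHOW PARAMETER undo;   -- undo_management, undo_tablespace, undo_retention

-- Undo segments and their status
SELECT segment_name, tablespace_name, status
  FROM dba_rollback_segs;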

III. Role of Undo Segments

1. Transaction Rollback

When a transaction modifies a row in a table, the old image of the data is stored in the undo segment. If the transaction is rolled back, the Oracle server uses the undo segment to restore the row to its original values.
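
This behavior can be illustrated with a simple transaction. The table emp and its columns here are purely hypothetical and exist only for illustration.

-- Hypothetical table, used only to illustrate rollback
UPDATE emp SET salary = salary * 1.1 WHERE deptno = 10;
-- The pre-update column values are written to an undo segment.

ROLLBACK;
-- Oracle applies those undo records, and the rows return to their old values.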

2. Transaction Recovery

When an instance fails while some transactions are still incomplete, the Oracle server must undo all uncommitted modifications when the database is opened again. This rollback is part of transaction recovery. Changes to undo segments are themselves protected by the redo log files, which is what makes this recovery possible.

3. Read Consistency

While a transaction is in progress, other users cannot see any of its uncommitted modifications. In addition, changes made while a query is running should not appear in that query's results. The old data kept in the undo segments is what provides this consistent image for users' queries.

IV. Read Consistency

1. Even if other transactions are modifying data, the Oracle server ensures that the data returned by a query is consistent. When the Oracle server starts executing a SELECT query, it records the current system change number (SCN) and guarantees that any modification made after this SCN does not affect the query result. For example, suppose a long-running query overlaps with several modifications. If a row's modification was not committed before the query began, the Oracle server retrieves the unmodified prior image from the undo segment and builds a "read-consistent" image of the row.
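
The SCN a query is anchored to can also be made visible explicitly. The sketch below uses v$database.current_scn and a flashback query, both available in Oracle 10g and later; the table emp and the SCN value are hypothetical.

-- Current SCN of the database (10g and later)
SELECT current_scn FROM v$database;

-- Read the table as of an explicitly chosen SCN, i.e. a chosen consistent
-- snapshot (replace 1234567 with a real SCN)
SELECT * FROM emp AS OF SCN 1234567;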

2. Transaction-Level Read Consistency

Read consistency normally applies to individual SQL query statements. However, you can also make an entire transaction read-consistent by issuing the following command at the start of the transaction:

SET TRANSACTION READ ONLY;

Alternatively, when the transaction contains DML statements, you can use the following command before the transaction begins:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

Both methods ensure read consistency for the entire transaction, but using SERIALIZABLE affects performance.
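
A short usage sketch of a read-only transaction (the table emp is again hypothetical):

SET TRANSACTION READ ONLY;
SELECT COUNT(*) FROM emp;    -- every query in this transaction sees the same snapshot
SELECT SUM(salary) FROM emp; -- consistent with the COUNT above, even if others commit changes
COMMIT;                      -- ends the read-only transaction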

