The Oracle Log Buffer in Detail


While a database is running, it is impossible to avoid every situation that can corrupt it. A sudden power outage, an Oracle or operating-system bug that damages the database's internal logical structure, or failing disk media can all crash the database and lead to data loss.

To prevent, or repair, the data loss these situations cause, Oracle introduces the concepts of the log buffer and log files. A log is a literal record of every change made to the database's data blocks. These changes include not only DML against tables and DDL that alters the data dictionary, but also changes to index blocks, changes to rollback (undo) blocks, and so on. Only by recording every change to the database can you recover it after corruption, by reapplying those changes.

Since changes must be recorded, the next question is how to record them. Two approaches come to mind readily.

The first is logical recording: descriptive statements record the entire change. For example, an UPDATE can be recorded as a pair of statements: delete the old value and insert the new value. The advantage of this approach is that it saves a great deal of space, because each operation needs only a few logical statements. The drawback is just as obvious: recovery is very expensive. Imagine an UPDATE that modifies many data blocks. Because buffer cache memory is limited, many of the dirty blocks have already been written to the data files when, just before the UPDATE completes, a sudden power outage loses the remaining in-memory changes. When the instance restarts, Oracle must apply the records in the log file by reissuing the delete-old-value and insert-new-value statements. That means searching the data files for the blocks that match the criteria and reading them back into memory to update them, which is time-consuming and consumes a great deal of buffer cache.

The second is physical recording: for each changed data block, record its before image and its after image. The advantage is that recovery is very fast: Oracle can update the corresponding blocks in the data files directly from the block addresses and contents stored in the log file. The drawback is just as obvious: it consumes a great deal of disk space.

Oracle logs changes using a combination of the logical and physical methods. For every affected data block, Oracle records a descriptive statement that deletes the old value and inserts the new value. If an UPDATE modifies 100 data blocks, Oracle records a delete-old-value/insert-new-value pair for each block, 100 such pairs in total, and each pair records the physical address of the data block it applies to. With this combination of logic and physics, Oracle saves as much space as possible while recording changes, yet can still apply them quickly during recovery.
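You can observe this logical side of the redo stream yourself with the LogMiner package, DBMS_LOGMNR. The session below is only a sketch: the redo log path and the segment name are placeholders for your own environment, and it assumes the instance can use its online catalog as the LogMiner dictionary.

SQL> -- Register one redo log file with LogMiner (the path is a placeholder).
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/oradata/orcl/redo01.log', OPTIONS => DBMS_LOGMNR.NEW);
SQL> -- Start LogMiner, resolving object names through the online data dictionary.
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SQL> -- SQL_REDO re-applies each change; SQL_UNDO reverses it.
SQL> SELECT sql_redo, sql_undo FROM v$logmnr_contents WHERE seg_name = 'EMP';
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;

For an UPDATE, SQL_REDO holds the statement that re-applies the new value and SQL_UNDO the statement that restores the old one, which is exactly the delete-old/insert-new pairing described above.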

To stage the resulting log information temporarily, Oracle allocates a memory area in the SGA. This area is called the log buffer; when certain conditions are met, a background process named LGWR writes the log information in the log buffer to the online log files.
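You can see this memory area directly in the SGA (v$sgastat is a standard dynamic view; the byte count will differ on your instance):

SQL> SELECT pool, name, bytes FROM v$sgastat WHERE name = 'log_buffer';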

You can set the size of the log buffer, in bytes, with the initialization parameter LOG_BUFFER. The log buffer is further subdivided into blocks, each the same size as an operating-system block, typically 512 bytes. We can get the log block size in the following way.

SQL> SELECT DISTINCT lebsz AS redo_block_size FROM x$kccle;

REDO_BLOCK_SIZE
---------------
            512

You can also calculate the log block size from the redo statistics in v$sysstat, in the following way.

SQL> SELECT ROUND((a.redosize + b.redowast) / c.redoblks) AS redo_block_size
  2  FROM (SELECT value redosize FROM v$sysstat WHERE name = 'redo size') a,
  3       (SELECT value redowast FROM v$sysstat WHERE name = 'redo wastage') b,
  4       (SELECT value redoblks FROM v$sysstat WHERE name = 'redo blocks written') c;

REDO_BLOCK_SIZE
---------------
            512


The log buffer is only a temporary staging area for log information; the area is limited, and each of its blocks is reused cyclically. This means that the contents of the log buffer must be written to a file on disk for permanent retention, so that recovery is possible when the database crashes. That file is called the online log file. Before any log block in the log buffer can be reused, its contents must already have been written to an online log file on disk.

The online log file is a faithful copy of the log buffer: the contents of every log block in the log file come from a log block in the log buffer, and each log block in the buffer corresponds to a log block in the file. Log blocks are placed in the online log file in the order in which they were generated.

Because log files are so important for recovery, it is recommended that at least two log files form one log file group. The log files in the same group have identical contents, because each log block in the log buffer is written to every log file in the group. Each database must also have at least two log file groups. The reason is that as long as the database keeps running, it keeps generating log information and writing it to the online log files, so sooner or later a log file fills up. We cannot let an online log file grow without bound, nor can we keep an unlimited number of them, so the online log files must be reused, written in rotation across the several groups. Switching from one full log file to the next is called a log switch. A query for inspecting the groups follows.
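The standard views v$log and v$logfile show the groups, their members, and their status; the ALTER DATABASE statement at the end is only an illustration, with placeholder paths and size.

SQL> -- One row per group: member count, size, and whether it is CURRENT/ACTIVE/INACTIVE.
SQL> SELECT group#, members, bytes/1024/1024 AS size_mb, status FROM v$log;
SQL> -- One row per member file.
SQL> SELECT group#, member FROM v$logfile ORDER BY group#;
SQL> -- Adding a third group with two members (paths and size are placeholders).
SQL> ALTER DATABASE ADD LOGFILE GROUP 3
  2  ('/u01/oradata/orcl/redo03a.log', '/u02/oradata/orcl/redo03b.log') SIZE 50M;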

When an online log file is full, you can choose to archive it as an offline log file, usually called an archived log file. Archiving is copying: the archiving process copies a fully written online log file to a predefined directory. Only after an online log file has been archived can it be reused. Running in archive mode is strongly recommended for a production database; in a test environment you may choose not to archive.
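Checking and enabling archiving looks like this (these are standard commands; switching to ARCHIVELOG mode requires a clean shutdown and a restart in MOUNT state):

SQL> -- Report the current mode: NOARCHIVELOG or ARCHIVELOG.
SQL> SELECT log_mode FROM v$database;
SQL> ARCHIVE LOG LIST
SQL> -- Enable archiving.
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;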

It can be said that the log buffer and log files exist for one purpose only: to ensure that modified data is not lost, or, put the other way around, to ensure that after a crash the database can be recovered to the moment of the crash. A modified data block can be considered safe only after the log information describing the modification has been written to an online log file. If the database crashes while that log information is still unwritten, the modification is lost. Writing log information from the log buffer to the log files is therefore a critical task, and it is carried out by a background process named LGWR. LGWR shoulders the task of maintaining the integrity of system data, ensuring that data is not lost under any circumstances.

The conditions that trigger the LGWR process to write log information from the log buffer to the online log files include the following; a query for watching these writes in action follows the list.

A foreground process triggers it, in two different scenarios. The most obvious is when a user issues a COMMIT or ROLLBACK statement: this triggers LGWR to write the log information in memory to the online log file, because committed data must be protected and must not be lost. The other is when not enough free space can be found in the log buffer for new log information: this also triggers LGWR to write part of the buffer out to the online log file, freeing some space.

LGWR also starts once every three seconds.

When DBWn starts, if it finds dirty blocks whose corresponding redo entries have not yet been written to the online log file, it triggers the LGWR process and waits for LGWR to finish writing before continuing. This is the write-ahead rule: the redo describing a change must reach disk before the changed block itself does.

When the amount of log information reaches one third of the entire log buffer, LGWR is triggered.

When the amount of log information reaches 1 MB, LGWR is triggered.
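You can watch these triggers at work through the redo statistics in v$sysstat; the statistic names below are standard, and the values are cumulative since instance startup.

SQL> SELECT name, value FROM v$sysstat
  2  WHERE name IN ('redo size', 'redo wastage', 'redo writes',
  3                 'redo blocks written', 'redo synch writes');

'redo writes' counts LGWR write batches, while 'redo synch writes' counts the writes forced by COMMIT; comparing the two after a busy period gives a rough picture of which trigger dominates on your system.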
