Analysis on basic maintenance of redo log files



The most important purpose of redo log files is recovering data (they can also be mined with LogMiner). Changes made to blocks in the database buffer cache of the System Global Area (SGA) are all first recorded in the log buffer. There are exceptions, however: direct-write operations such as SQL*Loader direct-path loads may not be recorded. When the instance crashes, the redo log is therefore used to recover data that had not yet been written to the data files.
Each instance has exactly one log writer process, LGWR. It is recommended to have at least two redo log groups, each with at least two member files. Members of the same group hold identical copies of the content and are therefore the same size; ideally, all members across all groups are the same size as well. To keep LGWR waits from hurting database performance, a useful suggestion is to place each group's member files on separate physical disks.
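As a sketch of this recommendation (the /disk1 and /disk2 mount points, group numbers, and the 50M size are hypothetical, not from the original text), two multiplexed groups spread over two physical disks could be created like this:

```sql
-- Hypothetical layout: two groups, two members each,
-- with each group's members on different physical disks
-- (/disk1 and /disk2 are assumed mount points).
alter database add logfile group 5
  ('/disk1/oradata/redo05a.log', '/disk2/oradata/redo05b.log') size 50M;
alter database add logfile group 6
  ('/disk1/oradata/redo06a.log', '/disk2/oradata/redo06b.log') size 50M;
```

With this layout, losing one disk still leaves a complete copy of every group on the surviving disk.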
Talking about redo log files also means talking about the redo log buffer and the important LGWR log writer process, which writes the contents of the log buffer to the redo log files. LGWR is triggered under the following conditions:
◆ when a COMMIT command is issued
◆ when the log buffer is one-third full, or holds 1 MB of redo records
◆ when a 3-second timeout expires
◆ before DBWn writes dirty buffers to the data files
◆ when the log file is switched
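As a rough way to observe the effect of these triggers (a sketch; the statistic names below are standard v$sysstat statistics, but the values are instance-specific), the redo statistics can be sampled before and after a commit-heavy workload:

```sql
-- Sketch: LGWR-related statistics in v$sysstat.
-- 'redo writes'       = number of LGWR writes to the online logs
-- 'redo synch writes' = writes forced by commits
-- 'redo size'         = total redo generated, in bytes
select name, value
  from v$sysstat
 where name in ('redo writes', 'redo synch writes', 'redo size');
```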
0. View the log files

SELECT group#, sequence#, bytes, members, status FROM v$log;
-- group number, sequence number, size, number of members, status

Status values in v$log: UNUSED (never used); CURRENT (currently in use); ACTIVE (still needed for recovery); INACTIVE (no longer needed for recovery); CLEARING_CURRENT (the current log of a closed thread is being cleared); CLEARING (similarly, after ALTER DATABASE CLEAR LOGFILE is executed the log is emptied and then becomes UNUSED).
SELECT * FROM v$logfile;
-- status: INVALID (the file is inaccessible); STALE (the file contents are incomplete, e.g. a member that was just added); DELETED (the file is no longer in use); blank (in use)

SELECT groups, current_group#, sequence# FROM v$thread;
-- number of groups, current group, sequence number
1. Check whether the database archives its logs. A database runs in either ARCHIVELOG or NOARCHIVELOG mode.

2. View the archive mode:

SQL> archive log list;
select log_mode from v$database;
select archiver from v$instance;

3. Switch the archive mode. Oracle 10g is used for the demonstration.

SQL*Plus: Release 10.2.0.5.0 - Production on Tue May 8 15:49:39 2012
Copyright (c) 1982, 2010, Oracle.  All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> archive log list;
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            /u01/oracle/10g/arch
Oldest online log sequence     158
Current log sequence           160

-- Not archiving at the moment; to switch to archive mode, first set the archive destination
SQL> alter system set log_archive_dest_1='location=/u01/oracle/10g/arch' scope=spfile;
System altered.

-- Shut down the database
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

-- Start the instance to the mount state
SQL> startup mount;
ORACLE instance started.
Total System Global Area 1610612736 bytes
Fixed Size                  2096632 bytes
Variable Size             469762568 bytes
Database Buffers         1124073472 bytes
Redo Buffers               14680064 bytes
Database mounted.

-- Enable archive mode
SQL> alter database archivelog;
Database altered.

-- Open the database
SQL> alter database open;
Database altered.

-- Check the archive mode
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/oracle/10g/arch
Oldest online log sequence     158
Next log sequence to archive   160
Current log sequence           160

-- Manually switch the log file
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/oracle/10g/arch
Oldest online log sequence     159
Next log sequence to archive   161
Current log sequence           161
SQL> alter system switch logfile;
System altered.
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/oracle/10g/arch
Oldest online log sequence     160
Next log sequence to archive   162
Current log sequence           162
SQL> alter system archive log current;
System altered.
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/oracle/10g/arch
Oldest online log sequence     161
Next log sequence to archive   163
Current log sequence           163

-- Switch back to noarchivelog mode
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.
Total System Global Area 1610612736 bytes
Fixed Size                  2096632 bytes
Variable Size             469762568 bytes
Database Buffers         1124073472 bytes
Redo Buffers               14680064 bytes
Database mounted.
SQL> alter database noarchivelog;
Database altered.
SQL> alter database open;
Database altered.
SQL> archive log list;
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            /u01/oracle/10g/arch
Oldest online log sequence     158
Current log sequence           160
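Once the database is running in archive mode, the archives produced by log switches can be cross-checked (a quick sketch; v$archived_log is a standard dynamic view, though its contents depend on the instance's history) like this:

```sql
-- Sketch: list the archived logs generated so far,
-- to confirm that log switches really produce archives.
select sequence#, name, completion_time
  from v$archived_log
 order by sequence#;
```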
4. Log file maintenance

4.1 Adding a log group

sys@GT9I> select group# from v$log;

    GROUP#
----------
         1
         2
         3

sys@GT9I> select group#, member from v$logfile order by 1;

    GROUP# MEMBER
---------- ----------------------------------------------
         1 /u01/oracle/9i/oradata/gt9i/redo01.log
         2 /u01/oracle/9i/oradata/gt9i/redo02.log
         3 /u01/oracle/9i/oradata/gt9i/redo03.log

Now add a group with two members:

sys@GT9I> alter database add logfile group 4 ('/u01/oracle/9i/oradata/gt9i/redo04a.log', '/u01/oracle/9i/oradata/gt9i/redo04b.log') size 20M;
Database altered.

sys@GT9I> select group# from v$log;

    GROUP#
----------
         1
         2
         3
         4

sys@GT9I> select group#, member from v$logfile order by 1;

    GROUP# MEMBER
---------- ----------------------------------------------
         1 /u01/oracle/9i/oradata/gt9i/redo01.log
         2 /u01/oracle/9i/oradata/gt9i/redo02.log
         3 /u01/oracle/9i/oradata/gt9i/redo03.log
         4 /u01/oracle/9i/oradata/gt9i/redo04a.log
         4 /u01/oracle/9i/oradata/gt9i/redo04b.log

4.2 Adding a log member

sys@GT9I> alter database add logfile member '/u01/oracle/9i/oradata/gt9i/redo04c.log' to group 4;
Database altered.

sys@GT9I> select group#, member from v$logfile order by 1;

    GROUP# MEMBER
---------- ----------------------------------------------
         1 /u01/oracle/9i/oradata/gt9i/redo01.log
         2 /u01/oracle/9i/oradata/gt9i/redo02.log
         3 /u01/oracle/9i/oradata/gt9i/redo03.log
         4 /u01/oracle/9i/oradata/gt9i/redo04a.log
         4 /u01/oracle/9i/oradata/gt9i/redo04b.log
         4 /u01/oracle/9i/oradata/gt9i/redo04c.log

Note that no size attribute is used when adding a member, because members of the same group must be the same size.

4.3 Dropping a log member

sys@GT9I> alter database drop logfile member '/u01/oracle/9i/oradata/gt9i/redo04c.log';
Database altered.

sys@GT9I> select group#, member from v$logfile order by 1;

    GROUP# MEMBER
---------- ----------------------------------------------
         1 /u01/oracle/9i/oradata/gt9i/redo01.log
         2 /u01/oracle/9i/oradata/gt9i/redo02.log
         3 /u01/oracle/9i/oradata/gt9i/redo03.log
         4 /u01/oracle/9i/oradata/gt9i/redo04a.log
         4 /u01/oracle/9i/oradata/gt9i/redo04b.log

4.4 Dropping a log group

sys@GT9I> alter database drop logfile group 4;
Database altered.

sys@GT9I> select group# from v$log;

    GROUP#
----------
         1
         2
         3

sys@GT9I> select group#, member from v$logfile order by 1;

    GROUP# MEMBER
---------- ----------------------------------------------
         1 /u01/oracle/9i/oradata/gt9i/redo01.log
         2 /u01/oracle/9i/oradata/gt9i/redo02.log
         3 /u01/oracle/9i/oradata/gt9i/redo03.log

5. Log mining (LogMiner)

First, create a dictionary file:

sys@GT9I> alter system set utl_file_dir='/u01/oracle/9i/oradata/gt9i/dict' scope=spfile;
System altered.

sys@GT9I> startup force;
ORACLE instance started.
Total System Global Area  286752136 bytes
Fixed Size                   740744 bytes
Variable Size             150994944 bytes
Database Buffers          134217728 bytes
Redo Buffers                 798720 bytes
Database mounted.
Database opened.

sys@GT9I> BEGIN
  DBMS_LOGMNR_D.build (
    dictionary_filename => 'mydict1.ora',
    dictionary_location => '/u01/oracle/9i/oradata/gt9i/dict');
END;
/
PL/SQL procedure successfully completed.

For the test, another user created tables, inserted into and deleted from them, created a procedure, and finally dropped the tables and the procedure. Next, add a log file to mine. View the current log files:

sys@GT9I> select status, member from v$logfile;

STATUS  MEMBER
------- ----------------------------------------------
        /u01/oracle/9i/oradata/gt9i/redo01.log
        /u01/oracle/9i/oradata/gt9i/redo02.log
        /u01/oracle/9i/oradata/gt9i/redo03.log

sys@GT9I> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM
---------- ---------- ---------- ---------- ---------- --- -------- ------------- ---------
         1          1         12  104857600          1 NO  INACTIVE       3389635 16-MAY-12
         2          1         10  104857600          1 NO  INACTIVE       3099675 14-MAY-12
         3          1         13  104857600          1 NO  CURRENT        3414320 16-MAY-12

The group with GROUP# = 3 is the current group.
Register the log file to mine:

BEGIN
  DBMS_LOGMNR.add_logfile (options => DBMS_LOGMNR.new, logfilename => '/u01/oracle/9i/oradata/gt9i/redo03.log');
  /*
  DBMS_LOGMNR.add_logfile (options => DBMS_LOGMNR.addfile, logfilename => '/u01/oracle/9i/oradata/gt9i/redo01.log');
  DBMS_LOGMNR.add_logfile (options => DBMS_LOGMNR.addfile, logfilename => '/u01/oracle/9i/oradata/gt9i/redo02.log');
  */
END;
/

Before starting to mine, enable the supplemental logging parameter first; otherwise DDL statements cannot be mined:

alter database add supplemental log data;

Start mining:

BEGIN
  DBMS_LOGMNR.start_logmnr (
    dictfilename => '/u01/oracle/9i/oradata/gt9i/dict/mydict1.ora',
    starttime    => TO_DATE('20120516 16:20:00', 'yyyymmdd hh24:mi:ss'),
    endtime      => TO_DATE('20120516 16:28:00', 'yyyymmdd hh24:mi:ss'));
END;
/

Then the mined contents can be queried. Note that the query must run in the same session:

select scn, operation, sql_redo, sql_undo from v$logmnr_contents;

Finally, stop mining:

BEGIN
  DBMS_LOGMNR.end_logmnr;
END;
/

The tests above passed on 9i, although DDL statements could not be mined in 9i; on 10g, by contrast, DDL as well as DML statements such as INSERT and DELETE are all visible. However, the statement that starts mining differs between the two releases: the 9i start-mining statement raises a "log file is missing" error on 10g. On 10g, mining can be started with the following statement:

begin
  dbms_logmnr.start_logmnr (dictfilename => '/u01/oracle/9i/oradata/gt9i/dict/mydict2.ora');
end;
/

-The End-

gtlions
