A Deep Understanding of Oracle LogMiner


LogMiner is a practical, useful analysis tool that Oracle has shipped since 8i. It makes it easy to extract specific content from Oracle redo log files (including archived logs). LogMiner itself consists of a set of PL/SQL packages plus a number of dynamic views; with it you can analyze online and archived logs and recover the detailed, specific operations performed against a database in the past, which is extremely useful.

Why use LogMiner?

Mainly for the following reasons:

1. When the database has suffered a mistaken operation and incomplete recovery is needed, you must pin down the exact time, or the SCN, of the bad operation, and that is where LogMiner comes in.
2. With traditional recovery of a small table inside a hundreds-of-GB or TB-scale tablespace, the standard procedure is to restore the entire tablespace to an earlier state and then apply the archived logs; counting the time needed to build the recovery environment, the whole process takes very long. LogMiner enables a different recovery approach: parse all the DML operations on the table out of the logs, then apply the reverse (undo) SQL to repair the damage (see the sketch after this list). Used sensibly, it can dramatically cut the complexity and duration of a recovery.
3. LogMiner can be regarded as the most powerful database auditing tool. When the database was very busy over some period, or generated an unusually large volume of archived logs, LogMiner can tell you exactly which operations ran and which tables accounted for the largest share, i.e. where the main load came from.
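As a hedged sketch of the reverse-repair idea above (the log file path, schema QIUYB, and table T1 below are purely illustrative): once LogMiner has parsed the relevant logs, the SQL_UNDO column already holds the compensating statements, so repairing a bad DML batch reduces to reviewing and replaying them.

  -- A minimal reverse-repair sketch; file names and object names are hypothetical.
  EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
    LOGFILENAME => '/arch/your_log.arc', -
    OPTIONS => DBMS_LOGMNR.NEW);
  EXECUTE DBMS_LOGMNR.START_LOGMNR( -
    OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY);
  -- Pull the compensating SQL for the damaged table, then review and run it:
  SELECT scn, timestamp, sql_undo
    FROM v$logmnr_contents
   WHERE seg_owner = 'QIUYB'
     AND seg_name = 'T1'
     AND operation IN ('INSERT', 'UPDATE', 'DELETE');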

Several concepts

Source database: the database that generates the archived logs and online logs.

Mining (target) database: the database on which LogMiner is run to analyze the logs.

Dictionary: also called the catalog; it translates the internal object numbers in the log into actual table names, column names, and other meaningful information.

The source and mining databases can be the same or different. If they are different, the mining database's version must be equal to or higher than the source's, the character set must be the same, and the operating system and hardware platform must be the same.

Adjustments required on the source database

The source database must run in archive mode, and supplemental logging must be enabled:

  ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Check the result:

  SQL> SELECT supplemental_log_data_min FROM v$database;

  SUPPLEMENTAL_LOG_DATA_MIN
  --------------------------------
  YES

Strictly speaking, supplemental logging is not mandatory, but without it much useful information, such as SESSION_INFO, cannot be parsed and is displayed as "UNKNOWN".
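If you intend to replay the mined SQL on another database, identification-key logging is also worth considering; a sketch using standard Oracle syntax (this extra step is not in the original article):

  -- Optionally also log primary-key columns, so mined UPDATE/DELETE statements
  -- identify rows by key rather than only by ROWID:
  ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
  -- Verify both settings:
  SELECT supplemental_log_data_min, supplemental_log_data_pk FROM v$database;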


LogMiner dictionary modes

1. Use the online catalog

This is the approach Oracle recommends and the simplest, but it requires running LogMiner on the source database.

Example:

  EXECUTE DBMS_LOGMNR.START_LOGMNR( -
    OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

2. Store the dictionary in the redo log files

This mode suits the case where the source and mining databases are not the same.

Example:

  EXECUTE dbms_logmnr_d.build( -
    options => dbms_logmnr_d.store_in_redo_logs);

To see which archived logs the dictionary was stored in:

  SELECT name FROM v$archived_log WHERE dictionary_begin = 'YES';
  SELECT name FROM v$archived_log WHERE dictionary_end = 'YES';
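A hedged sketch of consuming that dictionary on the mining database (the file names below are illustrative): the archived logs that hold the dictionary must be added alongside the logs to analyze, and START_LOGMNR is given DICT_FROM_REDO_LOGS.

  -- Add the archived log(s) containing the dictionary plus the logs to mine:
  EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
    LOGFILENAME => '/arch/dict_begin_end.arc', OPTIONS => DBMS_LOGMNR.NEW);
  EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
    LOGFILENAME => '/arch/log_to_mine.arc', OPTIONS => DBMS_LOGMNR.ADDFILE);
  -- Tell LogMiner to read the dictionary out of the redo stream:
  EXECUTE DBMS_LOGMNR.START_LOGMNR( -
    OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS);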

3. Build the dictionary as a flat file on the OS

This mode exists for compatibility with 9i and earlier versions; using it requires setting the utl_file_dir parameter on the source database, which is inconvenient.

  EXECUTE dbms_logmnr_d.build('dictionary.ora', -
    '/oracle/database/', -
    dbms_logmnr_d.store_in_flat_file);

Oracle no longer recommends this mode.
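To consume a flat-file dictionary, START_LOGMNR takes it through the DICTFILENAME parameter; a sketch (the directory must match the one passed to build, and utl_file_dir must already include it):

  -- utl_file_dir must contain the directory (changing it requires a restart):
  -- ALTER SYSTEM SET utl_file_dir='/oracle/database/' SCOPE=SPFILE;
  EXECUTE DBMS_LOGMNR.START_LOGMNR( -
    DICTFILENAME => '/oracle/database/dictionary.ora');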


An example to illustrate the LogMiner process

1. Restore the required archived logs from the tape library

  run {
    ALLOCATE CHANNEL t1 TYPE 'sbt_tape' PARMS 'ENV=(NSR_SERVER=linuxbak,NSR_CLIENT=jfdb1,NSR_DATA_VOLUME_POOL=jfdb)';
    SET ARCHIVELOG DESTINATION TO '/jfdb_arch/yang';
    RESTORE ARCHIVELOG FROM LOGSEQ 247761 UNTIL LOGSEQ 247763 THREAD 1;
    RELEASE CHANNEL t1;
  }

2. Use the online catalog to parse the logs


  SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
         LOGFILENAME => '/zhjf02_dlbk/yang/zhjf2_2_202749_570073317.arc', -
         OPTIONS => DBMS_LOGMNR.NEW);

  PL/SQL procedure successfully completed.

  SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
         LOGFILENAME => '/zhjf02_dlbk/yang/zhjf2_2_202750_570073317.arc', -
         OPTIONS => DBMS_LOGMNR.ADDFILE);

  PL/SQL procedure successfully completed.

  SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
         OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY);

  PL/SQL procedure successfully completed.

3. Query the parsing results


  SQL> CREATE TABLE qiuyb.logcontents_2_202749 NOLOGGING TABLESPACE tbsi_rat_32m
       AS SELECT * FROM v$logmnr_contents;

  Table created.

Two points deserve explanation. First, v$logmnr_contents exists only within the current session; other sessions cannot see it, which is why this example stores the results in a real table. Second, every query against v$logmnr_contents actually triggers a fresh parse of the logs; the view's contents are not materialized by START_LOGMNR. Producing a physical table once therefore also significantly reduces system overhead.
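Once the results live in a real table, the audit question raised earlier ("which tables accounted for the largest share?") becomes a plain aggregate; a sketch against the table created above:

  -- Which objects and operations dominate the mined interval?
  SELECT seg_owner, seg_name, operation, COUNT(*) AS cnt
    FROM qiuyb.logcontents_2_202749
   GROUP BY seg_owner, seg_name, operation
   ORDER BY cnt DESC;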


4. End the LogMiner session

  SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

  PL/SQL procedure successfully completed.

The entire LogMiner session is now over.


Some useful options

1. committed_data_only

As the name suggests, only committed transactions are returned; in-flight transactions and Oracle-internal operations are filtered out. The worked example above uses this option.

2. print_pretty_sql

Formats the SQL in the SQL_REDO and SQL_UNDO columns in an easy-to-read way, for example:

  SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
         OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);

  PL/SQL procedure successfully completed.

  SQL> SELECT username, seg_owner, seg_name, sql_redo, sql_undo
    2    FROM v$logmnr_contents
    3   WHERE seg_owner NOT IN ('SYS', 'SYSTEM')
    4     AND seg_name = 'T2'
    5  /

  USERNAME  SEG_OWNER  SEG_NAME  SQL_REDO                      SQL_UNDO
  --------  ---------  --------  ----------------------------  ----------------------------
  qiuyb     qiuyb      T2        insert into "QIUYB"."T2"      delete from "QIUYB"."T2"
                                  values                        where
                                   "ID" = n,                     "ID" = n and
                                   "NAME" = 'Test',              "NAME" = 'Test' and
                                   "FLAG" = 5;                   "FLAG" = 5 and
                                                                 ROWID = 'aaaucraafaaabj1aaa';

  SQL>

3. ddl_dict_tracking

Applies when the LogMiner dictionary is stored in the redo logs: when a table has had columns added, and so on, the dictionary is updated automatically.

Example:

  EXECUTE DBMS_LOGMNR.START_LOGMNR( -
    OPTIONS => DBMS_LOGMNR.DDL_DICT_TRACKING + DBMS_LOGMNR.DICT_FROM_REDO_LOGS);

4. no_sql_delimiter


Removes the trailing semicolon from the statements in SQL_REDO and SQL_UNDO, which makes it easy to fetch the parsed SQL with a cursor and execute it directly; a PL/SQL sketch follows the example below.


Example:


  EXECUTE DBMS_LOGMNR.START_LOGMNR( -
    OPTIONS => DBMS_LOGMNR.NO_SQL_DELIMITER + DBMS_LOGMNR.DICT_FROM_REDO_LOGS);
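As a hedged PL/SQL sketch of the cursor-replay idea (the owner and table names are illustrative): EXECUTE IMMEDIATE expects a statement without a trailing delimiter, which is exactly what NO_SQL_DELIMITER produces.

  BEGIN
    -- Replay the undo SQL for one table; this works only because
    -- NO_SQL_DELIMITER left the statements without a trailing ';'.
    FOR r IN (SELECT sql_undo
                FROM v$logmnr_contents
               WHERE seg_owner = 'QIUYB'   -- illustrative owner
                 AND seg_name = 'T2')      -- illustrative table
    LOOP
      EXECUTE IMMEDIATE r.sql_undo;
    END LOOP;
    COMMIT;
  END;
  /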


5. no_rowid_in_stmt

Removes the ROWID predicate from the statements in the SQL_REDO and SQL_UNDO columns. Example:

  EXECUTE DBMS_LOGMNR.START_LOGMNR( -
    OPTIONS => DBMS_LOGMNR.NO_ROWID_IN_STMT + DBMS_LOGMNR.DICT_FROM_REDO_LOGS);

This is useful when the parsed SQL is to be re-executed on a database other than the source, because the corresponding ROWIDs do not exist in the target database.

Useful views

v$logmnr_logs      the list of log files added for analysis
v$logmnr_contents  the parsing results

The author, Yang Baoqiu, is an Oracle ACE with more than 10 years of experience designing, building, managing, operating, and tuning TB-scale databases, plus 7 years as an HP and IBM RS/6000 system administrator. He also holds the BCFP SAN administrator certification and is currently the database manager at China Unicom's Heilongjiang branch.

Original link: http://www.qiuyb.com/archives/171 (Editor: Liu Yajon)
