LogMiner is a genuinely useful analysis tool that Oracle has provided since 8i. It makes it easy to extract specific content from Oracle redo log files (including archived log files). The LogMiner tool itself consists of a set of PL/SQL packages and a number of dynamic views, which can be used to analyze online and archived logs and obtain a detailed record of the database's past operations.
Why use LogMiner?
Mainly for the following reasons:
When the database has suffered a mis-operation and incomplete recovery is needed, LogMiner is required to pinpoint the exact time or SCN of the erroneous operation.
For traditional recovery of a small table in a multi-TB or hundreds-of-GB tablespace, the standard procedure is to restore the entire tablespace to an earlier state and then apply the archived logs; counting the time needed to build the recovery environment, the whole process takes very long. LogMiner offers a different recovery approach: by parsing out all the DML operations against the table, you can apply a reverse repair. Used appropriately, it can significantly reduce the complexity and duration of recovery.
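As a sketch of this reverse-repair idea (the schema and table SCOTT.EMP are hypothetical; the relevant logs must already be added and LogMiner started):

```sql
-- After DBMS_LOGMNR.START_LOGMNR, pull the undo SQL for the damaged
-- table (SCOTT.EMP is a hypothetical example). Applying the SQL_UNDO
-- statements newest-first unwinds the bad DML, including dependent changes.
SELECT scn, timestamp, operation, sql_undo
  FROM v$logmnr_contents
 WHERE seg_owner = 'SCOTT'
   AND seg_name  = 'EMP'
   AND operation IN ('INSERT', 'UPDATE', 'DELETE')
 ORDER BY scn DESC;
```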
LogMiner can also be regarded as one of the most powerful database audit tools.
When the database has been very busy over some period, or has generated a large volume of archived logs, LogMiner can reveal what operations took place, which tables accounted for the most activity, and where the main contention lies.
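For this kind of audit, a simple aggregation over v$logmnr_contents shows which objects dominated the redo (a sketch, assuming LogMiner has already been started over the busy period's logs):

```sql
-- Count redo records per table and operation to find the hot objects
-- in the analyzed period; application schemas only.
SELECT seg_owner, seg_name, operation, COUNT(*) AS redo_records
  FROM v$logmnr_contents
 WHERE seg_owner NOT IN ('SYS', 'SYSTEM')
 GROUP BY seg_owner, seg_name, operation
 ORDER BY redo_records DESC;
```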
Several concepts
Source library: the database that generates the archived and online logs.
Target library: the database on which LogMiner is run to analyze the logs.
Dictionary (also called the catalog): used to translate the internal object identifiers in the logs into actual table names, column names, and other meaningful information.
The source and target libraries can be the same or different. If different, the target library's database version must be equal to or higher than the source's, the character set must be the same, and the operating system and hardware platform must be the same.
Adjustments required on the source library
The source library must run in archive log mode.
The source library must have supplemental logging enabled:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Check the result:
SQL> SELECT supplemental_log_data_min FROM v$database;

SUPPLEMENTAL_LOG_DATA_MIN
--------------------------------
YES
Strictly speaking, supplemental logging is not mandatory, but without it much useful information, such as SESSION_INFO, cannot be parsed and will be displayed as "UNKNOWN".
LogMiner dictionary modes
1. Use the online dictionary
This is the approach Oracle recommends and the easiest to use; it requires running LogMiner on the source library.
Example:
EXECUTE DBMS_LOGMNR.START_LOGMNR ( -
  OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
2. Store the dictionary in the redo log files
This suits the case where the analysis is performed on a target library different from the source.
Example:
EXECUTE DBMS_LOGMNR_D.BUILD ( -
  OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
Check which archived logs hold the dictionary:
SELECT name FROM v$archived_log WHERE dictionary_begin = 'YES';
SELECT name FROM v$archived_log WHERE dictionary_end = 'YES';
3. Write the dictionary to a flat file on the OS
This method exists for compatibility with 9i and earlier versions. Using it requires setting the UTL_FILE_DIR parameter on the source library, which is inconvenient.
EXECUTE DBMS_LOGMNR_D.BUILD ('dictionary.ora', -
  '/oracle/database/', -
  DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
Oracle no longer recommends this approach.
A worked example of the LogMiner process
1. Restore the required logs from the tape library
run
{
  allocate channel t1 type 'sbt_tape' parms 'ENV=(NSR_SERVER=linuxbak,NSR_CLIENT=jfdb1,NSR_DATA_VOLUME_POOL=jfdb)';
  set archivelog destination to '/jfdb_arch/yang';
  restore archivelog from logseq 247761 until logseq 247763 thread 1;
  release channel t1;
}
2. Parse the logs using the online dictionary
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '/zhjf02_dlbk/yang/zhjf2_2_202749_570073317.arc', OPTIONS => DBMS_LOGMNR.NEW);

PL/SQL procedure successfully completed.

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '/zhjf02_dlbk/yang/zhjf2_2_202750_570073317.arc', OPTIONS => DBMS_LOGMNR.ADDFILE);

PL/SQL procedure successfully completed.

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY);

PL/SQL procedure successfully completed.
3. Query the parse results
SQL> CREATE TABLE qiuyb.logcontents_2_202749 NOLOGGING TABLESPACE tbsi_rat_32m AS SELECT * FROM v$logmnr_contents;

Table created.
Note that v$logmnr_contents is session-scoped; other sessions cannot see its results, which is why this example stores them in a physical table. In addition, every query against v$logmnr_contents actually triggers a fresh log parse; in other words, the view's contents are not materialized by START_LOGMNR. Materializing them once into a physical table therefore also significantly reduces system overhead.
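Once persisted, the results can be queried from any session like an ordinary table, without triggering a re-parse. For example, against the table created above:

```sql
-- Query the persisted parse results from any session; unlike
-- v$logmnr_contents, this does not re-read the logs on every query.
SELECT scn, timestamp, username, operation, sql_redo
  FROM qiuyb.logcontents_2_202749
 WHERE seg_owner NOT IN ('SYS', 'SYSTEM')
 ORDER BY scn;
```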
4. End the LogMiner session
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR ();

PL/SQL procedure successfully completed.

This ends the entire LogMiner session.
Some useful options
1. COMMITTED_DATA_ONLY
As the name suggests, only committed changes are returned; in-flight transactions and Oracle internal operations are ignored. The walkthrough example above uses this option.
2. PRINT_PRETTY_SQL
Displays the SQL in SQL_REDO and SQL_UNDO in an easy-to-read format, for example:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);

PL/SQL procedure successfully completed.

SQL> SELECT username, seg_owner, seg_name, sql_redo, sql_undo
  2    FROM v$logmnr_contents
  3   WHERE seg_owner NOT IN ('SYS', 'SYSTEM')
  4     AND seg_name = 'T2'
  5  /
USERNAME  SEG_OWNER  SEG_NAME  SQL_REDO                      SQL_UNDO
--------- ---------- --------- ----------------------------- ------------------------------------
QIUYB     QIUYB      T2        insert into "QIUYB"."T2"      delete from "QIUYB"."T2"
                                 values                        where
                                   "ID" = ,                      "ID" = and
                                   "NAME" = 'Test',              "NAME" = 'Test' and
                                   "FLAG" = 5;                   "FLAG" = 5 and
                                                                 ROWID = 'aaaucraafaaabj1aaa';

SQL>
3. DDL_DICT_TRACKING
Applies when the LogMiner dictionary is stored in the redo logs: when a table has columns added or undergoes other DDL, the dictionary is updated automatically.
Example:
EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS => -
  DBMS_LOGMNR.DDL_DICT_TRACKING + DBMS_LOGMNR.DICT_FROM_REDO_LOGS);
4. NO_SQL_DELIMITER
Removes the trailing semicolon from the SQL statements in SQL_REDO and SQL_UNDO, which makes it convenient to execute the parsed SQL quickly when iterating over it with a cursor.
Example:
EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS => -
  DBMS_LOGMNR.NO_SQL_DELIMITER + DBMS_LOGMNR.DICT_FROM_REDO_LOGS);
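As a minimal sketch of the cursor-style replay this option enables (the schema and table names QIUYB and T2 follow the earlier example; error handling is omitted):

```sql
-- With NO_SQL_DELIMITER set, each SQL_REDO row carries no trailing
-- semicolon, so it can be passed straight to EXECUTE IMMEDIATE,
-- which does not accept one.
BEGIN
  FOR r IN (SELECT sql_redo
              FROM v$logmnr_contents
             WHERE seg_owner = 'QIUYB'   -- example schema from above
               AND seg_name  = 'T2'      -- example table from above
               AND operation IN ('INSERT', 'UPDATE', 'DELETE'))
  LOOP
    EXECUTE IMMEDIATE r.sql_redo;
  END LOOP;
END;
/
```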
5. NO_ROWID_IN_STMT
Removes the ROWID clause from the statements in the SQL_REDO and SQL_UNDO columns. Example:
EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS => -
  DBMS_LOGMNR.NO_ROWID_IN_STMT + DBMS_LOGMNR.DICT_FROM_REDO_LOGS);
This suits re-executing the parsed SQL on a library other than the source, because the corresponding ROWIDs do not exist in the target library.
Useful views
V$LOGMNR_LOGS: the list of logs added for analysis
V$LOGMNR_CONTENTS: the analysis results
The author, Yang Baoqiu, is an Oracle ACE with more than 10 years of experience in the design, construction, management, operation, and tuning of TB-scale databases, plus 7 years as an HP and IBM RS/6000 system administrator. He holds the BCFP Certified SAN Administrator credential and is currently the database manager at China Unicom's Heilongjiang branch.