Analyze Oracle's redo logs and archived logs using LogMiner

Source: Internet
Author: User
Tags: dba

Oracle LogMiner is a practical analysis tool that Oracle has offered since 8i. It makes it easy to extract specific content from Oracle's online and archived redo log files; in particular, it can reconstruct the DML and DDL statements executed against the database. This is especially useful for debugging, auditing, or rolling back a particular transaction.

The LogMiner tool actually consists of a set of PL/SQL packages and some dynamic views (part of the Oracle built-in packages since 8i), and it ships with the Oracle database as a completely free tool. However, it is somewhat more complex to use than other Oracle built-in tools, mainly because it provides no graphical user interface (GUI).

1. Install LogMiner

Before using LogMiner, confirm that the LogMiner packages are present; they are generally installed by default in Oracle 10g and later. If you are not sure, log in as a DBA and check whether the DBMS_LOGMNR and DBMS_LOGMNR_D packages exist. If they do not, you must install LogMiner first by running the following two scripts:

A. $ORACLE_HOME/rdbms/admin/dbmslm.sql

B. $ORACLE_HOME/rdbms/admin/dbmslmd.sql

Both scripts must be run as a DBA user. The first creates the DBMS_LOGMNR package, which parses the log files; the second creates the DBMS_LOGMNR_D package, which is used to create the data dictionary file.
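On a system where the packages are missing, the installation session can be sketched as follows (a sketch assuming a standard $ORACLE_HOME layout; in SQL*Plus the `?` shorthand expands to $ORACLE_HOME):

```sql
-- Connect as a DBA user
CONNECT / AS SYSDBA

-- Create the DBMS_LOGMNR package (parses redo log files)
@?/rdbms/admin/dbmslm.sql

-- Create the DBMS_LOGMNR_D package (builds the data dictionary file)
@?/rdbms/admin/dbmslmd.sql
```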

2. Modify Database Parameters

--Database version
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
CORE    11.2.0.4.0      Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

--Add minimal supplemental logging to the database
SQL> alter database add supplemental log data;

Database altered.

SQL> select supplemental_log_data_min from v$database;

SUPPLEMENTAL_LOG_DATA_MI
------------------------
YES

--Set the utl_file_dir parameter
SQL> alter system set utl_file_dir='/home/oracle/logminer' scope=spfile;

System altered.

--Restart the database
SQL> shutdown immediate
SQL> startup
SQL> show parameter utl_file_dir

NAME          TYPE     VALUE
------------- -------- ---------------------
utl_file_dir  string   /home/oracle/logminer

3. Prepare test data

SQL> conn zx/zx
Connected.
SQL> create table log_miner (id number, name varchar2(10));

Table created.

SQL> insert into log_miner values (1, 'ZX');

1 row created.

SQL> insert into log_miner values (1, 'LX');

1 row created.

SQL> insert into log_miner values (1, 'xx');

1 row created.

SQL> commit;

Commit complete.

4. Create a data dictionary

SQL> conn / as sysdba
Connected.
SQL> desc dbms_logmnr_d
PROCEDURE BUILD
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 DICTIONARY_FILENAME            VARCHAR2                IN     DEFAULT
 DICTIONARY_LOCATION            VARCHAR2                IN     DEFAULT
 OPTIONS                        NUMBER                  IN     DEFAULT
PROCEDURE SET_TABLESPACE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 NEW_TABLESPACE                 VARCHAR2                IN

SQL> exec dbms_logmnr_d.build(dictionary_filename=>'dictionary.ora', dictionary_location=>'/home/oracle/logminer');
LogMnr Dictionary Procedure started
LogMnr Dictionary File Opened
Procedure executed successfully - LogMnr Dictionary Created

PL/SQL procedure successfully completed.

SQL> !ls -l /home/oracle/logminer/dictionary.ora
-rw-r--r-- 1 oracle oinstall 41483316 Nov 11 21:08 /home/oracle/logminer/dictionary.ora

The LogMiner tool is actually composed of two PL/SQL built-in packages (DBMS_LOGMNR and DBMS_LOGMNR_D) and four V$ dynamic performance views (the views are created when LogMiner is started with DBMS_LOGMNR.START_LOGMNR). Before using LogMiner to analyze redo log files, you can use the DBMS_LOGMNR_D package to export the data dictionary to a text file. The dictionary file is optional, but without it LogMiner cannot translate internal identifiers: object names (such as table and column names) appear as object numbers, and values appear in hexadecimal form, which we cannot read directly. For example, the following SQL statement:

INSERT INTO dm_dj_swry (rydm, rymc) VALUES (00005, 'Zhang San');

would be displayed by LogMiner like this:

insert into object#308 (col#1, col#2) values (hextoraw('c30rte567e436'), hextoraw('4a6f686e20446f65'));

The purpose of creating a data dictionary file is to let LogMiner refer to parts of the internal data dictionary by their actual names, rather than by the hexadecimal and object-number form used inside the system. The data dictionary file is a text file created with the DBMS_LOGMNR_D package. If the tables being analyzed change in a way that affects the database's data dictionary, the dictionary file must be recreated. Likewise, when you analyze redo logs from another database, you must generate the data dictionary file on that source database.

5. Confirm which online redo log is currently active

SQL> select group#, status from v$log;

    GROUP# STATUS
---------- ----------------
         1 INACTIVE
         2 CURRENT
         3 INACTIVE

SQL> select group#, member from v$logfile;

    GROUP# MEMBER
---------- ------------------------------------------------------------
         3 /u02/app/oracle/oradata/orcl/redo03.log
         2 /u02/app/oracle/oradata/orcl/redo02.log
         1 /u02/app/oracle/oradata/orcl/redo01.log

You can see that redo02 is in the CURRENT state.

6. Add the logs to be analyzed

--The first log file is added with dbms_logmnr.new
SQL> exec dbms_logmnr.add_logfile(logfilename=>'/u02/app/oracle/oradata/orcl/redo02.log', options=>dbms_logmnr.new);

PL/SQL procedure successfully completed.

--If you need to add another log file, use dbms_logmnr.addfile:
exec dbms_logmnr.add_logfile(logfilename=>'/u02/app/oracle/oradata/orcl/redo03.log', options=>dbms_logmnr.addfile);

7. Use LogMiner for analysis

SQL> exec dbms_logmnr.start_logmnr(dictfilename=>'/home/oracle/logminer/dictionary.ora');

PL/SQL procedure successfully completed.

SQL> col seg_owner for a15
SQL> col seg_name for a15
SQL> col operation for a20
SQL> col sql_redo for a180
SQL> set linesize 200
SQL> select seg_owner, seg_name, operation, sql_redo from v$logmnr_contents where seg_owner='ZX' and seg_name='LOG_MINER';

SEG_OWNER       SEG_NAME        OPERATION            SQL_REDO
--------------- --------------- -------------------- ------------------------------------------------------------
ZX              LOG_MINER       DDL                  create table log_miner (id number,name varchar2(10));
ZX              LOG_MINER       INSERT               insert into "ZX"."LOG_MINER"("ID","NAME") values ('1','ZX');
ZX              LOG_MINER       INSERT               insert into "ZX"."LOG_MINER"("ID","NAME") values ('1','LX');
ZX              LOG_MINER       INSERT               insert into "ZX"."LOG_MINER"("ID","NAME") values ('1','xx');
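Besides SQL_REDO, V$LOGMNR_CONTENTS also exposes a SQL_UNDO column containing the statement that would reverse each change; this is what makes LogMiner useful for rolling back a mistaken transaction. A sketch (the returned statements are illustrative, and the ROWID values are elided):

```sql
-- Fetch the compensating statements for the inserts mined above
SELECT sql_undo
  FROM v$logmnr_contents
 WHERE seg_owner = 'ZX'
   AND seg_name  = 'LOG_MINER'
   AND operation = 'INSERT';

-- Each returned row is an executable compensating statement, e.g.:
-- delete from "ZX"."LOG_MINER" where "ID" = '1' and "NAME" = 'ZX' and ROWID = '...';
```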

The above is the process for analyzing online redo logs; the steps for analyzing archived logs are the same, except that you add archived log files instead of online redo logs. Archived logs can also be transferred to another database server for analysis, but the source database's dictionary file is required.
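An archived-log session therefore looks like the following sketch (the archived log file name is hypothetical; the dictionary file is the one exported from the source database):

```sql
-- Start a fresh log list with an archived log instead of an online redo log
EXEC dbms_logmnr.add_logfile(logfilename=>'/u02/app/oracle/archive/1_100_900000000.dbf', options=>dbms_logmnr.new);

-- Analyze using the dictionary file exported from the source database
EXEC dbms_logmnr.start_logmnr(dictfilename=>'/home/oracle/logminer/dictionary.ora');

SELECT seg_owner, seg_name, operation, sql_redo
  FROM v$logmnr_contents
 WHERE seg_owner = 'ZX';

-- Release the LogMiner session (and its V$LOGMNR_CONTENTS view) when finished
EXEC dbms_logmnr.end_logmnr;
```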

Notes on using LogMiner:

    1. The source database and the mining database must run on the same hardware platform; for example, logs from Windows cannot be moved to Linux for analysis

    2. The version of the mining database must be greater than or equal to that of the source database

    3. The character set of the mining database must be the same as, or a superset of, the source database's character set
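Before moving logs to a different mining database, these constraints can be checked with simple queries run on both databases:

```sql
-- Database version (mining DB must be >= source DB)
SELECT banner FROM v$version;

-- Character set (mining DB must equal or be a superset of the source)
SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';

-- Platform (must match between source and mining DB)
SELECT platform_name FROM v$database;
```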

Reference Document: http://www.cnblogs.com/shishanyuan/p/3140440.html

Official Document: http://docs.oracle.com/cd/E11882_01/server.112/e22490/logminer.htm#SUTIL1557

This article is from the "DBA fighting!" blog, so be sure to keep this source http://hbxztc.blog.51cto.com/1587495/1871934

