Oracle LogMiner configuration and usage


LogMiner Configuration User Manual

1 LogMiner Introduction

1.1 LogMiner Introduction

Oracle LogMiner is a very useful analysis tool that Oracle has provided since release 8i. It can easily extract the specific contents of Oracle online/archived redo log files; in particular, it can reconstruct all the DML and DDL statements executed against the database. The tool is especially suitable for debugging, auditing, or rolling back a specific transaction.

The LogMiner analysis tool actually consists of a set of PL/SQL packages and several dynamic views. It has been built into Oracle since 8i and is released as a completely free part of the database. However, it is somewhat more complex to use than other built-in Oracle tools, mainly because it provides no graphical user interface (GUI).

1.2 LogMiner role

Before Oracle 8i, Oracle provided no tools to help database administrators read and interpret the contents of redo log files. When a system problem occurred, the only thing an ordinary database administrator could do was to package up all the log files, send them to Oracle technical support, and then quietly wait for the final answer. Since 8i, however, Oracle has provided this powerful tool: LogMiner.

The LogMiner tool can analyze both online redo logs and archived log files. It can analyze redo log files from its own database or from other databases.

In general, the main use of the LogMiner tool is as follows:

1. Tracking database changes: changes can be tracked offline without affecting the performance of the online system.

2. Rolling back database changes: specific changes can be rolled back, reducing the need for point-in-time recovery.

3. Optimization and capacity planning: the data in the log files can be analyzed to determine data growth patterns.

1.3 Usage Details

1.3.1 Install LogMiner

Before using LogMiner, check whether Oracle already includes the LogMiner analysis packages; on Windows, Oracle 10g and later generally install them by default. If you are not sure, log on as a DBA and check whether the DBMS_LOGMNR and DBMS_LOGMNR_D packages required to run LogMiner exist in the system (a query for this is sketched below, after the script list). If they are not installed, first run the following two scripts:

1. $ORACLE_HOME/rdbms/admin/dbmslm.sql

2. $ORACLE_HOME/rdbms/admin/dbmslmd.sql

Both scripts must be run as a DBA user. The first script creates the DBMS_LOGMNR package, which is used to analyze log files; the second creates the DBMS_LOGMNR_D package, which is used to create data dictionary files.
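If you are unsure whether the packages are already present, a quick check against dba_objects settles it (a sketch; run as a DBA user):

-- Check whether the LogMiner packages are installed
SELECT object_name, object_type, status
FROM dba_objects
WHERE object_name IN ('DBMS_LOGMNR', 'DBMS_LOGMNR_D')
AND object_type = 'PACKAGE';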

The procedures and views involved are as follows:

Type | Name | Purpose
Procedure | DBMS_LOGMNR_D.BUILD | Creates a data dictionary file.
Procedure | DBMS_LOGMNR.ADD_LOGFILE | Adds a log file to the list of files to be analyzed.
Procedure | DBMS_LOGMNR.START_LOGMNR | Starts LogMiner, with an optional dictionary file and the log files to be analyzed.
Procedure | DBMS_LOGMNR.END_LOGMNR | Stops the LogMiner analysis.
View | V$LOGMNR_DICTIONARY | Displays information about the dictionary file used to resolve object IDs.
View | V$LOGMNR_LOGS | Displays the list of logs to be analyzed when LogMiner is started.
View | V$LOGMNR_CONTENTS | After LogMiner is started, query this view at the SQL prompt to see the contents of the redo logs.
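Taken together, a typical LogMiner session follows the call sequence below (a minimal sketch; the paths and file names reuse the examples from later sections):

-- 1. Build the dictionary (once, or whenever the schema changes)
EXECUTE dbms_logmnr_d.build(dictionary_filename => 'dictionary.ora', dictionary_location => 'D:\oracle\oradata\practice\LOGMNR');
-- 2. Register the log files to analyze
EXECUTE dbms_logmnr.add_logfile(logfilename => 'D:\oracle\oradata\practice\REDO03.LOG', options => dbms_logmnr.NEW);
-- 3. Start the analysis
EXECUTE dbms_logmnr.start_logmnr(dictfilename => 'D:\oracle\oradata\practice\LOGMNR\dictionary.ora');
-- 4. Query the results
SELECT sql_redo FROM v$logmnr_contents;
-- 5. End the session and release the PGA memory
EXECUTE dbms_logmnr.end_logmnr;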

1.3.2 Create a data dictionary file

The LogMiner tool actually consists of two PL/SQL built-in packages (DBMS_LOGMNR and DBMS_LOGMNR_D) and four V$ dynamic performance views (the views are created when LogMiner is started with DBMS_LOGMNR.START_LOGMNR). Before using LogMiner to analyze redo log files, you can use the DBMS_LOGMNR_D package to export the data dictionary to a text file. This dictionary file is optional, but without it, LogMiner shows the data-dictionary parts of each interpreted statement (such as table and column names) as internal object IDs, and values as hexadecimal, so the output cannot be read directly. For example, consider the following SQL statement:

INSERT INTO dm_dj_swry (rydm, rymc) VALUES (00005, 'Zhang San');

Without the dictionary file, LogMiner displays it as follows:

insert into Object#308(col#1, col#2) values (hextoraw('c30rte567e436'), hextoraw('4a6f686e20446f65')); 

The purpose of creating the data dictionary file is to let LogMiner refer to internal data-dictionary objects by their actual names rather than by hexadecimal IDs. The dictionary file is a text file created with the DBMS_LOGMNR_D package. If the tables in the database being analyzed change, so that its data dictionary changes, the dictionary file must be recreated. Likewise, when analyzing redo logs from another database, you must generate the data dictionary file from that database.

Before creating a data dictionary file, configure the LogMiner Folder:

CREATE DIRECTORY utlfile AS 'D:\oracle\oradata\practice\LOGMNR';
alter system set utl_file_dir='D:\oracle\oradata\practice\LOGMNR' scope=spfile;

To create the dictionary file, log on as a DBA user and create it in the configured LogMiner folder:

CONN LOGMINER/LOGMINER@PRACTICE AS SYSDBA
EXECUTE dbms_logmnr_d.build(dictionary_filename => 'dictionary.ora', dictionary_location =>'D:\oracle\oradata\practice\LOGMNR');
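As an alternative to a flat file, the dictionary snapshot can also be written into the redo stream itself (a sketch; this variant requires the database to be in archive mode, and START_LOGMNR is then told to read the dictionary from the registered logs):

-- Write the dictionary into the redo logs instead of a flat file
EXECUTE dbms_logmnr_d.build(options => dbms_logmnr_d.store_in_redo_logs);
-- When analyzing, read the dictionary from the added log files
EXECUTE dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_redo_logs);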
1.3.3 Add log files to be analyzed

Oracle LogMiner can analyze both online and archived log files. Files are added to the analysis list with the DBMS_LOGMNR.ADD_LOGFILE procedure: the first file uses the DBMS_LOGMNR.NEW option, and subsequent files use DBMS_LOGMNR.ADDFILE.

1. Create a list

BEGIN
dbms_logmnr.add_logfile(logfilename=>'D:\oracle\oradata\practice\REDO03.LOG',options=>dbms_logmnr.NEW);
END;
/

2. Add other log files to the list

BEGIN
dbms_logmnr.add_logfile(logfilename=>'D:\oracle\oradata\practice\ARCHIVE\ARC00002_0817639922.001',options=>dbms_logmnr.ADDFILE);
dbms_logmnr.add_logfile(logfilename=>'D:\oracle\oradata\practice\ARCHIVE\ARC00003_0817639922.001',options=>dbms_logmnr.ADDFILE);
END;
/
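To verify which files are currently registered, or to drop one from the list again, v$logmnr_logs and the REMOVEFILE option can be used (a sketch; the file name is one of the archives added above):

-- Show the files currently registered for analysis
SELECT filename FROM v$logmnr_logs;
-- Remove a file from the analysis list
BEGIN
dbms_logmnr.add_logfile(logfilename=>'D:\oracle\oradata\practice\ARCHIVE\ARC00003_0817639922.001',options=>dbms_logmnr.REMOVEFILE);
END;
/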
1.3.4 Use LogMiner for log analysis

If no restriction conditions are given, Oracle LogMiner analyzes all the log files added to the analysis list; restriction conditions limit the analysis to a specified range of the log data.

 

1. Without restriction conditions

EXECUTE dbms_logmnr.start_logmnr(dictfilename => 'D:\oracle\oradata\practice\LOGMNR\dictionary.ora');

2. With restriction conditions

You can narrow the range of log data to be analyzed by setting different parameters of DBMS_LOGMNR.START_LOGMNR (see the table below). For example, you can set the start-time and end-time parameters to analyze only the logs within a certain time range.

Parameter | Type | Default | Description
StartScn | Number | 0 | Analyze only redo records with SCN ≥ StartScn.
EndScn | Number | 0 | Analyze only redo records with SCN ≤ EndScn.
StartTime | Date | 1988-01-01 | Analyze only redo records with timestamp ≥ StartTime.
EndTime | Date | 2988-01-01 | Analyze only redo records with timestamp ≤ EndTime.
DictFileName | Character | (none) | Dictionary file containing a snapshot of the database data dictionary.

In the following example, we analyze only the logs generated on June 8, 2013:

EXECUTE dbms_logmnr.start_logmnr( -
dictfilename => 'D:\oracle\oradata\practice\LOGMNR\dictionary.ora', -
starttime => to_date('2013-6-8 00:00:00', 'YYYY-MM-DD HH24:MI:SS'), -
endtime => to_date('2013-6-8 23:59:59', 'YYYY-MM-DD HH24:MI:SS'));

You can also set the starting SCN and ending SCN to limit the range of logs to be analyzed:

EXECUTE dbms_logmnr.start_logmnr( -
dictfilename => 'D:\oracle\oradata\practice\LOGMNR\dictionary.ora', -
startscn => 20, -
endscn => 50);
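In practice the SCN boundaries are rarely known in advance. One way to find a usable range (a sketch; v$archived_log shows the SCN span of each archived log, and current_scn requires Oracle 10g or later) is:

-- SCN range covered by each archived log
SELECT sequence#, first_change#, next_change# FROM v$archived_log ORDER BY sequence#;
-- Current SCN of the database (Oracle 10g and later)
SELECT current_scn FROM v$database;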

1.3.5 Observe the analysis results (v$logmnr_contents)

So far, we have analyzed the contents of the redo log files. The dynamic performance view v$logmnr_contents contains all the information obtained by the LogMiner analysis.

SELECT sql_redo FROM v$logmnr_contents;

If you only want to know what a particular user did to a particular table, the following SQL query returns all the operations performed by the user LOGMINER on the table EMP:

SELECT sql_redo FROM v$logmnr_contents WHERE username='LOGMINER' AND seg_name='EMP';

The main columns of v$logmnr_contents are as follows:

No. | Column | Description
1 | SCN | System change number of the data change.
2 | TIMESTAMP | Time the data change occurred.
3 | COMMIT_TIMESTAMP | Time the data change was committed.
4 | SEG_OWNER | Owner of the changed segment.
5 | SEG_NAME | Name of the changed segment.
6 | SEG_TYPE | Type of the changed segment (numeric code).
7 | SEG_TYPE_NAME | Name of the type of the changed segment.
8 | TABLE_SPACE | Tablespace of the changed segment.
9 | ROW_ID | Row ID of the changed row.
10 | SESSION_INFO | User session information at the time of the change.
11 | OPERATION | Operation recorded in the redo record (such as INSERT).
12 | SQL_REDO | SQL statement that reproduces the change to the row (forward operation).
13 | SQL_UNDO | SQL statement that rolls the redo record back, restoring the row (reverse operation).

It should be emphasized that the analysis results in the view v$logmnr_contents exist only for the lifetime of the session that ran DBMS_LOGMNR.START_LOGMNR. This is because LogMiner keeps everything in that session's PGA memory, so no other session can see it, and when the session ends the analysis results disappear.

Finally, the procedure DBMS_LOGMNR.END_LOGMNR terminates the log analysis; the PGA memory region is then cleared and the analysis results no longer exist.
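Because the results vanish with the session, a common practice is to copy them into a permanent table in the same session before calling END_LOGMNR (a sketch; the table name MYLOG_RESULTS is illustrative):

-- Persist the analysis results before ending LogMiner
CREATE TABLE MYLOG_RESULTS AS SELECT * FROM v$logmnr_contents;
-- The PGA can now be released safely
EXECUTE dbms_logmnr.end_logmnr;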

2 Oracle database settings for data synchronization

To use LogMiner to view the SQL statements executed against the Oracle database, the following four steps are required:

1. Set the database to archive mode;

2. Set the path of the LogMiner dictionary file;

3. Create a data synchronization user (here named LOGMINER, with the DBA role);

4. Verify that the configuration is successful;

2.1 Set the database to archive mode

2.1.1 Check whether the database is in archive mode

Use SQL*Plus or the command line to connect to the database (the following operations are performed at the command line).

-- Enter the SQL*Plus program

sqlplus /nolog

-- Use the DBA user to log on to the source database

conn system/system@practice as sysdba

-- Check whether the PRACTICE database is in archive mode

SELECT dbid, name, log_mode FROM v$database;
-- or
ARCHIVE LOG LIST;

If the output shows the database is already in archive mode, you can skip the following settings; if it shows non-archive mode, perform them.

Here the database is not in archive mode, so the archive settings are required.

2.1.2 Set archiving mode

Create an ARCHIVE folder. In the following operations, the path of the ARCHIVE folder is "D:\oracle\oradata\practice\ARCHIVE".

-- Set the path of the archive log file

ALTER SYSTEM SET log_archive_dest="D:\oracle\oradata\practice\ARCHIVE";

-- Log file name format:

ALTER SYSTEM SET log_archive_format="ARC%S_%R.%T" SCOPE=SPFILE;

-- After modification, shut down the database and start it in MOUNT mode.

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;

-- Set the database to archive.

ALTER DATABASE ARCHIVELOG;

(Note: if the database fails to restart, see section 4 for troubleshooting.)

2.1.3 Verify that archiving was set successfully

-- Check whether the PRACTICE database is in archive mode

SELECT dbid, name, log_mode FROM v$database;
-- or
ARCHIVE LOG LIST;

-- Verify whether the parameter settings take effect

SELECT dest_id, status, destination FROM v$archive_dest WHERE dest_id =1;

-- Once the parameter settings have taken effect, open the database.

ALTER DATABASE OPEN;

2.2 LogMiner settings

2.2.1 Create a LogMiner folder

Create a LOGMNR folder at "D:\oracle\oradata\practice\LOGMNR".

2.2.2 Set the path of the LogMiner dictionary file

-- Configure the LogMiner folder and dictionary file path

CREATE DIRECTORY utlfile AS 'D:\oracle\oradata\practice\LOGMNR';
alter system set utl_file_dir='D:\oracle\oradata\practice\LOGMNR' scope=spfile;

2.2.3 Enable supplemental logging

-- Enable supplemental logging

alter database add supplemental log data;
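Whether supplemental logging is now active can be confirmed from v$database; the following check should return YES (or a higher level) after the statement above:

-- Verify supplemental logging
SELECT supplemental_log_data_min FROM v$database;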

2.2.4 Restart the database and verify

-- After modification, restart the database.

SHUTDOWN IMMEDIATE;
STARTUP;

-- Check whether the LogMiner folder is set

SHOW PARAMETER utl_file_dir;

2.3 Create a data synchronization user

Create a LOGMINER user in the database; the user must have the DBA role.

-- Create a LOGMINER user in the source database and grant the DBA permission

CREATE USER LOGMINER IDENTIFIED BY LOGMINER;
GRANT CONNECT, RESOURCE, DBA TO LOGMINER;

3 Example of using LogMiner to read logs

When using LogMiner to read archived/online logs, first configure the database as described in section 2; after that, both archived and online logs can be analyzed. In particular, supplemental logging must be enabled: if it is not, DDL statements cannot be viewed. According to test results, DDL statements are visible only in redo generated after supplemental logging was enabled; redo generated before that will not show them.
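Once supplemental logging is active, the DDL recorded in the redo can be isolated by filtering v$logmnr_contents on its OPERATION column (a sketch; run after START_LOGMNR, as in the sections below):

-- Show only DDL statements found in the analyzed logs
SELECT timestamp, sql_redo FROM v$logmnr_contents WHERE operation = 'DDL';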

3.1 Use LogMiner to read online logs

3.1.1 Test data preparation

-- Log on as the LOGMINER user (non-DBA logon) and create the AAAAA table (note that passwords are case sensitive in Oracle 11g)

CONNECT LOGMINER/LOGMINER@PRACTICE
CREATE TABLE AAAAA(field001 varchar2(100));
INSERT INTO AAAAA (field001) values ('000000');
INSERT INTO AAAAA (field001) values ('0000010');
commit;

3.1.2 Create a data dictionary file

When database objects change, the data dictionary file must be recreated.

-- Log on as the LOGMINER user (with DBA privileges) to generate the dictionary file

CONN LOGMINER/LOGMINER@PRACTICE AS SYSDBA
EXECUTE dbms_logmnr_d.build(dictionary_filename => 'dictionary.ora', dictionary_location =>'D:\oracle\oradata\practice\LOGMNR');

3.1.3 Check the log files that are currently online

-- Check whether the log file is online.

SELECT group#, sequence#, status, first_change#, first_time FROM V$log ORDER BY first_change#;

The online log REDO03 is ACTIVE.

3.1.4 Add the log file to be analyzed

-- Add the online log file for parsing

BEGIN
dbms_logmnr.add_logfile(logfilename=>'D:\oracle\oradata\practice\REDO03.LOG',options=>dbms_logmnr.NEW);
END;
/

3.1.5 Use LogMiner for analysis

-- Start LogMiner for analysis

EXECUTE dbms_logmnr.start_logmnr(dictfilename=>'D:\oracle\oradata\practice\LOGMNR\dictionary.ora');

3.1.6 Observe the analysis results

-- Query related operation logs

SELECT sql_redo, sql_undo, seg_owner
FROM v$logmnr_contents
WHERE seg_name='AAAAA'
AND seg_owner='LOGMINER';

3.2 Use LogMiner to read archived logs

3.2.1 Test data preparation

-- Log on as the LOGMINER user (non-DBA) and create the EMP table (note that passwords are case sensitive in Oracle 11g)

CONN LOGMINER/LOGMINER@PRACTICE
CREATE TABLE EMP
       (EMPNO NUMBER(4) CONSTRAINT PK_EMP PRIMARY KEY,
        ENAME VARCHAR2(10),
        JOB VARCHAR2(9),
        MGR NUMBER(4),
        HIREDATE DATE,
        SAL NUMBER(7,2),
        COMM NUMBER(7,2),
        DEPTNO NUMBER(2));

-- Insert EMP data

INSERT INTO EMP VALUES (7369,'SMITH','CLERK',7902,to_date('17-12-1980','dd-mm-yyyy'),800,NULL,20);
INSERT INTO EMP VALUES (7499,'ALLEN','SALESMAN',7698,to_date('20-2-1981','dd-mm-yyyy'),1600,300,30);
INSERT INTO EMP VALUES (7521,'WARD','SALESMAN',7698,to_date('22-2-1981','dd-mm-yyyy'),1250,500,30);
INSERT INTO EMP VALUES (7566,'JONES','MANAGER',7839,to_date('2-4-1981','dd-mm-yyyy'),2975,NULL,20);
COMMIT;

-- Switch the log file, then find the sequence number of the archived log in the v$archived_log view

CONNECT system/system@practice as sysdba
ALTER SYSTEM SWITCH LOGFILE;
select sequence#, FIRST_CHANGE#, NEXT_CHANGE#, name from v$archived_log order by sequence# desc;

3.2.2 Create a data dictionary file

Make sure the LogMiner settings in section 2.2 have been applied.

-- Log on as the LOGMINER user (with DBA privileges) to generate the dictionary file

CONN LOGMINER/LOGMINER@PRACTICE AS SYSDBA
EXECUTE dbms_logmnr_d.build(dictionary_filename => 'dictionary.ora', dictionary_location =>'D:\oracle\oradata\practice\LOGMNR');

3.2.3 Add log files to be analyzed

-- Add the log file to be parsed

BEGIN
dbms_logmnr.add_logfile(logfilename=>'D:\oracle\oradata\practice\ARCHIVE\ARC00002_0817639922.001',options=>dbms_logmnr.NEW);
END;
/

3.2.4 Use LogMiner for analysis

-- Start LogMiner for analysis

EXECUTE dbms_logmnr.start_logmnr(dictfilename=>'D:\oracle\oradata\practice\LOGMNR\dictionary.ora');

3.2.5 Observe the analysis results

-- Query related operation logs

SELECT sql_redo, sql_undo
FROM v$logmnr_contents
WHERE seg_name='EMP'
AND seg_owner='LOGMINER';

4 Other

4.1 Exception Handling

4.1.1 ORA-12514 Error

If an ORA-12514 error occurs, the listener.ora file needs to be modified. It is located under NETWORK\ADMIN in the Oracle installation directory, here "D:\oracle\product\10.2.0\db_1\NETWORK\ADMIN\listener.ora". Add the following settings:

(SID_DESC =
  (GLOBAL_DBNAME = practice)
  (ORACLE_HOME = D:\oracle\product\10.2.0\db_1)
  (SID_NAME = practice)
)

After the change, restart the TNS listener for it to take effect.

4.1.2 ORA-16018 Error

An ORA-16018 error occurs when the database has the flashback (flash recovery area) feature enabled: archive files are then saved to the flash recovery path by default, and log_archive_dest cannot be set for the running instance. The simple solution is to set the archive path with the scope=spfile parameter:

-- Set the path of the archive log file

ALTER SYSTEM SET log_archive_dest="D:\oracle\oradata\practice\ARCHIVE" scope=spfile;

After restarting, you can check the flash recovery path: it is unaffected by this change, and flashback files and archive logs are simply saved to their respective folders.
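To see where the flash recovery area points and how much of it is used, the configuration can be inspected as follows (a sketch; v$recovery_file_dest is available in Oracle 10g and later):

-- Inspect the flash recovery area configuration
SELECT name, space_limit, space_used FROM v$recovery_file_dest;
SHOW PARAMETER db_recovery_file_dest;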
