Logminer Configuration User Manual

1 Logminer Introduction

1.1 Logminer Introduction

Oracle Logminer is a practical and useful analysis tool that Oracle has offered since 8i. It makes it easy to retrieve specific content from Oracle's online and archived redo log files; in particular, it can parse all DML and DDL statements executed against the database. The tool is especially useful for debugging, auditing, or rolling back a particular transaction.

The Logminer analysis tool actually consists of a set of PL/SQL packages and some dynamic views (part of the Oracle 8i built-in packages), released with Oracle 8i and later as a completely free tool. However, it is somewhat more complex to use than other Oracle built-in tools, mainly because it does not provide any graphical user interface (GUI).

1.2 What Logminer is used for

Prior to Oracle 8i, Oracle provided no tool to help the database administrator read and interpret the contents of redo log files. When a system problem arose, all an ordinary DBA could do was package up the log files, send them to Oracle technical support, and quietly wait for Oracle's final answer. Since 8i, however, Oracle has provided just such a powerful tool: Logminer.

The Logminer tool can analyze online log files as well as offline (archived) ones; it can be used to analyze the redo log files of its own database or to parse the redo log files of other databases.

Overall, the main uses of the Logminer tool are:

1. Tracking database changes: changes can be tracked offline, without affecting the performance of the online system.

2. Rolling back database changes: specific changed data can be rolled back, reducing the need for point-in-time recovery.

3. Optimization and capacity planning: data growth patterns can be analyzed from the data in the log files.

1.3 Detailed usage

1.3.1 Installing Logminer

Before using Logminer, you need to confirm that Oracle has the Logminer analysis packages; they are generally included in Oracle 10g and later (including on Windows). If you cannot confirm this, log in to the system as a DBA and check whether the DBMS_LOGMNR and DBMS_LOGMNR_D packages required by Logminer exist. If they do not, you must first install the Logminer tool by running the following two scripts:

1. $ORACLE_HOME/rdbms/admin/dbmslm.sql

2. $ORACLE_HOME/rdbms/admin/dbmslmd.sql

Both scripts must be run as a DBA user. The first script creates the DBMS_LOGMNR package, which parses log files. The second creates the DBMS_LOGMNR_D package, which is used to create data dictionary files.
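
As a quick check, the packages' existence can be confirmed and, if needed, the scripts run from SQL*Plus. This is a minimal sketch of our own (the dba_objects query is not part of the original procedure):

-- Run as a DBA user: check whether the Logminer packages are installed
SELECT object_name, object_type, status
  FROM dba_objects
 WHERE object_name IN ('DBMS_LOGMNR', 'DBMS_LOGMNR_D')
   AND object_type = 'PACKAGE';

-- If they are missing, run the installation scripts
-- (in SQL*Plus, ? is shorthand for $ORACLE_HOME)
@?/rdbms/admin/dbmslm.sql
@?/rdbms/admin/dbmslmd.sql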

When the creation is complete, the following procedures and views are available:

Type       Name                      Use
Procedure  dbms_logmnr_d.build       Creates a data dictionary file
Procedure  dbms_logmnr.add_logfile   Adds log files to the list of files to be analyzed
Procedure  dbms_logmnr.start_logmnr  Starts Logminer, using an optional dictionary file and the previously registered log files
Procedure  dbms_logmnr.end_logmnr    Stops the Logminer analysis
View       v$logmnr_dictionary       Shows information about the dictionary file used to resolve object IDs to names
View       v$logmnr_logs             Lists the registered logs when Logminer starts
View       v$logmnr_contents         Once Logminer is started, this view can be queried with SQL statements to see the redo log contents

1.3.2 Creating a data dictionary file

The Logminer tool actually consists of two PL/SQL built-in packages (DBMS_LOGMNR and DBMS_LOGMNR_D) and four V$ dynamic performance views (the views are created when Logminer is started with the procedure DBMS_LOGMNR.START_LOGMNR). Before using Logminer to analyze redo log files, you can use the DBMS_LOGMNR_D package to export the data dictionary to a text file. The dictionary file is optional; without it, however, Logminer will present data dictionary items (such as table names and column names) and values in the statements it reconstructs as internal object numbers and hexadecimal data, which cannot be read directly.
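
As a hedged illustration (the object number and hexadecimal values below are invented, not taken from an actual run), a statement such as:

INSERT INTO emp (ename, sal) VALUES ('JOHN', 1000);

would, without a dictionary file, be reconstructed by Logminer in a form similar to:

insert into "UNKNOWN"."OBJ# 45522"("COL 1","COL 2") values (HEXTORAW('4a4f484e'), HEXTORAW('c20b'));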

The purpose of creating a data dictionary is to let Logminer refer to parts of the internal data dictionary by their actual names rather than by the hexadecimal form used inside the system. The data dictionary file is a text file created with the DBMS_LOGMNR_D package. If the tables of the database being analyzed change, so that the database's data dictionary changes, the dictionary file must be recreated. Another scenario is analyzing the redo logs of another database: in that case you must also regenerate the data dictionary file from the database being analyzed.

Before you create a data dictionary file, you need to configure the Logminer folder:

CREATE DIRECTORY utlfile AS 'D:\oracle\oradata\practice\LOGMNR';
ALTER SYSTEM SET utl_file_dir='D:\oracle\oradata\practice\LOGMNR' SCOPE=spfile;

To create a dictionary file, log in as a DBA user and create it in the Logminer folder configured above:

CONN logminer/Logminer@practice AS SYSDBA
EXECUTE dbms_logmnr_d.build(dictionary_filename => 'dictionary.ora', dictionary_location => 'D:\oracle\oradata\practice\LOGMNR');

1.3.3 Adding log files to be analyzed

Oracle's Logminer can analyze both online and archived (offline) log files. Log files are added for analysis with the DBMS_LOGMNR.ADD_LOGFILE procedure: the first file is added with the DBMS_LOGMNR.NEW option, and subsequent files with the DBMS_LOGMNR.ADDFILE option.

1. Create a list

BEGIN
  dbms_logmnr.add_logfile(logfilename => 'D:\oracle\oradata\practice\REDO03.LOG', options => DBMS_LOGMNR.NEW);
END;
/

2. Add additional log files to the list

BEGIN
  dbms_logmnr.add_logfile(logfilename => 'D:\oracle\oradata\practice\ARCHIVE\ARC00002_0817639922.001', options => DBMS_LOGMNR.ADDFILE);
  dbms_logmnr.add_logfile(logfilename => 'D:\oracle\oradata\practice\ARCHIVE\ARC00003_0817639922.001', options => DBMS_LOGMNR.ADDFILE);
END;
/
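
To confirm which files are registered, the v$logmnr_logs view from the table above can be queried; a small sketch of our own:

SELECT log_id, filename FROM v$logmnr_logs;
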
1.3.4 Using Logminer for log analysis

Logminer analysis can be run with or without restriction conditions. Without restrictions, all log files added to the analysis list are analyzed in full; with restrictions, only the specified range within the log files is analyzed, according to the given constraints.

1. No restriction conditions

EXECUTE DBMS_LOGMNR.START_LOGMNR(dictfilename => 'D:\oracle\oradata\practice\LOGMNR\dictionary.ora');

2. Restricted conditions

By setting different parameters of the procedure DBMS_LOGMNR.START_LOGMNR (see the table below for their meanings), you can narrow the range of log files to be analyzed. For example, by setting the start time and end time parameters, you can limit the analysis to logs within a certain time range.

Parameter     Type      Default     Meaning
startScn      Number    0           Only analyze the portion of the redo logs with SCN >= startScn
endScn        Number    0           Only analyze the portion of the redo logs with SCN <= endScn
startTime     Date      1988-01-01  Only analyze the portion of the redo logs with timestamp >= startTime
endTime       Date      2988-01-01  Only analyze the portion of the redo logs with timestamp <= endTime
DictFileName  Varchar2  (none)      Dictionary file containing a snapshot of the database catalog

In the following example, we analyze only the logs of June 8, 2013:

EXECUTE DBMS_LOGMNR.START_LOGMNR( -
  dictfilename => 'D:\oracle\oradata\practice\LOGMNR\dictionary.ora', -
  starttime => TO_DATE('2013-6-8 00:00:00', 'yyyy-mm-dd hh24:mi:ss'), -
  endtime => TO_DATE('2013-6-8 23:59:59', 'yyyy-mm-dd hh24:mi:ss'));

You can also limit the range of logs to be analyzed by setting the starting and ending SCN:

EXECUTE DBMS_LOGMNR.START_LOGMNR( -
  dictfilename => 'D:\oracle\oradata\practice\LOGMNR\dictionary.ora', -
  startscn => 20, -
  endscn => 50);
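
To choose meaningful SCN bounds, it helps to know the database's current SCN; a sketch of our own, assuming Oracle 10g or later:

-- Current SCN of the database
SELECT current_scn FROM v$database;

-- Approximate SCN corresponding to a point in time (here: one day ago)
SELECT timestamp_to_scn(SYSDATE - 1) FROM dual;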

1.3.5 Observing the analysis results (v$logmnr_contents)

So far, we have analyzed the contents of the redo log files. The dynamic performance view v$logmnr_contents contains all the information obtained by the Logminer analysis.

SELECT sql_redo FROM v$logmnr_contents;

If we only want to know what a particular user did to a particular table, the following query returns all operations that the user LOGMINER performed on the table EMP.

SELECT sql_redo FROM v$logmnr_contents WHERE username = 'LOGMINER' AND seg_name = 'EMP';
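
Because v$logmnr_contents also exposes the OPERATION and SQL_UNDO columns (see the table below), a similar query can fetch the undo statements for a particular kind of change; a sketch of our own:

SELECT sql_undo
  FROM v$logmnr_contents
 WHERE username = 'LOGMINER'
   AND seg_name = 'EMP'
   AND operation = 'DELETE';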

The main columns of v$logmnr_contents are:

No.  Column            Meaning
1    SCN               System change number of the specific data change
2    TIMESTAMP         Time the data change occurred
3    COMMIT_TIMESTAMP  Time the data change was committed
4    SEG_OWNER         Owner of the changed segment
5    SEG_NAME          Name of the changed segment
6    SEG_TYPE          Type of the changed segment
7    SEG_TYPE_NAME     Name of the segment type of the changed segment
8    TABLE_SPACE       Tablespace of the changed segment
9    ROW_ID            ID of the specific changed row
10   SESSION_INFO      Information about the user session that made the change
11   OPERATION         Operation recorded in the redo record (e.g. INSERT)
12   SQL_REDO          SQL statement that redoes the recorded row change (forward operation)
13   SQL_UNDO          SQL statement that rolls back or undoes the recorded row change (reverse operation)

It is important to emphasize that the analysis results in the view v$logmnr_contents exist only during the lifetime of the session in which we ran DBMS_LOGMNR.START_LOGMNR. This is because Logminer stores everything in PGA memory: it is invisible to all other processes, and the analysis results disappear when the process ends.
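
A common workaround, our own suggestion rather than part of the original manual, is to copy the results into a permanent table before the session ends (logmnr_results is a hypothetical table name):

-- Persist the analysis results while the Logminer session is still active
CREATE TABLE logmnr_results AS SELECT * FROM v$logmnr_contents;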

Finally, use the procedure DBMS_LOGMNR.END_LOGMNR to terminate the log analysis session; the PGA memory area is then cleared and the analysis results no longer exist.
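
In SQL*Plus this is simply:

EXECUTE DBMS_LOGMNR.END_LOGMNR;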

2 Oracle database settings for data synchronization

To use Logminer to view executed SQL statements in Oracle, the following four steps are required:

1. Set the database to archive mode;

2. Set the Logminer dictionary file path, etc.;

3. Create a data synchronization user (for example a user named Logminer with DBA privileges);

4. Verify that the configuration is successful.

2.1 Setting the database to archive mode

2.1.1 Check whether the database is in archive mode

Connect to the database using SQL*Plus or the command-line interface (the command-line interface is used below).

-- Enter the Sqlplus program

sqlplus /nolog

-- log in to the source database by using a DBA user

CONN system/<password>@practice AS SYSDBA

-- To see if the practice database is in archive mode

SELECT dbid, name, log_mode FROM v$database;
-- or
ARCHIVE LOG LIST;

If the database is shown to be in archive mode, the following setup can be skipped; if it is shown to be in non-archive mode, the archive settings below are required.

2.1.2 Setting the archive mode

Create the ARCHIVE folder. Set the folder path according to the server; in the following steps it is set to "D:\oracle\oradata\practice\ARCHIVE".

-- Set archive log file path

ALTER SYSTEM SET log_archive_dest='D:\oracle\oradata\practice\ARCHIVE';

-- log file name format:

ALTER SYSTEM SET log_archive_format='arc%s_%r.%t' SCOPE=spfile;

-- When you are finished, close the database and start in Mount mode

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;

-- Setting the database to archive mode

ALTER DATABASE ARCHIVELOG;

(Note: if the database fails to restart, refer to section 4, exception handling.)

2.1.3 Verifying that the archive is set up successfully

-- To see if the practice database is in archive mode

SELECT dbid, name, log_mode FROM v$database;
-- or
ARCHIVE LOG LIST;

-- Verify that the parameter settings are working

SELECT dest_id, status, destination FROM v$archive_dest WHERE dest_id = 1;

-- when the parameter file settings have worked, open the database

ALTER DATABASE OPEN;

2.2 Logminer settings

2.2.1 Create the Logminer folder

Create the LOGMNR folder with the path "D:\oracle\oradata\practice\LOGMNR"
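
For example, from a Windows command prompt (the path follows the example used throughout this manual):

mkdir D:\oracle\oradata\practice\LOGMNR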

2.2.2 Setting the Logminer dictionary file path

-- Configure the Logminer dictionary folder and the utl_file_dir parameter
CREATE DIRECTORY utlfile AS 'D:\oracle\oradata\practice\LOGMNR';
ALTER SYSTEM SET utl_file_dir='D:\oracle\oradata\practice\LOGMNR' SCOPE=spfile;

2.2.3 Enable Logminer supplemental logging

-- Enable supplemental logging
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

2.2.4 Restart the database and verify

-- When you are finished, shut down and restart the database
SHUTDOWN IMMEDIATE;
STARTUP;

-- To see if the Logminer folder is set

SHOW PARAMETER utl_file_dir;

2.3 Creating a data synchronization user

Create the Logminer user in the database; the user needs DBA privileges.

-- Create a Logminer user in the source database and give DBA authority

CREATE USER Logminer IDENTIFIED BY Logminer;
GRANT CONNECT, RESOURCE, DBA TO Logminer;

3 Examples of reading logs with Logminer

To read archived/online logs with Logminer, the settings from chapter 2 must be in place; once setup is complete, both archived and online logs can be analyzed. In particular, Logminer supplemental logging must be enabled: without it, DDL statements cannot be viewed. According to our test results, only DDL statements executed after supplemental logging was enabled can be viewed; DDL executed before that remains invisible.
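
One way to verify that supplemental logging is on is a query like the following (a sketch of our own; SUPPLEMENTAL_LOG_DATA_MIN is available in recent Oracle versions and returns YES or NO):

-- Returns YES when minimal supplemental logging is enabled
SELECT supplemental_log_data_min FROM v$database;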

3.1 Using Logminer to read online logs

3.1.1 Test data preparation

-- Create the AAAAA table, logged in as the Logminer user (non-DBA login)
-- (in Oracle 11g, note that the user name and password are case-sensitive)
CONNECT logminer/Logminer@practice
CREATE TABLE AAAAA (field001 VARCHAR2(100));
INSERT INTO AAAAA (field001) VALUES ('000000');
INSERT INTO AAAAA (field001) VALUES ('0000010');
COMMIT;

3.1.2 Creating a data dictionary file

When database objects have changed, the data dictionary file needs to be recreated.

-- Log in as the Logminer user (with DBA privileges) and generate the dictionary file
CONN logminer/Logminer@practice AS SYSDBA
EXECUTE dbms_logmnr_d.build(dictionary_filename => 'dictionary.ora', dictionary_location => 'D:\oracle\oradata\practice\LOGMNR');

3.1.3 Confirm that the log file is currently online

-- need to confirm the log file that is currently online

SELECT group#, sequence#, status, first_change#, first_time FROM v$log ORDER BY first_change#;

From the query output you can see that the online log REDO03 is in the active state.

3.1.4 Adding log files to be analyzed

-- Add the online log file to be analyzed
BEGIN
  dbms_logmnr.add_logfile(logfilename => 'D:\oracle\oradata\practice\REDO03.LOG', options => DBMS_LOGMNR.NEW);
END;
/

3.1.5 using Logminer for analysis

-- start Logminer for analysis

EXECUTE DBMS_LOGMNR.START_LOGMNR(dictfilename => 'D:\oracle\oradata\practice\LOGMNR\dictionary.ora');

3.1.6 Observation and analysis results

-- querying related operations logs

SELECT sql_redo, sql_undo, seg_owner
  FROM v$logmnr_contents
 WHERE seg_name = 'AAAAA'
   AND seg_owner = 'LOGMINER';

3.2 Using Logminer to read archive logs

3.2.1 Test data preparation

-- Create the EMP table, logged in as the Logminer user (non-DBA privileges)
-- (in Oracle 11g, note that the user name and password are case-sensitive)
CONN logminer/Logminer@practice
CREATE TABLE EMP
  (EMPNO    NUMBER(4) CONSTRAINT pk_emp PRIMARY KEY,
   ENAME    VARCHAR2(10),
   JOB      VARCHAR2(9),
   MGR      NUMBER(4),
   HIREDATE DATE,
   SAL      NUMBER(7,2),
   COMM     NUMBER(7,2),
   DEPTNO   NUMBER(2));

-- inserting EMP Data

INSERT INTO EMP VALUES (7369, 'SMITH', 'CLERK',    7902, TO_DATE('17-12-1980','dd-mm-yyyy'),  800, NULL, 20);
INSERT INTO EMP VALUES (7499, 'ALLEN', 'SALESMAN', 7698, TO_DATE('20-2-1981','dd-mm-yyyy'),  1600,  300, 30);
INSERT INTO EMP VALUES (7521, 'WARD',  'SALESMAN', 7698, TO_DATE('22-2-1981','dd-mm-yyyy'),  1250,  500, 30);
INSERT INTO EMP VALUES (7566, 'JONES', 'MANAGER',  7839, TO_DATE('2-4-1981','dd-mm-yyyy'),   2975, NULL, 20);
COMMIT;

-- Switch the log file, then find the archived log's sequence number in the v$archived_log view
CONNECT system/<password>@practice AS SYSDBA
ALTER SYSTEM SWITCH LOGFILE;
SELECT sequence#, first_change#, next_change#, name FROM v$archived_log ORDER BY sequence# DESC;

3.2.2 Creating a data dictionary file

Make sure Logminer has been set up as described in section 2.2.

-- log in as Logminer User (DBA authority) to generate a dictionary file

CONN logminer/Logminer@practice AS SYSDBA
EXECUTE dbms_logmnr_d.build(dictionary_filename => 'dictionary.ora', dictionary_location => 'D:\oracle\oradata\practice\LOGMNR');

3.2.3 Adding log files to be analyzed

-- Add Parse log file

BEGIN
  dbms_logmnr.add_logfile(logfilename => 'D:\oracle\oradata\practice\ARCHIVE\ARC00002_0817639922.001', options => DBMS_LOGMNR.NEW);
END;
/

3.2.4 using Logminer for analysis

-- start Logminer for analysis

EXECUTE dbms_logmnr.start_logmnr(dictfilename => 'D:\oracle\oradata\practice\LOGMNR\dictionary.ora');

3.2.5 Observation and analysis results

-- querying related operations logs

SELECT sql_redo, sql_undo
  FROM v$logmnr_contents
 WHERE seg_name = 'EMP'
   AND seg_owner = 'LOGMINER';

4 Other

4.1 Exception handling

4.1.1 ORA-12514 error

If an ORA-12514 (TNS: the listener does not currently know of the service requested) error occurs:

The listener.ora file needs to be modified. It is located under the Oracle installation directory in \network\admin; in the current setup this is "D:\oracle\product\10.2.0\db_1\NETWORK\ADMIN\listener.ora". Add the following settings:

(SID_DESC =
  (GLOBAL_DBNAME = practice)
  (ORACLE_HOME = D:\oracle\product\10.2.0\db_1)
  (SID_NAME = practice)
)

After making this change, restart the TNS listener for it to take effect.
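
For example, using the lsnrctl utility (assuming the default listener name):

lsnrctl stop
lsnrctl start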

4.1.2 ORA-16018 Error occurred

If an ORA-16018 error occurs:

The problem is that the database has the flashback (flash recovery area) feature enabled, so archive files are saved to the flash recovery area by default. The simple fix is to add the SCOPE=spfile parameter when setting the archive path:

-- Set archive log file path

ALTER SYSTEM SET log_archive_dest='D:\oracle\oradata\practice\ARCHIVE' SCOPE=spfile;

If you then look at the flash recovery area path, it is not affected; the flashback files and archive files are simply saved to their respective folders.
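
To check the flash recovery area path, a sketch of our own:

SHOW PARAMETER db_recovery_file_dest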
