Oracle LogMiner: Analyzing Redo Logs (Redolog) and Archive Logs (Archivelog)


In actual development, we sometimes need an operational trace of a table, or a record of the SQL statements that were executed, whether for targeted data recovery (when point-in-time recovery is not fine-grained enough) or simply for inspection.

Starting with Oracle 8i, Oracle ships the LogMiner log analysis tool. Although it has no graphical interface, that in no way prevents us from making good use of it.

Steps for using it under Oracle 11g:

(1) Put the database in archive mode

Before installing LogMiner, first make sure the database is set to archive mode:

Note: for log analysis it is not enough to simply analyze the default Redo01.log, Redo02.log, and Redo03.log, because redo logs are filled in a circular fashion: each new cycle overwrites the log information of the previous cycle. The redo logs therefore have to be archived in order to be preserved.

Under Windows, log in to Oracle with sqlplus from cmd and check the archive mode. If the database is not in archive mode, enable it in MOUNT state and then open the database:

// cmd: connect to the database
C:\Users\Administrator> sqlplus sys/123456@orcl AS SYSDBA

// after connecting, check the archive mode
SQL> ARCHIVE LOG LIST;
Database log mode    No Archive Mode    // not yet in archive mode
...

// close the database
SQL> SHUTDOWN IMMEDIATE;
...

// start the database in MOUNT state
SQL> STARTUP MOUNT;
...

// enable archive mode
SQL> ALTER DATABASE ARCHIVELOG;
Database altered.

SQL> ARCHIVE LOG LIST;
Database log mode    Archive Mode       // now in archive mode

// open the database
SQL> ALTER DATABASE OPEN;

(2) Install LogMiner

First, determine whether the Oracle version you installed already ships with LogMiner: look in the installation directory, that is, under $ORACLE_HOME/rdbms/admin, for dbmslm.sql and dbmslmd.sql. In my installation the two files are at:

D:\app\admin\product\11.2.0\dbhome_1\rdbms\admin\dbmslm.sql
D:\app\admin\product\11.2.0\dbhome_1\rdbms\admin\dbmslmd.sql

Execute the install scripts (perform the installation with DBA rights: log in as the SYS user, run the scripts, and then execute GRANT DBA TO scott; so that the scott user has DBA rights). A sketch of the session is shown below.
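A minimal sketch of the installation session, assuming the script paths listed above and a SYS connection:

SQL> @D:\app\admin\product\11.2.0\dbhome_1\rdbms\admin\dbmslm.sql
SQL> @D:\app\admin\product\11.2.0\dbhome_1\rdbms\admin\dbmslmd.sql
SQL> GRANT DBA TO scott;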

After the installation succeeds, the DBMS_LOGMNR and DBMS_LOGMNR_D packages can be found under Packages in the left-hand Objects navigation tree of PL/SQL Developer, where:

DBMS_LOGMNR: Used to parse log files

DBMS_LOGMNR_D: Used to create the data dictionary file

(3) Create the data dictionary file

The data dictionary file records the database's DBID as well as its character encoding. If you do not create a data dictionary file, the parsed log information is displayed in hexadecimal by default, which is not intuitive to read. Below we create the data dictionary file with the following commands.

We need to create both a data dictionary directory and a data dictionary file.

First, create a data dictionary directory

-- create the directory D:\app\admin\oradata\orcl\LOGMNR on disk first, then point utl_file_dir at it
ALTER SYSTEM SET utl_file_dir='D:\app\admin\oradata\orcl\LOGMNR' SCOPE=SPFILE;

Also, enable supplemental logging. Supplemental logging instructs the database to add extra information to the redo stream in order to support log-based tools such as logical standby, Streams, GoldenGate, and LogMiner.

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
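To check whether minimal supplemental logging is now on, you can query V$DATABASE (a quick verification step, not part of the original walkthrough):

-- YES (or IMPLICIT) means minimal supplemental logging is enabled
SELECT supplemental_log_data_min FROM v$database;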

At this point the database needs to be restarted, because we used scope=spfile when setting the dictionary directory, and the parameter only takes effect at the next startup.

// close the database
SHUTDOWN IMMEDIATE;
// start the database
STARTUP;

If you want to verify that the dictionary directory was set successfully, enter the following command at the command line (if output like the sample below appears, the directory was set successfully):

SHOW PARAMETER utl_file_dir;
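The output should show the directory we just set, roughly along these lines (exact column layout may vary with your SQL*Plus settings):

NAME          TYPE    VALUE
------------  ------  ---------------------------------
utl_file_dir  string  D:\app\admin\oradata\orcl\LOGMNR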

After creating the directory, continue as the scott user (which now has DBA rights) and create the data dictionary file:

EXECUTE dbms_logmnr_d.build(dictionary_filename => 'dictionary.ora', dictionary_location => 'D:\app\admin\oradata\orcl\LOGMNR');

(4) Analyze the redo log files

As the scott user, perform some operations on the data:

-- create a simple table
CREATE TABLE wy_test (id NUMBER(10,0), username VARCHAR2(10));
SELECT * FROM wy_test;  -- the query result is empty at this point

-- insert two rows
INSERT INTO wy_test (id, username) VALUES (1, 'WY01');
INSERT INTO wy_test (id, username) VALUES (2, 'WY02');
COMMIT;
SELECT * FROM wy_test;  -- verify the insert succeeded

-- modify content
UPDATE wy_test SET username = 'WY03' WHERE id = 2;
COMMIT;
SELECT * FROM wy_test;  -- verify the update succeeded

-- delete content
DELETE FROM wy_test WHERE id = 2;
COMMIT;
SELECT * FROM wy_test;  -- the table now contains: id=1, username='WY01'

It is said that the data dictionary file needs to be recreated after the table data has changed, so at this point we re-execute the statement that creates the dictionary file:

EXECUTE dbms_logmnr_d.build(dictionary_filename => 'dictionary.ora', dictionary_location => 'D:\app\admin\oradata\orcl\LOGMNR');

Now start loading the log files for analysis. You can load only the redo log currently in use, or, as I do here, load all three redo log files:

-- register the first redo log (NEW starts a fresh log list)
BEGIN
  dbms_logmnr.add_logfile(logfilename => 'D:\app\admin\oradata\orcl\REDO01.LOG', options => dbms_logmnr.NEW);
END;
/

-- append the remaining redo logs (ADDFILE adds to the existing list)
BEGIN
  dbms_logmnr.add_logfile(logfilename => 'D:\app\admin\oradata\orcl\REDO02.LOG', options => dbms_logmnr.ADDFILE);
  dbms_logmnr.add_logfile(logfilename => 'D:\app\admin\oradata\orcl\REDO03.LOG', options => dbms_logmnr.ADDFILE);
END;
/

Next, use LogMiner to parse the loaded log files:

EXECUTE dbms_logmnr.start_logmnr(dictfilename => 'D:\app\admin\oradata\orcl\LOGMNR\dictionary.ora');

After the command executes successfully, we can view the logged operations through the V$LOGMNR_CONTENTS view:

SELECT * FROM v$logmnr_contents WHERE seg_name = 'WY_TEST';

From the results you can see every trace of execution, and the SQL_UNDO column even provides the undo SQL for reversing each step; seen from this angle, log analysis is very useful.
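For reference, here is a sketch of a more focused query (all of these columns exist in V$LOGMNR_CONTENTS), together with the standard call for ending the LogMiner session once you are done, which the walkthrough above does not show:

-- pick out the most useful columns for auditing a single table
SELECT scn, timestamp, operation, sql_redo, sql_undo
FROM v$logmnr_contents
WHERE seg_name = 'WY_TEST';

-- end the LogMiner session when finished
EXECUTE dbms_logmnr.end_logmnr;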

As mentioned above, if you only want to analyze the redo log currently in use, you can find out which one that is with the following query:

SELECT group#, sequence#, status, first_change# FROM v$log ORDER BY first_change#;

As the output shows, REDO03.LOG is the one currently in use (its STATUS is CURRENT), so in the LogMiner analysis above it would have been enough to load only the REDO03.LOG file.
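In that case, the load step from earlier would reduce to a single call (a sketch using the same path as above):

-- load only the current redo log
BEGIN
  dbms_logmnr.add_logfile(logfilename => 'D:\app\admin\oradata\orcl\REDO03.LOG', options => dbms_logmnr.NEW);
END;
/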

(5) Analyze the archive log files

We know that when a redo log fills up, an automatic switch occurs, and if the database is running in archive mode the full redo log is written out to an archive log. We can also trigger the switch manually (usually running it several times):

ALTER SYSTEM SWITCH LOGFILE;

After executing this, you will find that an orcl\ARCHIVELOG directory appears under the flash_recovery_area directory (when no archive directory is set explicitly, the default location is under flash_recovery_area).

Note: you can query the archive log locations as follows:

SELECT name, sequence#, first_change# FROM v$archived_log ORDER BY sequence# DESC;

At this point, I simply choose to analyze the latest archive log:

BEGIN
  dbms_logmnr.add_logfile(logfilename => 'D:\app\admin\flash_recovery_area\orcl\ARCHIVELOG\2017_08_19\O1_MF_1_18_DSH8OJF9_.ARC', options => dbms_logmnr.NEW);
END;
/
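The remaining steps are the same as in step (4): start LogMiner against the dictionary file and query V$LOGMNR_CONTENTS, for example:

-- parse the loaded archive log with the same dictionary file
EXECUTE dbms_logmnr.start_logmnr(dictfilename => 'D:\app\admin\oradata\orcl\LOGMNR\dictionary.ora');

-- inspect the recovered operations
SELECT scn, operation, sql_redo, sql_undo FROM v$logmnr_contents WHERE seg_name = 'WY_TEST';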

As you can see, the results are the same as those from analyzing the redo logs.

You are welcome to offer corrections and guidance; let's learn together!
