Use Oracle LogMiner to analyze archived logs

I received an alert email today: the archived logs were growing rapidly, and I could not determine the cause, so I used LogMiner to analyze the archived logs. The outline of the procedure comes mainly from the internet; this was a learning exercise for me, and this article records the detailed operations.

Before performing this operation, you should first get a general idea of what LogMiner is for. The redo log files store all the data needed for database recovery and record every change made to the database, that is, all DML statements executed against it. Before Oracle 8i, Oracle provided no tool to help database administrators read and interpret the contents of the redo log files; when a system problem occurred, all an ordinary DBA could do was package up the log files, send them to Oracle technical support, and then wait quietly for Oracle to give us the final answer. Since 8i, however, Oracle has provided exactly such a powerful tool: LogMiner. LogMiner can analyze both online and archived redo log files, from its own database or from another database. In general, the main functions of the LogMiner tool are:

1. Tracking database changes: you can track changes offline without affecting the performance of the online system.
2. Rolling back database changes: you can identify the specific changes to roll back, reducing the need for point-in-time recovery.
3. Tuning and capacity planning: you can analyze the data in the log files to study data growth patterns.

Problem:

$ ls -lth
total 729M
-rw-r----- 1 oracle oinstall 39M Aug  2 ...   1_1937_775334859.dbf
-rw-r----- 1 oracle oinstall 39M Aug  2 ...   1_1936_775334859.dbf
-rw-r----- 1 oracle oinstall 38M Aug  2 ...   1_1935_775334859.dbf
...
-rw-r----- 1 oracle oinstall 39M Aug  2 11:07 1_1931_775334859.dbf
-rw-r----- 1 oracle oinstall 39M Aug  2 11:05 1_1930_775334859.dbf
...
-rw-r----- 1 oracle oinstall 39M Aug  2 10:39 1_1919_775334859.dbf
......

The archived logs are growing much too fast.

Procedure:

1. Set the date format

SQL> show parameter nls_date_format;

NAME            TYPE    VALUE
--------------- ------- ---------
nls_date_format string  DD-MON-RR

SQL> select sysdate from dual;

SYSDATE
---------
02-AUG-12

SQL> alter system set nls_date_format='yyyy-mm-dd hh24:mi:ss' scope=spfile;

System altered.
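Before turning to LogMiner, it can help to quantify the growth first. The following is a minimal sketch of my own (not part of the original procedure), assuming access to v$archived_log; it sums the number and size of the archived logs produced per hour over the last day:

SQL> select trunc(completion_time, 'HH24') as hr,
            count(*) as logs,
            round(sum(blocks * block_size) / 1024 / 1024) as mb
       from v$archived_log
      where completion_time > sysdate - 1
      group by trunc(completion_time, 'HH24')
      order by 1;

The hours with abnormally high counts tell you roughly which time window to point LogMiner at.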
2. Add supplemental logging

2.1 Check whether supplemental logging is enabled

SQL> select SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI
       from v$database;

SUPPLEMENTAL_LOG_DATA_MI SUPPLEMEN SUPPLEMEN
------------------------ --------- ---------
NO                       NO        NO

2.2 Enable supplemental logging

SQL> alter database add supplemental log data (primary key, unique index) columns;

Database altered.

3. Enable archiving (it must be enabled in a production environment; it is already enabled here, so this step is skipped)

3.1 Archive log storage path

SQL> show parameter log_archive_dest_;

NAME               TYPE    VALUE
------------------ ------- --------------------
log_archive_dest_1 string  LOCATION=/opt/...
......

4. Install LogMiner

The LogMiner tool actually consists of two PL/SQL built-in packages (DBMS_LOGMNR and DBMS_LOGMNR_D) and four v$ dynamic performance views, which are created when LogMiner is started with DBMS_LOGMNR.START_LOGMNR: v$logmnr_dictionary, v$logmnr_parameters, v$logmnr_logs and v$logmnr_contents. Before using LogMiner to analyze redo log files, you can use the DBMS_LOGMNR_D package to export the data dictionary to a text file. The dictionary file is optional; without it, however, the data dictionary parts of the statements LogMiner reconstructs (such as table names and column names) appear as internal identifiers, and values appear in hexadecimal, which we cannot read directly.

To install the LogMiner tool, you must first run the following scripts, all as the SYS user. The first script creates the DBMS_LOGMNR package, which is used to analyze the log files; the second creates the DBMS_LOGMNR_D package, which is used to create the data dictionary file.

$ORACLE_HOME/rdbms/admin/dbmslm.sql
$ORACLE_HOME/rdbms/admin/dbmslmd.sql
$ORACLE_HOME/rdbms/admin/dbmslms.sql

SQL> @/opt/oracle/product/11.2.0/rdbms/admin/dbmslm.sql
Package created.
Grant succeeded.
Synonym created.

SQL> @/opt/oracle/product/11.2.0/rdbms/admin/dbmslmd.sql
Package created.
Synonym created.

SQL> @/opt/oracle/product/11.2.0/rdbms/admin/dbmslms.sql
Package created.
No errors.
Grant succeeded.
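As a quick sanity check after running the scripts (my own addition, not part of the original procedure), you can confirm that both packages exist and are valid:

SQL> select object_name, object_type, status
       from dba_objects
      where object_name in ('DBMS_LOGMNR', 'DBMS_LOGMNR_D')
        and object_type = 'PACKAGE';

Both rows should show STATUS = VALID.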
5. Use the LogMiner tool

5.1 Set UTL_FILE_DIR

The data dictionary file is a text file created with the DBMS_LOGMNR_D package. If the tables in the database being analyzed change, so that the data dictionary of the database changes, the dictionary file must be recreated. Similarly, when analyzing the redo log files of another database, you must regenerate the data dictionary file of the database being analyzed.

In Oracle 8i, the location of the data dictionary file is specified in the init.ora initialization parameter file by adding the UTL_FILE_DIR parameter; its value is the directory on the server where the data dictionary file is placed. For example:

SQL> show parameter UTL_FILE_DIR;

NAME         TYPE    VALUE
------------ ------- ------
utl_file_dir string

SQL> alter system set UTL_FILE_DIR='/tmp/test' scope=spfile;

SQL> shutdown immediate
SQL> startup

From Oracle 9i onwards, it is recommended to start the instance with an SPFILE, so that parameters can be adjusted with ALTER SYSTEM:

SQL> show parameter spfile;

NAME   TYPE    VALUE
------ ------- ---------------------------------------------
spfile string  /opt/oracle/product/11.2.0/dbs/spfileDCGF.ora

SQL> show parameter utl_file_dir;

NAME         TYPE    VALUE
------------ ------- ---------
utl_file_dir string  /tmp/test

5.2 Create the data dictionary file

$ cat dbms_logmnr_d.build.txt
BEGIN
  dbms_logmnr_d.build(
    dictionary_filename => 'logminer_dict.ora',
    dictionary_location => '/tmp/test');
END;
/

SQL> conn / as sysdba
Connected.
SQL> @dbms_logmnr_d.build.txt

PL/SQL procedure successfully completed.

5.3 Create the list of log files to analyze

Oracle redo logs come in two kinds: online redo logs and archived (offline) log files. Here I mainly analyze archived logs; the principle for online logs is the same.

---- offline archived log file
SQL> BEGIN
       dbms_logmnr.add_logfile(
         '/opt/archivelog/1_1955_775334859.dbf',
         DBMS_LOGMNR.new);
     END;
     /

PL/SQL procedure successfully completed.

Note: dbms_logmnr.new is used to create a new log analysis list; dbms_logmnr.addfile is used to add further log files to the list of files to analyze.

5.4 Start LogMiner for analysis

5.4.1 Without restrictions

SQL> BEGIN
       dbms_logmnr.start_logmnr(
         dictfilename => '/tmp/test/logminer_dict.ora');
     END;
     /

PL/SQL procedure successfully completed.

5.4.2 With restrictions

SQL> BEGIN
       dbms_logmnr.start_logmnr(
         dictfilename => '/tmp/test/logminer_dict.ora',
         StartTime => to_date('2012-08-02 16:40:26', 'yyyy-mm-dd hh24:mi:ss'),
         EndTime   => to_date('2012-08-02 16:44:41', 'yyyy-mm-dd hh24:mi:ss'));
     END;
     /

5.5 Examine the analysis results (v$logmnr_contents)

At this point we have the reconstructed contents of the redo log files. The dynamic performance view v$logmnr_contents contains all the information obtained by the LogMiner analysis.

SQL> SELECT sql_redo FROM v$logmnr_contents;

6. End LogMiner

Before ending LogMiner, you can turn the contents of the v$logmnr_contents view into a permanent database table, which is very helpful for later examination:

SQL> create table logmnr_contents as select * from v$logmnr_contents;

After the redo log check is complete, run end_logmnr in dbms_logmnr:

SQL> execute dbms_logmnr.end_logmnr();
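Coming back to the original problem of finding what is generating all the redo: once the contents have been persisted, a simple aggregation over the table created above (logmnr_contents) shows which segments and operations dominate. This is a sketch of my own, not part of the original article:

SQL> select seg_owner, seg_name, operation, count(*) as changes
       from logmnr_contents
      group by seg_owner, seg_name, operation
      order by changes desc;

The segments at the top of this list are the ones producing the redo that is filling the archive log destination.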
Sources: http://www.cnblogs.com/einyboy/archive/2012/06/16/2551972.html and http://www.bkjia.com/database/201208/146990.html. Author: Wang Zi's key.
