Using Oracle 11g R2 LogMiner



The general process of using the Oracle log analysis tool LogMiner on 11g R2:

1. Set the date format:

   alter system set nls_date_format='yyyy-mm-dd hh24:mi:ss' scope=spfile;
   select to_char(sysdate, 'yyyy-mm-dd hh24:mi:ss') from dual;

2. Add supplemental logging. A database that is to be analyzed with LogMiner must have supplemental logging enabled; only then can DML be captured:

   alter database add supplemental log data (primary key, unique index) columns;

3. Enable archive log mode:

   alter system set log_archive_dest_1='location=e:\arch' scope=both;
   shutdown immediate
   startup mount
   alter database archivelog;
   alter database open;

4. Install the LogMiner tool. The following scripts must be run as the SYS user. The first creates the DBMS_LOGMNR package, which is used to analyze log files; the second creates the DBMS_LOGMNR_D package, which is used to create the data dictionary file.

   1. $ORACLE_HOME\RDBMS\ADMIN\dbmslm.sql
   2. $ORACLE_HOME\RDBMS\ADMIN\dbmslmd.sql
   3. $ORACLE_HOME\RDBMS\ADMIN\dbmslms.sql

   SQL> @E:\app\Administrator\product\11.2.0\dbhome_1\RDBMS\ADMIN\dbmslm.sql
   Package created.
   Grant succeeded.
   Synonym created.
   SQL> @E:\app\Administrator\product\11.2.0\dbhome_1\RDBMS\ADMIN\dbmslmd.sql
   Package created.
   Synonym created.
   SQL> @E:\app\Administrator\product\11.2.0\dbhome_1\RDBMS\ADMIN\dbmslms.sql
   Package created with no errors.
   Grant succeeded.

5. Use the LogMiner tool.

5.1 Set the UTL_FILE_DIR parameter. The data dictionary is a text file created with the DBMS_LOGMNR_D package. If the tables in the database being analyzed change, so that the database's data dictionary also changes, the dictionary file must be recreated.
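Before building the dictionary, it can be worth verifying that the supplemental logging and archiving changes from steps 2 and 3 actually took effect. A minimal sketch: LOG_MODE and the SUPPLEMENTAL_LOG_DATA_% columns are standard in 11g's V$DATABASE.

```sql
-- Sanity check (sketch): confirm steps 2 and 3 took effect.
-- LOG_MODE should read ARCHIVELOG; SUPPLEMENTAL_LOG_DATA_PK and
-- SUPPLEMENTAL_LOG_DATA_UI should read YES after step 2
-- (the MIN column may read IMPLICIT rather than YES).
SELECT log_mode,
       supplemental_log_data_min,
       supplemental_log_data_pk,
       supplemental_log_data_ui
FROM   v$database;
```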
Similarly, when you analyze the redo logs of another database, you must use the data dictionary file generated by that database. In Oracle 8i, the location of the data dictionary file is specified in the init.ora initialization parameter file by adding the UTL_FILE_DIR parameter, whose value is the directory on the server where the dictionary file is placed, for example UTL_FILE_DIR=(e:\test). From Oracle 9i onward, it is recommended to start from an SPFILE so that parameters can be adjusted dynamically:

SQL> show parameter spfile;
NAME    TYPE    VALUE
------- ------- ------------------------------------------------------------
spfile  string  E:\APP\ADMINISTRATOR\PRODUCT\11.2.0\DBHOME_1\DATABASE\spfileorcl.ora

SQL> alter system set utl_file_dir='e:\test' scope=spfile;
System altered.

SQL> shutdown immediate
SQL> startup force
ORACLE instance started.
Total System Global Area  289406976 bytes
Fixed Size                  1248600 bytes
Variable Size              96469672 bytes
Database Buffers          188743680 bytes
Redo Buffers                2945024 bytes
Database mounted.
Database opened.

SQL> show parameter utl_file_dir;
NAME          TYPE    VALUE
------------- ------- --------
utl_file_dir  string  e:\test

5.2 Create the data dictionary file:

SQL> @e:\dbms_logmnr_d.build.txt
PL/SQL procedure successfully completed.

Script dbms_logmnr_d.build.txt:

BEGIN
  dbms_logmnr_d.build(
    dictionary_filename => 'logminer_dict.dat',
    dictionary_location => 'e:\test');
END;
/

5.3 Create the list of log files to be analyzed. Oracle redo logs come in two kinds: online redo logs and offline (archived) log files. Here I mainly analyze archived logs; the principle for online logs is the same.

--- Online redo logs

a. Create the list:

SQL> execute dbms_logmnr.add_logfile(options => dbms_logmnr.new, logfilename => 'e:\app\Administrator\oradata\orcl\REDO01.LOG');
b. Add further log files to the list:

SQL> execute dbms_logmnr.add_logfile(options => dbms_logmnr.addfile, logfilename => 'e:\app\Administrator\oradata\orcl\REDO02.LOG');
SQL> execute dbms_logmnr.add_logfile(options => dbms_logmnr.addfile, logfilename => 'e:\app\Administrator\oradata\orcl\REDO03.LOG');

This can also be done in one step:

BEGIN
  dbms_logmnr.add_logfile('e:\app\Administrator\oradata\orcl\REDO01.LOG', dbms_logmnr.new);
  dbms_logmnr.add_logfile('e:\app\Administrator\oradata\orcl\REDO02.LOG', dbms_logmnr.addfile);
  dbms_logmnr.add_logfile('e:\app\Administrator\oradata\orcl\REDO03.LOG', dbms_logmnr.addfile);
END;
/

Description of the options:

dbms_logmnr.new        -- starts a new log file list for analysis
dbms_logmnr.addfile    -- adds a log file to the list for analysis
dbms_logmnr.removefile -- removes a log file from the list

Removing files from the list:

execute dbms_logmnr.add_logfile(options => dbms_logmnr.removefile, logfilename => 'e:\app\Administrator\oradata\orcl\REDO03.LOG');
execute dbms_logmnr.add_logfile(options => dbms_logmnr.removefile, logfilename => 'e:\app\Administrator\oradata\orcl\REDO02.LOG');
execute dbms_logmnr.add_logfile(options => dbms_logmnr.removefile, logfilename => 'e:\app\Administrator\oradata\orcl\REDO01.LOG');

The parameters may also be given in the other order:

SQL> execute dbms_logmnr.add_logfile(logfilename => 'e:\app\Administrator\oradata\orcl\REDO03.LOG', options => dbms_logmnr.removefile);

View the list of log files:

select db_name, thread_sqn, filename from v$logmnr_logs;

--- Offline (archived) log files:

BEGIN
  dbms_logmnr.add_logfile('e:\arch\arc0000000009_0742742416.0001', dbms_logmnr.new);
  dbms_logmnr.add_logfile('e:\arch\...', dbms_logmnr.addfile);
END;
/

5.4 Start LogMiner for analysis.

5.4.1 Without restrictions:

BEGIN
  dbms_logmnr.start_logmnr(dictfilename => 'e:\test\logminer_dict.dat');
END;
/

5.4.2 With a time restriction:

BEGIN
  dbms_logmnr.start_logmnr(
    dictfilename => 'e:\test\logminer_dict.dat',
    starttime    => to_date('2017-02-18 16:40:26', 'yyyy-mm-dd hh24:mi:ss'),
    endtime      => to_date('2017-02-18 16:44:41', 'yyyy-mm-dd hh24:mi:ss'));
END;
/

5.5 Observe the analysis results (v$logmnr_contents). At this point the contents of the redo log files have been analyzed. The dynamic performance view v$logmnr_contents contains all of the information obtained by LogMiner:

SELECT sql_redo FROM v$logmnr_contents;
SELECT sql_redo FROM v$logmnr_contents WHERE seg_name = 'T1';
SELECT sql_redo FROM v$logmnr_contents WHERE username = 'SCOTT' AND seg_name = 'SCOTT.T1' AND upper(operation) = 'DELETE';
SELECT sql_redo FROM v$logmnr_contents WHERE seg_name = 'T1' AND upper(operation) = 'DELETE';
SELECT sql_redo FROM v$logmnr_contents WHERE username = 'SYS' AND table_name = 'T1';

6. Close LogMiner. Because v$logmnr_contents is only visible while the LogMiner session is active, it is very helpful to preserve its contents in a permanent table first:

SQL> create table logmnr_contents as select * from v$logmnr_contents;

Once the redo log inspection is complete, end the session with end_logmnr:

execute dbms_logmnr.end_logmnr();

Prepared by yangzhawen.
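The queries against v$logmnr_contents above read SQL_REDO, but the view also exposes SQL_UNDO, the statement that reverses each mined change, which is handy for undoing an accidental DELETE. A sketch, reusing the SCOTT and T1 names from the examples above (SEG_OWNER, SEG_NAME, OPERATION, and SQL_UNDO are standard columns of the view):

```sql
-- Sketch: recover rows lost to an accidental DELETE by pulling the
-- compensating statements from SQL_UNDO; for a mined DELETE, each
-- returned row is an INSERT that restores one deleted row.
SELECT sql_undo
FROM   v$logmnr_contents
WHERE  seg_owner = 'SCOTT'
  AND  seg_name  = 'T1'
  AND  operation = 'DELETE';
```

Like the SQL_REDO queries, this must run in the same session that called dbms_logmnr.start_logmnr, or against a permanent copy of the view such as the logmnr_contents table created in step 6.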
