I. [Preparations]
1. SQL> alter database add supplemental log data; -- enable minimal supplemental logging (so that SESSION_INFO is populated and the users who issued the statements are easier to identify);
2. SQL> @$ORACLE_HOME/rdbms/admin/dbmslm.sql -- install the log analysis package (DBMS_LOGMNR);
3. SQL> @$ORACLE_HOME/rdbms/admin/dbmslmd.sql -- install the log analysis dictionary package (DBMS_LOGMNR_D);
4. SQL> alter system set utl_file_dir='/home/oracle/logminer' scope=spfile; -- directory where the dictionary file for the analysis will be written (verified below, after the restart);
5. SQL> shutdown immediate; -- restart the instance so that the static parameter takes effect
6. SQL> startup;
7. mkdir /home/oracle/logminer -- create the directory at the OS level to make sure the path actually exists;
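Before moving on, it may help to verify that steps 1 and 4 actually took effect; a minimal check (the directory is the one configured above):
SQL> select supplemental_log_data_min from v$database; -- should return YES after step 1
SQL> show parameter utl_file_dir -- should show /home/oracle/logminer after the restart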
II. [Analysis]
1. Connect as a user with the SYSDBA privilege;
2. SQL> EXECUTE dbms_logmnr_d.build(dictionary_filename => 'sqltrace.ora', dictionary_location => '/home/oracle/logminer'); -- generate the dictionary file used for the analysis;
3. SQL> exec dbms_logmnr.add_logfile(options => dbms_logmnr.new, logfilename => '/home/oracle/flash_recovery_area/GSSA/archivelog/2009_11_16/logs_.arc'); -- add an archived log or redo log file by its absolute path;
To add a second log file:
SQL> exec dbms_logmnr.add_logfile(logfilename => '/opt/oracle/oradata/shim/redo03.log', options => dbms_logmnr.addfile); -- see the query below for how to list the available archive files
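If the exact archive file names are not known, they can be listed from the v$archived_log view (a small helper query, not part of the original steps):
SQL> select name, first_time from v$archived_log order by first_time desc; -- absolute paths of the archived logs, newest first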
4. SQL> execute dbms_logmnr.start_logmnr(dictfilename => '/home/oracle/logminer/sqltrace.ora'); -- start the analysis;
5. SQL> SELECT SQL_REDO FROM V$LOGMNR_CONTENTS WHERE TABLE_NAME = 'T_LOGMNR'; -- query the analysis results (a richer query is shown below);
SQL> create table temp_kxu as select * from V$LOGMNR_CONTENTS; -- optionally copy the results into a permanent table, because V$LOGMNR_CONTENTS is only visible to the session that started the analysis;
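Besides SQL_REDO, other columns of V$LOGMNR_CONTENTS are often worth selecting; a sketch, still using the example table T_LOGMNR from above:
SQL> SELECT USERNAME, TIMESTAMP, SEG_OWNER, OPERATION, SQL_REDO, SQL_UNDO, SESSION_INFO
     FROM V$LOGMNR_CONTENTS WHERE TABLE_NAME = 'T_LOGMNR'; -- SQL_UNDO holds the statement that reverses each change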
6. SQL> execute dbms_logmnr.end_logmnr(); -- end the analysis session;
Note: on Oracle 9i and 11g you do not have to enable supplemental logging for primary key and unique index columns; LogMiner can still extract the DML statements.
SQL> alter system switch logfile; -- force a log switch so that the current redo log is archived and can be analyzed;
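To check which supplemental logging levels are currently enabled, the following query can be used (column names as in recent versions; the minimal logging from step 1 of the preparations is usually enough):
SQL> select supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui from v$database;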