How to Use LogMiner to View Log Content

We know that every data operation is recorded in the redo log files. Before 8i there was no way to view the contents of the redo files, but starting with 9i Oracle ships LogMiner; through a LogMiner analysis we can clearly see any Oracle operation, such as dropping a table, and examine the detailed operation records.

1. To use LogMiner, run the following two scripts:

$ORACLE_HOME/rdbms/admin/dbmslm.sql
$ORACLE_HOME/rdbms/admin/dbmslmd.sql

SQL> connect sys/sys as sysdba
Connected.
SQL> @/u01/app/oracle/product/10.2.0/db_1/rdbms/admin/dbmslm.sql

Package created.

SQL> @/u01/app/oracle/product/10.2.0/db_1/rdbms/admin/dbmslmd.sql

Package created.
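These scripts create the DBMS_LOGMNR and DBMS_LOGMNR_D packages. As a quick sanity check (a minimal query against the standard DBA_OBJECTS view, not part of the original walkthrough), you can confirm they are present and valid:

SQL> select object_name, status
     from dba_objects
     where object_name in ('DBMS_LOGMNR', 'DBMS_LOGMNR_D')
     and object_type = 'PACKAGE';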
2. Modify the initialization parameter utl_file_dir to specify where the analysis files are stored:

SQL> alter system set utl_file_dir='/u01/logmine_dir' scope=spfile;
SQL> shutdown immediate
SQL> startup

3. Create a data dictionary file:

BEGIN
  SYS.dbms_logmnr_d.build(dictionary_filename => 'mylogmine.ora',
                          dictionary_location => '/u01/logmine_dir',
                          options             => dbms_logmnr_d.store_in_flat_file);
END;
/
mylogmine.ora is the name of the dictionary file (the name is arbitrary). The dictionary file maps the encodings and internal data types that Oracle uses internally to readable object names and types. For example, take this SQL statement:
insert into HR.JOBS (job_id, job_title, min_salary, max_salary) values ('IT_WT', 'Technical Writer', 4000, 11000);
If no data dictionary file is created, LogMiner generates a record like this instead:

insert into "UNKNOWN"."OBJ# 45522"("COL 1","COL 2","COL 3","COL 4") values
(HEXTORAW('45465f4748'), HEXTORAW('546563686e6963616c20577269746572'),
HEXTORAW('c229'), HEXTORAW('c3020b'));
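When the source database is open and you are mining its own logs, a flat-file dictionary is not the only option. A commonly used alternative (shown as a sketch here, not part of the original walkthrough) is to have LogMiner read the dictionary straight from the online catalog when it starts:

BEGIN
  -- Read the dictionary from the live data dictionary instead of a flat file.
  SYS.dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
END;
/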
4. Now log on to the database as an ordinary user and drop a table:

SQL> connect app_user/hxl
SQL> drop table tb_test;
SQL> connect / as sysdba
SQL> alter system switch logfile;

Switching the log file produces an archived log file; the drop operation we just performed is recorded in it.
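If you are unsure which archive log the switch produced, its name can be looked up (a small helper query against the standard V$ARCHIVED_LOG view, assuming a local archive destination):

SQL> select name
     from v$archived_log
     where sequence# = (select max(sequence#) from v$archived_log);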
5. Build the list of archive logs to analyze:
BEGIN
  SYS.dbms_logmnr.add_logfile(logfilename => '/u01/arch_log/1_13_787769576.dbf',
                              options     => dbms_logmnr.new);
END;
/

Here, the archive log file 1_13_787769576.dbf is the one generated by the log switch above. You can of course add other archive logs as your analysis requires; here we analyze only this one archive log file.
Further archive logs are added to the analysis as follows:

BEGIN
  dbms_logmnr.add_logfile(logfilename => '/u01/arch_log/201712_787769576.dbf',
                          options     => dbms_logmnr.addfile);
END;
/
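A file added by mistake can likewise be taken off the list before the analysis starts (a sketch using the REMOVEFILE option on the file added above):

BEGIN
  -- Remove a previously added log file from the LogMiner file list.
  SYS.dbms_logmnr.add_logfile(logfilename => '/u01/arch_log/201712_787769576.dbf',
                              options     => dbms_logmnr.removefile);
END;
/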
6. Start LogMiner and analyze the whole log file (with no filter conditions):

BEGIN
  SYS.dbms_logmnr.start_logmnr(dictfilename => '/u01/logmine_dir/mylogmine.ora');
END;
/

If you roughly know when the operation occurred, you can restrict the analysis to a time window instead:
BEGIN
  SYS.dbms_logmnr.start_logmnr(dictfilename => '/u01/logmine_dir/mylogmine.ora',
                               starttime    => to_date('2017-07-04 15:00:00', 'yyyy-mm-dd hh24:mi:ss'),
                               endtime      => to_date('2017-07-04 16:00:00', 'yyyy-mm-dd hh24:mi:ss'));
END;
/

Here we analyze the whole log file. Once the analysis completes, query the V$LOGMNR_CONTENTS view to see the results. Note that the contents of V$LOGMNR_CONTENTS are held in the PGA, so the view must be queried from the same session that ran the analysis; querying it from another session fails with error ORA-01306.
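Because the results live only in the analyzing session's PGA, a common workaround (sketched here; the table name logmnr_result is made up) is to copy them into a permanent table that any session can query:

SQL> create table logmnr_result as select * from v$logmnr_contents;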
The drop of the table shows up in the V$LOGMNR_CONTENTS view:

SQL> select sql_redo from v$logmnr_contents
     where upper(sql_redo) like '%TB_TEST%';

SQL_REDO
-----------------------------------
alter table "APP_USER"."TB_TEST" rename to "BIN$w/uel9luoujgqab/aqaw2w==$0";
drop table tb_test as "BIN$w/uel9luoujgqab/aqaw2w==$0";

Note that with the 10g recycle bin, the drop is recorded as a rename to a BIN$ name followed by the drop itself.
7. End the analysis:

BEGIN
  SYS.dbms_logmnr.end_logmnr;
END;
/

Notes on using LogMiner:
1. The platform of the database that analyzes the logs must match the platform of the database that generated them;
2. The analyzing database can be completely independent of the database that generated the logs, or the two can be the same database;
3. The version of the analyzing database must be the same as or higher than that of the database that generated the logs;
4. The character set of the analyzing database must match that of the database that generated the logs;
5. The LogMiner data dictionary file must be generated in the database that produced the logs.
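For note 4, each database's character set is easy to check (a standard NLS query; run it on both sides and compare):

SQL> select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';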
