Oracle log mining commands


1. Log Mining

1.1 Supplemental logging command (so that the redo log contains the ROWID)

  alter database add supplemental log data;
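To confirm that minimal supplemental logging actually took effect, you can query v$database (SUPPLEMENTAL_LOG_DATA_MIN is a standard column of that view):

```sql
-- Check whether minimal supplemental logging is enabled.
-- Returns YES once the ALTER DATABASE command above has taken effect.
select supplemental_log_data_min from v$database;
```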

 

1.2 DML-based log mining

Example: the SCOTT user updates records in EMP, and we want to mine the operation time and SCN of that change:

   update emp set sal=9000;

 

1.2.1 View the current redo log

  col member for a40
  select v1.group#, v1.sequence#, v1.first_change#, v1.status, v2.member
  from v$log v1, v$logfile v2
  where v1.group# = v2.group#
  order by 1;

Result:

    GROUP#  SEQUENCE#  FIRST_CHANGE#  STATUS    MEMBER
    ------  ---------  -------------  --------  -----------------------------
         1          7        1346456  CURRENT   /u01/oradata/mike/redo01.log
         1          7        1346456  CURRENT   /u01/oradata/mike/redo01b.log
         2          5        1346098  INACTIVE  /u01/oradata/mike/redo02b.log
         2          5        1346098  INACTIVE  /u01/oradata/mike/redo02.log
         3          6        1346283  INACTIVE  /u01/oradata/mike/redo03b.log
         3          6        1346283  INACTIVE  /u01/oradata/mike/redo03.log

    6 rows selected.

 

1.2.2 Use the current log to create an analysis list

  execute dbms_logmnr.add_logfile (logfilename => '/u01/oradata/mike/redo01.log', options => dbms_logmnr.new);

(/u01/oradata/mike/redo01.log is the CURRENT log identified in step 1.2.1.)

1.2.3 Start LogMiner

  execute sys.dbms_logmnr.start_logmnr (options => sys.dbms_logmnr.dict_from_online_catalog);

dict_from_online_catalog means LogMiner takes the data dictionary from the online catalog rather than from a dictionary file or from the redo log.

1.2.4 View the analysis results
  select scn, to_char(timestamp, 'yyyy-mm-dd hh24:mi:ss'), sql_redo, sql_undo
  from v$logmnr_contents
  where seg_name = 'EMP' and seg_owner = 'SCOTT';

Result:

    SCN      TO_CHAR(TIMESTAMP,'  SQL_REDO                                 SQL_UNDO
    -------  -------------------  ---------------------------------------  ---------------------------------------
    1352504  2013-06-20 11:26:38  update "SCOTT"."EMP" set "SAL" = '9000'  update "SCOTT"."EMP" set "SAL" = '2800'
                                  where "SAL" = '2800' and                 where "SAL" = '9000' and
                                  ROWID = 'aaasb2aaeaaaacxaaa';            ROWID = 'aaasb2aaeaaaacxaaa';
    ...
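On a busy system v$logmnr_contents can return many rows, so it helps to filter on the standard OPERATION and TIMESTAMP columns as well. A minimal sketch (the one-hour window is illustrative):

```sql
-- Restrict mining output to UPDATE statements against SCOTT.EMP
-- within a recent time window. OPERATION, TIMESTAMP, SEG_OWNER and
-- SEG_NAME are standard columns of v$logmnr_contents.
select scn, operation, sql_redo
from v$logmnr_contents
where seg_owner = 'SCOTT'
  and seg_name  = 'EMP'
  and operation = 'UPDATE'
  and timestamp > sysdate - 1/24;   -- last hour (illustrative)
```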

 

1.2.5 End LogMiner

  exec dbms_logmnr.end_logmnr;

1.3 DDL-based log mining

1.3.1 Data dictionary in the logs

1.3.1.1 Archive the data dictionary into the logs

  execute dbms_logmnr_d.build (options => dbms_logmnr_d.store_in_redo_logs);

1.3.1.2 Perform a DDL operation: drop the table TEST under SCOTT

  drop table test;

1.3.1.3 View the redo logs and archived logs

  select group#, sequence#, status from v$log;

Result:

    GROUP#  SEQUENCE#  STATUS
    ------  ---------  --------
         1         13  INACTIVE
         2         14  CURRENT
         3         12  INACTIVE

  select name, dictionary_begin, dictionary_end from v$archived_log;

Result:

    NAME                                                                        DIC DIC
    --------------------------------------------------------------------------  --- ---
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_3_8w4proqh_.arc  NO  NO
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_4_8w4psx1j_.arc  NO  NO
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_5_8w4q3sol_.arc  NO  NO
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_6_8w4qc0yo_.arc  NO  NO
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_7_8w564nnh_.arc  NO  NO
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_8_8w56c7gz_.arc  NO  NO
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_9_8w56d13f_.arc  NO  NO
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_10_8w56mrcn_.arc NO  NO
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_11_8w5bskbb_.arc NO  NO
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_12_8w5bstfx_.arc YES YES
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_13_8w5byo0v_.arc NO  NO

    11 rows selected.

DICTIONARY_BEGIN and DICTIONARY_END are YES for one archived log, so that archived log contains the data dictionary information and must be added to the analysis list.
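Rather than scanning the listing by eye, the archived log that carries the dictionary can be located directly; DICTIONARY_BEGIN and DICTIONARY_END are standard columns of v$archived_log:

```sql
-- Find archived logs that contain a complete copy of the data
-- dictionary (begin and end markers both set).
select name
from v$archived_log
where dictionary_begin = 'YES'
  and dictionary_end = 'YES';
```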
1.3.1.4 Use the logs to create an analysis list

First add the current redo file to the analysis list:

  execute dbms_logmnr.add_logfile (logfilename => '/u01/oradata/mike/redo02.log', options => dbms_logmnr.new);

Then add the archived log that contains the data dictionary information:

  execute dbms_logmnr.add_logfile (logfilename => '/u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_12_8w5bstfx_.arc', options => dbms_logmnr.addfile);

1.3.1.5 Analyze using the data dictionary stored in the logs

  execute sys.dbms_logmnr.start_logmnr (options => sys.dbms_logmnr.dict_from_redo_logs);

1.3.1.6 View the analysis result

  select scn, to_char(timestamp, 'yyyy-mm-dd hh24:mi:ss'), sql_redo
  from v$logmnr_contents
  where seg_name = 'TEST' and seg_owner = 'SCOTT';

Result:

    SCN      TO_CHAR(TIMESTAMP,'  SQL_REDO
    -------  -------------------  ---------------------------------------------------------------------
    1384524  2013-06-20 15:14:59  ALTER TABLE "SCOTT"."TEST" RENAME TO "BIN$351_qjklculgragaj3d0fa=$0";
    1384527  2013-06-20 15:14:59  drop table test AS "BIN$351_qjklculgragaj3d0fa=$0";

1.3.1.7 End LogMiner

  exec dbms_logmnr.end_logmnr;

1.3.2 Data dictionary in utl_file_dir

1.3.2.1 Check utl_file_dir

  show parameter utl_file_dir

If it is null, set it:

  alter system set utl_file_dir = '/u01/logmnr' scope = spfile;

Restart the instance:

  startup force

Then verify again:

  show parameter utl_file_dir

1.3.2.2 Create a data dictionary file in the specified directory

  execute dbms_logmnr_d.build (dictionary_filename => 'newdict.ora', dictionary_location => '/u01/logmnr');

1.3.2.3 View the current redo log and archived logs

  select group#, sequence#, first_change#, status from v$log;

Result:

    GROUP#  SEQUENCE#  FIRST_CHANGE#  STATUS
    ------  ---------  -------------  -------
         1         16        1389548  CURRENT
         2         14        1384482  ACTIVE
         3         15        1389541  ACTIVE

  select name, dictionary_begin, dictionary_end from v$archived_log;

Result:

    NAME                                                                        DIC DIC
    --------------------------------------------------------------------------  --- ---
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_3_8w4proqh_.arc  NO  NO
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_4_8w4psx1j_.arc  NO  NO
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_5_8w4q3sol_.arc  NO  NO
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_6_8w4qc0yo_.arc  NO  NO
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_7_8w564nnh_.arc  NO  NO
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_8_8w56c7gz_.arc  NO  NO
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_9_8w56d13f_.arc  NO  NO
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_10_8w56mrcn_.arc NO  NO
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_11_8w5bskbb_.arc NO  NO
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_12_8w5bstfx_.arc YES YES
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_13_8w5byo0v_.arc NO  NO
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_14_8w5kl20_.arc  NO  NO
    /u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_15_8w5klxm9_.arc NO  NO

    13 rows selected.

1.3.2.4 Perform a DDL operation: drop the table TEST under SCOTT

  drop table test;

1.3.2.5 Use the logs to create an analysis list

Because the data dictionary information was written to /u01/logmnr/newdict.ora before the table was dropped, only the current log and the recent archived logs need to be added to the analysis list:

  execute dbms_logmnr.add_logfile (logfilename => '/u01/oradata/mike/redo01.log', options => dbms_logmnr.new);
  execute dbms_logmnr.add_logfile (logfilename => '/u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_15_8w5klxm9_.arc', options => dbms_logmnr.addfile);
  execute dbms_logmnr.add_logfile (logfilename => '/u01/fast_recovery_area/MIKE/archivelog/2013_06_20/o1_mf_1_14_8w5kl20_.arc', options => dbms_logmnr.addfile);

1.3.2.6 Start LogMiner with the dictionary file

  execute dbms_logmnr.start_logmnr (dictfilename => '/u01/logmnr/newdict.ora');

1.3.2.7 Query and view the analysis result

  select scn, to_char(timestamp, 'yyyy-mm-dd hh24:mi:ss'), sql_redo
  from v$logmnr_contents
  where seg_name = 'TEST' and seg_owner = 'SCOTT';

Result:

    SCN      TO_CHAR(TIMESTAMP,'  SQL_REDO
    -------  -------------------  ---------------------------------------------------------------------
    1384524  2013-06-20 15:14:59  ALTER TABLE "SCOTT"."TEST" RENAME TO "BIN$351_qjklculgragaj3d0fa=$0";
    1384527  2013-06-20 15:14:59  drop table test AS "BIN$351_qjklculgragaj3d0fa=$0";
    1389502  2013-06-20 17:06:50  create table test (id int) tablespace stu;
    1389516  2013-06-20 17:07:00  insert into "SCOTT"."TEST" ("ID") values ('99');
    1389980  2013-06-20 17:18:56  alter table "SCOTT"."TEST" rename to "BIN$351_qjkmculgragaj3d0fa=$0";
    1389982  2013-06-20 17:18:56  drop table test AS "BIN$351_qjkmculgragaj3d0fa=$0";

    6 rows selected.

1.3.2.8 End LogMiner

  exec dbms_logmnr.end_logmnr;

1.4 Log mining summary

  show parameter utl
  alter system set utl_file_dir = '/u01/logmnr' scope = spfile;
  startup force
  -- (set the parameter, then restart the instance)

  execute dbms_logmnr_d.build (options => dbms_logmnr_d.store_in_redo_logs);
  -- (archive the data dictionary information into the logs)

  execute dbms_logmnr_d.build ('dict.ora', '/u01/logmnr', options => dbms_logmnr_d.store_in_flat_file);
  -- (archive the data dictionary information to the specified directory)

  execute dbms_logmnr_d.build (dictionary_filename => 'mcict.ora', dictionary_location => '/u01/logmnr');
  -- (create a data dictionary file in the specified directory)

  execute dbms_logmnr.add_logfile (logfilename => '', options => dbms_logmnr.new);
  -- (add the first log to the analysis list)

  execute dbms_logmnr.add_logfile (logfilename => '', options => dbms_logmnr.addfile);
  -- (add further logs to the analysis list)

  execute dbms_logmnr.start_logmnr (dictfilename => '/u01/logmnr/dict.ora');
  -- (analyze using the data dictionary file at the specified path)

  execute sys.dbms_logmnr.start_logmnr (options => sys.dbms_logmnr.dict_from_online_catalog);
  -- (analyze using the online data dictionary)

  execute sys.dbms_logmnr.start_logmnr (options => sys.dbms_logmnr.dict_from_redo_logs);
  -- (analyze using the data dictionary stored in the logs)

  select xxx from v$logmnr_contents where xxx;
  -- (query the mined information)

  exec dbms_logmnr.end_logmnr;
  -- (end mining)
