I. How to analyze logs with LogMiner
Currently, the only way to analyze Oracle redo logs is with LogMiner, the tool Oracle provides for this purpose. All changes to the database are recorded in the redo logs, but we cannot read the raw log contents directly; LogMiner is what makes the log information intelligible. In this respect it is similar to tkprof: one analyzes log information, the other formats trace files. By analyzing the logs, we can achieve the following goals:
1. Identify logical changes to the database;
2. Detect and correct user errors;
3. Perform after-the-fact audits;
4. Perform change analysis.
In addition, the information recorded in the log includes the database change history, the change type (INSERT, UPDATE, DELETE, DDL, and so on), the SCN corresponding to each change, and the user who performed each operation. When LogMiner analyzes a log, it records the equivalent SQL statement and UNDO statement in the SQL_REDO and SQL_UNDO columns of the V$LOGMNR_CONTENTS view, respectively. Note that these are equivalent statements, not the original SQL. For example, if we originally executed "delete a where c1 <> 'cyx';" and six rows were deleted, LogMiner reconstructs six equivalent single-row DELETE statements. So we should realize that V$LOGMNR_CONTENTS does not show what was literally executed. From the database's point of view this is easy to understand: the log records row-level operations, because the very same "delete a where c1 <> 'cyx';" statement may delete a different number of rows in different environments, so recording the statement itself would have little practical value. LogMiner instead reconstructs the multiple single-row statements that the original statement was converted into under the actual conditions.
Also, because Oracle redo logs do not record the original names of objects (such as tables and their columns) but only their internal numbers (for a table, its object ID in the database; for a table's columns, their ordinal positions: COL 1, COL 2, and so on), the numbers must be translated back into the corresponding names so that the SQL statements reconstructed by LogMiner are recognizable. This translation requires a data dictionary. (That is to say, LogMiner itself can run without a data dictionary; see the analysis process below for details.) LogMiner uses DBMS_LOGMNR_D.BUILD() to extract the data dictionary information.
LogMiner contains two PL/SQL packages and several views:
1. The DBMS_LOGMNR_D package, which contains only one procedure for extracting data dictionary information: DBMS_LOGMNR_D.BUILD().
2. The DBMS_LOGMNR package, which has three procedures:
add_logfile(name varchar2, options number) - adds or removes log files for analysis;
start_logmnr(start_scn number, end_scn number, start_time number, end_time number, dictfilename varchar2, options number) - starts the log analysis, sets the SCN/time window, and specifies whether to use extracted data dictionary information;
end_logmnr() - terminates the analysis session and releases the memory occupied by LogMiner.

LogMiner-related views:
1. V$LOGMNR_DICTIONARY: the data dictionary files LogMiner may use. Because LogMiner can have multiple dictionary files, this view displays them.
2. V$LOGMNR_PARAMETERS: the parameters set for the current LogMiner session.
3. V$LOGMNR_LOGS: the list of logs currently being analyzed.
4. V$LOGMNR_CONTENTS: the log analysis results.
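Putting the two packages together, a minimal session follows the sequence sketched below (the dictionary path and archived log name are the same ones used in the examples later in this article; substitute your own):

-- 1. Build the dictionary file (once; requires UTL_FILE_DIR to be set)
exec dbms_logmnr_d.build(dictionary_filename => 'dic.ora', dictionary_location => '/data6/cyx/logmnr');
-- 2. Register the log file(s) to analyze
exec dbms_logmnr.add_logfile(logfilename => '/data6/cyx/rac1arch/arch_1_197.arc', options => dbms_logmnr.new);
-- 3. Start the analysis session
exec dbms_logmnr.start_logmnr(dictfilename => '/data6/cyx/logmnr/dic.ora');
-- 4. Query the results
select sql_redo, sql_undo from v$logmnr_contents;
-- 5. End the session and release its memory
exec dbms_logmnr.end_logmnr();

Each of these steps is explained in detail in section III.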
II. Oracle9i LogMiner enhancements:
1. More data/storage types are supported: chained/migrated rows, CLUSTER table operations, direct-path inserts, and DDL operations. In the SQL_REDO column of V$LOGMNR_CONTENTS, DDL statements appear as issued, except that in a CREATE USER statement the password appears in encrypted form rather than as the original plaintext. If the TX_AUDITING initialization parameter is set to TRUE, the database account for every operation is recorded.
2. Data dictionary extraction and usage options: the data dictionary can now be extracted not only to an external file but also into the redo log stream itself, which provides a snapshot of the dictionary as it was at that moment in the log stream, so that analysis can be performed offline.
3. DML operations can be grouped by transaction: setting the COMMITTED_DATA_ONLY option in START_LOGMNR() groups DML operations by transaction, and committed transactions are returned in SCN order.
4. Schema changes are supported: if the DDL_DICT_TRACKING option of LogMiner is used while the database is open, Oracle9i's LogMiner automatically compares the original log stream with the current system's data dictionary, detects and tracks the differences between them, and returns the correct DDL statements. In this way LogMiner returns correct statements even if the tables involved in the log stream have since been altered or no longer exist at all.
5. More column information can be recorded in the logs: for example, an UPDATE operation can record not only the updated columns but also additional image information.
6. Value-based queries are supported: Oracle9i LogMiner can query based on the actual data values involved, not just on metadata such as operations and objects. For example, for a payroll table, we can easily find the original UPDATE statement that changed an employee's salary from 1000 to 2000; before this, we could only select all UPDATE statements.
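As a sketch of a value-based query, Oracle9i provides the DBMS_LOGMNR.MINE_VALUE function, which extracts a column's value from the redo or undo portion of a row in V$LOGMNR_CONTENTS. The payroll query mentioned above might look like the following (the SCOTT.EMP table and SAL column are hypothetical; substitute your own schema, table, and column):

-- Find the UPDATE that changed SAL from 1000 to 2000
select sql_redo
  from v$logmnr_contents
 where seg_owner = 'SCOTT'
   and seg_name = 'EMP'
   and operation = 'UPDATE'
   and dbms_logmnr.mine_value(undo_value, 'SCOTT.EMP.SAL') = '1000'
   and dbms_logmnr.mine_value(redo_value, 'SCOTT.EMP.SAL') = '2000';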
III. Log analysis process in Oracle8i/9i
LogMiner only needs a running instance in order to operate. LogMiner uses a dictionary file to translate the internal numbers of Oracle objects into names; without a dictionary file, the internal object numbers are displayed directly. For example, suppose we execute the following statement:

delete from "C"."A" where "C1" = 'gototop' and ROWID = 'aaabg1aafaaabqaaah';

Without a dictionary file, LogMiner's analysis result looks like this:

delete from "UNKNOWN"."OBJ #6197" where "COL 1" = HEXTORAW('d6a7d4ae') and ROWID = 'aaabg1aafaaabqaaah';

To create a dictionary file, the database must be at least in the MOUNT state; then execute dbms_logmnr_d.build to extract the data dictionary information to an external file. The specific analysis steps are as follows:
1. Confirm that the initialization parameter UTL_FILE_DIR is set and that Oracle has read/write permission on that directory, then start the instance. In this example the UTL_FILE_DIR parameter is:

SQL> show parameter utl

NAME          TYPE    VALUE
------------  ------  ------------------
utl_file_dir  string  /data6/cyx/logmnr

This directory is used to store the dictionary file generated by the dbms_logmnr_d.build procedure. If you will not be using a dictionary file, you can skip this step and the next.
2. Generate dictionary information files:
exec dbms_logmnr_d.build(dictionary_filename => 'dic.ora', dictionary_location => '/data6/cyx/logmnr');

Here dictionary_location is the location where the dictionary file is stored. It must exactly match the value of UTL_FILE_DIR: for example, if UTL_FILE_DIR = /data6/cyx/logmnr/, the statement above will fail, merely because a trailing "/" was appended to UTL_FILE_DIR; in many other places Oracle is not sensitive to this "/". dictionary_filename is the name of the dictionary file. We can also omit the parameter names, that is:

exec dbms_logmnr_d.build('dic.ora', '/data6/cyx/logmnr');

If the parameter in step 1 is not set and you run this step directly, Oracle reports the following error:
ERROR at line 1:
ORA-01308: initialization parameter utl_file_dir is not set
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 923
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 1938
ORA-06512: at line 1
Note that the following error can occur in Oracle 8.1.7 for Windows:
14:26:05 SQL> execute dbms_logmnr_d.build('oradict.ora','c:\oracle\admin\ora\log');
BEGIN dbms_logmnr_d.build('oradict.ora','c:\oracle\admin\ora\log'); END;
*
ERROR at line 1:
ORA-06532: Subscript outside of limit
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 793
ORA-06512: at line 1

Solution:
Edit the $ORACLE_HOME/rdbms/admin/dbmslmd.sql file, replacing

TYPE col_desc_array IS VARRAY(513) OF col_description;

with:

TYPE col_desc_array IS VARRAY(700) OF col_description;

Save the file, and then run the script again:
15:09:06 SQL> @c:\oracle\ora81\rdbms\admin\dbmslmd.sql
Package created.
Package body created.
No errors.
Grant succeeded.
Then re-compile the DBMS_LOGMNR_D package:
15:09:51 SQL> alter package DBMS_LOGMNR_D compile body;
Package body altered.

Then run dbms_logmnr_d.build again:

15:10:06 SQL> execute dbms_logmnr_d.build('oradict.ora','c:\oracle\admin\ora\log');
PL/SQL procedure successfully completed.

3. Add the log files to be analyzed
SQL> exec dbms_logmnr.add_logfile(logfilename => '/data6/cyx/rac1arch/arch_1_197.arc', options => dbms_logmnr.new);
PL/SQL procedure successfully completed.

Here the options parameter has three possible values:
NEW - start a new log file list;
ADDFILE - add a log file to the list, as in the following example;
REMOVEFILE - the opposite of ADDFILE.

SQL> exec dbms_logmnr.add_logfile(logfilename => '/data6/cyx/rac1arch/arch_2_86.arc', options => dbms_logmnr.addfile);
PL/SQL procedure successfully completed.
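For completeness, removing a file from the list is symmetric. For example, to drop the second archived log registered above:

SQL> exec dbms_logmnr.add_logfile(logfilename => '/data6/cyx/rac1arch/arch_2_86.arc', options => dbms_logmnr.removefile);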
4. After adding the log files to be analyzed, we can start the LogMiner analysis:
SQL> exec dbms_logmnr.start_logmnr(dictfilename => '/data6/cyx/logmnr/dic.ora');
PL/SQL procedure successfully completed.

If you are not using a dictionary file (only a running instance is needed), simply omit the dictfilename parameter:

SQL> exec dbms_logmnr.start_logmnr();
PL/SQL procedure successfully completed.

The dbms_logmnr.start_logmnr() procedure also has several other parameters that define the analysis time/SCN window: STARTSCN/ENDSCN define the starting/ending SCN of the analysis, and STARTTIME/ENDTIME define its starting/ending time. For example, the following call analyzes only the logs between '2017-09-21 09:39:00' and '2017-09-21 09:45:00':

SQL> exec dbms_logmnr.start_logmnr(dictfilename => '/data6/cyx/logmnr/dic.ora', -
starttime => '2017-09-21 09:39:00', endtime => '2017-09-21 09:45:00');
PL/SQL procedure successfully completed.

The "-" at the end of the first line is the SQL*Plus line-continuation character; it is not needed if everything is on one line. We can check the timestamps of the valid logs:

SQL> select distinct timestamp from v$logmnr_contents;

TIMESTAMP
-----------------
09:40:02
09:42:39

Note that I had already set the NLS_DATE_FORMAT environment variable, so the dates above can be written directly in that format. If you have not set it, use the to_date function to convert them.

SQL> !env | grep NLS
NLS_LANG=american_america.zhs16cgb231280
NLS_DATE_FORMAT=YYYY-MM-DD HH24:MI:SS
ORA_NLS33=/oracle/oracle9/app/oracle/product/9.2.0/ocommon/nls/admin/data

The to_date form looks like this:

exec dbms_logmnr.start_logmnr(dictfilename => '/data6/cyx/logmnr/dic.ora', -
starttime => to_date('2017-09-21 09:39:00', 'YYYY-MM-DD HH24:MI:SS'), -
endtime => to_date('2017-09-21 09:45:00', 'YYYY-MM-DD HH24:MI:SS'));

The STARTSCN and ENDSCN parameters are used similarly.
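An SCN-bounded session might look like the following sketch (the SCN values here are hypothetical; substitute a range that actually falls within your registered logs):

exec dbms_logmnr.start_logmnr(dictfilename => '/data6/cyx/logmnr/dic.ora', -
startscn => 100000, endscn => 100500);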
5. Once the above is done, we can query the LogMiner-related views to extract the information we need. V$LOGMNR_LOGS shows the list of logs currently being analyzed; if the database has two instances (OPS/RAC), two different THREAD_ID values will appear in V$LOGMNR_LOGS. The real analysis results are in V$LOGMNR_CONTENTS, which holds a great deal of information; we can filter for whatever we are interested in. I will list common tracing scenarios separately later.
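As a sketch of a typical query against V$LOGMNR_CONTENTS (the SCOTT user and EMP table are hypothetical placeholders), one might pull the redo and undo SQL for a single user's changes to one table:

-- List DML made by user SCOTT against table EMP, with the SQL to redo or undo each change
select scn, timestamp, operation, sql_redo, sql_undo
  from v$logmnr_contents
 where username = 'SCOTT'
   and seg_name = 'EMP'
   and operation in ('INSERT', 'UPDATE', 'DELETE');

The sql_undo column is what makes recovering from user errors practical: it contains a statement that reverses the corresponding change.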
6. When the entire analysis is finished, execute the dbms_logmnr.end_logmnr procedure to exit the LogMiner session and release its memory; simply exiting SQL*Plus also terminates it automatically. (End)