In-depth analysis of Oracle Database Log Files

As an Oracle DBA, we sometimes need to track down accidental data deletions or malicious user operations. In such cases, we not only need to find the database account that executed the operations, but also the client (IP address, etc.) from which they were issued. The most effective, practical, and economical way to address these problems is to analyze the Oracle database log files. This article discusses Oracle log analysis technology in depth.

I. How to analyze logs: LogMiner

Currently, the only means Oracle provides for analyzing its logs is LogMiner. All changes to an Oracle database are recorded in the logs, but we cannot read the raw log contents directly; LogMiner is the tool that makes the log information intelligible. In this respect it is similar to tkprof: one formats log information, the other formats trace files. By analyzing the logs, we can achieve the following goals:

1. Identify logical changes to the database;

2. Detect and correct user misoperations;

3. Perform post-event audits;

4. Perform change analysis.

In addition, the information recorded in the logs includes the database change history, the change type (INSERT, UPDATE, DELETE, DDL, etc.), the SCN corresponding to each change, and the user who performed the operation. When LogMiner analyzes a log, it reconstructs equivalent SQL statements and undo statements (recorded, respectively, in the sql_redo and sql_undo columns of the v$logmnr_contents view). Note that an equivalent statement is not the original SQL statement: if we originally executed "delete a where c1 <> 'cyx';" and it deleted six rows, LogMiner reconstructs six equivalent single-row DELETE statements. So the v$logmnr_contents view does not show the statements as they were originally issued. From the database's point of view this is easy to understand: the same "delete a where c1 <> 'cyx';" statement may delete a different number of rows in different environments, so recording the statement itself would have little practical value; what LogMiner reconstructs is the set of single-row operations that the statement actually turned into under the conditions at the time.
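
For example, once an analysis session is running (see the process in section III), the reconstructed statement pairs can be read back with a query like the following minimal sketch; filtering on seg_name 'A' assumes the example table used above:

     select scn, operation, sql_redo, sql_undo
       from v$logmnr_contents
      where seg_name = 'A';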

Also, Oracle redo logs do not record the original names of objects (tables, columns, and so on), only their internal Oracle numbers (for a table, its object ID in the database; for a column, its position within the table: COL 1, COL 2, etc.). Therefore, to make the SQL reconstructed by LogMiner readable, these numbers must be translated back into names, and that requires a data dictionary (LogMiner itself does not strictly need one; see the analysis process below for details). LogMiner extracts the data dictionary information with dbms_logmnr_d.build().

LogMiner consists of two PL/SQL packages and several views:

1. The dbms_logmnr_d package. It contains only one procedure, dbms_logmnr_d.build(), which extracts the data dictionary information.

2. The dbms_logmnr package, which has three procedures:

add_logfile(name varchar2, options number) - adds or removes log files for analysis;

start_logmnr(start_scn number, end_scn number, start_time number, end_time number, dictfilename varchar2, options number) - starts the log analysis, setting the analysis time/SCN window and whether to use the extracted data dictionary information;

end_logmnr() - ends the analysis session and releases the memory occupied by LogMiner.
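
Putting the three procedures together, a typical session follows the skeleton below (a sketch only; the file names and paths are placeholders, and complete examples follow in section III):

     exec dbms_logmnr.add_logfile(logfilename=>'/path/arch_1_100.arc', options=>dbms_logmnr.new);
     exec dbms_logmnr.start_logmnr(dictfilename=>'/path/dic.ora');
     -- query v$logmnr_contents here, then release the session:
     exec dbms_logmnr.end_logmnr;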

The LogMiner-related data dictionary views are:

1. v$logmnr_dictionary: the data dictionary information LogMiner may use; since LogMiner can work with multiple dictionary files, this view displays them all.

2. v$logmnr_parameters: the parameters set for the current LogMiner session.

3. v$logmnr_logs: the list of logs currently being analyzed.

4. v$logmnr_contents: the log analysis results.
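
For instance, once start_logmnr() has been called, the settings in effect for the session can be checked directly (a minimal sketch):

     select * from v$logmnr_parameters;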

II. Oracle9i LogMiner enhancements

1. More data/storage types are supported: chained/migrated rows, cluster table operations, direct-path inserts, and DDL operations. In the sql_redo column of v$logmnr_contents you can see the original statement of a DDL operation (except for CREATE USER, where the password appears in encrypted form rather than as the original text). If the transaction_auditing initialization parameter is set to true, the database account behind every operation is recorded as well.

2. New data dictionary extraction and usage options: the data dictionary can now be extracted not only to an external file but also into the redo log stream, which provides a snapshot of the dictionary as it was at that point in the log stream and makes fully offline analysis possible.

3. DML operations can be grouped by transaction: setting the committed_data_only option of start_logmnr() groups DML operations so that committed transactions are returned in SCN order (see the sketch after this list).

4. Schema changes are supported: if LogMiner's ddl_dict_tracking option is enabled while the database is open, Oracle9i LogMiner automatically compares the original log stream with the current system's data dictionary, automatically detecting and marking the differences between them; this way LogMiner returns correct DDL statements even if a table involved in the log stream has since been altered or no longer exists at all.

5. The ability to record more column information in the logs: for example, an update can record not only the columns it changed but also additional columns of the affected row (supplemental logging).

6. Support for value-based queries: Oracle9i LogMiner lets you query not only on metadata (operation, object, etc.) but also on the actual data values involved. For example, for a payroll table we can directly find the original update statement that changed an employee's salary from 1000 to 2000, whereas before we could only select all update statements (see the sketch after this list).
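
The following minimal sketch combines several of the options above; the table SCOTT.EMP and its SAL column are assumptions made purely for illustration:

     -- item 2: extract the dictionary into the redo stream instead of a flat file
     exec dbms_logmnr_d.build(options => dbms_logmnr_d.store_in_redo_logs);

     -- items 3 and 4: return only committed transactions, in SCN order,
     -- while tracking DDL-driven dictionary changes
     exec dbms_logmnr.start_logmnr( -
          options => dbms_logmnr.dict_from_redo_logs + -
                     dbms_logmnr.committed_data_only + -
                     dbms_logmnr.ddl_dict_tracking);

     -- item 6: value-based query -- find the update that changed a salary
     -- from 1000 to 2000 in the assumed SCOTT.EMP table
     select sql_redo from v$logmnr_contents
      where operation = 'UPDATE'
        and dbms_logmnr.mine_value(undo_value, 'SCOTT.EMP.SAL') = '1000'
        and dbms_logmnr.mine_value(redo_value, 'SCOTT.EMP.SAL') = '2000';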

III. The log analysis process in Oracle8i/9i

LogMiner can run whenever an instance is started. LogMiner uses a dictionary file to translate Oracle internal object numbers into names; if no dictionary file is available, it displays the internal object numbers directly. For example, suppose we execute the following statement:


     delete from "C"."A" where "C1" = ‘gototop’ and 
Rowid = 'aaabg1aafaaabqaaah'; if no dictionary file exists, logminer analyzes the result as follows: delete from "unknown". "OBJ #6197" where "col 1"
= HEXTORAW('d6a7d4ae') and ROWID = 'AAABg1AAFAAABQaAAH';

If you want to use a dictionary file, the database must be at least mounted. Then run dbms_logmnr_d.build to extract the data dictionary information into an external file. The analysis steps are as follows:

1. Confirm that the initialization parameter utl_file_dir has been set and that Oracle has read and write permissions on that directory, then start the instance. In this example the utl_file_dir parameter is as follows:


     SQL> show parameter utlNAME              TYPE       VALUE---------------  --------- --------------------------utl_file_dir      string   /data6/cyx/logmnr

This directory is used to hold the dictionary information file generated by the dbms_logmnr_d.build procedure. If you do not intend to use a dictionary file, you can skip the next step.
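
If the parameter has not been set yet, it can be added to the init.ora parameter file (a sketch using this example's path); utl_file_dir is a static parameter, so the instance must be restarted for it to take effect:

     # init.ora -- directory where dbms_logmnr_d.build may write dictionary files
     utl_file_dir = /data6/cyx/logmnr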

2. Generate dictionary information files:


     exec dbms_logmnr_d.build(dictionary_filename =>'dic.ora',dictionary_location => '/data6/cyx/logmnr');

dictionary_location specifies where the dictionary information file is stored. It must match the utl_file_dir value exactly: for example, if utl_file_dir = /data6/cyx/logmnr/ (with a trailing "/"), the statement above would fail simply because of that extra "/", even though a trailing slash is insignificant in many other contexts.

dictionary_filename is the name of the dictionary information file. Of course, the two parameters can also be passed positionally instead of by name:


     exec dbms_logmnr_d.build('dic.ora','/data6/cyx/logmnr');

If the parameter from step 1 has not been set and you run this step directly, Oracle reports the following error:


     ERROR at line 1:
     ORA-01308: initialization parameter utl_file_dir is not set
     ORA-06512: at "SYS.DBMS_LOGMNR_D", line 923
     ORA-06512: at "SYS.DBMS_LOGMNR_D", line 1938
     ORA-06512: at line 1

Note that the following error can occur in Oracle 8.1.7 for Windows:


     14:26:05 SQL> execute dbms_logmnr_d.build('oradict.ora',
'c:/oracle/admin/ora/log');BEGIN dbms_logmnr_d.build('oradict.ora',
'c:/oracle/admin/ora/log'); END;*ERROR at line 1:ORA-06532: Subscript outside of limitORA-06512: at "SYS.DBMS_LOGMNR_D", line 793ORA-06512: at line 1

Solution:


     Edit the "$ ORACLE_HOME/rdbms/admindbmslmd. SQL" file and change the type col_desc_array is varray (513) of col_description; To type col_desc_array is varray (700) of col_description;

Save the file and run the script again:


     15:09:06 SQL> @c:/oracle/ora81/rdbms/admin/dbmslmd.sqlPackage created.Package body created.No errors.Grant succeeded.

Then re-compile the dbms_logmnr_d package:


     15:09:51 SQL> alter package dbms_logmnr_d compile body; Package body altered. Then run dbms_logmnr_d.build again to: 15: 10: 06 SQL> execute dbms_logmnr_d.build ('oradict. ora ',
'c:/oracle/admin/ora/log');PL/SQL procedure successfully completed.

3. Add the log files to be analyzed:


     SQL>exec dbms_logmnr.add_logfile( logfilename=>'/data6/cyx/rac1arch/arch_1_197.arc', 
options=>dbms_logmnr.new);PL/SQL procedure successfully completed.

The options parameter here accepts three values:

new - creates a new list of log files;

addfile - adds a log file to the list, as in the example below;

removefile - the opposite of addfile; removes a log file from the list (a sketch follows below).


     SQL> exec dbms_logmnr.add_logfile( logfilename=>'/data6/cyx/rac1arch/arch_2_86.arc', 
options=>dbms_logmnr.addfile);PL/SQL procedure successfully completed.

4. After adding the log files to be analyzed, we can start the LogMiner analysis:


     SQL> exec dbms_logmnr.start_logmnr
(dictfilename=>'/data6/cyx/logmnr/dic.ora');PL/SQL procedure successfully completed.

If you are not using a dictionary file (in which case only a started instance is required), omit the dictfilename parameter:


     SQL> exec dbms_logmnr.start_logmnr();PL/SQL procedure successfully completed.

Of course, the dbms_logmnr.start_logmnr() procedure also has several other parameters that define the time/SCN window of the analysis:

startscn/endscn - define the starting/ending SCN of the analysis;

starttime/endtime - define the start/end time of the analysis.

For example, the following call analyzes only the logs written between '2003-09-21 09:39:00' and '2003-09-21 09:45:00':


     SQL> exec dbms_logmnr.start_logmnr
(dictfilename=>'/data6/cyx/logmnr/dic.ora' , -starttime => '2003-09-21 09:39:00',
endtime => '2003-09-21 09:45:00');PL/SQL procedure successfully completed.

The "-" at the end of the first line in the above process indicates a transposed line. If you are in the same line, you do not need. We can see the timestamp of valid logs:


     SQL> select distinct timestamp from 
v$logmnr_contents;TIMESTAMP-------------------2003-09-21 09:40:022003-09-21 09:42:39

Note that I can write the dates above in this plain format only because I had set the nls_date_format environment variable beforehand; if you have not set it, you must convert the dates with the to_date function.


     SQL> !env|grep NLSNLS_LANG=american_america.zhs16cgb231280NLS_DATE_FORMAT=YYYY-MM-DD HH24:MI:SSORA_NLS33=/oracle/oracle9/app/oracle/product/9.2.0
The format of/ocommon/NLS/admin/data using to_date is as follows: exec dbms_logmnr.start_logmnr (dictfilename => '/data6
/cyx/logmnr/dic.ora',-starttime => to_date('2003-09-21 09:39:00','YYYY-MM-DD
 HH24:MI:SS'),-endtime => to_date('2003-09-21 09:45:00','YYYY-MM-DD 
HH24:MI:SS'));

The startscn and endscn parameters are used in a similar way.
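
For instance, a minimal sketch with placeholder SCN values:

     SQL> exec dbms_logmnr.start_logmnr(dictfilename=>'/data6/cyx/logmnr/dic.ora', -
     > startscn => 20000000, endscn => 20000500);
     PL/SQL procedure successfully completed.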

5. Once the steps above are complete, we can query the LogMiner-related views to extract the information we need. v$logmnr_logs shows the list of logs currently under analysis; if the database has two instances (OPS/RAC), two different thread_id values will appear in v$logmnr_logs.
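
For example (a sketch):

     SQL> select thread_id, filename from v$logmnr_logs;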

The actual analysis results are in v$logmnr_contents. It contains a great deal of information, and we can track whatever we are interested in as needed. Common tracking scenarios are listed separately in section V below.

6. When the whole process is finished, run the dbms_logmnr.end_logmnr procedure to exit the LogMiner session; simply exiting SQL*Plus also terminates it automatically.
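
For example:

     SQL> exec dbms_logmnr.end_logmnr;
     PL/SQL procedure successfully completed.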

IV. How to use LogMiner to analyze Oracle8 log files

Although LogMiner only appeared with Oracle8i, it can also be used to analyze Oracle8 log files, although this is a little more troublesome and subject to certain restrictions. The approach is as follows:

First copy the Oracle8i $ORACLE_HOME/rdbms/admin/dbmslmd.sql script to the corresponding directory on the host where the Oracle8 database runs; this script creates the dbms_logmnr_d package (note that in Oracle9i the dbms_logmnr package is created as well). You may rename the copy, for example to dbmslogmnrd.sql. Then run the script against the Oracle8 database and use dbms_logmnr_d.build to create the dictionary information file. Now copy the Oracle8 archived logs together with this dictionary file to the host where the Oracle8i database runs, and analyze the Oracle8 logs there starting from step 3 of the process above, except that dbms_logmnr.start_logmnr() must point at the Oracle8 dictionary information file.

As mentioned earlier, if you are not using a dictionary file, you can simply copy the Oracle8 archived logs to the host running the Oracle8i database and analyze them directly.

In fact, this touches on the question of using LogMiner across platforms. I have tried it, and Oracle9i can also analyze Oracle8i logs. All of this, however, is subject to limits, mainly the following:

1. The dictionary file used by LogMiner must be generated by the same database that produced the log files being analyzed, and that database's character set must match that of the database running LogMiner. This is easy to understand: a dictionary from a different database would have no valid correspondence with the logs.

2. The database that generated the logs must run on the same hardware platform as the database executing LogMiner, although the operating system versions may differ. In my test (interested readers can download the full test transcript from my website, http://www.ncn.cn; it is too long to include here), the two databases both ran Tru64 UNIX, one on v5.1a and the other on v4.0f. If the operating systems do not match, the following error occurs:


     ORA-01284: file /data6/cyx/logmnr/arch_1_163570.arc cannot be opened
     ORA-00308: cannot open archived log '/data6/cyx/logmnr/arch_1_163570.arc'
     ORA-27048: skgfifi: file header information is invalid
     ORA-06512: at "SYS.DBMS_LOGMNR", line 63
     ORA-06512: at line 1

V. Analyzing v$logmnr_contents

We already know that LogMiner's analysis results are stored in v$logmnr_contents, which contains a great deal of information; we can track whatever we are interested in as needed. So what are we usually interested in?

1. Tracking database structure changes, that is, DDL operations. As described above, only Oracle9i supports this:


     SQL> select timestamp,sql_redo from v$logmnr_contents2 where upper(sql_redo) like '%CREATE%';TIMESTAMP-------------------SQL_REDO-------------------------2003-09-21 10:01:55create table t (c1 number);

2. Tracking users' misoperations or malicious operations:

For example, suppose we have this requirement: we have found that an employee used a program to modify information in the business database, changing the charge type of certain telephone numbers to free, and we now need to find out from the database who did it. How? LogMiner gives us the means to analyze the log files: the session_info column of v$logmnr_contents contains information such as the following:


     login_username=NEW_97 client_info= OS_username=oracle8 Machine_name=phoenix1 OS_terminal=ttyp3
     OS_process_id=8004 OS_program_name=sqlplus@phoenix1 (TNS V1-V3)

Although this is already a lot of information, in our business database every program logs on with the same login_username, so it is difficult to identify the person from the information above.

However, we noticed that not everyone has permission to write programs on the company's application server; a malicious user would generally connect to the database directly from his own PC, which calls for precise positioning. IP tracking was the first thing we thought of, and it also meets our actual requirements, because the company's internal IP addresses are centrally managed: once we track down an IP address, we can locate the user exactly. We cannot see the IP address directly in session_info, but there is still a solution: the content of session_info is in fact extracted from the v$session view, so we can create a logon trigger in the production database to record each client's IP address:


     create or replace trigger on_logon_trigger
     after logon on database
     begin
       dbms_application_info.set_client_info(sys_context('userenv', 'ip_address'));
     end;
     /

Now we can see the IP address of every newly logged-on client in the client_info column of the v$session view, and the problem above can be solved. Assuming the updated table is named hmlx, we can find the required information with the following SQL:


     SQL > select session_info ,sql_redo from v$logmnr_contents 2 where upper(operation) = 'UPDATE'  and upper(sql_redo) 
like '%HMLX%'3 /SESSION_INFO-----------------------------------------SQL_REDO-----------------------------------------login_username=C client_info=10.16.98.26 OS_username=
sz-xjs-chengyx Machine_name=GDTEL/SZ-XJS-CHENGYXupdate "C"."HMLX" set "NAME" = 'free' where "NAME" = 
'ncn.cn' and ROWID = 'AAABhTAAFAAABRaAAE';

And with that, this article comes to an end. If you have any questions, you can visit my personal website (www.ncn.cn) for the latest news, or contact me directly via MSN (gototop_ncn@hotmail.com).

VI. References

1. Oracle9i LogMiner Technical White Paper

2. Metalink note: How to Setup LogMiner (Doc ID 111886.1)
