Detailed and operational versions of rolling back committed Oracle transactions

Source: Internet
Author: User
Tags: rollback, sqlplus

Detailed version:

LogMiner is a tool every DBA should be familiar with. When you one day have to perform incomplete recovery because of a user's mis-operation, you often cannot be sure exactly when the operation happened, which makes the recovery very difficult and can leave the lost data unrecoverable. LogMiner helps you determine the exact time of the erroneous operation.
I. A few notes on LogMiner

1. LogMiner can help you determine the exact time and SCN of the various DML and DDL operations performed at a given point, based on archived log files and online redo log files.
2. It can only be used in Oracle 8i and later versions, but it can analyze Oracle 8 logs.
3. Oracle 8i can only analyze DML operations; from Oracle 9i it can also analyze DDL operations.
4. LogMiner does not support index-organized tables, LONG, LOB, or collection types.
5. LogMiner cannot be used in an MTS environment.
II. Operation steps

Before creating the related DBMS_LOGMNR packages, note the following:

All of the actions below should be performed as the SYS user connected AS SYSDBA. Other identities are presumably not allowed, but this has not been verified.
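To confirm you are connected with the right identity, a quick check in SQL*Plus (illustrative, not from the original steps):

SQL> show user
USER is "SYS"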

1. Log in to an operation window

1.1 Enter the SQL*Plus platform.

1.2 Access methods:

1.2.1 From cmd (Start -> Run -> cmd -> sqlplus):

SQL> sys/password@instance as sysdba

1.2.2 From the Oracle application entry (Start -> All Programs -> Oracle-OraHome92 -> Application Development -> SQL Plus):

SQL> sys/password@instance as sysdba

1.2.3 From a third-party tool, for example the Command Window in PL/SQL Developer:

sys/password@instance as sysdba (entered in the login window)
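As a concrete sketch of 1.2.1 (the instance name orcl and the password are placeholders, not from the article):

C:\> sqlplus /nolog
SQL> connect sys/oracle@orcl as sysdba
Connected.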

2. To install the LogMiner tool, you must first run the following two scripts.


2.1 Creating the DBMS_LOGMNR packages

SQL> @ORACLE_HOME/rdbms/admin/dbmslm.sql

SQL> @ORACLE_HOME/rdbms/admin/dbmslmd.sql

The first script creates the DBMS_LOGMNR package, which analyzes the log files.

The second script creates the DBMS_LOGMNR_D package, which is used to create the data dictionary file.

In 1.2.1 and 1.2.2, use SQL> @ORACLE_HOME/rdbms/admin/dbmslm.sql.

This part of the operation often raises errors; if ORACLE_HOME is not resolved, verify it and use an absolute path such as D:/oracle/ora92.

In 1.2.3, use SQL> $ORACLE_HOME/rdbms/admin/dbmslm.sql; you must distinguish between @ and $ here.
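To verify that both packages were created successfully, one option is to query the data dictionary (an illustrative check, not part of the original steps):

SQL> select object_name, object_type, status from dba_objects
     where object_name in ('DBMS_LOGMNR', 'DBMS_LOGMNR_D');

All rows returned should show STATUS = VALID.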

3. Create the related data dictionary

(Create a data dictionary file.)

A data dictionary file is a text file created by DBMS_LOGMNR_D.

First you need to set the initialization parameter UTL_FILE_DIR, which is the server directory where data dictionary files are placed. The parameter is modified in the spfile, and the database must then be restarted for the change to take effect. The directory must be created in advance.
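For example, create the directory on the server first (paths taken from this article's own examples):

$ mkdir -p /oracle/logminer        (Unix/Linux)
C:\> md e:\oracle\logminer         (Windows)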

SQL> show parameter utl_file_dir;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
utl_file_dir                         string

If VALUE is empty, UTL_FILE_DIR has not been set to the server directory where the data dictionary file will be placed.

4. Modify the initialization parameter UTL_FILE_DIR to specify where the analysis data is stored.

SQL> alter system set utl_file_dir='/oracle/logminer' scope=spfile;

After this operation, check with show parameter utl_file_dir:

SQL> show parameter utl_file_dir;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
utl_file_dir                         string      /oracle/logminer
5. Restart the database

SQL> shutdown immediate

(If it cannot be restarted this way, you can restart the relevant Oracle instance through the Services applet in the administrative tools.)

In my tests, the restart could be performed from cmd as well as from PL/SQL Developer.

SQL> startup

6. Create the data dictionary file

SQL> execute dbms_logmnr_d.build(dictionary_filename => 'sqltrace.ora', dictionary_location => '/oracle/logminer/');

This generates the data dictionary file via DBMS_LOGMNR_D.BUILD().

dictionary_location is the directory where the LogMiner data dictionary file is stored, and it must match the UTL_FILE_DIR setting. dictionary_filename is the name of the dictionary file placed in that location; the name can be chosen arbitrarily.
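To confirm the dictionary file was actually written, you can list it from SQL*Plus with the HOST command (an illustrative check on a Unix server; adjust for Windows):

SQL> host ls -l /oracle/logminer/sqltrace.ora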

7. Create the log analysis table

SQL> execute dbms_logmnr.add_logfile(options => dbms_logmnr.new, logfilename => '/oracle/ora92/rdbms/ARC00009.001');

a. To create the log analysis table the database must be in mount or nomount state, so start the database to the mount state.

b. The log analysis table is built with DBMS_LOGMNR.ADD_LOGFILE().

c. The OPTIONS parameter takes three values: DBMS_LOGMNR.NEW builds a new log analysis table; DBMS_LOGMNR.ADDFILE adds a log file for analysis; DBMS_LOGMNR.REMOVEFILE removes a log file from analysis.

8. Add log files for analysis

SQL> execute dbms_logmnr.add_logfile(options => dbms_logmnr.addfile, logfilename => '/oracle/ora92/rdbms/ARC00010.001');

SQL> execute dbms_logmnr.add_logfile(options => dbms_logmnr.addfile, logfilename => '/oracle/ora92/rdbms/ARC00011.001');

To remove a log file:

SQL> execute dbms_logmnr.add_logfile(options => dbms_logmnr.removefile, logfilename => '/oracle/ora92/rdbms/arc00011.001');

You can verify which log files are currently registered, as shown below.
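A quick check (illustrative, not part of the original steps): V$LOGMNR_LOGS lists the log files currently registered for analysis.

SQL> select filename from v$logmnr_logs;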


9. Start LogMiner for analysis

9.1 Without restriction conditions

SQL> execute dbms_logmnr.start_logmnr(dictfilename => '/oracle/logminer/sqltrace.ora');

Note that sqltrace.ora here is the name of the dictionary file created earlier.

The analysis may take several hours, depending on the amount of data in the selected files. Sometimes the DBA does not need all of the data in these log files, so the analysis can be restricted to only part of the data.

9.2 With restriction conditions

SQL> execute dbms_logmnr.start_logmnr(dictfilename => '/opt/oracle/logminer/log/sqltrace.ora', starttime => to_date('20040625 09:00:00', 'yyyymmdd hh24:mi:ss'), endtime => to_date('20040625 22:00:00', 'yyyymmdd hh24:mi:ss'));

That is, only the period from 20040625 09:00:00 to 20040625 22:00:00 is analyzed, and the analysis results are placed in the data dictionary for querying. There are also two parameters, STARTSCN (starting SCN number) and ENDSCN (ending SCN number).

You can also limit the scope of the logs to be analyzed by setting a start SCN and an end SCN:

SQL> execute dbms_logmnr.start_logmnr(dictfilename => '/dataoracle/oracle/logminer/dictionary.ora', startscn => 20, endscn => 50);
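If you need a reference point for choosing STARTSCN/ENDSCN, one way to obtain the current SCN in 9i is the DBMS_FLASHBACK package (a sketch, assuming you have EXECUTE privilege on it):

SQL> select dbms_flashback.get_system_change_number from dual;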

After the analysis finishes, the parsed data can be queried from the view V$LOGMNR_CONTENTS. We can use the contents of this view to achieve our goal.

Table 1: Meaning of the DBMS_LOGMNR.START_LOGMNR procedure parameters

Parameter     Type                   Default     Meaning
------------  ---------------------  ----------  -----------------------------------------------------------
STARTSCN      Number (NUMBER)        0           Analyze the portions of the redo logs with SCN >= STARTSCN
ENDSCN        Number (NUMBER)        0           Analyze the portions of the redo logs with SCN <= ENDSCN
STARTTIME     Date (DATE)            1998-01-01  Analyze the portions of the redo logs with timestamp >= STARTTIME
ENDTIME       Date (DATE)            2988-01-01  Analyze the portions of the redo logs with timestamp <= ENDTIME
DICTFILENAME  Character (VARCHAR2)               A dictionary file containing a snapshot of the database catalog; using this file makes the analysis results readable text rather than internal hexadecimal form
OPTIONS       BINARY_INTEGER         0           System debugging parameter, rarely used in practice

10. View the results of the analysis

SQL> select operation, sql_redo, sql_undo from v$logmnr_contents;

Note: OPERATION is the type of operation, SQL_REDO is the SQL actually executed, and SQL_UNDO is the reverse SQL used to cancel it.

The SQL_REDO column shows the operation (SQL statement), and the SQL_UNDO column shows the SQL statement needed to undo that operation; as long as the contents of SQL_UNDO are executed in order, the modified data can be reverted.

The advantage of recovering data this way is that no data is lost; otherwise you can only do an incomplete recovery.
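A hypothetical illustration of these two columns (table and values invented for the example): if a user accidentally deleted a row, the view might show

sql_redo: delete from "SCOTT"."EMP" where "EMPNO" = 7369 and ROWID = 'AAAHW7AABAAAY9mAAA';
sql_undo: insert into "SCOTT"."EMP"("EMPNO","ENAME","SAL") values (7369,'SMITH',800);

and executing the sql_undo statement reinserts the deleted row.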

Since all applications effectively work with the V$LOGMNR_CONTENTS view, here are a few particularly useful fields in it (see the example query after this list):

SEG_NAME: table name;

SCN: SCN number;

SQL_REDO: the SQL statement that was executed;

SQL_UNDO: the recovery SQL statement corresponding to SQL_REDO;

TIMESTAMP: the exact time the SQL was issued;

OPERATION: the type of SQL, such as INSERT, UPDATE, START (set transaction), COMMIT, INTERNAL, etc.;

SESSION#: the session that issued the operation.
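For instance, to narrow the output to one table and one operation type (the table name EMP here is a placeholder for illustration):

SQL> select scn, timestamp, operation, sql_undo
     from v$logmnr_contents
     where seg_name = 'EMP' and operation = 'DELETE';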

It should be emphasized that the analysis results in the view V$LOGMNR_CONTENTS exist only during the lifetime of the session that ran DBMS_LOGMNR.START_LOGMNR. This is because all LogMiner storage is in PGA memory: it is not visible to other processes, and the analysis results disappear when the process ends. If you need to keep the results, copy them into a real table before ending the session, as sketched below.
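A minimal sketch of persisting the results before the session ends (the table name logmnr_result is arbitrary):

SQL> create table logmnr_result as select * from v$logmnr_contents;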

Finally, use the procedure DBMS_LOGMNR.END_LOGMNR to terminate the log analysis transaction; at that point the PGA memory area is cleared and the analysis results no longer exist.



11. End the analysis

SQL> execute dbms_logmnr.end_logmnr;   (terminates the log analysis transaction and frees the memory)

Once it ends, the analysis results in the view V$LOGMNR_CONTENTS no longer exist.

III. Data dictionary views related to LogMiner (example queries below)

1. V$LOGLIST: displays information about historical log files.
2. V$LOGMNR_DICTIONARY: LogMiner can have more than one dictionary file; this view displays information about them.
3. V$LOGMNR_PARAMETERS: displays the LogMiner parameters.
4. V$LOGMNR_LOGS: displays the list of log files being analyzed.
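As a quick sketch of how these views are queried:

SQL> select * from v$logmnr_dictionary;

SQL> select * from v$logmnr_parameters;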


Operation version:

Actual operation code (Oracle 9i)

Remote: telnet 192.168.0.55, log in as root/password

Local: cmd

1. Log in to cmd

Start -> Run -> cmd

2. Start the corresponding application

sqlplus /nolog

3. Log in to the database

SQL> connect sys/password@database as sysdba

4. Create the DBMS_LOGMNR package and the DBMS_LOGMNR_D package

SQL> @E:/oracle/ora92/rdbms/admin/dbmslm.sql

SQL> @E:/oracle/ora92/rdbms/admin/dbmslmd.sql

5. Modify the initialization parameter UTL_FILE_DIR to specify where the analysis data is stored

SQL> alter system set utl_file_dir='e:/oracle/logminer' scope=spfile;

6. Restart the database

SQL> shutdown immediate

SQL> startup

7. Create the data dictionary file

SQL> execute dbms_logmnr_d.build(dictionary_filename => 'sqltrace.ora', dictionary_location => 'e:/oracle/logminer');

8. Build the log analysis table

SQL> execute dbms_logmnr.add_logfile(options => dbms_logmnr.new, logfilename => 'e:/oracle/logminer/redo01.log');

8.1 Add log files for analysis

SQL> execute dbms_logmnr.add_logfile(options => dbms_logmnr.addfile, logfilename => 'e:/oracle/logminer/redo02.log');

SQL> execute dbms_logmnr.add_logfile(options => dbms_logmnr.addfile, logfilename => 'e:/oracle/logminer/redo03.log');

9. Start LogMiner for analysis

SQL> execute dbms_logmnr.start_logmnr(dictfilename => 'e:/oracle/logminer/sqltrace.ora', starttime => to_date('20040625 09:00:00', 'yyyymmdd hh24:mi:ss'), endtime => to_date('20040625 22:00:00', 'yyyymmdd hh24:mi:ss'));

10. View the analysis results

SQL> select operation, sql_redo, sql_undo from v$logmnr_contents;

11. Capture the corresponding DML operation statements

SQL> create table table_name as select operation, sql_redo, sql_undo from v$logmnr_contents;   (table_name is a name of your choosing)

12. End the analysis

SQL> execute dbms_logmnr.end_logmnr;

Local: PL/SQL Developer (Command Window)

1. Create the DBMS_LOGMNR package and the DBMS_LOGMNR_D package

SQL> $ORACLE_HOME/rdbms/admin/dbmslm.sql

SQL> $ORACLE_HOME/rdbms/admin/dbmslmd.sql

2. Modify the initialization parameter UTL_FILE_DIR to specify where the analysis data is stored

SQL> alter system set utl_file_dir='/oracle/logminer' scope=spfile;

3. Restart the database

SQL> shutdown immediate;

SQL> startup

4. Create the data dictionary file

SQL> execute dbms_logmnr_d.build(dictionary_filename => 'dictionary.ora', dictionary_location => '/oracle/logminer');

5. Build the log analysis table

SQL> execute dbms_logmnr.add_logfile(options => dbms_logmnr.new, logfilename => '/oracle/oradata/zxtwskp/REDO01.LOG');

6. Add log files for analysis

SQL> execute dbms_logmnr.add_logfile(options => dbms_logmnr.addfile, logfilename => '/oracle/oradata/zxtwskp/REDO02.LOG');

7. Start LogMiner for analysis

SQL> execute dbms_logmnr.start_logmnr(dictfilename => '/oracle/logminer/dictionary.ora', starttime => to_date('20120302 17:00:00', 'yyyymmdd hh24:mi:ss'), endtime => to_date('20120302 17:10:00', 'yyyymmdd hh24:mi:ss'));

8. View the analysis results

SQL> select sql_undo from v$logmnr_contents where username = 'BJWSKP' and seg_name = 'JDC_DM_CLZL1' and sql_undo like 'insert into%';

9. Save the SQL_UNDO statements to a table

SQL> create table JDC_FPMX as select sql_undo from v$logmnr_contents where username = 'BJWSKP' and seg_name = 'JDC_DM_CLZL1' and sql_undo like 'insert into%';

9.1 Process the data: clean up the scripts that need to be run for recovery.

9.2 Execute the DML/DDL commands that need to be restored, as sketched below.
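A minimal sketch of one way to do 9.1 and 9.2, assuming the JDC_FPMX table created in step 9 (the spool file path is arbitrary; in interactive SQL*Plus you may need to trim echoed command lines from the spool file before running it):

SQL> set heading off feedback off pagesize 0 linesize 4000
SQL> spool /oracle/logminer/undo.sql
SQL> select sql_undo from JDC_FPMX;
SQL> spool off
SQL> @/oracle/logminer/undo.sql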

 

10. End the analysis

SQL> execute dbms_logmnr.end_logmnr;
