Summary of several methods for recovering lost data in an Oracle database

Source: Internet
Author: User

Based on the characteristics of the Oracle database and the tools it provides, the main recovery methods are as follows:

    1. Recover lost table data from a logical backup using the Import tool
    2. Restore data files from a physical backup and perform incomplete recovery
    3. Recover from redo log files with the DBMS_LOGMNR package
    4. Recover data using the Flashback feature

Premise

To make the methods easier to introduce, the recovery methods described below are all based on the following scenario: at 11 o'clock the night before, the system administrator made a full logical backup of the database and then a hot backup of all the data files. At 10 o'clock the next morning, while the system administrator was modifying data in the table Tfundasset, the condition of the UPDATE statement was written incorrectly, so the Ztm field of a batch of records (several thousand) was changed to the wrong value, and the change was committed. The table is an asset sheet whose data changes relatively infrequently.

I. Use a logical backup to recover lost data with the Import tool

Export/Import is an Oracle-supplied tool for logical backup of a database. It is well suited to backing up database systems with small data volumes and little traffic. If the logical backup was made with Export at 11 o'clock the previous night and the database later crashes unexpectedly, all data modifications (both DDL and DML) made between the backup and the crash are lost. If the data in the damaged table is relatively stable, that is, essentially no DML is performed on it (for example a standard code table, or historical data in a partitioned table), then importing it with Import recovers the data fairly completely. If the table is a frequently changing business table, the lost changes can only be re-entered from paper records according to the business situation, or recovered in some other way.

The example is as follows: the table is an asset sheet, relatively little of its data has been modified today while the system was running, and the lost changes can be tolerated or recovered from other sources. In that case Import can be used to recover.

Method One:

1. Back up the data from this table to another table:
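The backup statement itself is missing from this copy of the article; a minimal sketch (the backup table name Tfundasset_bak is a hypothetical choice) might be:

```sql
-- Keep a copy of the current (wrong) state before touching the table;
-- Tfundasset_bak is a hypothetical name for the backup table.
CREATE TABLE Tfundasset_bak AS SELECT * FROM Tfundasset;
```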

2. Delete the records from the table:
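The statement is not shown here; it is presumably something like:

```sql
-- Empty the table so the import can reload last night's rows
DELETE FROM Tfundasset;
COMMIT;
```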

3. Execute the following import command:
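The command itself is missing from this copy; a hedged sketch of an Import call, in which the user, password, and dump file name are all placeholders:

```shell
imp fund/fund file=exp_full_20151019.dmp tables=Tfundasset ignore=y
```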

In this command, the tables parameter specifies the name of the table to import, and ignore=y tells Import to ignore the creation error raised because the table already exists, so the rows are loaded into the existing table.

4. After the import finishes, check the records in the table and reapply the day's legitimate changes in an appropriate way.

Method Two:

1. Import the table that needs to be restored under another user:
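The import command is not shown; a sketch, assuming a hypothetical owner fund, a hypothetical recovery user recv, and a placeholder dump file name:

```shell
imp system/manager file=exp_full_20151019.dmp fromuser=fund touser=recv tables=Tfundasset ignore=y
```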

2. After checking the data, delete the records from the original table:
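The statement is missing here; presumably:

```sql
DELETE FROM Tfundasset;
COMMIT;
```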

3. Insert the data back from the other user's table:
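The statement is missing; a sketch, where recv is the hypothetical user the table was imported under:

```sql
INSERT INTO Tfundasset SELECT * FROM recv.Tfundasset;
COMMIT;
```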

4. If the data volume is large, the following method can be used:
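The method itself is missing from this copy; a common approach for large row counts is a direct-path insert, which bypasses the buffer cache and generates less redo. A sketch (NOLOGGING and the recv schema are assumptions):

```sql
ALTER TABLE Tfundasset NOLOGGING;
INSERT /*+ APPEND */ INTO Tfundasset SELECT * FROM recv.Tfundasset;
COMMIT;
ALTER TABLE Tfundasset LOGGING;
```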

II. Restore data files from a physical backup and perform incomplete recovery

If the database runs in archive mode, you can restore a previous backup of the data files, roll forward with the archived logs up to the point in time just before the erroneous operation, and then open the database with a log reset.

You can confirm whether you are running in archive mode in the following ways:
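The check and its output are missing from this copy; in SQL*Plus the check and its typical archive-mode output look roughly like the following (the archive destination path is a placeholder):

```sql
SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /oracle/arch
```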

If it is as shown above, then it is running in archive mode.

▲ Assuming a full physical backup of the database was taken at 11 o'clock the previous night, consider the following recovery:

1. Close the database:

Incomplete recovery of a database must be performed on a closed database: an older backup is restored and the logs are then rolled forward only as far as needed. It is not possible to restore a newer backup and then roll back to an earlier point in time.

Notify the clients that the database will shut down, then issue:
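The command itself is missing here; it is presumably a normal shutdown in SQL*Plus:

```sql
SQL> shutdown immediate
```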

Database closed.

Database dismounted.

ORACLE instance shut down.

2. Determine the time of the erroneous operation:

Based on the operator's estimate you can decide at what time the roll-forward must stop for the incomplete recovery, or you can use LogMiner to analyze the log files (described later) and find the exact time of the erroneous operation.

3. Restore Data files:

Back up the current database files before overwriting the existing data files with the previous backup. Note: do not overwrite the existing control files.
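The copy commands are not shown in this copy of the article; a sketch with hypothetical Unix paths, copying only the data files and never the control files:

```shell
# Save the current datafiles aside first
cp /oracle/oradata/orcl/*.dbf /backup/before_recovery/
# Then restore last night's hot backup of the datafiles only
cp /backup/hot_20151019/*.dbf /oracle/oradata/orcl/
```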

4. Perform point-in-time recovery; start the database into the MOUNT state:
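The recovery commands are missing from this copy; based on the time stated in the next sentence, they would look like:

```sql
STARTUP MOUNT;
RECOVER DATABASE UNTIL TIME '2015-10-20:09:58:00';
ALTER DATABASE OPEN RESETLOGS;
```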

This restores the database to its state at 09:58:00 on October 20, 2015.

The data changed after that point must then be re-entered from the business records.

III. Recover from the log files with the DBMS_LOGMNR package

This Oracle-supplied package, used together with the DBMS_LOGMNR_D package, makes it easy to analyze online and archived redo log files and extract all changes to the database from them.

Before using this package, you need to make some settings and modifications:

1. Open initorcl.ora and set the initialization parameter utl_file_dir to the directory where the DBMS_LOGMNR_D package will place its data dictionary file.
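The parameter line is not shown; a sketch, where the directory is a hypothetical choice:

```
# initorcl.ora
utl_file_dir = /oracle/logmnr
```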

Then restart the database for the parameters to take effect.

2. Connect to the database as the SYS user and run the dbmslmd.sql script to rebuild the DBMS_LOGMNR_D package.
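The session is not shown; it might look like this (the password is a placeholder; the script lives in the standard location under ORACLE_HOME, which SQL*Plus abbreviates as `?`):

```sql
SQL> connect sys/change_on_install as sysdba
SQL> @?/rdbms/admin/dbmslmd.sql
```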

The following steps perform a LogMiner analysis of the redo log files:

Use the build stored procedure in DBMS_LOGMNR_D to create an external data dictionary file;

Use the add_logfile stored procedure in DBMS_LOGMNR to add the log files to be analyzed;

Use the start_logmnr stored procedure in DBMS_LOGMNR to start the analysis;

Query the views related to DBMS_LOGMNR to get the contents of the log files;

Use the end_logmnr stored procedure in DBMS_LOGMNR to end the analysis.

▲ The process in detail

1. Create an external data dictionary file using the build stored procedure in DBMS_LOGMNR_D:
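The call is missing from this copy; a sketch, where the file name is a placeholder and the directory must match the utl_file_dir setting from the preparation step:

```sql
BEGIN
  dbms_logmnr_d.build(dictionary_filename => 'dict.ora',
                      dictionary_location => '/oracle/logmnr');
END;
/
```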

2. Add the log files to be analyzed to the analysis list using the add_logfile stored procedure in DBMS_LOGMNR:
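The calls are missing; a sketch with placeholder log file paths:

```sql
BEGIN
  -- The first file starts a new analysis list
  dbms_logmnr.add_logfile(LogFileName => '/oracle/oradata/orcl/redo01.log',
                          Options     => dbms_logmnr.new);
  -- Further files are appended to the list
  dbms_logmnr.add_logfile(LogFileName => '/oracle/oradata/orcl/redo02.log',
                          Options     => dbms_logmnr.addfile);
END;
/
```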

If the database is not running in archive mode, the circular reuse of the redo log files may overwrite the recovery entries you are looking for. If it is running in archive mode, you can decide which archived log files to add to the analysis list by comparing the log switch times recorded in alert_orcl.log (in the $oracle_home\admin\orcl\bdump directory) with the time of the erroneous operation.

Note: the LogFileName parameter must contain the full path of the log file when calling the procedure above, otherwise it reports an error. Repeat the call until all the files you need to analyze have been added to the list; the analysis can then be started.

3. Start the analysis using the start_logmnr stored procedure in DBMS_LOGMNR:
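The call is missing from this copy; a sketch, pointing at the dictionary file created earlier (path and name are placeholders):

```sql
BEGIN
  dbms_logmnr.start_logmnr(DictFileName => '/oracle/logmnr/dict.ora');
END;
/
```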

This allows you to get the contents of the log file through the following query.

4. Query the views related to DBMS_LOGMNR to obtain the contents of the log files:
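The query is missing; a sketch against v$logmnr_contents, where sql_undo holds the statements that reverse each change and the seg_name filter on the article's table is an assumption:

```sql
SELECT scn, timestamp, sql_redo, sql_undo
  FROM v$logmnr_contents
 WHERE seg_name = 'TFUNDASSET';
```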

This lets you find the statements you need for recovery. Note: v$logmnr_contents is visible only to the session that ran DBMS_LOGMNR.START_LOGMNR; its data cannot be accessed from another session, and it disappears once the analysis is ended with DBMS_LOGMNR.END_LOGMNR. To make the data available to other sessions, copy it into a regular table, for example:

CREATE TABLE Undo_sql AS SELECT * FROM v$logmnr_contents;

After access on Undo_sql is granted, other users can then read the v$logmnr_contents data from it.

5. End the analysis using the end_logmnr stored procedure in DBMS_LOGMNR.

After use, end the analysis with the following command: exec DBMS_LOGMNR.END_LOGMNR;

This frees the resources (memory and data structures) allocated to LogMiner.

As the process above shows, if the amount of updated data is large and the log files are small, log switches will occur. If a switched log file is not mined in time (when not running in archive mode), the circular reuse of the log files may make the data unrecoverable. In archive mode, on the other hand, there may be many more log files to analyze and the analysis takes longer.

IV. Recover data with the Flashback feature

Starting with Oracle9i, the flashback query feature provides a quick and easy way to recover from accidental deletes or accidental updates that have already been committed. Before flashback query, this could only be done with point-in-time recovery from a backup or by mining the logs with LogMiner, both of which are far more cumbersome and time-consuming than a flashback query.

The prerequisites for using this Flashback query feature are:

1. The database must be in the automatic Undo Management state.

2. The maximum period that can be flashed back is specified by the undo_retention initialization parameter (in seconds).

The parameter value can be modified dynamically with ALTER SYSTEM SET undo_retention = <seconds>;

▲ How do you use flashback query to recover data?

1. Through SQL

A flashback query uses the AS OF clause of the SELECT statement; the syntax is as follows:
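The syntax block is missing from this copy; in outline it is:

```sql
SELECT column_list
  FROM table_name AS OF { SCN | TIMESTAMP } expr
 WHERE conditions;
```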

The AS OF keyword performs a flashback query on a table, view, or materialized view. If SCN is specified, expr must be a number; if TIMESTAMP is specified, expr must be a timestamp value. The query returns the data as of the specified SCN or point in time.

Here we use the SCOTT schema for an experiment.
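The experiment itself is missing from this copy; a minimal flashback query against the SCOTT schema could be:

```sql
-- Data in scott.emp as it was five minutes ago
SELECT * FROM scott.emp
  AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '5' MINUTE);
```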

If you use AS OF in the subquery of an UPDATE, the subquery may return only one record, otherwise an error is raised.

You can use a temporary table as a relay and then update from it, as follows:
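The statements are missing here; a sketch, in which the relay table name, the timestamp, and the key column id are all assumptions:

```sql
-- Relay table holding the pre-error image of the data
CREATE TABLE Tfundasset_old AS
  SELECT * FROM Tfundasset
    AS OF TIMESTAMP TO_TIMESTAMP('2015-10-20 09:58:00', 'YYYY-MM-DD HH24:MI:SS');

-- id is an assumed primary key of Tfundasset
UPDATE Tfundasset t
   SET t.Ztm = (SELECT o.Ztm FROM Tfundasset_old o WHERE o.id = t.id)
 WHERE EXISTS (SELECT 1 FROM Tfundasset_old o WHERE o.id = t.id);
COMMIT;
```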

2. Recover with the DBMS_FLASHBACK package

The DBMS_FLASHBACK package provides the following procedures and functions:

ENABLE_AT_TIME: sets the flashback query time for the current session

ENABLE_AT_SYSTEM_CHANGE_NUMBER: sets the flashback query SCN for the current session

GET_SYSTEM_CHANGE_NUMBER: gets the current SCN of the database

DISABLE: turns off flashback query for the current session

Once a session is put into flashback query mode, subsequent queries are based on the database state at that point in time or SCN. The flashback mode ends automatically when the session ends, even if DISABLE is not called explicitly.

No DML or DDL operations are allowed while the session is in flashback query mode. To use DML for data recovery you must use a PL/SQL cursor.

▲ Example:
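The example is missing from this copy; a minimal session-level sketch (the timestamp and the id filter are assumptions):

```sql
-- Put the session 'back in time'
BEGIN
  dbms_flashback.enable_at_time(
    TO_TIMESTAMP('2015-10-20 09:58:00', 'YYYY-MM-DD HH24:MI:SS'));
END;
/
-- While flashback is enabled, queries see the pre-error data
SELECT Ztm FROM Tfundasset WHERE id = 1;   -- id is an assumed key column
-- Return the session to the present
EXECUTE dbms_flashback.disable
```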

As the example shows, data can be recovered this way as long as the erroneous modification is no earlier than SYSDATE minus the number of seconds specified by undo_retention.

For a large volume of affected data, you can write a procedure that captures the pre-change data first:
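The procedure is missing from this copy; a sketch of the usual pattern, opening the cursor while flashback is enabled and doing the DML after disabling it. Tfundasset_tmp is a hypothetical staging table with the same structure as Tfundasset, and the timestamp is an assumption:

```sql
DECLARE
  CURSOR c IS SELECT * FROM Tfundasset;
  r Tfundasset%ROWTYPE;
BEGIN
  dbms_flashback.enable_at_time(
    TO_TIMESTAMP('2015-10-20 09:58:00', 'YYYY-MM-DD HH24:MI:SS'));
  OPEN c;                    -- the cursor is bound to the flashback snapshot
  dbms_flashback.disable;    -- DML is allowed again from here on
  LOOP
    FETCH c INTO r;
    EXIT WHEN c%NOTFOUND;
    INSERT INTO Tfundasset_tmp VALUES r;
  END LOOP;
  CLOSE c;
  COMMIT;
END;
/
```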

Then simply use the data in this staging table to update Tfundasset.

V. Summary

Comparing these methods of recovering data, we can see the following:

Exp/imp is suitable only for data loss in tables whose data rarely changes; data modified after the export must be re-entered from business records, otherwise it is lost.

Point-in-time incomplete recovery can restore the lost data, but it requires shutting down the database, which reduces system availability, and all data after the recovery point is lost.

LogMiner can recover the data well, but it essentially requires the database to run in archive mode, otherwise data may be lost. Its advantage is that all the changes can be extracted from the log files without shutting down the system.

Flashback is the most convenient and concise: it returns the pre-modification data directly. However, it depends on system settings and can consume a large amount of undo (rollback) tablespace.

Which method to use therefore depends on your system environment and the specific situation; do not apply one mechanically. Choosing the right method minimizes data loss.

Of course, it is best never to need these recovery methods at all. To that end, you should do the following:

1. Create different database users with different passwords for different environments (if the users cannot differ, the passwords must);

2. Separate the owner from the application users, and grant privileges appropriately;

3. Before running DML, first run a query with the same conditions to verify that the result set matches expectations.

That is the entire content of this article. I hope it can be of some help in your study or work; if you have questions, feel free to leave a message.
