Oracle Data File Transfer and Loss Processing


1. Data File migration

Moving a data file takes just three steps (a sketch with hypothetical names follows the steps).

 

Step 1: Take the tablespace offline and copy its data file to the target directory (for example, a directory on disk D).

 

Step 2: Update the data file path recorded in the control file: alter database rename file 'old_file_path' to 'new_file_path';

 

Step 3: Bring the tablespace back online.
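
As a minimal sketch, assume a hypothetical tablespace tbs_data whose data file is being moved from C:\oradata\tbs_data01.dbf to D:\oradata\tbs_data01.dbf (the tablespace name and paths are illustrative only):

Svrmgrl> alter tablespace tbs_data offline;    -- step 1: take the tablespace offline
-- copy C:\oradata\tbs_data01.dbf to D:\oradata\tbs_data01.dbf with an operating system command
Svrmgrl> alter database rename file 'C:\oradata\tbs_data01.dbf' to 'D:\oradata\tbs_data01.dbf';    -- step 2
Svrmgrl> alter tablespace tbs_data online;     -- step 3: bring the tablespace back online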

 

1.1 File System Data File migration

The database must be open. (A worked example with hypothetical paths follows the steps.)

 

1. alter tablespace tbs read only;

 

2. alter tablespace tbs offline;

 

3. While the tablespace is offline, copy the original data file to the new name and location at the operating system level.

 

4. alter tablespace tbs rename datafile 'tbs_file_old.dbf' to 'tbs_file_new.dbf';

 

5. alter tablespace tbs online;

 

6. alter tablespace tbs read write;

 

7. alter database recover datafile 'tbs_file_new.dbf';
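
For example, with a hypothetical tablespace tbs whose data file moves from D:\oradata\tbs_file_old.dbf to E:\oradata\tbs_file_new.dbf (names and paths are illustrative), the full sequence looks like this; the copy in step 3 is an ordinary operating system copy made while the tablespace is offline:

Svrmgrl> alter tablespace tbs read only;
Svrmgrl> alter tablespace tbs offline;
-- at the OS prompt: copy D:\oradata\tbs_file_old.dbf E:\oradata\tbs_file_new.dbf
Svrmgrl> alter tablespace tbs rename datafile 'D:\oradata\tbs_file_old.dbf' to 'E:\oradata\tbs_file_new.dbf';
Svrmgrl> alter tablespace tbs online;
Svrmgrl> alter tablespace tbs read write;
Svrmgrl> alter database recover datafile 'E:\oradata\tbs_file_new.dbf';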

 

 

 

1.2 Raw Device Data File Migration

The database must be mounted but not open (a sketch follows the steps):

 

1. Create the raw device link file for the new data file.

 

2. startup mount;

 

3. alter database rename file 'tbs_file_old' to 'tbs_file_new';

 

4. alter database recover datafile 'tbs_file_new';

 

5. alter database open;
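
A minimal sketch, assuming the new data file will live on the raw device /dev/raw/raw2, addressed through the symbolic links /oradata/tbs_file_old and /oradata/tbs_file_new (device names and link paths are hypothetical; if the contents of the old device must also be carried over, copy them with a tool such as dd first):

$ ln -s /dev/raw/raw2 /oradata/tbs_file_new     # step 1: link file for the new raw device
Svrmgrl> startup mount;
Svrmgrl> alter database rename file '/oradata/tbs_file_old' to '/oradata/tbs_file_new';
Svrmgrl> alter database recover datafile '/oradata/tbs_file_new';
Svrmgrl> alter database open;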

 

 

 

2. Oracle System Emergency troubleshooting

2.1 Control File Corruption:

The control file records important Oracle configuration information, such as the database name, character set, and the locations of the data files and log files. If a control file is corrupted, the database shuts down abnormally, and once the control files are lost the database cannot be started at all. This is a serious error.

 

You can check the database alert log to locate the corrupted control file. The log file is located at $ORACLE_BASE/admin/bdump/alert_ORCL.ora.
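
If the instance is down, the configured control file locations can also be read directly: start the instance nomount (which does not need the control files) and query the parameter view (a quick sketch; the returned paths depend on your installation):

Svrmgrl> startup nomount;
Svrmgrl> select value from v$parameter where name = 'control_files';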

 

 

 

2.1.1 Damage to a single control file:

1. Make sure that the database is shut down. If it is not, close it with the following command:

 

Svrmgrl> shutdown immediate;

 

2. Check the initialization file $ORACLE_BASE/admin/pfile/initORCL.ora to determine the paths of all the control files.

 

3. Use an operating system command to overwrite the corrupted control file with one of the intact control files (see the example after step 5).

 

4. Run the following command to restart the database:

 

Svrmgrl> startup;

 

5. Use appropriate methods for full database backup.
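
For example, if the init file lists three control files and control03.ctl is the corrupted one (paths are hypothetical), step 3 is simply an operating system copy before the restart:

$ cp /opt/oracle/oradata/ORCL/control01.ctl /opt/oracle/oradata/ORCL/control03.ctl
Svrmgrl> startup;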

 

 

 

2.1.2 Damage to all control files:

1. Make sure that the database is shut down. If it is not, close it with the following command:

 

Svrmgrl> shutdown immediate;

 

2. Restore the most recent control file from the backup set. Without a database backup, copy the most recent control file backup from tape directly into the corresponding directory; with a database backup, restore the most recent control file using the appropriate rman script.

 

3. Use the following commands to generate a trace script from which the control file can be recreated:

 

Svrmgrl> startup mount;

 

Svrmgrl> alter database backup controlfile to trace noresetlogs;

 

4. Edit the trace file generated in step 3: copy the statements that create the control file and adjust them so that they reflect the current database structure. Assume the resulting SQL script is named createcontrol.sql (a sketch of such a script follows step 6).

 

Note: the path of the trace file can be determined by checking the $ORACLE_BASE/admin/bdump/alert_ORCL.ora file after step 3 is executed.

 

5. Run the following command to recreate the control file:

 

Svrmgrl> shutdown abort;

 

Svrmgrl> startup nomount;

 

Svrmgrl> @createcontrol.sql

 

6. Use appropriate methods for full database backup.
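
As an illustration only, an edited createcontrol.sql typically contains a statement along these lines; every name, path, size and MAX* limit below is hypothetical and must be taken from your own trace file:

CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 2
    MAXDATAFILES 100
    MAXINSTANCES 1
    MAXLOGHISTORY 226
LOGFILE
    GROUP 1 '/opt/oracle/db02/oradata/ORCL/redo1.log' SIZE 50M,
    GROUP 2 '/opt/oracle/db02/oradata/ORCL/redo2.log' SIZE 50M,
    GROUP 3 '/opt/oracle/db02/oradata/ORCL/redo3.log' SIZE 50M
DATAFILE
    '/opt/oracle/db01/oradata/ORCL/system01.dbf',
    '/opt/oracle/db01/oradata/ORCL/users01.dbf'
CHARACTER SET ZHS16GBK;

The trace generated with noresetlogs also contains RECOVER DATABASE and ALTER DATABASE OPEN statements; keep the NORESETLOGS branch when, as assumed here, the online redo logs are intact.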

 

 

 

2.2 Damaged redo log files:

All database inserts, deletes, and updates are recorded in the redo logs. If an active redo log file is damaged, the database shuts down abnormally. Because of log switching, an inactive redo log eventually becomes active, so a corrupted inactive redo log will also eventually cause the database to terminate abnormally. In ipas/mSwitch each redo log group has only one member, so the following analysis considers only damage to an entire redo log group rather than to a single member.

 

 

 

Determine the location and status of damaged redo logs:

 

1. If the database is available:

 

Svrmgrl> select * from v$logfile;

 

Svrmgrl> select * from v$log;

 

2. If the database is terminated abnormally:

 

Svrmgrl> startup mount;

 

Svrmgrl> select * from v$logfile;

 

Svrmgrl> select * from v$log;

 

In the output, a v$logfile status of INVALID indicates that the log file is damaged. In v$log, a status of INACTIVE means the redo log group is not active, ACTIVE means it is active, and CURRENT means it is the log group currently in use. (A combined query is sketched below.)
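
A combined query like the following (a sketch; available columns can vary slightly between Oracle versions) shows each member, its v$logfile status, and the group status from v$log side by side:

Svrmgrl> select f.group#, f.member, f.status member_status, l.status group_status
         from v$logfile f, v$log l
         where f.group# = l.group#
         order by f.group#;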

 

 

 

2.2.1 The corrupted log file is inactive:

1. Delete the corresponding log group:

 

Svrmgrl> alter database drop logfile group group_number;

 

2. Recreate the corresponding log group (a worked example follows):

 

Svrmgrl> alter database add logfile group group_number ('log_file_description', ...) size log_file_size;
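
For instance, if group 2 is the damaged inactive group and its single member lives at /opt/oracle/db02/oradata/ORCL/redo2.log with a size of 50M (group number, path and size are hypothetical):

Svrmgrl> alter database drop logfile group 2;
Svrmgrl> alter database add logfile group 2 ('/opt/oracle/db02/oradata/ORCL/redo2.log') size 50M;

If the old member file still exists on disk, either remove it at the operating system level first or append the reuse keyword to the add logfile command.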

 

 

 

2.2.2 The corrupted log file is active and not the current log:

1. Clear the corresponding log group:

 

Svrmgrl> alter database clear unarchived logfile group group_number;

 

 

 

2.2.3 The damaged log file is the current active log file:

Run the following command to clear the corresponding log group:

 

Svrmgrl> alter database clear unarchived logfile group group_number;

 

If the clear operation fails, perform incomplete point-in-time recovery (a sketch follows below).

 

Then open the database and back it up in an appropriate way:

 

Svrmgrl> alter database open;
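
A minimal sketch of the point-in-time path, assuming the data files have first been restored from a backup taken before the chosen recovery point and that 13:00 on 2017-06-28 is that point (the time is hypothetical):

Svrmgrl> startup mount;
Svrmgrl> recover database until time '2017-06-28:13:00:00';
Svrmgrl> alter database open resetlogs;

After any resetlogs open, take a fresh full backup of the database.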

 

 

 

2.3 Damaged data files:

If the corrupted data file belongs to a non-system tablespace, the database can still be opened, but the corrupted data file cannot be accessed. In this case the damaged data file can be recovered on its own while the database is open. If a data file in the system tablespace is damaged, the database terminates abnormally; the database can then only be mounted, and the data file must be restored before the database is opened. Check the database alert log to determine whether the damaged data file belongs to the system tablespace.

 

 

 

2.3.1 Non-System Tablespace Data File Corruption

1. Determine the name of the damaged file:

 

Svrmgrl> select name from v$datafile where status = 'INVALID';

 

2. Take the corrupted data file offline:

 

Svrmgrl> alter database datafile 'datafile_name' offline;

 

 

 

3. Restore the most recent backup of the data file from the backup set. Without a database backup, restore the file directly from tape; with a database backup, restore it using the appropriate rman script (a sketch follows step 6).

 

4. Recover the data file:

 

Svrmgrl> alter database recover datafile 'file_name';

 

5. Bring the data file online:

 

Svrmgrl> alter database datafile 'datafile_name' online;

 

6. Use appropriate methods for full database backup.
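
Where an rman backup exists, steps 3 and 4 can also be driven from a single rman script along these lines (a sketch; it assumes a working rman connection to the target database and that the damaged file is file number 5, which is purely illustrative):

run {
    sql 'alter database datafile 5 offline';
    restore datafile 5;
    recover datafile 5;
    sql 'alter database datafile 5 online';
}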

 

 

 

2.3.2 Damaged data files in the system tablespace:

1. Start the database in mount mode:

 

Svrmgrl> startup mount;

 

2. Restore the most recent backup of the data file from the backup set. Without a database backup, restore the file directly from tape; with a database backup, restore it using the appropriate rman script.

 

3. Recover the damaged data file:

 

Svrmgrl> alter database recover datafile 'datafile_name';

 

4. Open the database:

 

Svrmgrl> alter database open;

 

5. Use appropriate methods for full database backup.

 

 

 

2.4 Tablespace corruption:

If the damaged tablespace is not the system tablespace, the database can still be opened, but the damaged tablespace cannot be accessed. In this case the damaged tablespace can be restored on its own while the database is open. If the system tablespace is damaged, the database terminates abnormally; the database can then only be mounted, and the tablespace must be restored before the database is opened. Check the database alert log to determine whether the corrupted tablespace is the system tablespace.

 

 

 

2.4.1 Non-system tablespace corruption:

1. Take the corrupted tablespace offline:

 

Svrmgrl> alter tablespace tablespace_name offline;

 

2. Restore the most recent backup of the tablespace from the backup set. Without a database backup, restore it directly from tape; with a database backup, restore it using the appropriate rman script.

 

3. Recover the tablespace:

 

Svrmgrl> alter database recover tablespace tablespace_name;

 

4. Bring the tablespace online:

 

Svrmgrl> alter tablespace tablespace_name online;

 

5. Use appropriate methods for full database backup.

 

 

 

2.4.2 System tablespace corruption:

1. Start the database in mount mode:

 

Svrmgrl> startup mount;

 

2. Restore the most recent backup of the system tablespace from the backup set. Without a database backup, restore it directly from tape; with a database backup, restore it using the appropriate rman script.

 

3. Recover the system tablespace:

 

Svrmgrl> alter database recover tablespace system;

 

4. Open the database:

 

Svrmgrl> alter database open;

 

5. Use appropriate methods for full database backup.

 

3. Oracle logical structure fault handling methods:

A logical structure fault generally means that important data has been lost through human error while the physical structure of the database remains complete and consistent. In this situation a full recovery of the database is not appropriate; the following methods are generally used to restore the user data instead.

 

 

 

3.1 Use exp/imp to restore user data:

This method can be used only if the lost data was previously backed up with the exp command.

 

1. Create a temporary user in the database:

 

Svrmgrl> create user test_user identified by test;

 

Svrmgrl> grant connect, resource to test_user;

 

2. From the dump file created by the earlier exp command, import the tables containing the lost data into the temporary user:

 

$ imp system/manager file=export_file_name tables=(lost_data_table_name, ...) fromuser=lost_data_table_owner touser=test_user constraints=n

 

3. Use the corresponding DML statements to copy the lost data from the temporary user back to the original user (a sketch follows step 4).

 

4. Delete the test user:

 

Svrmgrl> drop user test_user cascade;
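
For step 3, the lost rows can be copied back with ordinary DML, for example (owner, table, and key column names are hypothetical; adjust the subquery so that only the rows that were actually lost are inserted):

Svrmgrl> insert into original_owner.lost_data_table
         select * from test_user.lost_data_table t
         where not exists (select 1 from original_owner.lost_data_table o
                           where o.id = t.id);
Svrmgrl> commit;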

 

 

 

3.2 Use LogMiner to restore user data:

LogMiner is a log analysis tool provided by Oracle. Using the data dictionary, it can analyze the online and archived redo logs to obtain the history of DML operations performed in the database together with the corresponding rollback (undo) statements. With this information, data lost through mistaken operations can be added back to the database.

 

1. Confirm that the utl_file_dir parameter of the database has been set. If not, add this parameter to the Oracle initialization parameter file and restart the database. The following example assumes utl_file_dir = '/opt/oracle/db01'.

 

2. Create the data dictionary information required by logminer

 

Assume that the generated data dictionary text file is dict.ora:

 

Svrmgrl> execute dbms_logmnr_d.build(dictionary_filename => 'dict.ora', dictionary_location => '/opt/oracle/db01');

 

3. Determine the online redo logs or the range of archived logs to be analyzed. The approximate range can be derived from the time at which the mistaken operation occurred. Assume that the logs covering the mistaken operation are the online log /opt/oracle/db02/oradata/ORCL/redo3.log and the archived log /opt/oracle/arch/orcl/orclarc_2017113.ora.

 

4. Create a list of log files to be analyzed and add them in sequence:

 

Svrmgrl> execute dbms_logmnr.add_logfile(logfilename => '/opt/oracle/arch/orcl/orclarc_2017113.ora', options => dbms_logmnr.NEW);

 

Svrmgrl> execute dbms_logmnr.add_logfile(logfilename => '/opt/oracle/db02/oradata/ORCL/redo3.log', options => dbms_logmnr.ADDFILE);

 

5. Start the log analysis. Assume that the analysis window is between '2017-06-28 12:00:00' and '2017-06-28 13:00:00':

 

Svrmgrl> execute dbms_logmnr.start_logmnr(
    dictfilename => '/opt/oracle/db01/dict.ora',
    starttime => to_date('2017-06-28 12:00:00', 'yyyy-mm-dd hh24:mi:ss'),
    endtime => to_date('2017-06-28 13:00:00', 'yyyy-mm-dd hh24:mi:ss')
);

 

6. Obtain the analysis result:

 

Svrmgrl> select operation, sql_redo, sql_undo from v$logmnr_contents;

 

7. Repair the data based on the analysis results (see the sketch after step 9).

 

8. End logmnr:

 

Svrmgrl> execute dbms_logmnr.end_logmnr;

 

9. Back up the original database in an appropriate way.
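
For step 7, the analysis can be narrowed to the affected table and the lost changes repaired from the returned statements; a sketch with a hypothetical owner and table name, for the case of a mistaken delete:

Svrmgrl> select scn, timestamp, operation, sql_redo, sql_undo
         from v$logmnr_contents
         where seg_owner = 'APP_USER' and seg_name = 'LOST_DATA_TABLE'
         and operation = 'DELETE';

Executing the sql_undo statements returned for the mistaken delete (they are the compensating inserts) puts the deleted rows back.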

 

 

 

 

4. Data File loss

4.1 View the lost data files

Select file# from v$recover_file;

 

Select file#, status, name from v$datafile order by 3; (the lost file's status is RECOVER)

 

4.2 Take the data file offline: offline drop (noarchivelog mode) / offline (archivelog mode).

Alter database datafile file# offline drop;

 

Alter database datafile file# offline;

 

4.3 Rebuild the data file [Optional]

Alter database create datafile file#;

 

4.4 Recover the data file [Optional]

Recover datafile file#; (archivelog mode, or noarchivelog mode provided the required redo log has not been overwritten).

 

4.5 Open the database.

Alter database open;

 

4.6 Drop the tablespace containing the lost data file (if that tablespace has only this one data file) [Optional]

Drop tablespace tablespacename including contents and datafiles;
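
Putting 4.1 through 4.5 together for a hypothetical lost file with file# 4, assuming the database runs in archivelog mode and is currently mounted (the file number and scenario are illustrative only):

Svrmgrl> select file#, status, name from v$datafile order by 3;    -- 4.1: the lost file shows status RECOVER
Svrmgrl> alter database datafile 4 offline;                        -- 4.2 (archivelog mode)
Svrmgrl> alter database create datafile 4;                         -- 4.3, only if the file must be recreated
Svrmgrl> recover datafile 4;                                       -- 4.4
Svrmgrl> alter database open;                                      -- 4.5
Svrmgrl> alter database datafile 4 online;                         -- bring the recovered file back into use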

 

