Restoring a Windows Cold Backup to Linux

Source: Internet
Author: User

Previously, migrations from Windows to Linux were done with the usual exp and imp utilities: a logical backup works across different physical structures and operating systems.
I believe many of my friends do it this way.

Today we have a 50 GB database that needs to be migrated from a heavily loaded Windows platform to a Linux platform. Over the past few days I have been studying the physical structure of Oracle data
files and found no OS-specific differences. Does that mean data files have the same structure on different operating systems? I have not finished studying the physical format, so I have not yet written a program to verify it;
I will write one to parse the files once the study is done. In the meantime, this conjecture gave me a bold idea: take a cold backup from this Windows system and try to
restore it on Linux, to see what actually happens. If it succeeds, it verifies the inference, and the data can be migrated quickly for the customer.

Take action immediately.

Original System
OS: Windows 2003 64-bit
ORACLE: 10.2.0.1
Single Instance

Target System
OS: RHEL 4.7 64-bit
ORACLE: 10.2.0.1
Single Instance

The migration was performed according to the customer's requirements.

1. Query all log files

SQL> select * from v$logfile;

2. Query all data files

SQL> select * from v$datafile;
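
The control files will be copied as well in step 6, so it is worth recording their locations at the same time; the standard v$controlfile dictionary view gives them:

SQL> select name from v$controlfile;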

3. First shut down the source database

SQL> shutdown immediate;

4. Back up the log files and data files

5. Start the database in read-only mode; it can still serve queries while the copy is under way.
SQL> startup mount;
SQL> alter database open read only;

6. Copy the data files and log files to the target server.
The planned directory layout differs from the source database:
data files go under /u01/app/oracle/oradata/towell/datafile;
control files go under /u01/app/oracle/oradata/towell/controlfile, with one copy renamed to current.1 to match the
pfile;
online log files go under /u01/app/oracle/oradata/towell/onlinelog. Once this is done, we can expect the redo logs to carry over too; after all,
this is a cold backup.
Also create the set of dump directories under /u01/app/oracle/admin/towell.
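
The layout and copy described in this step can be sketched as follows. The /tmp directories here are stand-ins so the sketch runs anywhere; in the real migration, SRC is wherever the cold backup was copied to and BASE is /u01/app/oracle/oradata/towell (both are assumptions, as are the file names).

```shell
# Stand-in locations; replace with the real backup mount and /u01 tree.
SRC=/tmp/towell_demo/backup
BASE=/tmp/towell_demo/oradata/towell
ADMIN=/tmp/towell_demo/admin/towell

# Demo scaffolding: fake backup files so this sketch is self-contained.
mkdir -p "$SRC"
touch "$SRC/SYSTEM01.DBF" "$SRC/REDO01.LOG" "$SRC/CONTROL01.CTL"

# Target layout from the text, plus the dump directories.
mkdir -p "$BASE/datafile" "$BASE/onlinelog" "$BASE/controlfile"
mkdir -p "$ADMIN/bdump" "$ADMIN/udump" "$ADMIN/cdump"

cp "$SRC"/*.DBF "$BASE/datafile/"
cp "$SRC"/*.LOG "$BASE/onlinelog/"
cp "$SRC"/*.CTL "$BASE/controlfile/current.1"   # rename one copy to match control_files
```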

7. Now begin work on the target server.
First create a password file:
orapwd file=$ORACLE_HOME/dbs/orapwtowell password=mypassword
Then create a pfile;
the easiest way is to export one from the source database and copy it over:
SQL> create pfile='d:/backup/init.ora' from spfile;
Modify the relevant parameters, paths included; in particular point control_files at the new location, for example control_files='/u01/app/oracle
/oradata/towell/controlfile/current.1'
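
For reference, the edited entries in the pfile would look something like this (the paths are illustrative, and the dump destinations are assumed from the directory layout in step 6, not taken from the author's actual file):

control_files='/u01/app/oracle/oradata/towell/controlfile/current.1'
background_dump_dest='/u01/app/oracle/admin/towell/bdump'
user_dump_dest='/u01/app/oracle/admin/towell/udump'
core_dump_dest='/u01/app/oracle/admin/towell/cdump'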

These two steps can be prepared in advance, which saves downtime.

8. Start the database with the pfile, up to the mount stage.
This was the main worry: would the control file be incompatible across operating systems? If this succeeds, I think the data files should basically be fine too.
On screen:
SQL> startup pfile='d:/backup/init.ora' mount;
Database mounted.
Inside, I felt I had already taken the first step toward success.

9. Dump the control file to trace. Note that the database cannot be opened directly at this point: a direct open complains that the data files cannot be found, because the paths recorded in the control file
are all Windows paths. So we dump the control file and recreate it.

SQL> alter database backup controlfile to trace;
SQL> oradebug setmypid
Statement processed.
SQL> oradebug tracefile_name
/u01/app/oracle/admin/sample/udump/towell_ora_21214.trc

10. Open the trace file; it contains the SQL statement to recreate the control file,
including the data file paths and the log file paths.
Change those paths to the current Linux ones. The data files must be listed in the same order as in the control file (the order returned by the earlier v$datafile query): do not reorder them,
only change the paths. The customer has more than 30 data files, so rather than editing them one by one, a search-and-replace does the job.
I prefer not to use REUSE when recreating, so I changed REUSE to SET, and NORESETLOGS to RESETLOGS, giving:
Create controlfile set database "Towell" resetlogs archivelog
.......
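
The bulk path replacement mentioned above can be a single substitution. Here is a sed sketch; the D:\ORADATA\TOWELL prefix is assumed purely for illustration, so substitute the prefixes that actually appear in your trace file:

```shell
# Write one sample line of the controlfile script (stand-in for the trace output).
printf '%s\n' "DATAFILE 'D:\ORADATA\TOWELL\SYSTEM01.DBF'" > /tmp/ctl_demo.sql

# Rewrite the Windows prefix to the new Linux datafile directory in place.
sed -i 's|D:\\ORADATA\\TOWELL\\|/u01/app/oracle/oradata/towell/datafile/|g' /tmp/ctl_demo.sql

cat /tmp/ctl_demo.sql
# Expected: DATAFILE '/u01/app/oracle/oradata/towell/datafile/SYSTEM01.DBF'
```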

11. Shut down the database again and recreate the control file.
SQL> shutdown immediate;
SQL> startup pfile='d:/backup/init.ora' nomount;
Then execute the CREATE CONTROLFILE script modified above.
It went smoothly:
the system reported that the control file was created successfully.

12. Open the database, full of excitement.
SQL> alter database open resetlogs;
Watching the alert log, victory seemed in sight.
Then, just as I was feeling pleased, the output stopped: the database had failed to open, right on the edge of success.

Database Characterset is ZHS16GBK
Fri Apr 16 13:40:50 2010
Errors in file /u01/app/oracle/admin/sample/bdump/sample_smon_7134.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-04031: unable to allocate 16 bytes of shared memory ("shared
pool","select null from obj$ where...","sql area","kglhin: temp")

I was actually glad to see this, because an ORA-00604 at this point suggests the database itself can be read; as long as there are no block-level incompatibilities,
the message indicates this is just a shared pool problem.

Opening the trace file:

Chunk 25709420 sz=304 freeable "kks cstat"
Chunk 25709550 sz=32 freeable "kks pstat"
Chunk 25709570 sz=304 freeable "kks cstat"
Chunk 257096a0 sz=32 freeable "kks pstat"
Chunk 257096c0 sz=304 freeable "kks cstat"
Chunk 257097f0 sz=32 freeable "kks pstat"

******************************************************
******************************************************
====================================
End 4031 diagnostic information
====================================
Warning: Out of Shared Memory loading library cache object
[handle=257dde8c] select null from obj$ where obj#=:1 and type#=:2 and
obj# not in (select p_obj# from dependency$ where p_obj#=obj$.obj#)
*** 2010-04-16 13:40:50.996
SMON: following errors trapped and ignored:
ORA-00604: error occurred at recursive SQL level 1
ORA-04031: unable to allocate 16 bytes of shared memory ("shared
pool","select null from obj$ where...","sql area","kglhin: temp")
Warning: Out of Shared Memory loading library cache object
[handle=257e5d2c]
select obj#,type#,ctime,mtime,stime,status,dataobj#,flags,oid$,spare1,
spare2 from obj$ where owner#=:1 and name=:2 and namespace=:3 and
remoteowner is null and linkname is null and subname is null
*** 2010-04-16 13:40:51.045
SMON: following errors trapped and ignored:
ORA-00604: error occurred at recursive SQL level 2
ORA-04031: unable to allocate 4108 bytes of shared memory ("shared
pool","select obj#,type#,ctime,mtim...","Typecheck","kgghteinit")

The cause: the target system has more memory than the source, and since I intended to tune it later I had not set sga_target in the pfile, so
sga_target came up at only 117 MB, far too small, hence the error. Shut down the instance,
create an spfile from the pfile, and set the SGA according to the customer's actual memory: the host has 16 GB, so sga_target goes to 5 GB for now.
Start again,
and watch the log until "Oracle instance started" appears.
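
The recovery sequence described above amounts to something like this (the 5g value comes from the text; the pfile is assumed to be at its default $ORACLE_HOME/dbs location):

SQL> shutdown immediate;
-- edit the pfile first, setting sga_target=5g
SQL> create spfile from pfile;
SQL> startup;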

The database came up with no problems.

I notified the customer to switch the application's datasource; half an hour on, no problems have been found. I do not know whether a migration like this will cause other issues, so I first tested RMAN backups
and archiving, since this was a physical restore.
I also recommended that the customer keep taking regular exp logical backups: there is no harm in extra copies. The operation succeeded this time, but that does not mean the method is risk-free, so more insurance is better.

Physical backup and recovery is fast: with exp, exporting 50 GB of data alone takes more than half an hour, whereas with this method the entire application was
read-only for less than 10 minutes. I will not use exp/imp for this kind of migration again.
