Oracle Database migration skills

This article describes some Oracle database migration techniques, covering preparations before an Oracle database migration and solutions for different types of migration. We hope it helps you understand Oracle database migration. BKJIA's database channel has previously covered migrating an Oracle database structure across platforms without migrating the data.

At the end of last year, we migrated data for a number of systems. Most of the systems were migrated logically because of platform or version differences, a few were migrated physically, and some experience was gained along the way, which is shared here.

First, the migration process. Before the migration, write the solution, especially the implementation steps, and then test it thoroughly. For some systems we tested four or five times, improving the solution and the process with each test.

For physical migration, that is, restoring an RMAN backup and applying the archived logs (cold migration via dd is not discussed here), you must put the database in force logging mode, executing the following before taking the RMAN full backup:

  alter database force logging;

Otherwise, nologging operations on the source are not fully recorded in the redo stream, and bad blocks may be generated on the restored database.
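As a quick sanity check before taking the backup, you can verify that force logging is actually in effect; a minimal sketch using the standard v$database view:

  -- verify force logging is enabled (should return YES)
  select force_logging from v$database;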

For logical migration, before setting job_queue_processes to a value greater than 0, pay attention to the next execution time and the owner of each job. For example, if a job definition was imported earlier but the job has since run on the source, its next execution time after the migration is still the original time, so it may run again and repeat work. In addition, after a job is imported through imp, its owner becomes the importing user, so the original owner can obviously no longer manage the job. You can fix the jobs with the following SQL statement:

  update sys.job$ set lowner=cowner, powner=cowner;
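Before and after that fix, it is worth checking what imp actually created; a hedged sketch using the standard dba_jobs view and dbms_job to push out a next run time (the job number 42 and the date are placeholders):

  -- check job owners and next execution times after the import
  select job, log_user, priv_user, schema_user, next_date, what from dba_jobs;

  -- push a job's next execution time forward if it would otherwise re-run immediately
  exec dbms_job.next_date(42, to_date('2010-01-01 02:00:00','yyyy-mm-dd hh24:mi:ss'));
  commit;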

Before migration, changes and releases to the system structure, such as table structures, indexes, and stored procedure packages, should be frozen.

If you use exp/imp to import objects, including stored procedures, check that the objects are consistent with the original production database. For example, a stored procedure that depends on a dblink may fail to be created during imp, so some stored procedures may be missing, even if they are not actually used.
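One way to check is to compare object counts and look for invalid objects on both sides; a minimal sketch against the standard dictionary views, with USERNAME standing in for the schema being migrated:

  -- compare object counts per type between source and target
  select object_type, count(*) from dba_objects
   where owner = 'USERNAME' group by object_type order by object_type;

  -- list objects that did not compile cleanly after the import
  select object_name, object_type from dba_objects
   where owner = 'USERNAME' and status = 'INVALID';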

The following are some tips for accelerating migration:

Inserting over a dblink with direct-path (APPEND) inserts and parallelism is faster than exp/imp.
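A minimal sketch of this pattern, assuming a dblink named source_db and that the empty target table already exists; as discussed further below, only the SELECT side of such a statement can run in parallel across a dblink:

  -- direct-path insert pulling rows across the dblink
  insert /*+ append */ into username.table_name
    select * from username.table_name@source_db;
  commit;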

For tables with LONG columns, the insert ... select approach is not feasible, because LONG columns cannot be selected across a dblink. You can use exp/imp, but this is very slow because imp inserts such tables one row at a time. Another option is the SQL*Plus COPY command. The following is an example:

  spool copy_long_table_1.log
  conn / as sysdba
  set copycommit 2000
  set arraysize 30
  set long 10485760

  copy from system/xxxx@source_db append username.table_name using select * from username.table_name;

  spool off
  exit

However, the SQL*Plus COPY command does not support tables with TIMESTAMP or LOB columns. If a table has a TIMESTAMP column, you can split it into multiple parts during exp by adding rowid conditions. (Tables with LOB columns can still be handled with insert ... select, because when a LOB column is present the rows are inserted one at a time anyway.) Note that in this mode you cannot use direct-path exp/imp. The following is an example of the parfiles used for the exp exports (one per rowid slice):

  query="where rowid>=dbms_rowid.rowid_create(1,71224,52,9,0) and rowid<=dbms_rowid.rowid_create(1,71224,55,1038344,10000)"
  file=/dumpdata/n1.dmp
  tables=username.table1
  constraints=n
  grants=no
  indexes=no
  buffer=104857600
  ...
  ...
  query="where rowid>=dbms_rowid.rowid_create(1,71224,423,137,0) and rowid<=dbms_rowid.rowid_create(1,71224,432,59272,10000)"
  file=/dumpdata/n6.dmp
  tables=username.table1
  constraints=n
  grants=no
  indexes=no
  buffer=104857600
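The rowid boundaries above can be generated rather than written by hand; a hedged sketch that derives one start/end rowid pair per extent from dba_extents and dba_objects (71224 in the parfiles corresponds to the table's data_object_id; schema and table names are placeholders):

  select dbms_rowid.rowid_create(1, o.data_object_id, e.relative_fno,
                                 e.block_id, 0) start_rowid,
         dbms_rowid.rowid_create(1, o.data_object_id, e.relative_fno,
                                 e.block_id + e.blocks - 1, 10000) end_rowid
    from dba_extents e, dba_objects o
   where e.owner = o.owner
     and e.segment_name = o.object_name
     and o.owner = 'USERNAME'
     and o.object_name = 'TABLE1'
   order by e.relative_fno, e.block_id;

Adjacent ranges can then be merged down to the handful of slices you actually want to export.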

You can also split a table into several parts using its own columns rather than rowid. For example, if the table has a created_date column and data is inserted incrementally, you can use that column to split the table into ranges for simultaneous export and import. However, using rowid is usually more efficient.

Of course, for tables with LOB columns, you can skip exp/imp entirely and split them into multiple insert ... select statements run at the same time.
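A minimal sketch of that approach, with the created_date split and the source_db dblink as placeholder assumptions; each statement runs in its own session:

  -- session 1
  insert into username.lob_table
    select * from username.lob_table@source_db
     where created_date < date '2009-07-01';
  commit;

  -- session 2
  insert into username.lob_table
    select * from username.lob_table@source_db
     where created_date >= date '2009-07-01';
  commit;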

For particularly large partitioned tables, parallel processing can improve the speed, but a single session cannot run a parallel transaction across a dblink, only a parallel query; that is, in an insert ... select across a dblink only the SELECT part can run in parallel, so the speed of this approach is still limited. Instead, you can load several intermediate tables in parallel sessions and then swap them in with exchange partition ... without validation, which greatly improves the speed.
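A hedged sketch of the exchange step, assuming an intermediate table stage_p200901 that has been loaded with exactly the rows for partition p200901 and has a matching structure:

  alter table username.big_part_table
    exchange partition p200901 with table username.stage_p200901
    without validation;

Because WITHOUT VALIDATION skips the row-by-row check, it is the loader's responsibility that every row in the staging table really belongs to that partition.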

Some may ask why we do not simply insert into the partitioned table directly in parallel. If you do not use the direct-path (APPEND) method, that works, but the insert performance is low. The direct-path method, however, takes an exclusive (mode 6) TM lock on the table, so multiple sessions cannot insert at the same time. (Update: insert into tablename partition (partname) select * from tablename where ... is simpler and more efficient.)
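A sketch of the update's suggestion: naming the partition in the INSERT scopes the direct-path lock to that partition, so sessions loading different partitions do not block each other (names and the predicate are placeholders):

  -- session loading only partition p200901
  insert /*+ append */ into username.big_part_table partition (p200901)
    select * from username.big_part_table@source_db
     where created_date >= date '2009-01-01'
       and created_date <  date '2009-02-01';
  commit;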

During the migration, divide the data into two parts: historical tables and dynamic tables. Import the historical tables, and build their indexes, before the migration window begins; this undoubtedly greatly reduces the interruption to the business system during the migration.
Remove junk data before migration.

During the import, make sure the tables have no indexes (constraints such as NOT NULL are the exception) and no triggers; re-create the indexes after the data has been imported, with multiple sessions running the creation scripts at the same time. After an index has been created, remember to remove its PARALLEL attribute.
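A minimal sketch of that pattern (index and column names are placeholders):

  -- build fast, then reset the degree so the optimizer is not misled later
  create index username.idx_table1_created on username.table1 (created_date)
    nologging parallel 8;
  alter index username.idx_table1_created noparallel;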

When creating constraints, first create the CHECK constraints, primary keys, and unique keys, and then the foreign key constraints, all in the ENABLE NOVALIDATE state, which greatly reduces constraint creation time. After the migration is complete, set them back to ENABLE VALIDATE.
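A hedged sketch for the foreign key case (table, column, and constraint names are placeholders):

  alter table username.child_table
    add constraint fk_child_parent foreign key (parent_id)
    references username.parent_table (parent_id)
    enable novalidate;

  -- after the migration window, validate the existing rows
  alter table username.child_table modify constraint fk_child_parent validate;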

You can use dbms_stats.export_schema_stats and dbms_stats.import_schema_stats to carry over the statistics from the original database instead of gathering them again.
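A minimal sketch of the statistics transfer, assuming a schema USERNAME and a staging table name MIG_STATS of our own choosing:

  -- on the source: stage the statistics in a regular table
  exec dbms_stats.create_stat_table('USERNAME', 'MIG_STATS');
  exec dbms_stats.export_schema_stats(ownname => 'USERNAME', stattab => 'MIG_STATS');

  -- move USERNAME.MIG_STATS with exp/imp, then on the target:
  exec dbms_stats.import_schema_stats(ownname => 'USERNAME', stattab => 'MIG_STATS');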
As you can see, all of the above is written for 9i; in 10g and even 11g environments much of it still applies. And these techniques are not limited to full database migrations; they can also be used to copy individual tables to other databases.

Not covered here are materialized views, advanced replication, triggers, and similar technologies, because they require modifying the production database and therefore have a significant impact on its operation. They should be considered only when the downtime requirements are so strict that the migration cannot otherwise be completed within the window.

In terms of migration experience, only a complete process and thorough testing can ensure success. The above are just some tips; if you are interested in the entire migration process, we can discuss that topic separately.
