Oracle Database Data Migration Solution

At the end of last year I did a number of system data migrations. Because of platform and version differences, most of them were logical migrations and a small part were physical migrations. I gained some experience along the way and would like to share it with you.

First, a word about the migration process. Before migrating, write a thorough plan; in particular, the implementation steps must be spelled out clearly, and then test completely. When we migrated, some systems were tested four or five times, and each round of testing improved the plan and the process.

For a physical migration, that is, restoring an RMAN backup and then applying the archived logs (cold migration via dd is not discussed here), it is important to note that the database should be put into force logging mode. Before the full RMAN backup is taken, the following must be executed:
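
The statement itself is missing from the original post; from the context it is presumably the standard force-logging command:

    ALTER DATABASE FORCE LOGGING;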

Failure to do so may result in corrupt (bad) blocks in the restored database.

For a logical migration, before setting job_queue_processes to a value greater than 0, note each job's next execution time and its owner. For example, if the job definitions were imported earlier but the jobs have already run on the source by the time of the migration, then after the migration completes the jobs' next run times will still be the original ones, so jobs may run repeatedly. In addition, after jobs are imported through imp, the job owner becomes the importing user, so the jobs obviously can no longer be managed by their original user; this can be corrected with SQL such as the following:
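
The original post's SQL is omitted here. One workaround DBAs commonly use is a direct update of the data dictionary; the job number and owner below are hypothetical, and this must be run as SYS, at your own risk:

    -- reassign an imported job back to its original owner (job 1234 and APPUSER are hypothetical)
    UPDATE sys.job$
       SET lowner = 'APPUSER', powner = 'APPUSER', cowner = 'APPUSER'
     WHERE job = 1234;
    COMMIT;

    -- reset the next execution time after import, run as the job owner (the interval is hypothetical)
    EXEC dbms_job.next_date(1234, SYSDATE + 1/24);
    COMMIT;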

Before the migration, structural changes and releases on the system should be frozen: changes to table structures, indexes, stored procedure packages, and so on.

If you import objects with exp/imp, including stored procedures, you should check that the imported objects are consistent with the source production database. For example, stored procedures that depend on a dblink may fail to be created after imp, resulting in the loss of some procedures, even if those procedures may not actually be in use.
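
One simple way to verify this is to compare object counts per type on both sides; a minimal sketch, assuming the application schema is called APPUSER:

    -- run on both source and target, then compare the results
    SELECT object_type, status, COUNT(*)
      FROM dba_objects
     WHERE owner = 'APPUSER'
     GROUP BY object_type, status
     ORDER BY object_type, status;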

Here are some tips to speed up your migration:

• Over a dblink, use direct-path (APPEND) inserts and take advantage of parallelism; this approach is faster than exp/imp.
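
A minimal sketch of this technique; the table and database link names are made up, and note that, as discussed further below, only the SELECT side of the statement can run in parallel across a dblink:

    -- direct-path insert pulling data over a dblink (big_table and source_db are hypothetical)
    INSERT /*+ APPEND */ INTO big_table
    SELECT /*+ PARALLEL(s, 4) */ *
      FROM big_table@source_db s;
    COMMIT;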

• For tables with a LONG column, the insert ... select approach obviously does not work. exp/imp can handle them, but it is very slow, because imp inserts such a table one row at a time. Another option is the SQL*Plus COPY command; here is an example:
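
The original example is missing; a sketch under assumed connection and table names:

    -- SET LONG controls how much of each LONG value is copied
    SET LONG 100000000
    SET ARRAYSIZE 1000
    SET COPYCOMMIT 100
    COPY FROM scott/tiger@source_db -
      INSERT big_table -
      USING SELECT * FROM big_table;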

However, the SQL*Plus COPY command does not support tables with TIMESTAMP or LOB columns. If a table has a TIMESTAMP column, you can split it into multiple parts at exp time by adding a ROWID condition; the same applies to tables with LOB columns (because insert ... select also inserts row by row when there are LOB columns). Note that with this approach you cannot use direct-path exp/imp. The following is an example of a parfile for such an exp:
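
The original parfile is not shown; a sketch, in which the user, table, and the object/file/block numbers passed to dbms_rowid.rowid_create are all hypothetical:

    userid=scott/tiger
    tables=(big_table)
    file=big_table_01.dmp
    log=big_table_01.log
    direct=n
    query="where rowid between dbms_rowid.rowid_create(1, 51234, 4, 100, 0) and dbms_rowid.rowid_create(1, 51234, 4, 50000, 32767)"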

Instead of ROWID, you can also divide a table into ranges using a column on the table. For example, if the table has a created_date column and the data is guaranteed to be inserted incrementally, you can export and import the table range by range on that column. However, using ROWID is usually more efficient.

Of course, a table with LOB columns can also be loaded with multiple concurrent inserts as described above, without going through exp/imp.

• For particularly large partitioned tables, parallelism helps, but the speed is still limited by the processing power of a single process (parallel DML is not possible across a dblink, only parallel query; that is, in insert ... select only the select part can run in parallel). Inserting the data into multiple intermediate tables and then swapping them in with exchange partition ... without validation greatly increases the speed; see the sketch below.
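
A minimal sketch of the intermediate-table plus exchange partition approach; all object names and the date range are hypothetical:

    -- load one partition's worth of data into a staging table
    INSERT /*+ APPEND */ INTO stage_2009q1
    SELECT /*+ PARALLEL(s, 4) */ *
      FROM big_part_table@source_db s
     WHERE created_date >= DATE '2009-01-01'
       AND created_date <  DATE '2009-04-01';
    COMMIT;

    -- swap the staging table in as a partition, skipping validation
    ALTER TABLE big_part_table
      EXCHANGE PARTITION p_2009q1 WITH TABLE stage_2009q1
      WITHOUT VALIDATION;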

• Some may ask why the partitioned table cannot simply be loaded directly by several parallel sessions. With conventional (non-APPEND) inserts that is not a problem, but the performance of that approach is low; a direct-path insert takes a mode 6 (exclusive) TM lock on the table, so multiple sessions cannot insert at the same time. (Update: use a statement of the form INSERT INTO tablename PARTITION (partname) SELECT * FROM tablename WHERE ....; this is simpler and more efficient, as sketched below.)
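
With the partition-extended table name, the direct-path insert locks only the named partition, so different sessions can each load a different partition concurrently; the names and the predicate below are hypothetical:

    -- session 1 loads p_2009q1; other sessions load other partitions at the same time
    INSERT /*+ APPEND */ INTO big_part_table PARTITION (p_2009q1)
    SELECT * FROM big_part_table@source_db
     WHERE created_date >= DATE '2009-01-01'
       AND created_date <  DATE '2009-04-01';
    COMMIT;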

• When migrating, divide the data into two parts: history tables, and dynamically changing tables. Import the history tables and build their indexes before the migration window; this will undoubtedly greatly reduce the business system's outage time during the migration itself.

• Before migrating, consider cleaning up garbage data.

• During the import, ensure the tables have no indexes, no constraints other than NOT NULL, and no triggers; build the indexes after the import completes. When building indexes, run the scripts in multiple sessions at once. After an index has been created, remove its parallel attribute.
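
A sketch of that index-build pattern, with hypothetical table and column names and an arbitrarily chosen parallel degree:

    -- build the index in parallel with minimal redo
    CREATE INDEX idx_big_table_c1 ON big_table (c1)
      PARALLEL 8 NOLOGGING;

    -- then clear the parallel attribute so later queries are not parallelized unexpectedly
    ALTER INDEX idx_big_table_c1 NOPARALLEL;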

• Create constraints in this order: check constraints, primary keys, unique keys, and then foreign key constraints, and create them in ENABLE NOVALIDATE state, which greatly reduces constraint creation time. After the migration completes, consider setting them back to ENABLE VALIDATE.
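
For example (table, column, and constraint names hypothetical):

    -- created without validating existing rows, so it is almost instantaneous
    ALTER TABLE child_table ADD CONSTRAINT fk_child_parent
      FOREIGN KEY (parent_id) REFERENCES parent_table (parent_id)
      ENABLE NOVALIDATE;

    -- after the migration, validate the existing data
    ALTER TABLE child_table MODIFY CONSTRAINT fk_child_parent VALIDATE;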

• Instead of gathering statistics on the target, export the statistics from the source library with dbms_stats.export_schema_stats and import them with dbms_stats.import_schema_stats.
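
A minimal sketch, assuming the schema is APPUSER and using a statistics staging table named MIG_STATS:

    -- on the source: stage the schema statistics into a table, then exp/imp that table
    EXEC dbms_stats.create_stat_table('APPUSER', 'MIG_STATS');
    EXEC dbms_stats.export_schema_stats(ownname => 'APPUSER', stattab => 'MIG_STATS');

    -- on the target, after importing the MIG_STATS table:
    EXEC dbms_stats.import_schema_stats(ownname => 'APPUSER', stattab => 'MIG_STATS');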

As you can see, the above was written for 9i, but it still has a lot of reference value in 10g and even 11g environments. Of course, these techniques are not just for full database migrations; they also apply to copying individual tables to other databases.

Not mentioned here are materialized views, advanced replication, triggers, and similar technologies. These all require modifying the production database and have a relatively large impact on it, so they should be considered only when the downtime requirements are particularly stringent and the migration cannot otherwise be completed within the available window.

From our migration experience, only a well-polished process and thorough testing can guarantee success. These are just a few tips; if you are interested in the entire migration process, we can discuss that topic another time.
