Oracle Database Data Migration Solution

Source: Internet
Author: User
Tags: oracle, database

At the end of last year, I did a lot of system data migrations. For most systems, platform and version differences meant doing a logical migration; a small number could be migrated physically. I picked up some experience along the way and would like to share it.

First, the migration process. Before migrating, write a thorough plan; in particular, the implementation steps must be written out clearly, and the plan must then be fully tested. Some of the systems we migrated were tested four or five times in order to refine the plan and the process.

For physical migrations done by restoring an RMAN backup and applying the archived logs (cold migration via dd is not discussed here), it is important to put the database into force logging mode. Before taking the full RMAN backup, be sure to execute:

Otherwise, bad blocks may be generated.
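The statement itself is missing from the source; the command being referred to is almost certainly Oracle's force-logging switch:

```sql
-- Force redo generation for all operations, including NOLOGGING and
-- direct-path loads, so the restored copy can be fully recovered
-- from the archived logs without corrupt (unrecoverable) blocks.
ALTER DATABASE FORCE LOGGING;
```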

For logical migrations, before setting job_queue_processes back to a value greater than 0, note each job's next execution time and owner. For example, if a job definition was imported before the migration but the job ran again on the source afterwards, its recorded next run time is stale and the job may run twice. In addition, after a job is imported with IMP, its owner becomes the importing user, so the job's original owner can obviously no longer manage it. This can be fixed with the following SQL:
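The SQL was dropped from the source. A commonly used fix on 9i, though unsupported since it updates a SYS-owned dictionary table, is to correct the owner columns in sys.job$ directly; OLDUSER and the bind variable below are placeholders:

```sql
-- Reassign an imported job back to its original owner (run as SYS).
-- lowner = log user, powner = privilege user, cowner = schema user.
UPDATE sys.job$
   SET lowner = 'OLDUSER',
       powner = 'OLDUSER',
       cowner = 'OLDUSER'
 WHERE job = :job_number;
COMMIT;
```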

Before migrating, you should freeze structural changes and releases on the system: table structures, indexes, stored procedures and packages, and so on.

If you import objects with EXP/IMP, including stored procedures and the like, check afterwards that the objects are consistent with the original production database. For example, after imp, some stored procedures may fail to compile because of a missing dblink, and some may be lost entirely, even if those procedures are no longer used.

Here are some tips to speed up migration:

• Inserting over a dblink with the APPEND hint, combined with parallelism, is faster than exp/imp.
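A sketch of such an insert; the table and dblink names are placeholders:

```sql
-- Direct-path (APPEND) insert pulled across a database link.
-- Only the remote query can run in parallel; parallel DML does not
-- work across a dblink, so the insert side is a single process.
INSERT /*+ APPEND */ INTO big_table
  SELECT /*+ PARALLEL(t, 4) */ * FROM big_table@src_db t;
COMMIT;
```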

• For tables with LONG columns, insert ... select is clearly not possible. exp/imp works, but it is very slow, because IMP inserts rows into the table one at a time. Another option is the SQL*Plus COPY command; here is an example:
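The example was lost from the source. A sketch of the COPY syntax (connection strings and table names are placeholders; the trailing dash is the SQL*Plus continuation character):

```sql
-- SQL*Plus COPY can move LONG columns; tune the batch settings first.
SET LONG 2000000000
SET ARRAYSIZE 1000
SET COPYCOMMIT 10
COPY FROM scott/tiger@src_db -
  INSERT target_table -
  USING SELECT * FROM source_table;
```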

However, the SQL*Plus COPY command does not support tables with TIMESTAMP or LOB columns. If a table has a TIMESTAMP column, you can split it into several parts exported concurrently by adding a ROWID condition to exp, and you can do the same for tables with LOB columns (because insert ... select also inserts row by row when a LOB column is involved). Note that with a query condition you cannot use direct-path exp/imp. Here is an example parfile for the export:
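The parfile was dropped from the source. A sketch that slices the table into four parts by block number (user, table, and file names are placeholders); run one exp per slice concurrently, varying the modulo result from 0 to 3:

```
userid=scott/tiger
tables=(scott.big_table)
file=big_table_part0.dmp
log=big_table_part0.log
direct=n
query="where mod(dbms_rowid.rowid_block_number(rowid), 4) = 0"
```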

You can split the table not only by ROWID but also by a column on the table; for example, if the table has a created_date column and data is guaranteed to be inserted incrementally, you can split on that. Using ROWID is usually more efficient, however.

Of course, for tables with LOB columns you can also skip exp/imp entirely and run several concurrent insert ... select sessions, each covering one slice of the table.

• For particularly large partitioned tables, parallelism helps but is limited to a single process: you cannot do parallel DML across a dblink, only parallel query, so only the select side is parallel and speed is still limited. Instead, you can load the data into multiple intermediate tables concurrently and then swap them in with exchange partition ... without validation, which improves speed greatly.
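A sketch of the exchange-partition approach, with one intermediate table per partition loaded concurrently; all names and the date predicate are placeholders:

```sql
-- Load one intermediate table per partition (run in parallel sessions).
INSERT /*+ APPEND */ INTO mid_2023q1
  SELECT * FROM sales@src_db WHERE sale_date < DATE '2023-04-01';
COMMIT;

-- Swap the loaded segment into the partitioned table; skipping
-- validation avoids a full scan of the incoming data.
ALTER TABLE sales
  EXCHANGE PARTITION p2023q1 WITH TABLE mid_2023q1
  WITHOUT VALIDATION;
```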

• You may ask why the partitioned table cannot simply be loaded by several parallel inserts directly. It can, if you avoid the direct path (APPEND), but the performance of that approach is low. With direct path, a mode=6 (exclusive) TM lock is held on the table, so multiple sessions cannot insert at the same time. (Update: name the partition in the insert, as in INSERT INTO tablename PARTITION (partname) SELECT * FROM tablename WHERE ..., which is simpler and more efficient, since the exclusive lock is then taken per partition.)
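A sketch of the partition-extended insert: each session direct-path loads a different partition, so the exclusive locks do not collide. Table, partition, and column names are placeholders:

```sql
-- Session 1
INSERT /*+ APPEND */ INTO sales PARTITION (p2023q1)
  SELECT * FROM sales@src_db
   WHERE sale_date < DATE '2023-04-01';
COMMIT;

-- Session 2, running concurrently against a different partition
INSERT /*+ APPEND */ INTO sales PARTITION (p2023q2)
  SELECT * FROM sales@src_db
   WHERE sale_date >= DATE '2023-04-01'
     AND sale_date <  DATE '2023-07-01';
COMMIT;
```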

• When migrating, divide the data into two parts: historical tables and dynamically changing tables. Import the historical tables and build their indexes before the migration window; this greatly reduces the business system's downtime during the migration.

• Before migrating, consider cleaning up garbage data.

• During the import, make sure the tables have no indexes, constraints (other than NOT NULL), or triggers; build the indexes after the data import completes, using scripts run from multiple processes. After the indexes are created, remember to remove their parallel attribute.
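A sketch of the index build and cleanup; index name, column, and degree are placeholders:

```sql
-- Build fast with parallelism and minimal redo, then reset the
-- attributes so ordinary queries are not parallelized by accident.
CREATE INDEX idx_big_table ON big_table (col1) PARALLEL 8 NOLOGGING;
ALTER INDEX idx_big_table NOPARALLEL;
ALTER INDEX idx_big_table LOGGING;
```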

• Create constraints in this order: CHECK constraints first, then primary and unique keys, and finally foreign keys, all in the ENABLE NOVALIDATE state, which greatly reduces creation time. After the migration is complete, consider setting them back to ENABLE VALIDATE.
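A sketch with placeholder table and constraint names, enabling without checking existing rows and validating after the migration window:

```sql
-- Enabled for new DML, but existing rows are not checked yet.
ALTER TABLE orders ADD CONSTRAINT fk_orders_cust
  FOREIGN KEY (cust_id) REFERENCES customers (cust_id)
  ENABLE NOVALIDATE;

-- After the migration completes, validate the existing data.
ALTER TABLE orders MODIFY CONSTRAINT fk_orders_cust VALIDATE;
```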

• Carry the optimizer statistics over from the original database with dbms_stats.export_schema_stats and dbms_stats.import_schema_stats instead of gathering statistics all over again.
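A sketch of the statistics transfer; the schema and statistics-table names are placeholders:

```sql
-- On the source: stage the schema statistics into a regular table.
EXEC dbms_stats.create_stat_table('SCOTT', 'MIG_STATS');
EXEC dbms_stats.export_schema_stats(ownname => 'SCOTT', stattab => 'MIG_STATS');

-- Move SCOTT.MIG_STATS with exp/imp, then on the target:
EXEC dbms_stats.import_schema_stats(ownname => 'SCOTT', stattab => 'MIG_STATS');
```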

As you can see, the above was written for 9i, but much of it is still relevant on 10g and even 11g. These techniques apply not only to complete database migrations but also to copying individual tables to other databases.

Not covered here are techniques such as materialized views, advanced replication, and triggers, because they all require modifying the production database and have a relatively large impact on production operation. They should be considered only when downtime requirements are especially stringent and the migration cannot otherwise be completed within the window.

In my migration experience, only a well-refined process and thorough testing can guarantee success. These are just a few tips; if you are interested in the full migration process, we can discuss that topic another time.
