At the end of last year I worked on a number of system data migrations. Because of platform and version differences, most of them were logical migrations and a small number were physical migrations. Here are some of the lessons learned, shared for your reference.
• For particularly large partitioned tables, parallelism helps, but it is limited to a single process: you cannot run parallel transactions across a database link, only parallel queries, so in an insert-as-select only the SELECT side runs in parallel and overall speed is still limited. Instead, load the data into multiple intermediate tables concurrently, then swap each one in with EXCHANGE PARTITION ... WITHOUT VALIDATION, which greatly improves speed.
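A minimal sketch of the swap, assuming a hypothetical target table BIG_TAB partitioned on PART_KEY, a staging table STAGE_P1, and a database link SRC_LINK (all names are illustrative, not from the original post):

```sql
-- One of N concurrent sessions: load one partition's rows into a plain
-- (non-partitioned) staging table over the database link.
CREATE TABLE stage_p1 AS
  SELECT * FROM big_tab@src_link
  WHERE part_key < DATE '2008-01-01';

-- Swap the staging table in as partition P1. WITHOUT VALIDATION skips
-- the row-by-row partition-bound check, so the exchange is a near-instant
-- dictionary operation regardless of data volume.
ALTER TABLE big_tab EXCHANGE PARTITION p1
  WITH TABLE stage_p1 WITHOUT VALIDATION;
```

The staging table must match the partitioned table's column list and order; with WITHOUT VALIDATION it is the loader's responsibility that the rows really belong in that partition.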
• Some readers may ask why you cannot simply have several sessions insert into the partitioned table in parallel. You can, as long as it is not a direct-path (APPEND) insert, but the performance of that approach is poor. A direct-path insert takes a mode 6 (exclusive) TM lock on the table, so multiple sessions cannot insert at the same time. (UPDATE: name the target partition in each insert, e.g. INSERT INTO TableName PARTITION (PartName) SELECT * FROM TableName WHERE ...; this is simpler and more efficient, because the exclusive lock is then taken only on that partition.)
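The updated approach can be sketched as follows; each session direct-path loads a different partition, so the mode 6 TM lock lands on the partition rather than the whole table and the sessions no longer block one another (BIG_TAB, P1, and SRC_LINK are hypothetical names):

```sql
-- Session for partition P1; other sessions target P2, P3, ... the same way.
INSERT /*+ APPEND */ INTO big_tab PARTITION (p1)
  SELECT * FROM big_tab@src_link
  WHERE part_key >= DATE '2007-01-01'
    AND part_key <  DATE '2008-01-01';
COMMIT;  -- a direct-path insert must commit before the session reads the table
```

The WHERE clause should match the named partition's bounds, otherwise the insert fails with a partition-key violation.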
• When migrating, divide the data into two parts: a static history portion and a dynamically changing portion. Import and index the history tables before the migration window; this greatly reduces the business system's downtime during the migration.
• Before migrating, consider cleaning up garbage data.
• During the import, make sure the table has no indexes, no constraints (other than NOT NULL), and no triggers; build the indexes after the data load completes. When building the indexes, likewise run the scripts from multiple sessions. After an index is created, remember to remove its PARALLEL attribute.
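A sketch of the index build, with illustrative names; NOLOGGING is an additional common choice during bulk migration (not mentioned in the original) and is only safe if you take a backup afterwards:

```sql
-- Build fast: parallel slaves, minimal redo.
CREATE INDEX big_tab_ix1 ON big_tab (part_key)
  PARALLEL 8 NOLOGGING;

-- Reset the attributes so the optimizer does not choose parallel plans
-- for ordinary queries, and redo logging resumes for future maintenance.
ALTER INDEX big_tab_ix1 NOPARALLEL;
ALTER INDEX big_tab_ix1 LOGGING;
```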
• When creating constraints, create check constraints first, then primary keys and unique keys, and finally foreign key constraints. Create them in ENABLE NOVALIDATE state, which greatly reduces constraint creation time. After the migration is complete, consider switching them back to ENABLE VALIDATE.
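A sketch with hypothetical tables BIG_TAB and CHILD_TAB; the USING INDEX clause points at an index built in the previous step, which is what lets NOVALIDATE skip re-checking the existing rows:

```sql
-- ENABLE NOVALIDATE: enforce the rule for new DML only; existing rows
-- are not scanned, so creation is almost instantaneous.
ALTER TABLE big_tab ADD CONSTRAINT big_tab_pk
  PRIMARY KEY (id)
  USING INDEX big_tab_pk_ix
  ENABLE NOVALIDATE;

ALTER TABLE child_tab ADD CONSTRAINT child_tab_fk
  FOREIGN KEY (big_id) REFERENCES big_tab (id)
  ENABLE NOVALIDATE;

-- After the migration, validate existing rows if full checking is wanted;
-- this scans the table but does not block DML the way a fresh ENABLE would.
ALTER TABLE big_tab ENABLE VALIDATE CONSTRAINT big_tab_pk;
ALTER TABLE child_tab ENABLE VALIDATE CONSTRAINT child_tab_fk;
```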
• Carry the optimizer statistics over from the source database with dbms_stats.export_schema_stats and dbms_stats.import_schema_stats, so there is no need to gather statistics again on the target.
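A minimal sketch, assuming a schema owner APP and a staging table named MIG_STATS (both hypothetical):

```sql
-- On the source database: create a staging table and export the
-- schema's statistics into it.
BEGIN
  dbms_stats.create_stat_table(ownname => 'APP', stattab => 'MIG_STATS');
  dbms_stats.export_schema_stats(ownname => 'APP', stattab => 'MIG_STATS');
END;
/

-- Move APP.MIG_STATS to the target (exp/imp, or CTAS over a DB link),
-- then on the target load the statistics into the dictionary.
BEGIN
  dbms_stats.import_schema_stats(ownname => 'APP', stattab => 'MIG_STATS');
END;
/
```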
As you can see, the points above were written for 9i, but most of them still apply in 10g and even 11g environments. And these techniques are useful not only for full database migrations, but also for copying individual tables to other databases.