This article describes several Oracle database migration techniques, including the preparations to make before a migration and the approaches used for different types of migration; we hope it helps you understand Oracle database migration. The BKJIA database channel has previously covered methods for cross-platform Oracle migration of the database structure only, without migrating data.
At the end of last year, we carried out a large number of system data migrations. Most systems were migrated logically because of platform and version differences; a few were migrated physically. Some experience was gained along the way, which is shared here.
First, the migration process. Before the migration, write the migration plan, especially the detailed implementation steps, and then test it thoroughly. Some systems were tested four or five times, and the plan and procedures were refined through those tests.
For physical migration, that is, restoring an RMAN backup and applying the archived logs (cold migration via dd is not discussed here), you must put the database into force logging mode; before taking the full RMAN backup, execute:
- alter database force logging;
Otherwise, bad blocks may be generated.
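To confirm the setting is in effect before taking the full RMAN backup, a quick check (not part of the original steps) is:
- select force_logging from v$database;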
For logical migration, before setting job_queue_processes to a value greater than 0, pay attention to each job's next execution time and owner. For example, if the job definitions were imported early but the jobs have continued to run on the source system during the migration, their next execution times after the migration are still the original ones, so jobs may be executed repeatedly. In addition, after a job is imported through IMP, its owner becomes the importing user, so the original owner obviously can no longer manage the job. You can fix the owner with the following SQL statement:
- update sys.job$ set lowner=cowner , powner=cowner;
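If the next run times also need to be adjusted after the import, to avoid jobs running again for periods already covered on the source, a minimal sketch using DBMS_JOB is shown below; the job number 21 and the new date are made-up examples, and dbms_job.next_date must be run as the job owner:
- select job, log_user, priv_user, next_date, what from dba_jobs;
- exec dbms_job.next_date(21, trunc(sysdate + 1) + 2/24);
- commit;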
Before the migration, changes and releases to the system structure, such as table definitions, indexes, and stored procedures/packages, should be frozen.
If you use exp/imp to import objects, including stored procedures, check that the imported objects are consistent with the original production database. For example, a stored procedure that depends on a dblink may fail to be created during imp, so some stored procedures may be missing, even if those procedures are not actually in use.
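A simple way to make this check is to compare object counts by type and status between the source and the target (a sketch; USERNAME is a placeholder for the schema being migrated):
- select object_type, status, count(*)
-   from dba_objects
-  where owner = 'USERNAME'
-  group by object_type, status
-  order by object_type, status;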
The following are some tips for accelerating migration:
Inserting data over a dblink with direct-path (APPEND) inserts and parallelism is faster than exp/imp.
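For example, assuming a dblink named source_db already exists on the target database, a direct-path load of one table might look like this sketch (table names are placeholders; as noted later, only the query side can run in parallel across the dblink):
- insert /*+ append */ into username.table_name
-   select /*+ parallel(s, 4) */ * from username.table_name@source_db s;
- commit;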
For tables with LONG columns, the insert ... select approach is obviously not feasible. You can use exp/imp, but it is very slow because imp inserts the rows one at a time. Another option is the SQL*Plus COPY command; here is an example:
- spool copy_long_table_1.log
- conn / as sysdba
- set copycommit 2000
- set arraysize 30
- set long 10485760
-
- copy from system/xxxx@source_db append username.table_name using select * from username.table_name;
-
- spool off
- exit
-
However, the SQL*Plus COPY command does not support TIMESTAMP or LOB columns. If a table has a TIMESTAMP column, you can split it into several parts during exp by adding rowid conditions. (Tables with LOB columns can also be handled this way, because with insert ... select a table containing a LOB column is inserted one row at a time anyway.) Note that in this mode you cannot use direct-path exp/imp. The following is an example of the parfiles used for the exp export:
- query="where rowid>=dbms_rowid.rowid_create(1,71224,52,9,0) and rowid<=dbms_rowid.rowid_create(1,71224,55,1038344,10000)"
- file=/dumpdata/n1.dmp
- tables=username.table1
- constraints=n
- grants=no
- indexes=no
- buffer=104857600
- ...
- ...
- query="where rowid>=dbms_rowid.rowid_create(1,71224,423,137,0) and rowid<=dbms_rowid.rowid_create(1,71224,432,59272,10000)"
- file=/dumpdata/n6.dmp
- tables=username.table1
- constraints=n
- grants=no
- indexes=no
- buffer=104857600
You can split a table into several parts not only by rowid but also by columns in the table. For example, if the table has a created_date column and the data is inserted incrementally, you can use that column to split the table into ranges and export and import them at the same time. However, using ROWID is usually more efficient.
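For example, if the created_date column mentioned above exists, the parfiles could use date ranges instead of rowid ranges (a sketch; the dates and file name are placeholders):
- query="where created_date < to_date('20090101','yyyymmdd')"
- file=/dumpdata/d1.dmp
- tables=username.table1
- constraints=n
- grants=no
- indexes=no
- buffer=104857600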
Of course, for tables with LOB columns, you can also skip exp/imp altogether and split the load into multiple insert statements that run at the same time.
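For example, several sessions could each run one insert covering a different rowid range, reusing the ranges computed for the parfiles above (a sketch; the ranges are only illustrative):
- -- conventional-path inserts so the sessions do not block each other
- insert into username.table1
-   select * from username.table1@source_db
-    where rowid >= dbms_rowid.rowid_create(1,71224,52,9,0)
-      and rowid <= dbms_rowid.rowid_create(1,71224,55,1038344,10000);
- commit;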
For a particularly large partitioned table, parallel processing can improve speed, but a single session cannot run a parallel transaction (parallel DML) across a dblink; only the query can be parallelized, that is, in insert ... select only the SELECT part runs in parallel, so the throughput of this approach is still limited. Instead, you can load several intermediate tables in parallel sessions and then swap them in with EXCHANGE PARTITION ... WITHOUT VALIDATION, which greatly improves the speed. A sketch is shown below.
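A sketch of the intermediate-table approach (the table, partition, and column names are placeholders, and source_db is the assumed dblink):
- create table username.mid_p200901 as
-   select * from username.part_table where 1 = 0;
-
- insert /*+ append */ into username.mid_p200901
-   select * from username.part_table@source_db
-    where created_date >= to_date('20090101','yyyymmdd')
-      and created_date <  to_date('20090201','yyyymmdd');
- commit;
-
- alter table username.part_table
-   exchange partition p200901 with table username.mid_p200901
-   without validation;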
Some may ask why we do not just insert into the partitioned table directly from several parallel sessions. Without the direct-path (APPEND) method that works, but the insert performance is low; with the direct-path method, each session holds a TM lock in mode 6 (exclusive) on the table, so multiple sessions cannot insert at the same time. (Update: insert into tablename partition (partname) select * from tablename where ... is simpler and more efficient, because the direct-path insert then locks only the named partition, so different sessions can load different partitions concurrently.)
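With the partition-extended syntax from the update, the whole load for one partition becomes a single statement, and different sessions can each load a different partition at the same time (a sketch with the same placeholder names):
- insert /*+ append */ into username.part_table partition (p200901)
-   select * from username.part_table@source_db
-    where created_date >= to_date('20090101','yyyymmdd')
-      and created_date <  to_date('20090201','yyyymmdd');
- commit;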
During the migration, divide the data into two parts: historical tables and dynamic tables. Import the historical tables and build their indexes before the migration window; this greatly reduces the time the business system has to be interrupted during the migration.
Remove junk data before migration.
During the data import, make sure the tables have no indexes, no constraints (except NOT NULL), and no triggers; rebuild the indexes after the data has been imported. When creating the indexes, run the creation scripts in several processes at the same time; if an index fails to be created in parallel, remove its PARALLEL attribute and create it again.
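For example, each session can run a script containing statements like the sketch below (index and column names are placeholders); if the parallel build fails, drop the PARALLEL clause and rerun, and in any case set the degree back afterwards so it does not affect execution plans:
- create index username.idx_table1_col1 on username.table1 (col1)
-   parallel 4 nologging;
- alter index username.idx_table1_col1 noparallel;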
When creating constraints, create the CHECK constraints, primary keys, and unique keys first, and then the foreign key constraints. Create them in the ENABLE NOVALIDATE state, which greatly reduces the constraint creation time; after the migration is complete, set them back to ENABLE VALIDATE.
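For example (the constraint and column names are placeholders):
- alter table username.table1
-   add constraint fk_table1_table2 foreign key (table2_id)
-   references username.table2 (id) enable novalidate;
-
- -- after the migration is complete
- alter table username.table1
-   modify constraint fk_table1_table2 enable validate;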
You can use dbms_stats.export_schema_stats and dbms_stats.import_schema_stats to carry over the statistics from the original database instead of gathering them again.
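A sketch of moving the schema statistics through a staging table (MIG_STATS and the schema name are placeholders):
- -- on the source database
- exec dbms_stats.create_stat_table('USERNAME','MIG_STATS');
- exec dbms_stats.export_schema_stats(ownname => 'USERNAME', stattab => 'MIG_STATS');
- -- copy the USERNAME.MIG_STATS table to the target (exp/imp or dblink), then
- exec dbms_stats.import_schema_stats(ownname => 'USERNAME', stattab => 'MIG_STATS');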
As you can see, all of the above applies to 9i. In fact, much of it is still applicable in 10g and even 11g environments. Of course, these techniques are not only useful for migrating a whole database; they can also be applied when copying individual tables to another database.
Not mentioned here are materialized views, advanced replication, triggers, and similar techniques, because they all require changes to the production database and have a significant impact on its operation. They should be considered only when the downtime requirement is particularly strict and the migration cannot be completed within the allowed window.
In terms of migration experience, only a complete plan and thorough testing can guarantee success. These are just a few tips; if you are interested in the full migration process, we can discuss that topic separately.