In this example we shorten the downtime of an Oracle database migration by using the export/import method in two phases; we do not go into the detailed steps of export/import, database links, or parallel processing. The approach is the following: first, only the data structure is migrated to the new system while the system is still running, and then only the data is migrated during the downtime window. Other export/import parameters (such as direct, buffer, and log) are needed to tune the performance and behavior of the tools used. Depending on the data volume and the allowable downtime, the migration may have to go through more stages than shown here.

1. Migrate the logical structure of the source database to the target database (this can be completed before the migration, without downtime).

* Export only the database structure (tables, indexes, constraint definitions, PL/SQL stored procedures, and so on) from the source database. The command is as follows:

  exp sys/password file=structure.dmp rows=n indexes=y constraints=y full=y

* Import the database structure into the target database without constraints or indexes. The command is as follows:

  imp sys/password file=structure.dmp constraints=n indexes=n full=y

These operations do not need much tuning, because they can be executed while the system is still in operation.

2. Import only the data from the source database into the target database (this step takes place once the source database is ready for the upgrade).

* To preserve data consistency, we must make sure that nobody can change the database. This usually means shutting down the application servers and all interfaces and putting the database into restricted mode.

* Export only the rows of the database tables from the source database. The command is as follows:

  exp sys/password file=data.dmp rows=y constraints=n indexes=n full=y

* Import only the table rows from the exported file. The command is as follows:

  imp sys/password file=data.dmp constraints=n indexes=n full=y ignore=y

Constraints and indexes are deliberately not imported, to avoid temporary data inconsistencies and the index-maintenance overhead during data loading. After all data has been migrated successfully, create the indexes and constraints.

The following export/import parameters can also improve the performance of the data migration:

* direct: if set to Y, export/import bypasses the conventional SQL layer and reads the data directly from the data files, which is very fast. Restrictions: before Oracle 8.1.5, rows containing large objects or object columns are not exported, so the conventional path must be used for them; furthermore, the client character set (set with the NLS_LANG variable) must match the character set of the database.

* recordlength: measured in bytes, it specifies how much data is written to or read from the export file in a single request. The maximum value is 65536 (64 KB); the default is operating-system specific, usually 1 KB. Under normal circumstances 64 KB is best.

* buffer: used only for conventional-path export and import (direct=n). It specifies the buffer size in bytes, which determines the maximum number of rows in the array fetched at a time during export and in the array inserted at a time during import.

* commit: the default value is N, meaning that a commit is performed only at the end of each table (or of each piece of a table). When importing a large table without enough rollback space, set this parameter to Y; in that case, make the buffer large enough to avoid the performance problems caused by over-committing.
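For illustration only, a direct-path export of the data might combine these parameters as follows (the file names and values are examples, not recommendations; the buffer parameter would apply to a conventional-path run instead):

  exp sys/password file=data.dmp log=data_exp.log rows=y constraints=n indexes=n full=y direct=y recordlength=65536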
As an alternative, very large tables can be migrated with the "parallel direct-path load and insert" method over a database link, because exporting database tables of dozens of GB to disk and importing them back into the new database from disk may simply not be feasible. Using "parallel direct-path load and insert" over a database link can also be much faster than the export/import method, because the data does not have to be staged anywhere: it is transferred directly over the network. The commands are as follows (a sketch of creating such a database link appears at the end of this step):

  alter session enable parallel dml;
  alter table table_name nologging;
  insert /*+ append parallel(table_name, 4) */ into table_name
    select * from table_name@dblink;
  alter table table_name logging;

Do not forget to reset the table or tablespace back to logging mode. Starting with Oracle 9i, the insert command can use a NOLOGGING hint, so you no longer have to switch the table or tablespace out of logging mode manually. The command is as follows:

  insert /*+ append nologging parallel(table_name, 4) */ into table_name
    select * from table_name@dblink;

The PARALLEL hint above means that the insert itself is executed in parallel. To extract the data from the source database with a parallel query, we could add a PARALLEL hint to the select as well, but this usually does not help much: by the very nature of a full table scan, disk I/O and the network tend to be the bottlenecks. Command:

  insert /*+ append nologging parallel(table_name, 4) */ into table_name
    select /*+ parallel(table_name, 4) */ * from table_name@dblink;

See appendix C for a simple script that creates the insert and update commands for the tables involved.

* In a local area network, split-mirror technology ("Business Continuance Volumes" on EMC systems) can be used to open an exact copy of the database on a temporary server, so that the data can be loaded from several source environments at the same time. This can speed up the data migration when moving from an old, slow system to a new, fast one. During the upgrade this technique temporarily requires a server with the same operating system and platform, and you have to split the data by table or schema name so that part of it is imported from one server and the rest from the others.

The methods above can be combined to migrate the schema objects: for example, use export/import for the small tables and schemas, and "parallel direct-path load and insert" over database links for the large tables.
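The insert commands in this step all read from table_name@dblink. As a minimal sketch (link name, credentials, and connect string are placeholders to adapt to your environment), such a link can be created in the target database like this:

  create database link dblink
    connect to app_owner identified by password
    using 'source_db_tns_alias';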
3. Create the indexes and activate the constraints (system downtime required).

* Method 1: use the structure export file described earlier to create the indexes and constraints in one pass. The command is as follows:

  imp sys/password file=structure.dmp constraints=y indexes=y full=y ignore=y

This method has a problem: with import we can specify neither the degree of parallelism for index creation nor the NOLOGGING attribute. Even if we disable redo writes to disk with undocumented parameters, the NOLOGGING attribute would still improve performance by triggering fewer checkpoints and using less CPU. Fortunately, there is an import feature that gets around this problem: it extracts the index-creation script from the export file.

* Method 2: use the indexfile option of import to create the indexes separately, and then create the constraints with import as usual. A long index build can be shortened considerably when it runs in parallel and with the NOLOGGING attribute. Since the import syntax itself does not allow us to specify how the indexes are created, we first have to obtain the index-creation script and then modify it to our needs. The indexfile option of import makes this very convenient. The command is as follows:

  imp sys/password file=structure.dmp constraints=y indexes=y full=y ignore=y show=y indexfile=indexes.sql

This produces the data definition language (DDL) commands for the stand-alone indexes as well as for the constraint-related indexes (for primary key and unique constraints, indexes are created implicitly). Note that the show=y parameter guarantees that the database is not changed: the export file is only scanned from start to end, and every index-creation statement found along the way is written to the SQL script named by the indexfile option (indexes.sql). Next, we can open that SQL script in a text editor, replace LOGGING with NOLOGGING, and add a PARALLEL clause to each large index-creation command. At the beginning of the script we can also add alter session commands to adjust parameters such as sort_area_size and db_file_multiblock_read_count, both of which can increase the index creation speed considerably (see the discussion of database performance tuning). After the modifications, run the script with SQL*Plus. The command is as follows:

  sqlplus sys/password @indexes.sql

The index file (indexfile) contains the stand-alone indexes and the indexes required by primary key or uniqueness constraints, but not the constraint definitions themselves. To create and activate those constraints, we must run the import against structure.dmp once more. The command is as follows:

  imp sys/password file=structure.dmp constraints=y indexes=n full=y ignore=y

In the command above we set indexes=n because the indexes were already created in the previous step. And since all required indexes already exist, enabling the constraints should complete very quickly: no index has to be built for the unique/primary keys.

4. Verify that no object definition in the source database has changed since the database structure was exported (performed in parallel with step 2 or step 3).

* Query dba_segments as described earlier and migrate any new objects to the new database in whatever way is appropriate (for example, extract the data definition language of the individual objects with Enterprise Manager or Toad). There should be few or no new objects, because the configuration of the application is supposed to be frozen once the database structure has been exported. This task can usually be performed at the same time as the data migration or the index creation.
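One hedged way to run this check, assuming the database link from step 2 connects as a user allowed to read the DBA views on both sides, is to compare the object inventories directly (dba_objects is used here instead of dba_segments so that PL/SQL objects are covered too):

  -- objects that exist in the source but are still missing in the target
  select owner, object_type, object_name from dba_objects@dblink
  minus
  select owner, object_type, object_name from dba_objects;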
Note: as long as "direct loading" is used and all indexes on the tables involved are disabled, hardly any rollback and redo should be generated. This is one of the reasons why the primary key/unique key constraints should be disabled during data loading: they all require index support.
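A minimal sketch of that disable/enable cycle (table and constraint names are hypothetical; note that disabling a primary key by default also drops its supporting index, which is rebuilt when the constraint is enabled again):

  alter table app_owner.orders disable constraint orders_pk;
  -- ... load the data as described in step 2 ...
  alter table app_owner.orders enable constraint orders_pk;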