Oracle Data Migration Solutions

Source: Internet
Author: User

Oracle data migration generally means moving data from one place to another. The main scenarios include building a test environment from the production system, copying data from the intranet to the Internet, and upgrading the database server hardware. Different migration methods can be chosen depending on the data volume and the system architecture. Note: the methods below do not cover incremental updates, real-time data synchronization, or logical data conversion; if you need those, use a third-party ETL tool or another Oracle data synchronization technology.

I. Common examples

1.1 How do you build a test environment at a customer site?

The usual solution is the imp/exp tool pair: export the data from the source database with exp, then import it into the target database with imp. The speed of imp/exp is limited mainly by the disk and the network.

Data volume: GB; export time: 5 minutes; import time: 23 minutes; export file size: 64 MB. Environment: single CPU, MB of memory. To maximize speed, direct-path export and the maximum I/O buffer were used, and both the export and the import ran on the server.

1.2 Is there a faster way?

Yes, still with impdp. Instead of exporting a dump file and importing it, impdp can pull the data from the source database directly into the target database over a database link:

    impdp testi@<target_db> directory=DMPDIR schemas=TESTI network_link=<source_db_dblink> remap_schema=TESTI:TESTA

The statement above imports the data of user TESTI in the source database into user TESTA in the target database. This is the most convenient way to migrate data within a LAN, but it may also be the slowest.

1.3 Is there an even faster way?

Yes: use impdp/expdp instead. As with imp/exp, the export runs against the source database and the import against the target database.
The operation speed can be greatly improved: the speed of impdp/expdp is limited mainly by the disk and is independent of the network.

Original data size: 1.5 GB; expdp export time: 5 minutes; impdp import time: 22 minutes; export file size: 588 MB. Environment: single CPU, 700 MB of memory, degree of parallelism = 1.

Didn't you say it would be faster? Why is the speed similar to the imp/exp run in 1.1? See the explanation in Part IV.

1.4 Do you really want it to be fast?

Use tablespace migration: export only the metadata of the tablespace, copy it together with the data files to the new database, and import the metadata there. The metadata imported and exported is generally under 5 MB, so the operation is quite fast, but there are many restrictions.

Export time: 1 minute; import time: 3 minutes; export file: 60 MB plus a 1.5 GB data file.

1.5 How do you move data from Linux to Windows?

Check the view v$transportable_platform. If the endian formats are consistent, the data files can be copied directly; otherwise use RMAN, impdp/expdp, or imp/exp.

1.6 You have an Excel sheet whose data must be applied to a remote customer database. How do you update it?

Use PL/SQL Developer: copy the data, paste it into the grid, and commit.

1.7 You need to move a few tables from the production database to the test database. What is the fastest way?

Use dblink plus a script, or a remote import with impdp.

II. Partial data migration

2.1 Wide area network migration

2.1.1 PL/SQL Developer

Small data volumes over a WAN are usually migrated with PL/SQL Developer. Open the Excel file locally and copy the data, then paste it into PL/SQL Developer on the remote server through a Remote Desktop session. The operation is easy:

Step 1: copy the data locally.
Step 2: open a Remote Desktop session.
Step 3: paste the data into PL/SQL Developer on the remote machine.
Step 4: save the data.

This method works well for small data volumes.
With a large amount of data, pasting table after table is tedious, and a single paste may hang for ten minutes.

2.1.2 imp/exp

Large data volumes over a WAN are usually migrated with imp/exp. First run exp against the source database to produce a compressed export, send it to the target side over the network, then run imp against the target database.

Step 1: connect to the source database and run exp. Direct-path export is generally used; it can be more than three times faster than conventional-path export. Parameter notes:

    parfile: the export parameter configuration file
    log: the file the export log is written to
    recordlength=65535: sets the I/O buffer to 64 KB (the maximum value of this parameter)
    direct=y: export data via the direct path, bypassing the SGA
    owner=testi: export only user testi's data

Step 2: connect to the target database and run imp. Parameter notes:

    parfile: the import parameter configuration file
    log: the file the import log is written to
    feedback=1000: print a "." for every 1000 rows imported
    buffer=10000000: size of the import buffer
    fromuser=testi: import only user testi's data
    touser=testi: import the data into user testi

2.2 Local area network migration

Within a LAN the solutions are flexible. Common methods include imp/exp, impdp/expdp, dblink + script, and tablespace migration. imp/exp was covered in 2.1.2; the other methods are introduced below.

2.2.1 dblink + script

2.2.1.1 Overview

A dblink connects multiple distributed databases so that one database can access the others and provide a unified service. Data can be transferred with the "dblink + script" method. The configuration is flexible, but writing the scripts is tedious: you need a separate script for each table.
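Writing one statement per table by hand can be avoided by generating the statements from the data dictionary. A sketch, assuming a database link named LINKNAME (the hypothetical name used in the next section) and that the remote tables live in the connected user's schema:

```sql
-- Generate one CTAS statement per remote table; spool the output and run it.
-- LINKNAME is a hypothetical database link name.
SELECT 'CREATE TABLE ' || table_name ||
       ' AS SELECT * FROM ' || table_name || '@LINKNAME;' AS stmt
FROM   user_tables@LINKNAME;
```

This trades per-table scripting for a single dictionary query, at the cost of losing any per-table filtering you might have wanted in the individual scripts.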
2.2.1.2 Implementation

The implementation has two main steps.

1) Create the database link:

    create database link LINKNAME connect to DBUSER identified by password
    using '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.9)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=ORCL)))';

2) Run the extraction script. For example, to extract the data of table B into table A:

    create table A as select * from B@LINKNAME;

Write such a script for every table to be synchronized.

2.2.2 impdp/expdp

2.2.2.1 Overview

impdp/expdp is the upgraded version of imp/exp, introduced in Oracle 10g. Its main enhancements:

1) performance optimization, with significantly faster import and export;
2) parallel execution, which speeds up import and export further;
3) an interactive interface, so import and export jobs can be paused at any time;
4) multiple table-loading policies, such as append, replace, and skip;
5) direct exchange of database objects between databases;
6) estimation of the export file size;
7) viewing of import and export progress;
8) automatically generated import and export log files in the dump directory.

However, impdp/expdp has clear restrictions:

1) its dump files are incompatible with those generated by imp/exp;
2) it can be invoked remotely, but the dump files must reside on the server.

Overall impdp/expdp has clear advantages, so prefer it over imp/exp whenever it can be used.

2.2.2.2 Implementation

The impdp/expdp flow has three main steps.

1) Create a directory mapping in the database:

    create directory DMPDIR as 'C:\oracle\dump\';

   Grant user USER01 read and write permission on the directory:

    grant read, write on directory DMPDIR to USER01;

2) Run the export script. Parameter notes:

    directory: the path the dump file is exported to.
This is the directory created in step 1. The other parameters:

    dumpfile: name of the exported file
    logfile: name of the log file
    parallel: degree of parallelism of the export job; for high export speed it can be set to the number of CPUs minus 1
    job_name: name of the export job
    schemas: the user whose data is to be exported

3) Run the import script. First create the operating-system directory mapping on the target database, then copy the dump file from step 2 into the corresponding directory of the target database, and run the import. Parameter notes:

    directory: path of the import file
    dumpfile: name of the import file
    logfile: where the generated log file is stored
    parallel: degree of parallelism of the job
    job_name: name of the import job
    schemas: the user to import
    remap_schema=TESTI:TESTB: import user TESTI's data into user TESTB
    table_exists_action=replace: if an imported table already exists, replace it

2.2.2.3 Interface notes

1) The export interface shows that the whole dump needs about MB of storage space; expdp also supports estimating the space without actually exporting the data.

2) Status view: to watch the import progress, open a new window and attach to the job:

    expdp test@<target> attach=TESTIMP
    Export> status

2.2.3 Tablespace migration

2.2.3.1 Overview

Tablespace migration is roughly equivalent to copying a database file to another database on a USB drive. The principle is simple, but the operation is complicated and has many restrictions:

1) The source and target databases must use the same character set and the same national character set; check the view v$nls_parameters to confirm.
2) The source and target databases should run the same Oracle version.
3) Tablespaces containing SYS or SYSTEM user objects cannot be moved.
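Restriction 1 can be verified with a quick query; a sketch using the v$nls_parameters view mentioned above, to be run on both source and target so the values can be compared:

```sql
-- Character set and national character set must match between source and target.
SELECT parameter, value
FROM   v$nls_parameters
WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
```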
2.2.3.2 Implementation

Tablespace migration can be done with either imp/exp or impdp/expdp, in three main steps.

1) Run the self-contained check on the tablespace set:

    SQL> exec dbms_tts.transport_set_check('tbs1', true);
    SQL> select * from v$transport_set_violations;

If the query returns no rows, the tablespace can be migrated; otherwise, choose another method based on the violations reported.

2) Export the tablespace metadata:

    SQL> alter tablespace users read only;
    CMD> expdp test@orcl directory=DMPDIR dumpfile=tbs_dmp.dmp transport_tablespaces=USERS

3) Import the tablespace. Copy the data file USER01.DBF generated in step 2 to the target database (a USB drive will do), run the import on the target, and make the tablespace writable again:

    CMD> impdp test@orcl directory=DMPDIR dumpfile=tbs_dmp.dmp transport_datafiles='C:\...\user01.dbf'
    SQL> alter tablespace users read write;

2.2.3.3 Why it is fast

Tablespace migration exports only the metadata of the tablespace and plugs it into the new database, so it is very fast.

III. Full-database migration

Full-database migration is generally used when an environment is set up for the first time: the whole database is moved to another machine, which suits building an independent test environment. The imp/exp and impdp/expdp tools described above can also perform a full-database migration, but they are slow and often report errors; the following methods work better.

3.1 Cold backup migration

Shut down the source database, copy its data files to the same locations on the new machine, and open the new database directly. The process is simple and well controlled, but it has limitations: it cannot cross operating-system hardware platforms or major database versions.
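Whether data files can be copied directly between two platforms comes down to endian format; the platforms Oracle can convert between are listed in the v$transportable_platform view mentioned in section 1.5:

```sql
-- Platforms and their endian formats; data files move directly only between
-- platforms that share the same endian format.
SELECT platform_id, platform_name, endian_format
FROM   v$transportable_platform
ORDER  BY platform_id;
```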
The migration steps on Windows are roughly as follows:

1) shut down the source database;
2) on the new machine, create operating-system directories matching the data file paths of the source database;
3) copy the data files, control files, parameter file, and password file of the source database to the machine hosting the new database;
4) start the source database again;
5) create the other directories recorded in the control file;
6) create an instance with oradim;
7) start the target database;
8) run the utlrp.sql script to recompile all invalid objects.

3.2 RMAN migration

RMAN is convenient for full-database migration, and its main advantage is support for crossing operating-system hardware platforms. The following steps migrate a Linux system to Windows (between 32-bit Linux and 32-bit Windows the data files can actually be copied directly, so this is only an example):

1) open the database in read-only mode:

    SQL> startup open read only;

2) convert the data files:

    CMD> rman target /
    RMAN> run {
            convert database on target platform
            transport script '/home/oracle/temp/transcript.sql'
            convert script '/home/oracle/temp/convert.sql'
            to platform 'Microsoft Windows IA (32-bit)'
            db_file_name_convert '/oracle/oradata/orcl' '/home/oracle/temp';
          }

3) copy the parameter file, data files, and conversion scripts to Windows;
4) create a database instance on Windows, then run convert.sql and transcript.sql in order;
5) open the database and run utlrp.sql to recompile invalid database objects.

IV. Summary

4.1 How do you choose among the migration solutions?

The time spent can range from ten-odd minutes to ten-odd hours. Pick the right solution and you are done in minutes; otherwise you stay up late and work overtime.
In general, consider the tools in the following order of priority (in effect, read this document from back to front):

1) for a whole database, prefer cold backup migration and RMAN migration; otherwise prefer tablespace migration;
2) impdp/expdp;
3) imp/exp;
4) dblink + script.

Which scheme to choose also depends on the implementation environment: perhaps the environment does not support your preferred plan, but there is always one that fits.

4.2 What is the difference between impdp/expdp and imp/exp?

Roughly: impdp/expdp = imp/exp + direct path + parallelism. impdp/expdp is faster than imp/exp because it supports parallel execution and unloads data via the direct path by default. Beyond performance, impdp/expdp offers several attractive features:

1) parallel execution to speed up import and export;
2) an interactive interface, so jobs can be paused at any time;
3) multiple table-loading policies, such as append, replace, and skip;
4) direct exchange of objects between databases;
5) export file size estimation;
6) import and export progress viewing;
7) automatically generated log files in the dump directory.

Back to the earlier question: why were imp/exp and impdp/expdp similarly fast? Because the impdp/expdp test set the degree of parallelism to 1, giving up the most important feature. How could it be faster?

Since parallelism matters so much, how high should it be set? Set it too high and the server CPU usage soars to 100% without improving the import speed; set it too low and there is no visible speedup. Recommended: equal to the number of server CPUs, but not higher than the number of dump files.
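The recommendation above can be sketched as an expdp/impdp pair. The connection strings and file names are hypothetical, and a 4-CPU server is assumed; the %U wildcard makes Data Pump write one dump file per parallel worker, so the file count matches the parallel degree:

```shell
# Export with parallelism matched to the CPU count (assumed: 4 CPUs).
expdp testi@orcl directory=DMPDIR dumpfile=exp_%U.dmp logfile=exp.log \
      schemas=TESTI parallel=4

# Import the same file set with the same parallel degree.
impdp testa@orcl directory=DMPDIR dumpfile=exp_%U.dmp logfile=imp.log \
      remap_schema=TESTI:TESTA parallel=4
```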
