Starting with Oracle 10g, Oracle provides the more efficient Data Pump (expdp/impdp) for data export and import. The old exp/imp utilities can still be used, but they are no longer recommended. Note: expdp/impdp and exp/imp are incompatible. In other words, files exported with exp can only be imported with imp, and files exported with expdp can only be imported with impdp.
Data Pump consists of the following three parts:
- Client tool: expdp/impdp
- Data Pump API (DBMS_DATAPUMP)
- Metadata API (DBMS_METADATA)
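To make the relationship between the client tool and the APIs concrete, the following is a minimal PL/SQL sketch that drives a schema-mode export directly through DBMS_DATAPUMP, which is what expdp does behind the scenes. The job name, directory object, and HR schema are illustrative assumptions, not values from this article.

DECLARE
  h1        NUMBER;          -- Data Pump job handle
  job_state VARCHAR2(30);
BEGIN
  -- Open a schema-mode export job (the job name is an arbitrary example)
  h1 := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                           job_mode  => 'SCHEMA',
                           job_name  => 'DEMO_EXP_JOB');
  -- Dump file and log file are written to an existing directory object
  DBMS_DATAPUMP.ADD_FILE(h1, 'demo_exp.dmp', 'DPUMP_DIR1',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  DBMS_DATAPUMP.ADD_FILE(h1, 'demo_exp.log', 'DPUMP_DIR1',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  -- Export only the HR schema
  DBMS_DATAPUMP.METADATA_FILTER(h1, 'SCHEMA_EXPR', 'IN (''HR'')');
  -- Start the job and wait for it to finish
  DBMS_DATAPUMP.START_JOB(h1);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h1, job_state);
  DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || job_state);
END;
/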
In general, we equate expdp/impdp with Data Pump, but as the list above shows, the client tools are only one part of it. The actual work is done by the two APIs; they run in the background and are rarely noticed. However, if inexplicable errors (such as internal errors) occur, it is usually because these two packages have become corrupted, and you can run scripts to recompile them. By default, a user can export and import data only within his own schema. To export or import data in other schemas, the user must be granted the following two roles (a GRANT example follows the list):
- DATAPUMP_EXP_FULL_DATABASE
- DATAPUMP_IMP_FULL_DATABASE
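For example, a minimal sketch of granting both roles; the user name scott is an assumption for illustration:

SQL> GRANT DATAPUMP_EXP_FULL_DATABASE, DATAPUMP_IMP_FULL_DATABASE TO scott;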
Of course, the SYS and SYSTEM accounts and the DBA role have these two roles by default.

Data Pump moves data using one of the following methods:
- Data file copy: this is the fastest method. The dump file contains only metadata, and the data files themselves are copied at the operating system level. The relevant parameters are TRANSPORT_TABLESPACES and TRANSPORTABLE=ALWAYS (see the example after this list).
- Direct path loading: this is the fastest method apart from data file copy. Data Pump uses it whenever possible and falls back to another method only when it is unavailable (for example, for tables containing BFILE columns).
- External tables: used only when the first two methods are unavailable.
- Conventional path loading: used only when none of the preceding methods is available; this method has the poorest performance.
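As a hedged illustration of the data file copy method, the following sketch exports the metadata for a transportable tablespace set; the tablespace name and file names are assumptions:

expdp system DIRECTORY=dpump_dir1 DUMPFILE=tts_meta.dmp LOGFILE=tts_meta.log TRANSPORT_TABLESPACES=example_ts TRANSPORT_FULL_CHECK=YES

The data files of example_ts would then be copied at the operating system level and plugged into the target database with impdp and its TRANSPORT_DATAFILES parameter.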
Data Pump jobs. When expdp/impdp is executed, the export or import is actually carried out by a job. A Data Pump job consists of the following three parts:
- Master process: controls and coordinates the entire job.
- Master table: records the metadata of the database objects in the dump file. It is written to the dump file when expdp finishes and is read first by impdp, so that the contents of the dump file are known.
- Worker processes: execute the actual export and import work. Multiple worker processes are created automatically and run in parallel as the job requires, but their number cannot exceed the value of the PARALLEL parameter (see the example after this list).
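A hedged sketch of an export that uses several workers; the schema, directory, and file names are assumptions. The %U wildcard lets each worker write to its own dump file:

expdp system SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_%U.dmp LOGFILE=hr_par.log PARALLEL=4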
The current status of a Data Pump job can be monitored through the screen output and the log file of the job. You can also query the views DBA_DATAPUMP_JOBS, USER_DATAPUMP_JOBS, or DBA_DATAPUMP_SESSIONS in the database.
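For example, a simple query against the first of these views (the column list is a selection, not exhaustive):

SQL> SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs;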
For long-running jobs, you can check the current degree of completion and estimate how long the job will still take by querying the dynamic view V$SESSION_LONGOPS. The relevant columns are:
- USERNAME: job owner
- OPNAME: job name
- TARGET_DESC: job operation
- SOFAR: megabytes transferred so far during the job
- TOTALWORK: estimated total number of megabytes in the job
- UNITS: megabytes (MB)
- MESSAGE: a formatted status message of the form 'job_name: operation_name: nnn out of mmm MB done'

Creating a directory. Unlike exp/imp, Data Pump cannot run purely on the client: it executes on the server, and all the files it generates are stored on the server. Therefore, you must first create a directory object in Oracle, for example:
SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';
After creating the directory object, you must grant the READ and WRITE privileges on it to the user who runs Data Pump, as shown in the following code:
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;
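With the directory in place, a minimal sketch of exporting and importing the hr schema might look like the following; the dump and log file names are assumptions:

expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp LOGFILE=hr_exp.log SCHEMAS=hr
impdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp LOGFILE=hr_imp.log SCHEMAS=hr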