I. Uses of EXPDP and IMPDP
Oracle Database 10g introduces the new Data Pump technology. The Data Pump Export and Import tools (EXPDP and IMPDP) can be used to:
1) Implement logical backup and logical recovery.
2) Move objects between database users (schemas).
3) Move objects between databases.
4) Implement tablespace migration.
II. Differences between Data Pump export and traditional export
Before 10g, traditional export and import used the EXP and IMP tools. Starting from 10g, Oracle not only retains the original EXP and IMP tools, but also provides the Data Pump Export and Import tools EXPDP and IMPDP. Precautions when using EXPDP and IMPDP:
1) EXP and IMP are client tool programs that can be used either on the client or on the server.
2) EXPDP and IMPDP are tool programs on the server. They can only be used on the ORACLE server and cannot be used on the client.
3) IMP only applies to EXP export files, not EXPDP export files; IMPDP only applies to EXPDP export files, not EXP export files.
Data Pump export has four modes: table mode, schema mode, tablespace mode, and full database mode.
III. EXPDP Parameters
EXPDP command-line options can be viewed with expdp help=y:
E:\emp> expdp help=y
Export: Release 10.2.0.1.0 - Production
Copyright (c) 2003, 2005, Oracle. All rights reserved.
The Data Pump export utility provides a mechanism for transferring data objects between Oracle databases. The utility is invoked with the following command:
Example: expdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp
You can control how the export runs by entering the expdp command followed by various parameters. To specify parameters, you use keywords:
Format: expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: expdp scott/tiger DUMPFILE=scott.dmp DIRECTORY=dmpdir SCHEMAS=scott
or TABLES=(T1:P1, T1:P2), if T1 is a partitioned table
USERID must be the first parameter on the command line.
Keyword description (default)
------------------------------------------------------------------------------
ATTACH connects to an existing job, for example, ATTACH [=job name].
COMPRESSION reduces the size of the dump file contents. Valid keyword values are:
(METADATA_ONLY) and NONE.
CONTENT specifies the data to unload. Valid keyword values are:
(ALL), DATA_ONLY and METADATA_ONLY.
DIRECTORY is the directory object to be used for dump files and log files.
DUMPFILE is the list of destination dump files (expdat.dmp),
for example, DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ENCRYPTION_PASSWORD is the password key for creating encrypted column data.
ESTIMATE calculates job estimates. Valid keyword values are:
(BLOCKS) and STATISTICS.
ESTIMATE_ONLY calculates job estimates without performing the export.
EXCLUDE is used to exclude specific object types, such as EXCLUDE=TABLE:EMP.
FILESIZE specifies the maximum size of each dump file, in bytes.
FLASHBACK_SCN is the SCN used to set the session snapshot back to.
FLASHBACK_TIME is used to obtain the SCN closest to the specified time.
FULL exports the entire database (N).
HELP displays help messages (N).
INCLUDE includes specific object types, such as INCLUDE=TABLE_DATA.
JOB_NAME is the name of the export job to create.
LOGFILE is the log file name (export.log).
NETWORK_LINK is the name of the remote database link to the source system.
NOLOGFILE does not write a log file (N).
PARALLEL changes the number of active workers for the current job.
PARFILE specifies the parameter file.
QUERY is the predicate clause used to export a subset of a table.
SAMPLE is the percentage of data to be exported.
SCHEMAS is the list of schemas to export (login schema).
STATUS is the frequency (in seconds) at which job status is monitored;
the default (0) shows new status when it is available.
TABLES identifies a list of tables to export - one schema only.
TABLESPACES identifies the list of tablespaces to export.
TRANSPORT_FULL_CHECK verifies the storage segments of all tables (N).
TRANSPORT_TABLESPACES is the list of tablespaces from which metadata will be unloaded.
VERSION is the version of objects to export. Valid keyword values are:
(COMPATIBLE), LATEST, or any valid database version.
The following commands are valid in interactive mode.
Note: abbreviations are allowed.
Command description
------------------------------------------------------------------------------
ADD_FILE adds a dump file to the dump set.
CONTINUE_CLIENT returns to logging mode. If the job is idle, the job is restarted.
EXIT_CLIENT exits the client session and leaves the job running.
FILESIZE is the default file size (in bytes) for subsequent ADD_FILE commands.
HELP summarizes the interactive commands.
KILL_JOB detaches from and deletes the job.
PARALLEL changes the number of active workers for the current job.
PARALLEL=<number of workers>.
START_JOB starts/resumes the current job.
STATUS is the frequency (in seconds) at which job status is monitored;
the default (0) shows new status when it is available.
STATUS[=interval]
STOP_JOB performs an orderly shutdown of job execution and exits the client.
STOP_JOB=IMMEDIATE performs an immediate shutdown of the
Data Pump operation.
1) ATTACH
This option is used to establish an association between a client session and an existing export job. The syntax is as follows:
ATTACH = [schema_name.] job_name
schema_name is used to specify the schema name, and job_name is used to specify the export job name. Note: if you use the ATTACH option on the command line, you cannot specify any other options except the connection string and the ATTACH option, as shown in the following example:
expdp scott/tiger ATTACH=scott.export_job
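After attaching with the command above, interactive commands can be issued at the Export> prompt. A minimal sketch (the job name scott.export_job is taken from the example above):
Export> STATUS
Export> STOP_JOB=IMMEDIATE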
2) CONTENT
This option is used to specify the content to be exported. The default value is ALL.
CONTENT = {ALL | DATA_ONLY | METADATA_ONLY}
When CONTENT is set to ALL, both the definitions of the exported objects and all their data are exported; when set to DATA_ONLY, only object data is exported; when set to METADATA_ONLY, only object definitions are exported.
expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dump CONTENT=METADATA_ONLY
3) DIRECTORY
Specifies the directory where the dump and log files are located
DIRECTORY = directory_object
directory_object is used to specify the directory object name. Note that a directory object is an object created with the CREATE DIRECTORY statement, not an OS directory.
expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dump
Create a directory:
SQL> CREATE DIRECTORY dump_dir AS 'd:\dump';
Directory created.
SQL> GRANT read, write ON DIRECTORY dump_dir TO scott;
Grant succeeded.
Query the directory objects that have been created:
SELECT * FROM dba_directories;
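For example, to show only the directory object created above (the name DUMP_DIR follows the earlier CREATE DIRECTORY example), a minimal query is:
SELECT directory_name, directory_path FROM dba_directories WHERE directory_name = 'DUMP_DIR';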
4) DUMPFILE
Specifies the name of the dump file. The default name is expdat.dmp.
DUMPFILE = [directory_object:] file_name [,…]
directory_object is used to specify the directory object name, and file_name is used to specify the dump file name. Note that if directory_object is not specified, the export tool automatically uses the directory object specified by the DIRECTORY option.
expdp scott/tiger DIRECTORY=dump1 DUMPFILE=dump2:a.dmp
5) ESTIMATE
Used to estimate the disk space occupied by the exported table. The default value is BLOCKS.
ESTIMATE = {BLOCKS | STATISTICS}
When it is set to BLOCKS, Oracle estimates the space occupied by an object as the number of data blocks occupied by the object multiplied by the data block size. When it is set to STATISTICS, the space is estimated from the most recent statistics.
expdp scott/tiger TABLES=emp ESTIMATE=STATISTICS DIRECTORY=dump DUMPFILE=a.dump
6) ESTIMATE_ONLY
Specifies whether to estimate only the disk space occupied by the Export job. The default value is N.
ESTIMATE_ONLY = {Y | N}
When it is set to Y, the export only estimates the disk space that the objects will occupy and does not perform the export job; when it is set to N, the export estimates the disk space and also performs the export operation.
expdp scott/tiger ESTIMATE_ONLY=y NOLOGFILE=y
7) EXCLUDE
This option is used to specify the object types or specific objects to be excluded when the export is performed.
EXCLUDE = object_type [: name_clause] [,…]
Object_type is used to specify the object type to be excluded. name_clause is used to specify the specific object to be excluded. EXCLUDE and INCLUDE cannot be used at the same time.
expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dup EXCLUDE=VIEW
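A name_clause can also be supplied to exclude specific objects. A minimal sketch (the table names EMP and DEPT are assumptions; the double quotes usually need to be escaped on the OS command line, so this is often easier to place in a parameter file):
expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp EXCLUDE=TABLE:"IN ('EMP','DEPT')"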
8) FILESIZE
Specifies the maximum size of each exported dump file, in bytes. The default value is 0, which means the file size is unlimited.
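For example, to split the export into files of at most 100 MB each, FILESIZE can be combined with the %U substitution variable in DUMPFILE, which generates a_01.dmp, a_02.dmp, and so on. A minimal sketch (the file name pattern and size are assumptions):
expdp scott/tiger DIRECTORY=dump DUMPFILE=a_%U.dmp FILESIZE=104857600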
9) FLASHBACK_SCN
Specifies that table data is exported as of a specific SCN.
FLASHBACK_SCN = scn_value
Scn_value is used to identify the SCN value. FLASHBACK_SCN and FLASHBACK_TIME cannot be used at the same time.
expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp FLASHBACK_SCN=358523
10) FLASHBACK_TIME
Export table data at a specific time point
FLASHBACK_TIME = "TO_TIMESTAMP (time_value )"
Expdp scott/tiger DIRECTORY = dumpDUMPFILE = a. dmp FLASHBACK_TIME =
"TO_TIMESTAMP ('25-08-200414: 35: 00', 'dd-MM-YYYYHH24: MI: ss ')"
11) FULL
Specifies full database mode export. The default value is N.
FULL = {Y | N}
If the value is Y, the database is exported.
12) HELP
Specifies whether to display the help information of the EXPDP command line option. The default value is N.
When set to Y, the export options help information is displayed.
expdp help=y
13) INCLUDE
Specifies the object type and related objects to be included during export.
INCLUDE = object_type [: name_clause] [,… ]
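A minimal sketch using a name_clause (the table names EMP and DEPT are assumptions; as with EXCLUDE, the double quotes usually need to be escaped on the OS command line or placed in a parameter file):
expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp INCLUDE=TABLE:"IN ('EMP','DEPT')"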
14) JOB_NAME
Specifies the name of the export job to create. The default value is SYS_XXX.
JOB_NAME = jobname_string
15) LOGFILE
Specifies the name of the export log file. The default name is export.log.
LOGFILE = [directory_object:] file_name
directory_object is used to specify the directory object name, and file_name is used to specify the log file name. If directory_object is not specified, the export automatically uses the value of the DIRECTORY option.
expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp LOGFILE=a.log
16) NETWORK_LINK
Specifies a database link name. If you want to export objects of a remote database into a dump file on the local instance, you must set this option.
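A minimal sketch, assuming a database link named remote_db has already been created in the local database and points to the source database:
expdp scott/tiger DIRECTORY=dump DUMPFILE=remote_emp.dmp NETWORK_LINK=remote_db TABLES=emp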
17) NOLOGFILE
This option is used to specify whether to generate exported log files. The default value is N.
18) PARALLEL
Number of parallel processes that execute the export operation. The default value is 1.
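A minimal sketch of a schema-mode export with four workers; the %U substitution variable is used in DUMPFILE so that each worker can write to its own file (the file name pattern is an assumption):
expdp scott/tiger DIRECTORY=dump DUMPFILE=scott_%U.dmp SCHEMAS=scott PARALLEL=4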
19) PARFILE
Specify the name of the exported parameter file
PARFILE = [directory_path] file_name
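As a sketch, parameters can be placed in a plain text file (here assumed to be named exp.par in the current directory) and referenced with PARFILE, which also avoids OS quoting problems. The file exp.par might contain:
DIRECTORY=dump_dir
DUMPFILE=scott.dmp
LOGFILE=scott.log
SCHEMAS=scott
It is then referenced as: expdp scott/tiger PARFILE=exp.par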
20) QUERY
Where condition used to filter and export data
QUERY = [schema.] [table_name:] query_clause
schema is used to specify the schema name, table_name is used to specify the table name, and query_clause is used to specify the condition clause. The QUERY option cannot be used together with options such as CONTENT=METADATA_ONLY, ESTIMATE_ONLY, and TRANSPORT_TABLESPACES.
expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp TABLES=emp QUERY='WHERE deptno=20'
21) SCHEMAS
This option is used to specify schema mode export. The default is the current user's schema.
22) STATUS
Specifies the frequency (in seconds) at which the detailed status of the export job is displayed. The default value is 0.
23) TABLES
Export in specified table mode
TABLES = [schema_name.] table_name [: partition_name] [,…]
schema_name is used to specify the schema name, table_name is used to specify the name of the table to export, and partition_name is used to specify the name of the partition to export.
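For example, a single partition can be exported. A minimal sketch (the table sales and its partition p1 are assumptions):
expdp scott/tiger DIRECTORY=dump DUMPFILE=sales_p1.dmp TABLES=sales:p1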
24) TABLESPACES
Specifies the list of tablespaces to export.
25) TRANSPORT_FULL_CHECK
This option is used to check dependencies between the tablespaces being transported and the tablespaces that are not. The default value is N.
When it is set to Y, the export checks full (two-way) dependencies: if a tablespace in the transport set contains a table whose index is outside the set, or contains an index whose table is outside the set, an error message is displayed. When it is set to N, only one-way dependencies are checked: if the tablespace containing an index is transported but the tablespace containing its table is not, an error message is displayed; if the tablespace containing a table is transported but the tablespace containing its index is not, no error message is displayed.
26) TRANSPORT_TABLESPACES
Specifies transportable tablespace mode export.
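A minimal sketch of transportable tablespace mode, assuming the USERS tablespace is being transported; the tablespace must first be made read only, and the data files themselves must be copied to the target system separately:
SQL> ALTER TABLESPACE users READ ONLY;
expdp system/manager DIRECTORY=dump DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=users TRANSPORT_FULL_CHECK=Y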
27) VERSION
Specifies the database version of the exported object. The default value is COMPATIBLE.
VERSION = {COMPATIBLE | LATEST | version_string}
When the value is COMPATIBLE, object metadata is generated based on the initialization parameter COMPATIBLE. When the value is LATEST, object metadata is generated based on the actual database version. version_string is used to specify the database version string.
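For example, to create a dump file that can be imported into an earlier database release, a version string can be given. A minimal sketch (10.1 is an assumed target release):
expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp VERSION=10.1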
IV. EXPDP Usage
When the EXPDP tool is used, its dump files can only be stored in the OS directory that corresponds to a DIRECTORY object; you cannot directly specify the OS directory where the dump file is located. Therefore, when using EXPDP you must first create a DIRECTORY object, and the database user must be granted permission to use that DIRECTORY object.
CREATE DIRECTORY dump_dir AS 'C:\emp';
GRANT read, write ON DIRECTORY dump_dir TO scott;
1. Export a table
expdp scott/tiger DIRECTORY=dump_dir DUMPFILE=dept.dmp TABLES=dept
Note: On Unix, pay attention to the read and write permissions of the OS directory. For example:
To view the OS path of dump_dir, query the data dictionary view dba_directories as the SYS user.
To change the ownership of the folder: chown -R oracle:dba /exp. This resolves the problem.
2. Export a schema
expdp scott/tiger directory=dump_dir dumpfile=schema.dmp logfile=schema.log schemas=system
3. Export tablespace
expdp scott/tiger directory=dump_dir dumpfile=tb.dmp logfile=tb.log tablespaces=users
4. Export the database
expdp system/manager DIRECTORY=dump_dir DUMPFILE=full.dmp FULL=Y
expdp scott/tiger DIRECTORY=dump_dir DUMPFILE=full.dmp FULL=Y
Note: If the scott user does not have the corresponding permissions, grant scott the corresponding permissions or use system for full-Database Export.
SQL> grant exp_full_database to scott;
Grant succeeded.
V. IMPDP Parameters
IMPDP parameters that differ from those of EXPDP:
1. REMAP_DATAFILE
This option is used to convert the source data file name to the target data file name. This option may be required when the tablespace is moved between different platforms.
REMAP_DATAFILE = source_datafile : target_datafile
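A minimal sketch, assuming a full import where the source database used the Windows data file path 'C:\oradata\tbs1.f' and the target platform stores the file as '/u01/oradata/tbs1.f' (both paths and the dump file name are assumptions):
impdp system/manager DIRECTORY=dump_dir DUMPFILE=full.dmp FULL=Y REMAP_DATAFILE='C:\oradata\tbs1.f':'/u01/oradata/tbs1.f'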
2. REMAP_SCHEMA
This option is used to load all objects from the source schema into the target schema.
REMAP_SCHEMA = source_schema: target_schema
3. REMAP_TABLESPACE
Import all objects in the source tablespace to the target tablespace.
REMAP_TABLESPACE = source_tablespace : target_tablespace
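For example, scott's objects can be imported while relocating them from the USERS tablespace to another tablespace. A minimal sketch (USERS2 is an assumed target tablespace that must already exist):
impdp system/manager DIRECTORY=dump_dir DUMPFILE=schema.dmp SCHEMAS=scott REMAP_TABLESPACE=users:users2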
4. REUSE_DATAFILES
This option specifies whether to overwrite existing data files when creating a tablespace. The default value is N.
REUSE_DATAFILES = {Y | N}
5. SKIP_UNUSABLE_INDEXES
Specifies whether to skip unusable indexes during import. The default value is N.
6. SQLFILE
Specifies that the DDL operations the import would perform are written to an SQL script instead of being executed.
SQLFILE = [directory_object:] file_name
impdp scott/tiger DIRECTORY=dump DUMPFILE=tab.dmp SQLFILE=a.sql
7. STREAMS_CONFIGURATION
Specifies whether to import Streams metadata. The default value is Y.
8. TABLE_EXISTS_ACTION
This option is used to specify the operations to be performed by the import job when the table already exists. The default value is SKIP.
TABLE_EXISTS_ACTION = {SKIP | APPEND | TRUNCATE | REPLACE}
When this option is set to SKIP, the import job skips an existing table and processes the next object; when set to APPEND, data is appended to the existing table; when set to TRUNCATE, the import job truncates the table and then appends new data to it; when set to REPLACE, the import job drops the existing table, recreates it, and then appends the data. Note that the TRUNCATE option does not apply to cluster tables or to the NETWORK_LINK option.
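For example, to append rows to an existing emp table instead of skipping it, a sketch (the dump file name tab.dmp follows the SQLFILE example above):
impdp scott/tiger DIRECTORY=dump DUMPFILE=tab.dmp TABLES=emp TABLE_EXISTS_ACTION=APPEND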
9. TRANSFORM
This option is used to specify whether to modify the DDL statement of the created object.
TRANSFORM = transform_name: value [: object_type]
transform_name is used to specify the name of the transform: SEGMENT_ATTRIBUTES identifies segment attributes (physical attributes, storage attributes, tablespace, logging and other information), and STORAGE identifies the segment storage clause; value is used to specify whether to include the segment attributes or the segment storage attributes; object_type is used to specify the object type.
impdp scott/tiger DIRECTORY=dump DUMPFILE=tab.dmp TRANSFORM=segment_attributes:n:table
10. TRANSPORT_DATAFILES
This option is used to specify the data files to be imported into the target database when transporting tablespaces.
TRANSPORT_DATAFILES = datafile_name
Datafile_name is used to specify the data file to be copied to the target database.
impdp system/manager DIRECTORY=dump DUMPFILE=tts.dmp TRANSPORT_DATAFILES='/user01/data/tbs1.f'
VI. IMPDP Usage
1. Import table
impdp hsiufo/hsiufo directory=dump_dir dumpfile=full.dmp tables=scott.emp remap_schema=scott:scott
Note: full.dmp is a full-database logical backup. After user scott's emp table has been dropped, this imports the emp table back into the scott schema from full.dmp.
impdp hsiufo/hsiufo directory=dump_dir dumpfile=full.dmp tables=scott.test remap_schema=scott:system
The first command imports the emp table into the SCOTT schema, and the second command imports the test table into the SYSTEM schema.
Note: to import a table into another schema, you must specify the REMAP_SCHEMA option.
2. Import a schema
impdp hsiufo/hsiufo directory=dump_dir dumpfile=full.dmp schemas=scott
impdp system/manager DIRECTORY=dump_dir DUMPFILE=schema.dmp SCHEMAS=scott REMAP_SCHEMA=scott:system
3. Import tablespace
impdp system/manager DIRECTORY=dump_dir DUMPFILE=tablespace.dmp TABLESPACES=user01
4. Import Database
impdp system/manager DIRECTORY=dump_dir DUMPFILE=full.dmp FULL=y