Data Pump Usage Details

From: http://www.qqread.com/oracle/2008/07/u421371.html

Instructions for Using EXPDP and IMPDP

Oracle Database 10g introduced Data Pump technology, which allows DBAs and developers to quickly move database metadata (object definitions) and data to another Oracle database.

Uses of Data Pump Export and Import (EXPDP and IMPDP)

1. Implement logical backup and logical recovery.

2. Move objects between database schemas.

3. Move objects between databases.

4. Implement tablespace migration.

Differences between Data Pump export and traditional export

Before 10g, traditional export and import used the EXP and IMP tools respectively. Starting with 10g, Oracle not only retains the original EXP and IMP tools but also provides the Data Pump export and import tools EXPDP and IMPDP. Precautions when using EXPDP and IMPDP:

EXP and IMP are client-side tools; they can be used on either the client or the server.

EXPDP and IMPDP are server-side tools; they can only be used on the Oracle server, not on the client.

IMP applies only to files exported by EXP, not to files exported by EXPDP; IMPDP applies only to files exported by EXPDP, not to files exported by EXP.

Data Pump export has four modes: table export, schema export, tablespace export, and database export.

 

================================================================================

 

EXPDP command-line options

1. ATTACH

This option attaches a client session to an existing export job. The syntax is as follows:

ATTACH=[schema_name.]job_name

schema_name specifies the schema name, and job_name specifies the export job name. Note: if the ATTACH option is used, no options other than the connection string and ATTACH may be specified on the command line. Example:

expdp scott/tiger ATTACH=scott.export_job

2. CONTENT

This option specifies the content to be exported. The default value is ALL.

CONTENT={ALL | DATA_ONLY | METADATA_ONLY}

When CONTENT is set to ALL, both the object definitions and all their data are exported; when set to DATA_ONLY, only the object data is exported; when set to METADATA_ONLY, only the object definitions are exported.

expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp CONTENT=METADATA_ONLY

3. DIRECTORY

Specifies the directory where the dump files and log files are located.

DIRECTORY=directory_object

directory_object specifies the name of a directory object. Note that a directory object is created with the CREATE DIRECTORY statement; it is not an OS directory path.

expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp

Create the directory object:

CREATE DIRECTORY dump AS 'd:\dump';

Query the directory objects that have been created:

SELECT * FROM dba_directories;

4. DUMPFILE

Specifies the name of the dump file. The default name is expdat.dmp.

DUMPFILE=[directory_object:]file_name[,…]

directory_object specifies the directory object name, and file_name specifies the dump file name. Note that if directory_object is not specified, the export tool automatically uses the directory object given by the DIRECTORY option.

expdp scott/tiger DIRECTORY=dump1 DUMPFILE=dump2:a.dmp

5. ESTIMATE

Specifies how to estimate the disk space the exported tables will occupy. The default value is BLOCKS.

ESTIMATE={BLOCKS | STATISTICS}

When set to BLOCKS, Oracle estimates the space as the number of data blocks occupied by the target objects multiplied by the data block size. When set to STATISTICS, the space is estimated from the most recent statistics.

expdp scott/tiger TABLES=emp ESTIMATE=STATISTICS DIRECTORY=dump DUMPFILE=a.dmp

6. ESTIMATE_ONLY

Specifies whether to only estimate the disk space the export job will occupy. The default value is N.

ESTIMATE_ONLY={Y | N}

When set to Y, the export tool only estimates the disk space the objects will occupy and does not perform the export. When set to N, it estimates the disk space and also performs the export.

expdp scott/tiger ESTIMATE_ONLY=y NOLOGFILE=y

7. EXCLUDE

Specifies the object types or specific objects to be excluded when the export runs.

EXCLUDE=object_type[:name_clause][,…]

object_type specifies the object type to exclude, and name_clause specifies the particular objects to exclude. EXCLUDE and INCLUDE cannot be used at the same time.

expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp EXCLUDE=VIEW

8. FILESIZE

Specifies the maximum size of each dump file. The default value is 0, meaning the file size is unlimited.
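For example, a dump set can be split into pieces of a fixed size by combining FILESIZE with the %U substitution variable in the dump file name (a sketch; the directory and file names are illustrative):

```shell
# Split the export into 2 GB pieces; %U expands to 01, 02, ... as each file fills up
expdp scott/tiger DIRECTORY=dump DUMPFILE=a%U.dmp FILESIZE=2G
```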

9. FLASHBACK_SCN

Exports table data as of a specific SCN.

FLASHBACK_SCN=scn_value

scn_value identifies the SCN value. FLASHBACK_SCN and FLASHBACK_TIME cannot be used at the same time.

expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp FLASHBACK_SCN=358523

10. FLASHBACK_TIME

Exports table data as of a specific point in time.

FLASHBACK_TIME="TO_TIMESTAMP(time_value)"

expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp FLASHBACK_TIME="TO_TIMESTAMP('25-08-2004 14:35:00','DD-MM-YYYY HH24:MI:SS')"

11. FULL

Specifies a full-database export. The default value is N.

FULL={Y | N}

When set to Y, the entire database is exported.

12. HELP

Specifies whether to display help for the expdp command-line options. The default value is N.

When set to Y, help for the export options is displayed.

expdp HELP=y

13. INCLUDE

Specifies the object types and related objects to be included in the export.

INCLUDE=object_type[:name_clause][,…]
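As a hypothetical example, the following exports only the EMP and DEPT tables from the schema (the directory and file names are illustrative, and the quote escaping shown is for a Unix-style shell; on Windows the escaping differs, which is one reason to put such options in a parameter file instead):

```shell
# Export only the two named tables; all other object types are omitted
expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp INCLUDE=TABLE:\"IN \(\'EMP\',\'DEPT\'\)\"
```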

14. JOB_NAME

Specifies the name of the export job. The default value is of the form SYS_XXX.

JOB_NAME=jobname_string

15. LOGFILE

Specifies the name of the export log file. The default name is export.log.

LOGFILE=[directory_object:]file_name

directory_object specifies the directory object name, and file_name specifies the log file name. If directory_object is not specified, the value of the DIRECTORY option is used.

expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp LOGFILE=a.log

16. NETWORK_LINK

Specifies a database link name. This option must be set to export objects from a remote database into a dump file on the local instance.
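For instance, assuming a database link named remote_db has already been created with CREATE DATABASE LINK, the EMP table on the remote database could be pulled into a local dump file like this (all names are illustrative):

```shell
# Pull EMP across the database link into a dump file on the local server
expdp scott/tiger DIRECTORY=dump DUMPFILE=remote.dmp NETWORK_LINK=remote_db TABLES=emp
```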

17. NOLOGFILE

Specifies whether to suppress generation of the export log file. The default value is N (a log file is generated).

18. PARALLEL

Specifies the number of parallel processes that execute the export job. The default value is 1.
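Because each parallel worker generally needs its own dump file, PARALLEL is usually combined with the %U substitution variable (a sketch; the directory and file names are illustrative):

```shell
# Four workers, each writing its own piece: full01.dmp, full02.dmp, ...
expdp system/manager DIRECTORY=dump DUMPFILE=full%U.dmp PARALLEL=4 FULL=y
```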

19. PARFILE

Specifies the name of an export parameter file.

PARFILE=[directory_path]file_name
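A parameter file is also a convenient way to avoid OS-specific quote escaping for options such as QUERY, INCLUDE, and EXCLUDE. A hypothetical exp.par might contain:

```shell
# exp.par -- one option per line, no shell escaping needed
DIRECTORY=dump
DUMPFILE=a.dmp
TABLES=emp
QUERY='WHERE deptno=20'
```

It would then be invoked as expdp scott/tiger PARFILE=exp.par.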

20. QUERY

Specifies a WHERE clause used to filter the exported data.

QUERY=[schema.][table_name:]query_clause

schema specifies the schema name, table_name specifies the table name, and query_clause specifies the restricting condition clause. The QUERY option cannot be used together with CONTENT=METADATA_ONLY, ESTIMATE_ONLY, TRANSPORT_TABLESPACES, and certain other options.

expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp TABLES=emp QUERY='WHERE deptno=20'

21. SCHEMAS

Specifies a schema-mode export. The default is the current user's schema.

22. STATUS

Specifies the interval, in seconds, at which detailed status of the export job is displayed. The default value is 0.

23. TABLES

Specifies a table-mode export.

TABLES=[schema_name.]table_name[:partition_name][,…]

schema_name specifies the schema name, table_name specifies the name of the table to export, and partition_name specifies the name of the partition to export.

24. TABLESPACES

Specifies the list of tablespaces to export.

25. TRANSPORT_FULL_CHECK

Specifies whether to check dependencies between the tablespaces being moved and those not being moved. The default value is N.

When set to Y, the export tool performs a full two-way dependency check: if a table's tablespace is being moved but its index's tablespace is not, or an index's tablespace is being moved but its table's tablespace is not, an error is raised. When set to N, only a one-way check is performed: an error is raised if an index's tablespace is being moved while the table's tablespace is not, but no error is raised if the table's tablespace is being moved while the index's tablespace is not.

26. TRANSPORT_TABLESPACES

Specifies a transportable-tablespace-mode export.

27. VERSION

Specifies the database version for the exported objects. The default value is COMPATIBLE.

VERSION={COMPATIBLE | LATEST | version_string}

When the value is COMPATIBLE, object metadata is generated according to the COMPATIBLE initialization parameter. When the value is LATEST, object metadata is generated according to the actual database version. version_string specifies a database version string.

 

================================================================================

 

Using IMPDP

The IMPDP command-line options are similar to those of EXPDP, but the following options differ:

1. REMAP_DATAFILE

Translates the source data file name to the target data file name. This option may be required when moving tablespaces between different platforms.

REMAP_DATAFILE=source_datafile:target_datafile

2. REMAP_SCHEMA

Loads all objects from the source schema into the target schema.

REMAP_SCHEMA=source_schema:target_schema

3. REMAP_TABLESPACE

Imports all objects from the source tablespace into the target tablespace.

REMAP_TABLESPACE=source_tablespace:target_tablespace

4. REUSE_DATAFILES

Specifies whether to overwrite existing data files when creating tablespaces. The default value is N.

REUSE_DATAFILES={Y | N}

5. SKIP_UNUSABLE_INDEXES

Specifies whether to skip unusable indexes during import. The default value is N.

6. SQLFILE

Writes the DDL statements that the import would execute to a SQL script instead of executing them.

SQLFILE=[directory_object:]file_name

impdp scott/tiger DIRECTORY=dump DUMPFILE=tab.dmp SQLFILE=a.sql

7. STREAMS_CONFIGURATION

Specifies whether to import Streams metadata. The default value is Y.

8. TABLE_EXISTS_ACTION

Specifies the action the import job takes when a table already exists. The default value is SKIP.

TABLE_EXISTS_ACTION={SKIP | APPEND | TRUNCATE | REPLACE}

When set to SKIP, the import job skips an existing table and moves on to the next object. When set to APPEND, data is appended to the existing table. When set to TRUNCATE, the import job truncates the existing table and then appends the new data. When set to REPLACE, the import job drops the existing table, re-creates it, and appends the data. Note that TRUNCATE is not applicable to cluster tables or to the NETWORK_LINK option.
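For example, to reload a table that already exists by emptying it first (a sketch; the directory and file names are illustrative):

```shell
# Truncate the existing EMP table, then load the rows from the dump file
impdp scott/tiger DIRECTORY=dump DUMPFILE=tab.dmp TABLES=emp TABLE_EXISTS_ACTION=TRUNCATE
```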

9. TRANSFORM

Specifies whether to modify the DDL statements that create the objects.

TRANSFORM=transform_name:value[:object_type]

transform_name specifies the transform name: SEGMENT_ATTRIBUTES covers the segment attributes (physical attributes, storage attributes, tablespace, and logging information), while STORAGE covers only the segment storage attributes. value specifies whether to include the segment attributes or the segment storage attributes, and object_type specifies the object type.

impdp scott/tiger DIRECTORY=dump DUMPFILE=tab.dmp TRANSFORM=segment_attributes:n:table

10. TRANSPORT_DATAFILES

Specifies the data files to be imported into the target database when moving tablespaces.

TRANSPORT_DATAFILES=datafile_name

datafile_name specifies the data file to be copied to the target database.

impdp system/manager DIRECTORY=dump DUMPFILE=tts.dmp TRANSPORT_DATAFILES='/user01/data/tbs1.f'

 

Call EXPDP

When the EXPDP tool is used, its dump files can only be stored in the OS directory that corresponds to a DIRECTORY object; the OS directory cannot be specified directly. Therefore, before using EXPDP you must first create a DIRECTORY object, and the database user must be granted permission to use it.

CREATE DIRECTORY dump_dir AS 'd:\dump';

GRANT read, write ON DIRECTORY dump_dir TO scott;

 

1. Export tables

expdp scott/tiger DIRECTORY=dump_dir DUMPFILE=tab.dmp TABLES=dept,emp

2. Export schemas

expdp scott/tiger DIRECTORY=dump_dir DUMPFILE=schema.dmp SCHEMAS=system,scott

3. Export tablespaces

expdp system/manager DIRECTORY=dump_dir DUMPFILE=tablespace.dmp TABLESPACES=user01,user02

4. Export the database

expdp system/manager DIRECTORY=dump_dir DUMPFILE=full.dmp FULL=y

5. Using EXCLUDE

expdp scott/tiger DUMPFILE=fname.dmp LOGFILE=lname.log DIRECTORY=db_backup EXCLUDE=table:\"in\(\'table1\',\'table2\'\)\"

 

================================================================================

 

Call IMPDP

1. Import tables

impdp scott/tiger DIRECTORY=dump_dir DUMPFILE=tab.dmp TABLES=dept,emp

impdp system/manager DIRECTORY=dump_dir DUMPFILE=tab.dmp TABLES=scott.dept,scott.emp REMAP_SCHEMA=scott:system

The first command imports the DEPT and EMP tables into the SCOTT schema; the second imports them into the SYSTEM schema.

Note: to import tables into a different schema, you must specify the REMAP_SCHEMA option.

2. Import schemas

impdp scott/tiger DIRECTORY=dump_dir DUMPFILE=schema.dmp SCHEMAS=scott EXCLUDE=USER:\"=\'iscs\'\"

impdp system/manager DIRECTORY=dump_dir DUMPFILE=schema.dmp SCHEMAS=scott REMAP_SCHEMA=scott:system

3. Import tablespaces

impdp system/manager DIRECTORY=dump_dir DUMPFILE=tablespace.dmp TABLESPACES=user01

4. Import the database

impdp system/manager DIRECTORY=dump_dir DUMPFILE=full.dmp FULL=y

 
