Data Pump File

Expdp Introduction

Expdp command line options

1. Attach
This option is used to attach a client session to an existing export job. The syntax is as follows:
attach=[schema_name.]job_name
schema_name specifies the schema name, and job_name specifies the export job name. Note: if the attach option is used, no options other than the connection string and attach may be specified on the command line. Example:
expdp scott/tiger attach=scott.export_job

2. Content
This option specifies the content to be exported. The default value is all.
content={all | data_only | metadata_only}
When content is set to all, both object definitions and all their data are exported; data_only exports only object data; metadata_only exports only object definitions.
expdp scott/tiger directory=dump dumpfile=a.dump content=metadata_only

3. Directory
Specifies the directory where the dump and log files are located
directory=directory_object
directory_object specifies the name of a directory object. Note that a directory object is created with the create directory statement; it is not an OS directory.
expdp scott/tiger directory=dump dumpfile=a.dump

4. dumpfile
Specifies the name of the dump file. The default name is expdat.dmp.
dumpfile=[directory_object:]file_name[,...]
directory_object specifies the directory object name, and file_name specifies the dump file name. Note that if directory_object is omitted, the export tool automatically uses the directory object given by the directory option.
expdp scott/tiger directory=dump1 dumpfile=dump2:a.dmp

5. Estimate
Estimates the disk space the exported tables will occupy. The default value is blocks.
estimate={blocks | statistics}
When set to blocks, Oracle estimates the space as the number of data blocks the objects occupy multiplied by the data block size. When set to statistics, it estimates the space from the most recent statistics.
expdp scott/tiger tables=emp estimate=statistics directory=dump dumpfile=a.dump

6. estimate_only
Specifies whether to only estimate the disk space the export job will occupy. The default value is n.
estimate_only={y | n}
When set to y, expdp only estimates the disk space the objects will occupy and does not perform the export job. When set to n, it estimates the disk space and also performs the export.
expdp scott/tiger estimate_only=y nologfile=y

7. exclude (for details, see "2. exclude" under "Use exclude and include to export data" below)
This option specifies the object types or specific objects to be excluded when the operation is executed.
exclude=object_type[:name_clause][,...]
object_type specifies the object type to exclude, and name_clause specifies the specific objects to exclude. exclude and include cannot be used at the same time.
expdp scott/tiger directory=dump dumpfile=a.dmp exclude=view

8. filesize
Specifies the maximum size of each dump file. The default value is 0, which means the file size is unlimited.
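A minimal sketch (file names are illustrative; the %U wildcard lets expdp generate numbered files as each one reaches the limit):
expdp scott/tiger directory=dump dumpfile=a%U.dmp filesize=2G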

9. flashback_scn
Exports table data as of a specific SCN.
flashback_scn=scn_value
scn_value specifies the SCN value. flashback_scn and flashback_time cannot be used at the same time.
expdp scott/tiger directory=dump dumpfile=a.dmp flashback_scn=358523

10. flashback_time
Exports table data as of a specific point in time.
flashback_time="to_timestamp(time_value)"
expdp scott/tiger directory=dump dumpfile=a.dmp flashback_time="to_timestamp('25-08-2004 14:35:00', 'DD-MM-YYYY HH24:MI:SS')"

11. Full
Specifies a full-database-mode export. The default value is n.
full={y | n}
When set to y, the entire database is exported.
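A minimal sketch (a full export needs a privileged account such as system; names are illustrative):
expdp system/manager directory=dump dumpfile=full.dmp full=y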

12. Help
Specifies whether to display help for the expdp command-line options. The default value is n.
When set to y, help for the export options is displayed.
expdp help=y

13. include (for details, see "1. include" under "Use exclude and include to export data" below)
Specifies the object types and specific objects to be included in the export.
include=object_type[:name_clause][,...]

14. job_name
Specifies the name of the export job. The default value is of the form sys_xxx.
job_name=jobname_string
select * from dba_datapump_jobs; -- view existing jobs
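A minimal sketch with an explicit job name (my_job and the file names are illustrative):
expdp scott/tiger directory=dump dumpfile=a.dmp job_name=my_job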

15. logfile
Specifies the name of the export log file. The default name is export.log.
logfile=[directory_object:]file_name
directory_object specifies the directory object name, and file_name specifies the log file name. If directory_object is omitted, the value of the directory option is used automatically.
expdp scott/tiger directory=dump dumpfile=a.dmp logfile=a.log

16. network_link
Specifies a database link name. Set this option if you want to export objects of a remote database into a dump file on the local instance.
For example: expdp gwm/gwm directory=dir_dp network_link=igisdb tables=p_street_area dumpfile=p_street_area.dmp logfile=p_street_area.log job_name=my_job
igisdb is the database link between the target database and the source database, and dir_dp is a directory on the target database.
Because expdp is a server-side tool, even if you connect with a connect string (@fgisdb), the files expdp generates are stored on the server by default.

17. nologfile
This option specifies whether to suppress generation of the export log file. The default value is n.
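A minimal sketch (file names are illustrative):
expdp scott/tiger directory=dump dumpfile=a.dmp nologfile=y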

18. Parallel
Specifies the number of parallel processes that execute the export. The default value is 1.
Note: the degree of parallelism should not exceed twice the number of CPUs. If there are 2 CPUs, you can set parallel=2; the job then runs faster than with parallel=1.
If parallel=2 is set but only one dump file is specified, the export speed hardly improves, because the parallel processes all write the same file and compete for it. Therefore, specify two dump files, as follows:
expdp gwm/gwm directory=d_test dumpfile=gwmfile1.dp,gwmfile2.dp parallel=2

19. parfile
Specifies the name of an export parameter file.
parfile=[directory_path]file_name
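A minimal sketch: a parameter file exp.par (the name and contents are illustrative) containing
directory=dump
dumpfile=a.dmp
schemas=scott
can be invoked with:
expdp scott/tiger parfile=exp.par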

20. Query
Specifies a where condition used to filter the exported data.
query=[schema.][table_name:]query_clause
schema specifies the schema name, table_name specifies the table name, and query_clause specifies the restricting condition clause. The query option cannot be used together with options such as content=metadata_only, estimate_only, and transport_tablespaces.
expdp scott/tiger directory=dump dumpfile=a.dmp tables=emp query='where deptno=20'

21. Schemas
Specifies a schema-mode export. The default is the current user's schema.
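A minimal sketch (file names are illustrative):
expdp scott/tiger directory=dump dumpfile=a.dmp schemas=scott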

22. Status
Specifies how often, in seconds, the detailed status of the export job is displayed. The default value is 0.
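For example, a sketch that prints job status every 30 seconds (file names are illustrative):
expdp scott/tiger directory=dump dumpfile=a.dmp status=30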

23. Tables
Specifies a table-mode export.
tables=[schema_name.]table_name[:partition_name][,...]
schema_name specifies the schema name, table_name specifies the name of the table to export, and partition_name specifies the name of the partition to export.
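A minimal sketch using the sample scott tables:
expdp scott/tiger directory=dump dumpfile=tab.dmp tables=emp,dept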

24. tablespaces
Specifies the list of tablespaces to export.
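A minimal sketch (users is an illustrative tablespace name):
expdp system/manager directory=dump dumpfile=ts.dmp tablespaces=users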

25. transport_full_check
This option specifies whether to check dependencies between the tablespaces being transported and those that are not. The default value is n.
When set to y, expdp checks the full two-way dependencies of the tablespace set: if a table is in the set but its index is not, or an index is in the set but its table is not, an error message is displayed. When set to n, expdp checks only one-way dependencies: if the index's tablespace is in the set but the table's tablespace is not, an error message is displayed; if the table's tablespace is in the set but the index's tablespace is not, no error message is displayed.

26. transport_tablespaces
Specifies a transportable-tablespace-mode export.
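A sketch (the tablespace must be read-only during transport; users is an illustrative name):
alter tablespace users read only;
expdp system/manager directory=dump dumpfile=tts.dmp transport_tablespaces=users transport_full_check=y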

27. Version
Specifies the database version of the exported objects. The default value is compatible.
version={compatible | latest | version_string}
When set to compatible, object metadata is generated according to the compatible initialization parameter. When set to latest, object metadata is generated according to the actual database version. version_string specifies a database version string.
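For example, a sketch that writes a dump readable by a 10.2 database (file names are illustrative):
expdp scott/tiger directory=dump dumpfile=a.dmp version=10.2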

Steps for Data Pump tool export:

1. Create a directory
create directory dir_dp as 'd:\oracle\dir_dp';
2. Grant privileges
grant read, write on directory dir_dp to lttfm;
-- View directories and privileges
select privilege, directory_name, directory_path from user_tab_privs t, all_directories d
where t.table_name(+) = d.directory_name order by 2, 1;
3. Execute the export
expdp lttfm/lttfm@fgisdb schemas=lttfm directory=dir_dp dumpfile=expdp_test1.dmp logfile=expdp_test1.log

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1
With the Partitioning, OLAP and Data Mining options
Starting "lttfm"."sys_export_schema_01": lttfm/********@fgisdb schemas=lttfm directory=dir_dp dumpfile=expdp_test1.dmp logfile=expdp_test1.log
Note:
1. directory=dir_dp must be placed before dumpfile; if it is placed last, the following errors are raised: ORA-39002: invalid operation,
ORA-39070: Unable to open the log file,
ORA-39087: directory name DATA_PUMP_DIR is invalid.

2. During the export, Data Pump creates and uses an object named sys_export_schema_01; this is the job name used by the Data Pump export. If you do not specify a job name when executing the command, a default job name is generated. To specify the job name yourself, add the job_name option:
expdp lttfm/lttfm@fgisdb schemas=lttfm directory=dir_dp dumpfile=expdp_test1.dmp logfile=expdp_test1.log job_name=my_job1;
3. Do not put a semicolon after the export statement. Otherwise, the job name in the statement above becomes 'my_job1;' instead of my_job1, and expdp lttfm/lttfm attach=lttfm.my_job1 keeps reporting that the job table cannot be found.
4. The created directory must be on the machine where the database resides. Otherwise, the following errors are raised:

ORA-39002: invalid operation
ORA-39070: Unable to open the log file
ORA-39087: directory name DATA_PUMP_DIR is invalid

Export related commands:

1) Ctrl+C: during execution, press Ctrl+C to leave the logging display and enter interactive mode; the export job itself does not stop.
2) Export> status -- view the status and related information of the current job
3) Export> stop_job -- pause the job (after it is paused, you leave export interactive mode)
4) Re-enter export interactive mode: C:\Documents and Settings\Administrator> expdp lttfm/lttfm attach=lttfm.my_job1 -- no semicolon after the statement
5) Export> start_job -- resume the paused job (this does not restart it from scratch)
6) Export> continue_client -- return to logging mode and continue "lttfm"."my_job1"
7) Export> kill_job -- cancel the current job and release the client session (this deletes the job and its dump files)
8) Export> exit_client -- leave export interactive mode (you can re-attach later as in step 4); the job keeps running
Note: the job is automatically dropped after the export completes.

Data Pump export modes:

1. Export in Table mode:
expdp lttfm/lttfm@fgisdb tables=lttfm.b$i_exch_info,lttfm.b$export dumpfile=expdp_test2.dmp logfile=expdp_test2.log directory=dir_dp job_name=my_job

2. Export based on query conditions:
expdp lttfm/lttfm@fgisdb tables=lttfm.b$i_exch_info dumpfile=expdp_test3.dmp logfile=expdp_test3.log directory=dir_dp job_name=my_job query='"where rownum<11"'

3. Export by tablespace:
expdp lttfm/lttfm@fgisdb dumpfile=expdp_tablespace.dmp tablespaces=gcomm.dbf logfile=expdp_tablespace.log directory=dir_dp job_name=my_job

4. Export by schema:
expdp lttfm/lttfm directory=dir_dp dumpfile=schema.dmp schemas=lttfm,gwm

5. Export the entire database:
expdp lttfm/lttfm@fgisdb dumpfile=full.dmp full=y logfile=full.log directory=dir_dp job_name=my_job

Use exclude and include to export data
1. include: export only the specified objects of the specified type from the user.
-- Export only the tables under the lttfm user whose names start with B, together with their indexes and comments; other object types, such as procedures, are not included:
expdp lttfm/lttfm@fgisdb dumpfile=include_1.dmp logfile=include_1.log directory=dir_dp job_name=my_job include=table:\"like \'B%\'\"

-- Export all tables under the lttfm user except those whose names start with B$:
expdp lttfm/lttfm@fgisdb schemas=lttfm dumpfile=include_1.dmp logfile=include_1.log directory=dir_dp job_name=my_job include=table:\"not like \'B$%\'\"

-- Export only the stored procedures under the lttfm user:
expdp lttfm/lttfm@fgisdb schemas=lttfm dumpfile=include_1.dmp logfile=include_1.log directory=dir_dp job_name=my_job include=procedure

2. exclude: export everything except the specified objects of the specified type from the user.
-- Export all objects under the lttfm user except tables. Since the tables are not exported, the indexes, constraints, and other object types associated with them are not exported either:
expdp lttfm/lttfm@fgisdb schemas=lttfm dumpfile=exclude_1.dmp logfile=exclude_1.log directory=dir_dp job_name=my_job exclude=table

-- Export everything under the lttfm user except the tables whose names start with B$:
expdp lttfm/lttfm@fgisdb dumpfile=include_1.dmp logfile=include_1.log directory=dir_dp job_name=my_job exclude=table:\"like \'B$%\'\"

-- Export all objects under the lttfm user, but for the table type export only the tables whose names start with B$:
expdp lttfm/lttfm@fgisdb dumpfile=include_1.dmp logfile=include_1.log directory=dir_dp job_name=my_job exclude=table:\"not like \'B$%\'\"

Impdp Introduction

The impdp command-line options are similar to those of expdp, but the following are specific to impdp:

1. remap_datafile
This option translates source data file names into target data file names. It may be required when moving tablespaces between platforms.
remap_datafile=source_datafile:target_datafile
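A sketch with illustrative paths and file names:
impdp system/manager full=y directory=dump dumpfile=full.dmp remap_datafile='C:\oracle\tbs1.f':'/u01/app/tbs1.f'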

2. remap_schema
This option loads all objects of the source schema into the target schema.
remap_schema=source_schema:target_schema
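A minimal sketch (hr as an illustrative target schema):
impdp system/manager directory=dump dumpfile=a.dmp remap_schema=scott:hr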

3. remap_tablespace
Import all objects in the source tablespace to the target tablespace.
remap_tablespace=source_tablespace:target_tablespace

4. reuse_datafiles
This option specifies whether existing data files are overwritten when tablespaces are created. The default value is n.
reuse_datafiles={y | n}
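A sketch (illustrative file names):
impdp system/manager full=y directory=dump dumpfile=full.dmp reuse_datafiles=y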

5. skip_unusable_indexes
Specifies whether to skip unusable indexes during the import. The default value is n.
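A minimal sketch (file names are illustrative):
impdp scott/tiger directory=dump dumpfile=tab.dmp skip_unusable_indexes=y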

6. sqlfile
Writes the DDL that the import would execute into a SQL script instead of executing it.
sqlfile=[directory_object:]file_name
impdp scott/tiger directory=dump dumpfile=tab.dmp sqlfile=a.sql

7. streams_configuration
Specifies whether to import stream metadata. The default value is y.

8. table_exists_action
This option specifies the action the import job takes when a table already exists. The default value is skip.
table_exists_action={skip | append | truncate | replace}
When set to skip, the import job skips the existing table and processes the next object. When set to append, data is appended to the table. When set to truncate, the import job truncates the table and then appends new data to it. When set to replace, the import job drops the existing table, re-creates it, and then appends the data. Note that the truncate option does not apply to clustered tables or to the network_link option.
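A minimal sketch (file names are illustrative):
impdp scott/tiger directory=dump dumpfile=tab.dmp tables=emp table_exists_action=append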

9. Transform
This option specifies whether to modify the DDL statements that create the objects.
transform=transform_name:value[:object_type]
transform_name specifies the transform name: segment_attributes covers segment attributes (physical attributes, storage attributes, tablespace, logging, and so on), and storage covers segment storage attributes. value specifies whether to include the segment attributes or the segment storage attributes, and object_type specifies the object type.
impdp scott/tiger directory=dump dumpfile=tab.dmp transform=segment_attributes:n:table

10. transport_datafiles
This option specifies the data files to be moved to the target database when transporting tablespaces.
transport_datafiles=datafile_name
datafile_name specifies the data files to be copied to the target database.
impdp system/manager directory=dump dumpfile=tts.dmp transport_datafiles='/user01/data/tbs1.f'

Impdp import mode:

1. Import by table
Import the table p_street_area from the file p_street_area.dmp, which was exported by the gwm user with schemas=gwm:
impdp gwm/gwm@fgisdb dumpfile=p_street_area.dmp logfile=imp_p_street_area.log directory=dir_dp tables=p_street_area job_name=my_job

2. Import by user (if the user does not exist, impdp creates it and imports its objects directly)
impdp gwm/gwm@fgisdb schemas=gwm dumpfile=expdp_test.dmp logfile=expdp_test.log directory=dir_dp job_name=my_job

3. Import directly over a database link, without a dump file or an expdp step:
-- Import the table p_street_area from the source database into the target database
impdp gwm/gwm directory=dir_dp network_link=igisdb tables=p_street_area logfile=p_street_area.log job_name=my_job
igisdb is the database link between the target database and the source database, and dir_dp is a directory on the target database.

4. Change the tablespace
Use the remap_tablespace parameter.
-- Export all data under the gwm user
expdp system/orcl directory=data_pump_dir dumpfile=gwm.dmp schemas=gwm
Note: when the export is performed by a privileged user such as system, the dump includes the user creation and grants; an export performed by the user itself does not include them.
-- Import all data under the gwm user into the gcomm tablespace (it was originally in the gmapdata tablespace):
impdp system/orcl directory=data_pump_dir dumpfile=gwm.dmp remap_tablespace=gmapdata:gcomm

Exp and IMP
Exp keyword description:
Keyword       Description (default)
------------------------------
userid        username/password
buffer        size of data buffer
file          output file (expdat.dmp)
compress      import into one extent (y)
grants        export grants (y)
indexes       export indexes (y)
direct        direct path (n) -- direct-path export is faster
log           log file of screen output
rows          export data rows (y)
consistent    cross-table consistency (n)
full          export entire file (n)
owner         list of owner usernames
tables        list of table names
recordlength  length of IO record
inctype       incremental export type
record        track incremental export (y)
triggers      export triggers (y)
statistics    analyze objects (estimate)
parfile       parameter file name
constraints   export constraints (y)
object_consistent   transaction set to read only during object export (n)
feedback      display progress every x rows (0)
filesize      maximum size of each dump file
flashback_scn       SCN used to set session snapshot back to
flashback_time      time used to get the SCN closest to the specified time
query         select clause used to export a subset of a table
resumable     suspend when a space-related error is encountered (n)
resumable_name      text string used to identify the resumable statement
resumable_timeout   wait time for resumable
tts_full_check      perform full or partial dependency check for TTS
tablespaces   list of tablespaces to export
transport_tablespace  export transportable tablespace metadata (n)
template      template name which invokes iAS mode export

Common exp keywords

1. full exports the entire database. Combined with rows=n, it exports only the structure of the entire database.
Example: exp userid=gwm/gwm file=/test.dmp log=test.log full=y rows=n direct=y
2. owner and tables define the objects exp exports. A query condition can be added to limit the number of exported rows.
Example: exp userid=gwm/gwm file=/test.dmp log=test.log owner=gwm tables=(table1,table2) query="'where rownum<11'"
3. Use buffer and feedback when the amount of exported data is large.
Example: exp userid=gwm/gwm file=/test.dmp log=test.log feedback=10000 buffer=100000000 tables=(table1,table2)
4. file and log specify the dump file name and the log file name.
5. compress does not compress the exported data; it controls whether the data is imported into one extent. The default value is y.
6. If the exported file is large, use filesize to split it, for example limiting each file to 2 GB.
Example: exp userid=gwm/gwm file=/test1,test2,test3,test4,test5 filesize=2G log=test.log
This creates test1.dmp, test2.dmp, and so on, each at most 2 GB.

IMP keyword description

Keyword     Description (default)            Keyword       Description (default)
-------------------------------------------------------------
userid      username/password                full          import entire file (n)
buffer      size of data buffer              fromuser      list of owner usernames
file        input file (expdat.dmp)          touser        list of usernames
show        just list file contents (n)      tables        list of table names
ignore      ignore create errors (n)         recordlength  length of IO record
grants      import grants (y)                inctype       incremental import type
indexes     import indexes (y)               commit        commit array insert (n)
rows        import data rows (y)             parfile       parameter file name
log         log file of screen output        constraints   import constraints (y)

destroy       overwrite tablespace data file (n)
indexfile     write table/index info to specified file
skip_unusable_indexes  skip maintenance of unusable indexes (n)
feedback      display progress every x rows (0)
toid_novalidate     skip validation of specified type ids
filesize      maximum size of each dump file
statistics    always import precomputed statistics
resumable     suspend when a space-related error is encountered (n)
resumable_name      text string used to identify the resumable statement
resumable_timeout   wait time for resumable
compile       compile procedures, packages, and functions (y)
streams_configuration   import streams general metadata (y)
streams_instantiation   import streams instantiation metadata (n)

The following keywords apply only to transportable tablespaces:
transport_tablespace  import transportable tablespace metadata (n)
tablespaces   tablespaces to be transported into database
datafiles     datafiles to be transported into database
tts_owners    users that own data in the transportable tablespace set

Note:

The inctype (incremental import type) parameter was deprecated as of 9i. A test on Oracle 11g shows it is no longer usable:
C:\Users\ThinkPad> imp fyzh_ora/fyzh_ora file=rm_trs_seg.dmp log=rm_trs_seg.log fromuser=ltwebgis inctype=restore

Import: Release 11.1.0.7.0 - Production on Tuesday January 10 22:18:14 2012
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export file created by EXPORT:V10.02.01 via conventional path
Warning: these objects were exported by ltwebgis, not by the current user
Import done in ZHS16GBK character set and AL16UTF16 NCHAR character set
IMP-00021: INCTYPE parameter is obsolete
IMP-00083: dump file does not contain an incremental export
IMP-00000: Import terminated unsuccessfully
