Oracle Data Pump (EXPDP/IMPDP)

EXPDP Introduction

EXPDP command-line options
1. ATTACH
This option establishes an association between a client session and an existing export job. The syntax is as follows:
ATTACH=[schema_name.]job_name
schema_name specifies the schema name and job_name specifies the export job name. Note that when you use the ATTACH option, you cannot specify any other options on the command line except the connection string, as shown in the following example:
expdp scott/tiger attach=scott.export_job

2. CONTENT
This option specifies what to export. The default value is ALL.
CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
When CONTENT is set to ALL, object definitions and all their data are exported; with DATA_ONLY, only object data is exported; with METADATA_ONLY, only object definitions are exported.
expdp scott/tiger directory=dump dumpfile=a.dump content=metadata_only

3. DIRECTORY
Specifies the directory where dump files and log files are located.
DIRECTORY=directory_object
directory_object specifies the directory object name. Note that a directory object is created with the CREATE DIRECTORY statement; it is not an OS directory path.
expdp scott/tiger directory=dump dumpfile=a.dump

4. DUMPFILE
Specifies the name of the dump file; the default name is expdat.dmp.
DUMPFILE=[directory_object:]file_name[,...]
directory_object specifies the directory object name and file_name the dump file name. Note that if you do not specify directory_object, the export tool automatically uses the directory object given by the DIRECTORY option.
expdp scott/tiger directory=dump1 dumpfile=dump2:a.dmp

5. ESTIMATE
Specifies how the disk space used by the exported tables is estimated. The default value is BLOCKS.
ESTIMATE={BLOCKS | STATISTICS}
When set to BLOCKS, Oracle estimates the space occupied by the objects as the number of data blocks they use multiplied by the data block size; when set to STATISTICS, it estimates the object footprint from the most recent statistics.
expdp scott/tiger tables=emp estimate=statistics directory=dump dumpfile=a.dump

6. ESTIMATE_ONLY
Specifies whether to only estimate the disk space the export job would use. The default value is N.
ESTIMATE_ONLY={Y | N}
When set to Y, the export only estimates the disk space occupied by the objects and does not perform the export; when set to N, it both estimates the disk space and performs the export.
expdp scott/tiger estimate_only=y nologfile=y

7. EXCLUDE (see example 2 under "Exporting data using EXCLUDE and INCLUDE" below)
This option specifies object types or particular objects to exclude when the export runs.
EXCLUDE=object_type[:name_clause][,...]
object_type specifies the type of object to exclude and name_clause specifies the particular objects to exclude. EXCLUDE and INCLUDE cannot be used at the same time.
expdp scott/tiger directory=dump dumpfile=a.dmp exclude=view

8. FILESIZE
Specifies the maximum size of each exported file. The default is 0, which means the file size is unlimited.
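As a sketch (the directory and file names are illustrative), FILESIZE is typically combined with the %U substitution variable so that Data Pump numbers the pieces as each one fills up:

```shell
# Split the export into pieces of at most 2 GB each; %U expands to 01, 02, ...
expdp scott/tiger directory=dump dumpfile=a_%U.dmp filesize=2g
```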

9. FLASHBACK_SCN
Specifies that table data is exported as of a particular SCN.
FLASHBACK_SCN=scn_value
scn_value identifies the SCN. FLASHBACK_SCN and FLASHBACK_TIME cannot be used at the same time.
expdp scott/tiger directory=dump dumpfile=a.dmp flashback_scn=358523

10. FLASHBACK_TIME
Specifies that table data is exported as of a particular point in time.
FLASHBACK_TIME="TO_TIMESTAMP(time_value)"
expdp scott/tiger directory=dump dumpfile=a.dmp flashback_time="to_timestamp('25-08-2004 14:35:00','DD-MM-YYYY HH24:MI:SS')"

11. FULL
Specifies a full-database export. The default is N.
FULL={Y | N}
When set to Y, a full database export is performed.

12. HELP
Specifies whether to display help for the EXPDP command-line options. The default is N.
When set to Y, help for the export options is displayed.
expdp help=y

13. INCLUDE (see example 1 under "Exporting data using EXCLUDE and INCLUDE" below)
Specifies the object types and particular objects to include in the export.
INCLUDE=object_type[:name_clause][,...]

14. JOB_NAME
Specifies the name of the export job. The default is SYS_XXX.
JOB_NAME=jobname_string
SELECT * FROM dba_datapump_jobs; -- view existing jobs

15. LOGFILE
Specifies the name of the export log file. The default name is export.log.
LOGFILE=[directory_object:]file_name
directory_object specifies the directory object name and file_name the export log file name. If directory_object is not specified, the export automatically uses the directory object given by the DIRECTORY option.
expdp scott/tiger directory=dump dumpfile=a.dmp logfile=a.log

16. NETWORK_LINK
Specifies a database link name. You must set this option to export objects of a remote database to a dump file on the local instance, for example:
expdp gwm/gwm directory=dir_dp network_link=igisdb tables=p_street_area dumpfile=p_street_area.dmp logfile=p_street_area.log job_name=my_job
igisdb is the database link between the destination database and the source database.
dir_dp is a directory object on the destination database.
If you connect with a connection string (@fgisdb) directly, keep in mind that EXPDP is a server-side tool: the files EXPDP generates are stored on the server by default.

17. NOLOGFILE
This option suppresses generation of the export log file. The default value is N.

18. PARALLEL
Specifies the number of parallel processes that perform the export. The default value is 1.
Note: the degree of parallelism should not exceed twice the number of CPUs. If there are 2 CPUs you can set PARALLEL to 2, and the export will be faster than with PARALLEL=1.
If PARALLEL is set to 2 but only one dump file is specified, the export speed improves little, because both processes export to the same file and contend for it. Instead, specify two dump files, as follows:
expdp gwm/gwm directory=d_test dumpfile=gwmfile1.dp,gwmfile2.dp parallel=2

19. PARFILE
Specifies the name of a parameter file for the export.
PARFILE=[directory_path]file_name
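A minimal sketch, with illustrative file and object names: placing the options in a parameter file avoids operating-system quoting problems for options such as QUERY.

```shell
# Create a hypothetical parameter file, then pass it to expdp
cat > exp.par <<'EOF'
directory=dump
dumpfile=a.dmp
tables=emp
query="WHERE deptno=20"
EOF
expdp scott/tiger parfile=exp.par
```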

20. QUERY
Specifies a WHERE condition for filtering the exported data.
QUERY=[schema.][table_name:]query_clause
schema specifies the schema name, table_name the table name, and query_clause the filtering clause. The QUERY option cannot be combined with options such as CONTENT=metadata_only, ESTIMATE_ONLY, or TRANSPORT_TABLESPACES.
expdp scott/tiger directory=dump dumpfile=a.dmp tables=emp query='WHERE deptno=20'

21. SCHEMAS
This option specifies a schema-mode export. It defaults to the current user's schema.
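For example (a sketch with illustrative names), exporting two schemas in one job:

```shell
expdp system/manager directory=dump dumpfile=two_schemas.dmp schemas=scott,hr
```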

22. STATUS
Specifies how often (in seconds) the detailed status of the export job is displayed. The default value is 0.

23. TABLES
Specifies a table-mode export.
TABLES=[schema_name.]table_name[:partition_name][,...]
schema_name specifies the schema name, table_name the table to export, and partition_name the partition to export.
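As a sketch (the table and partition names are hypothetical), a single job can mix whole tables and individual partitions:

```shell
# Exports all of scott.emp plus only partition p1 of scott.sales
expdp scott/tiger directory=dump dumpfile=tabs.dmp tables=scott.emp,scott.sales:p1
```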

24. TABLESPACES
Specifies the list of tablespaces to export.

25. TRANSPORT_FULL_CHECK
This option specifies how dependencies between the transported tablespaces and the non-transported tablespaces are checked. The default is N.
When set to Y, the export checks full two-way dependencies: if a tablespace in the transportable set contains a table whose index lies outside the set, or an index whose table lies outside the set, an error message is displayed. When set to N, the export checks only one-way dependencies: if the tablespace containing an index is moved but the tablespace containing its table is not, an error message is displayed; if the tablespace containing the table is moved but the tablespace containing its index is not, no error message is displayed.

26. TRANSPORT_TABLESPACES
Specifies a transportable-tablespace-mode export.

27. VERSION
Specifies the database version for the exported objects. The default value is COMPATIBLE.
VERSION={COMPATIBLE | LATEST | version_string}
With COMPATIBLE, object metadata is generated according to the COMPATIBLE initialization parameter; with LATEST, it is generated according to the actual database version. version_string specifies a database version string.

Data Pump export steps:
1. Create a directory object
CREATE DIRECTORY dir_dp AS 'D:\ORACLE\DIR_DP';
2. Grant privileges
GRANT READ, WRITE ON DIRECTORY dir_dp TO lttfm;
-- view directories and privileges
SELECT privilege, directory_name, directory_path FROM user_tab_privs t, all_directories d
WHERE t.table_name(+) = d.directory_name ORDER BY 2, 1;
3. Perform the export
expdp lttfm/lttfm@fgisdb schemas=lttfm directory=dir_dp dumpfile=expdp_test1.dmp

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0
With the Partitioning, OLAP and Data Mining options
Starting "LTTFM"."SYS_EXPORT_SCHEMA_01": lttfm/********@fgisdb schemas=lttfm directory=dir_dp dumpfile=expdp_test1.dmp logfile=expdp_test1.log
Notes:
1. DIRECTORY=DIR_DP must be placed before the dump file name; if it is placed last, the export fails with ORA-39002: invalid operation,
ORA-39070: unable to open the log file,
ORA-39087: directory name DATA_PUMP_DIR is invalid.

2. During the export, Data Pump creates and uses an object named SYS_EXPORT_SCHEMA_01; this is the job name used by the Data Pump export. If you do not specify a job name when you run the command, a default job name is generated; if you do specify one, the job runs under the specified name.
For example, change the command to:
expdp lttfm/lttfm@fgisdb schemas=lttfm directory=dir_dp dumpfile=expdp_test1.dmp job_name=my_job1
3. Do not put a semicolon after the export statement; otherwise the job table for the statement above is named 'MY_JOB1;' rather than MY_JOB1. As a result, expdp lttfm/lttfm attach=lttfm.my_job1 keeps reporting that the job table cannot be found.
4. The directory you create must be on the machine where the database resides. Otherwise the export fails with ORA-39002: invalid operation,
ORA-39070: unable to open the log file,
ORA-39087: directory name DATA_PUMP_DIR is invalid.

Commands used during an export:
1) Ctrl+C: during execution, press Ctrl+C to exit the current interactive mode; the export job keeps running after you exit.
2) export> status -- view the status and details of the current job
3) export> stop_job -- pause the job (exits export mode after pausing)
4) Re-enter export mode: C:\Documents and Settings\Administrator>expdp lttfm/lttfm attach=lttfm.my_job1 -- no semicolon after the statement
5) export> start_job -- resume the paused job (it is not restarted from scratch)
6) export> continue_client -- this command restarts "LTTFM"."MY_JOB"
7) export> kill_job -- cancel the current job and release the related client sessions (deleting the job also deletes the dump file)
8) export> exit_client -- exit export mode (you can re-enter it with command 4)
Note: the job unloads itself automatically after the export completes.

The various modes of Data Pump export:
1. Table-mode export:
expdp lttfm/lttfm@fgisdb tables=lttfm.b$i_exch_info,lttfm.b$i_manhole_info dumpfile=expdp_test2.dmp logfile=expdp_test2.log directory=dir_dp job_name=my_job

2. Export by query condition:
expdp lttfm/lttfm@fgisdb tables=lttfm.b$i_exch_info dumpfile=expdp_test3.dmp logfile=expdp_test3.log directory=dir_dp job_name=my_job query='where rownum<11'

3. Tablespace-mode export:
expdp lttfm/lttfm@fgisdb dumpfile=expdp_tablespace.dmp tablespaces=gcomm logfile=expdp_tablespace.log directory=dir_dp job_name=my_job

4. Schema-mode export:
expdp lttfm/lttfm directory=dir_dp dumpfile=schema.dmp schemas=lttfm,gwm

5. Full-database export:
expdp lttfm/lttfm@fgisdb dumpfile=full.dmp full=y logfile=full.log directory=dir_dp job_name=my_job


Exporting data using EXCLUDE and INCLUDE
1. INCLUDE: export only the specified objects of a specified type for a user
-- Export only the LTTFM user's tables whose names start with B, together with table-related indexes, comments, and so on. Other object types, such as procedures, are not included:
expdp lttfm/lttfm@fgisdb schemas=lttfm dumpfile=include_1.dmp logfile=include_1.log directory=dir_dp job_name=my_job include=table:\"like \'B%\'\"

-- Export all of the LTTFM user's tables except those starting with B$:
expdp lttfm/lttfm@fgisdb schemas=lttfm dumpfile=include_1.dmp logfile=include_1.log directory=dir_dp include=table:\"not like \'B$%\'\"

-- Export only the stored procedures under the LTTFM user:
expdp lttfm/lttfm@fgisdb schemas=lttfm dumpfile=include_1.dmp logfile=include_1.log directory=dir_dp include=procedure

2. EXCLUDE: export everything except the specified objects of a specified type
-- Export all objects under the LTTFM user except tables; since the tables are not exported, table-related indexes, constraints, and other object types that depend on the tables are not exported either:
expdp lttfm/lttfm@fgisdb schemas=lttfm dumpfile=exclude_1.dmp logfile=exclude_1.log directory=dir_dp exclude=table

-- Export all of the LTTFM user's tables except those starting with B$:
expdp lttfm/lttfm@fgisdb dumpfile=include_1.dmp logfile=include_1.log directory=dir_dp job_name=my_job exclude=table:\"like \'B$%\'\"

-- Export all objects under the LTTFM user, but for tables export only those starting with B$:
expdp lttfm/lttfm@fgisdb dumpfile=include_1.dmp logfile=include_1.log directory=dir_dp job_name=my_job exclude=table:\"not like \'B$%\'\"


IMPDP Introduction

Many IMPDP command-line options are the same as EXPDP's. The following are specific to IMPDP:
1. REMAP_DATAFILE
This option converts a source data file name to a target data file name; it may be needed when moving tablespaces between different platforms.
REMAP_DATAFILE=source_datafile:target_datafile

2. REMAP_SCHEMA
This option loads all objects of the source schema into the target schema.
REMAP_SCHEMA=source_schema:target_schema
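A sketch with illustrative names: importing the objects exported from schema scott into schema dev:

```shell
impdp system/manager directory=dump dumpfile=scott.dmp remap_schema=scott:dev
```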

3. REMAP_TABLESPACE
Imports all objects of the source tablespace into the target tablespace.
REMAP_TABLESPACE=source_tablespace:target_tablespace

4. REUSE_DATAFILES
This option specifies whether to overwrite existing data files when creating tablespaces. The default is N.
REUSE_DATAFILES={Y | N}

5. SKIP_UNUSABLE_INDEXES
Specifies whether the import skips unusable indexes. The default is N.

6. SQLFILE
Writes the DDL that the import would execute to a SQL script instead of executing it.
SQLFILE=[directory_object:]file_name
impdp scott/tiger directory=dump dumpfile=tab.dmp sqlfile=a.sql

7. STREAMS_CONFIGURATION
Specifies whether to import Streams metadata. The default value is Y.

8. TABLE_EXISTS_ACTION
This option specifies the action the import job takes when a table already exists. The default is SKIP.
TABLE_EXISTS_ACTION={SKIP | APPEND | TRUNCATE | REPLACE}
With SKIP, the import job skips the existing table and processes the next object; with APPEND, it appends the data; with TRUNCATE, it truncates the table and then appends new data; with REPLACE, it drops the existing table, recreates it, and appends the data. Note that the TRUNCATE option does not apply to clustered tables or to the NETWORK_LINK option.
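For example (a sketch with illustrative names), appending the dumped rows to a table that already exists:

```shell
impdp scott/tiger directory=dump dumpfile=tab.dmp tables=emp table_exists_action=append
```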

9. TRANSFORM
This option specifies whether to modify the DDL statements that create the objects.
TRANSFORM=transform_name:value[:object_type]
transform_name specifies the name of the transform: SEGMENT_ATTRIBUTES controls segment attributes (physical attributes, storage attributes, tablespace, logging, and so on), and STORAGE controls segment storage attributes. value specifies whether to include the segment attributes or segment storage attributes, and object_type specifies the object type.
impdp scott/tiger directory=dump dumpfile=tab.dmp transform=segment_attributes:n:table

10. TRANSPORT_DATAFILES
This option specifies the data files to be imported into the target database when transporting tablespaces.
TRANSPORT_DATAFILES=datafile_name
datafile_name specifies the data files to be copied to the target database.
impdp system/manager directory=dump dumpfile=tts.dmp transport_datafiles='/user01/data/tbs1.f'


IMPDP import modes:
1. Import by table
Import the table in the p_street_area.dmp file, which was exported by the GWM user with schemas=gwm:
impdp gwm/gwm@fgisdb dumpfile=p_street_area.dmp logfile=imp_p_street_area.log directory=dir_dp job_name=my_job

2. Import by user (the user's information can be imported directly; that is, even if the user does not yet exist, you can still import directly)
impdp gwm/gwm@fgisdb schemas=gwm dumpfile=expdp_test.dmp logfile=expdp_test.log directory=dir_dp job_name=my_job

3. Import directly over a database link, without generating a dump file via EXPDP:
-- import the table p_street_area from the source database into the target database
impdp gwm/gwm directory=dir_dp network_link=igisdb tables=p_street_area logfile=p_street_area.log
igisdb is the database link between the destination database and the source database; dir_dp is a directory object on the destination database.

4. Remapping tablespaces
Use the REMAP_TABLESPACE parameter.
-- export all data under the GWM user
expdp system/orcl directory=data_pump_dir dumpfile=gwm.dmp schemas=gwm
Note: if the user's data is exported by the SYS or SYSTEM user, the dump includes the user-creation and grant statements; if it is exported by the user itself, it does not include them.
-- import all data under the GWM user into tablespace GCOMM (it was originally in tablespace GMAPDATA)
impdp system/orcl directory=data_pump_dir dumpfile=gwm.dmp remap_tablespace=gmapdata:gcomm


Exp and Imp
Exp keyword description:
Keyword description (default value)
------------------------------
USERID username/password
BUFFER size of the data buffer
FILE output file (EXPDAT.DMP)
COMPRESS compress extents on import (Y)
GRANTS export grants (Y)
INDEXES export indexes (Y)
DIRECT direct path (N) -- direct export is faster
LOG log file of screen output
ROWS export data rows (Y)
CONSISTENT cross-table consistency (N)
FULL export the entire database (N)
OWNER list of owner user names
TABLES list of table names
RECORDLENGTH length of IO record
INCTYPE incremental export type
RECORD track incremental export (Y)
TRIGGERS export triggers (Y)
STATISTICS analyze objects (ESTIMATE)
PARFILE parameter file name
CONSTRAINTS export constraints (Y)
OBJECT_CONSISTENT transaction set to read-only during object export (N)
FEEDBACK show progress every x rows (0)
FILESIZE maximum size of each dump file
FLASHBACK_SCN SCN used to set session snapshot back to a previous state
FLASHBACK_TIME time used to get the SCN closest to the specified time
QUERY select clause used to export a subset of a table
RESUMABLE suspend when a space-related error is encountered (N)
RESUMABLE_NAME text string used to identify the resumable statement
RESUMABLE_TIMEOUT wait time for RESUMABLE
TTS_FULL_CHECK perform full or partial dependency check for TTS
TABLESPACES list of tablespaces to export
TRANSPORT_TABLESPACE export transportable tablespace metadata (N)
TEMPLATE template name that invokes iAS-mode export

Common Exp keywords

1. FULL exports the entire database; combined with ROWS=N, it exports only the structure of the entire database.
e.g.: exp userid=gwm/gwm file=/test.dmp log=test.log full=y rows=n direct=y
2. OWNER and TABLES define the objects exported by exp.
e.g.: exp userid=gwm/gwm file=/test.dmp log=test.log owner=gwm tables=(table1,table2)
3. BUFFER and FEEDBACK: if the exported data volume is large, consider using these two parameters.
e.g.: exp userid=gwm/gwm file=/test.dmp log=test.log feedback=10000 buffer=100000000 tables=(table1,table2)
4. FILE and LOG specify the dump file name and log file name of the backup.
5. COMPRESS specifies whether to compress the extents of the exported data; the default is Y.
6. FILESIZE: if the exported data file is large, use this parameter to limit each file's size, for example to at most 2 GB.
e.g.: exp userid=gwm/gwm file=/test1,test2,test3,test4,test5 filesize=2g log=test.log
This creates test1.dmp, test2.dmp, and so on, each at most 2 GB in size.

IMP keyword description
Keyword description (default value)
------------------------------
USERID username/password
FULL import the entire file (N)
BUFFER size of the data buffer
FROMUSER list of owner user names
FILE input file (EXPDAT.DMP)
TOUSER list of target user names
SHOW only list the contents of the file (N)
TABLES list of table names
IGNORE ignore creation errors (N)
RECORDLENGTH length of IO record
GRANTS import grants (Y)
INCTYPE incremental import type
INDEXES import indexes (Y)
COMMIT commit array insert (N)
ROWS import data rows (Y)
PARFILE parameter file name
LOG log file of screen output
CONSTRAINTS import constraints (Y)

DESTROY overwrite tablespace data files (N)
INDEXFILE write table/index information to the specified file
SKIP_UNUSABLE_INDEXES skip maintenance of unusable indexes (N)
FEEDBACK show progress every x rows (0)
TOID_NOVALIDATE skip validation of specified type IDs
FILESIZE maximum size of each dump file
STATISTICS always import precomputed statistics
RESUMABLE suspend when a space-related error is encountered (N)
RESUMABLE_NAME text string used to identify the resumable statement
RESUMABLE_TIMEOUT wait time for RESUMABLE
COMPILE compile procedures, packages, and functions (Y)
STREAMS_CONFIGURATION import general Streams metadata (Y)
STREAMS_INSTANTIATION import Streams instantiation metadata (N)

The following keywords apply only to transportable tablespaces:
TRANSPORT_TABLESPACE import transportable tablespace metadata (N)
TABLESPACES tablespaces to be transported into the database
DATAFILES data files to be transported into the database
TTS_OWNERS users owning data in the transportable tablespace set

Note: the INCTYPE (incremental import type) parameter has been deprecated since 9i. Experimenting with Oracle 11g shows that the parameter is no longer usable, as the following session demonstrates:
C:\users\thinkpad>imp fyzh_ora/fyzh_ora file=rm_trs_seg.dmp log=rm_trs_seg.log fromuser=ltwebgis inctype=restore

Import: Release 11.1.0.7.0 - Production on Tuesday January 10 22:18:14 2012
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export file created by EXPORT:V10.02.01 via conventional path
Warning: these objects were exported by LTWEBGIS, not by the current user
Import done in ZHS16GBK character set and AL16UTF16 NCHAR character set
IMP-00021: INCTYPE parameter is obsolete
IMP-00083: dump file does not contain an incremental export
IMP-00000: Import terminated unsuccessfully
