Data Pump works really well: not only is the export fast, the dump files also take up little space. An export that takes several hours with the conventional exp tool can be finished in about 10 minutes, and a high compression ratio can be achieved.
I. Instructions for Using EXPDP and IMPDP
Oracle Database 10g introduced Data Pump technology. Data Pump export/import (EXPDP and IMPDP) can be used to:
1) Implement logical backup and logical recovery.
2) Move objects between database schemas.
3) Move objects between databases.
4) Implement tablespace transport.
II. Differences Between Data Pump Export/Import and Traditional Export/Import
Prior to 10g, traditional export and import used the exp and imp tools. Starting with 10g, Oracle not only kept the original exp and imp tools but also introduced the Data Pump export/import tools EXPDP and IMPDP. Keep the following in mind when using EXPDP and IMPDP:
1) exp and imp are client-side tool programs; they can be used either on the client or on the server.
2) EXPDP and IMPDP are server-side tool programs; they can only be used on the Oracle server, not on the client side.
3) imp applies only to files exported with exp, not to files exported with EXPDP; IMPDP applies only to files exported with EXPDP, not to files exported with exp.
Data Pump export supports four modes: table export, schema export, tablespace export, and full database export.
III. EXPDP Parameters
The EXPDP command-line options can be viewed with expdp help=y:
E:\EMP>expdp help=y
Export: Release 10.2.0.1.0 - Production on Sunday, 03 May, 2009 17:54:49
Copyright (c) 2003, 2005, Oracle. All rights reserved.
The Data Pump export utility provides a mechanism for transferring data objects between Oracle databases. The utility is invoked with the following command:
Example: expdp scott/tiger directory=dmpdir dumpfile=scott.dmp
You can control how the export runs by entering various parameters after the expdp command. Each parameter is specified as a keyword:
Format: expdp keyword=value or keyword=(value1,value2,...,valueN)
Example: expdp scott/tiger dumpfile=scott.dmp directory=dmpdir schemas=scott
or tables=(T1:P1,T1:P2), if T1 is a partitioned table
USERID must be the first parameter on the command line.
Keyword description (default)
------------------------------------------------------------------------------
ATTACH                 Attach to an existing job, e.g. ATTACH[=job name].
COMPRESSION            Reduce the size of the dump file contents; valid keyword values are: (METADATA_ONLY) and NONE.
CONTENT                Specifies the data to unload; valid keywords are: (ALL), DATA_ONLY and METADATA_ONLY.
DIRECTORY              Directory object to be used for dump files and log files.
DUMPFILE               List of destination dump files (expdat.dmp), e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ENCRYPTION_PASSWORD    Password key used to create encrypted column data.
ESTIMATE               Calculate job estimates; valid keywords are: (BLOCKS) and STATISTICS.
ESTIMATE_ONLY          Calculate job estimates without performing the export.
EXCLUDE                Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FILESIZE               Specify the size of each dump file in bytes.
FLASHBACK_SCN          SCN used to reset the session snapshot to a previous state.
FLASHBACK_TIME         Time used to find the SCN closest to the specified time.
FULL                   Export the entire database (N).
HELP                   Display help messages (N).
INCLUDE                Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME               Name of the export job to create.
LOGFILE                Log file name (export.log).
NETWORK_LINK           Name of the remote database link to the source system.
NOLOGFILE              Do not write a log file (N).
PARALLEL               Change the number of active workers for the current job.
PARFILE                Specify a parameter file.
QUERY                  Predicate clause used to export a subset of a table.
SAMPLE                 Percentage of data to be exported.
SCHEMAS                List of schemas to export (login schema).
STATUS                 Frequency (secs) at which job status is monitored; the default (0) shows new status when available.
TABLES                 Identifies a list of tables to export - one schema only.
TABLESPACES            Identifies a list of tablespaces to export.
TRANSPORT_FULL_CHECK   Verify the storage segments of all tables (N).
TRANSPORT_TABLESPACES  List of tablespaces from which metadata will be unloaded.
VERSION                Version of objects to export; valid keywords are: (COMPATIBLE), LATEST, or any valid database version.
The following commands are valid in interactive mode.
Note: The use of abbreviations is allowed
Command description
------------------------------------------------------------------------------
ADD_FILE         Add a dump file to the dump file set.
CONTINUE_CLIENT  Return to logging mode; the job will be restarted if idle.
EXIT_CLIENT      Quit the client session and leave the job running.
FILESIZE         Default file size (bytes) for subsequent ADD_FILE commands.
HELP             Summarize interactive commands.
KILL_JOB         Detach from and delete the job.
PARALLEL         Change the number of active workers for the current job. PARALLEL=<number of workers>.
START_JOB        Start or resume the current job.
STATUS           Frequency (secs) at which job status is monitored; the default (0) shows new status when available. STATUS[=interval]
STOP_JOB         Perform an orderly shutdown of the job and exit the client. STOP_JOB=IMMEDIATE shuts the Data Pump job down immediately.
1) ATTACH
This option is used to establish an association between the client session and an existing export job. Its syntax is as follows:
ATTACH=[schema_name.]job_name
schema_name specifies the schema name and job_name specifies the export job name. Note that when you use the ATTACH option, nothing other than the connect string and the ATTACH option itself may be specified on the command line, for example:
expdp scott/tiger attach=scott.export_job
2) CONTENT
This option specifies what to export. The default is ALL.
CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
When CONTENT is set to ALL, both the object definitions and all of their data are exported; DATA_ONLY exports only the object data; METADATA_ONLY exports only the object definitions.
expdp scott/tiger directory=dump dumpfile=a.dump content=metadata_only
3) DIRECTORY
Specifies the directory in which the dump files and log files reside.
DIRECTORY=directory_object
directory_object specifies a directory object name. Note that a directory object is an object created with the CREATE DIRECTORY statement, not an OS directory.
expdp scott/tiger directory=dump dumpfile=a.dump
To create a directory object:
SQL> CREATE DIRECTORY dump_dir AS 'd:\dump';
Directory created.
SQL> GRANT READ, WRITE ON DIRECTORY dump_dir TO scott;
Grant succeeded.
To query the directory objects that have been created:
SELECT * FROM dba_directories;
4) DUMPFILE
Specifies the names of the dump files; the default name is expdat.dmp.
DUMPFILE=[directory_object:]file_name[,...]
directory_object specifies a directory object name and file_name specifies a dump file name. Note that if directory_object is not specified, the export tool automatically uses the directory object given by the DIRECTORY option.
expdp scott/tiger directory=dump1 dumpfile=dump2:a.dmp
5) ESTIMATE
Specifies the method used to estimate the disk space the exported tables will consume. The default is BLOCKS.
ESTIMATE={BLOCKS | STATISTICS}
When set to BLOCKS, Oracle estimates the space an object occupies by multiplying the number of data blocks used by the target objects by the data block size; when set to STATISTICS, the space is estimated from each object's most recent statistics.
expdp scott/tiger tables=emp estimate=statistics directory=dump dumpfile=a.dump
6) ESTIMATE_ONLY
Specifies whether to only estimate the disk space the export job will consume; the default is N.
ESTIMATE_ONLY={Y | N}
When set to Y, the tool only estimates the disk space the objects occupy and does not perform the export; when set to N, it not only estimates the disk space but also performs the export operation.
expdp scott/tiger estimate_only=y nologfile=y
7) EXCLUDE
This option specifies object types or specific objects to be excluded when the operation is performed.
EXCLUDE=object_type[:name_clause][,...]
object_type specifies the type of object to exclude and name_clause specifies the particular objects to exclude. EXCLUDE and INCLUDE cannot be used at the same time.
expdp scott/tiger directory=dump dumpfile=a.dmp exclude=view
8) FILESIZE
Specifies the maximum size of each dump file; the default is 0 (meaning the file size is unlimited).
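For example, the following sketch (the 100M limit and file names are illustrative) splits the export across several files; the %U substitution variable lets Data Pump generate file names such as a01.dmp, a02.dmp as each file reaches the limit:
expdp scott/tiger directory=dump dumpfile=a%U.dmp filesize=100M schemas=scott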
9) FLASHBACK_SCN
Specifies that table data is exported as of a particular SCN.
FLASHBACK_SCN=scn_value
scn_value identifies the SCN. FLASHBACK_SCN and FLASHBACK_TIME cannot be used at the same time.
expdp scott/tiger directory=dump dumpfile=a.dmp flashback_scn=358523
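If you do not know the current SCN, you can look it up first; a quick sketch (requires SELECT access to v$database):
SQL> SELECT current_scn FROM v$database;
The returned value can then be passed to FLASHBACK_SCN as above.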
10) FLASHBACK_TIME
Specifies that table data is exported as of a particular point in time.
FLASHBACK_TIME="TO_TIMESTAMP(time_value)"
expdp scott/tiger directory=dump dumpfile=a.dmp flashback_time="TO_TIMESTAMP('25-08-2004 14:35:00','DD-MM-YYYY HH24:MI:SS')"
11) FULL
Specifies a full database mode export; the default is N.
FULL={Y | N}
When set to Y, a full database export is performed.
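A minimal full-export sketch (assuming the system account and a directory object named dump_dir, as in the usage examples later in this article):
expdp system/manager directory=dump_dir dumpfile=full.dmp full=y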
12) HELP
Specifies whether to display help for the EXPDP command-line options; the default is N.
When set to Y, help for the export options is displayed.
expdp help=y
13) INCLUDE
Specifies the object types and related objects to include in the export.
INCLUDE=object_type[:name_clause][,...]
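For example, the following sketch exports only the EMP and DEPT tables from the scott schema (note that most shells require the quotes in the name_clause to be escaped, so it is often easier to put this option in a parameter file):
expdp scott/tiger directory=dump dumpfile=a.dmp include=TABLE:"IN ('EMP','DEPT')"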
14) JOB_NAME
Specifies the name of the export job to create; the default is SYS_XXX.
JOB_NAME=jobname_string
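Naming the job makes it easier to reattach to it later; a sketch (my_job is an arbitrary name):
expdp scott/tiger directory=dump dumpfile=a.dmp job_name=my_job
You can then reconnect to the running job with expdp scott/tiger attach=my_job.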
15) LOGFILE
Specifies the name of the export log file; the default name is export.log.
LOGFILE=[directory_object:]file_name
directory_object specifies a directory object name and file_name specifies the log file name. If directory_object is not specified, the export tool automatically uses the directory given by the DIRECTORY option.
expdp scott/tiger directory=dump dumpfile=a.dmp logfile=a.log
16) NETWORK_LINK
Specifies a database link name. It must be set when you want to export objects of a remote database to a dump file of the local instance.
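A sketch of the idea (remote_db is a hypothetical database link, which must already exist in the local database, and remote_tns is a hypothetical TNS alias):
SQL> CREATE DATABASE LINK remote_db CONNECT TO scott IDENTIFIED BY tiger USING 'remote_tns';
expdp scott/tiger directory=dump dumpfile=remote.dmp network_link=remote_db tables=emp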
17) NOLOGFILE
This option specifies whether to suppress generation of the export log file; the default is N.
18) PARALLEL
Specifies the number of parallel processes that perform the export operation; the default is 1.
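For example (a sketch: when PARALLEL is greater than 1, the DUMPFILE value should contain the %U substitution variable so that each worker can write to its own file):
expdp scott/tiger directory=dump dumpfile=a%U.dmp parallel=4 schemas=scott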
19) PARFILE
Specifies the name of an export parameter file.
PARFILE=[directory_path]file_name
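A sketch of a parameter file (the name a.par and its contents are illustrative; put one keyword=value pair per line):
directory=dump
dumpfile=a.dmp
tables=emp
Then invoke it with: expdp scott/tiger parfile=a.par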
20) QUERY
Specifies a WHERE condition used to filter the exported data.
QUERY=[schema.][table_name:]query_clause
schema specifies the schema name, table_name the table name, and query_clause the condition clause. The QUERY option cannot be used together with options such as CONTENT=METADATA_ONLY, ESTIMATE_ONLY, and TRANSPORT_TABLESPACES.
expdp scott/tiger directory=dump dumpfile=a.dmp tables=emp query='WHERE deptno=20'
21) SCHEMAS
Specifies a schema mode export; the default is the current login user's schema.
22) STATUS
Specifies the interval, in seconds, at which detailed status of the export job is displayed; the default is 0.
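For example, the following sketch prints the job status every 10 seconds while the export runs:
expdp scott/tiger directory=dump dumpfile=a.dmp status=10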
23) TABLES
Specifies a table mode export.
TABLES=[schema_name.]table_name[:partition_name][,...]
schema_name specifies the schema name, table_name the name of the table to export, and partition_name the name of the partition to export.
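For example (a sketch; the sales table and its p1 partition are illustrative):
expdp scott/tiger directory=dump dumpfile=a.dmp tables=emp,dept
expdp scott/tiger directory=dump dumpfile=a.dmp tables=sales:p1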
24) TABLESPACES
Specifies the list of tablespaces to export.
25) TRANSPORT_FULL_CHECK
This option specifies how dependencies between the tablespaces being transported and the tablespaces not being transported are checked; the default is N.
When set to Y, the export checks two-way dependencies on the tablespace set: if a table is in the set but its index is not, or an index is in the set but its table is not, an error message is displayed. When set to N, only one-way dependencies are checked: if the tablespace containing an index is transported but the tablespace containing its table is not, an error message is displayed; if the tablespace containing a table is transported but the tablespace containing its index is not, no error is displayed.
26) TRANSPORT_TABLESPACES
Specifies a transportable tablespace mode export.
27) VERSION
Specifies the database version of the objects to be exported; the default is COMPATIBLE.
VERSION={COMPATIBLE | LATEST | version_string}
With COMPATIBLE, object metadata is generated according to the COMPATIBLE initialization parameter; with LATEST, it is generated according to the actual database version. version_string specifies a database version string.
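For example, to produce a dump file that an older database can import, a sketch (assuming the target is a 10.1 database):
expdp scott/tiger directory=dump dumpfile=a.dmp version=10.1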
IV. Usage of EXPDP
When using the EXPDP tool, dump files can only be stored in the OS directory that a DIRECTORY object points to; the OS path cannot be specified directly. Therefore, before using EXPDP you must first create a DIRECTORY object, and you need to grant database users permission to use that directory object.
CREATE DIRECTORY dump_dir AS 'c:\emp';
GRANT READ, WRITE ON DIRECTORY dump_dir TO scott;
1. Export a table
expdp scott/tiger directory=dump_dir dumpfile=dept.dmp tables=dept
Note: on UNIX, pay attention to read/write permissions on the OS directory the DIRECTORY object points to. For example:
Check where dump_dir points by querying the dba_directories data dictionary view as the SYS user,
then change the owner of that folder: chown -R oracle:dba /exp, and the problem is solved.
2. Export a schema
expdp scott/tiger directory=dump_dir dumpfile=schema.dmp logfile=schema.log schemas=system
3. Export a tablespace
expdp scott/tiger directory=dump_dir dumpfile=tb.dmp logfile=tb.log tablespaces=users
4. Export the database
expdp system/manager directory=dump_dir dumpfile=full.dmp full=y
expdp scott/tiger directory=dump_dir dumpfile=full.dmp full=y
Note: the scott user does not have the necessary privileges; either grant scott the appropriate privilege or use system to perform the full database export.
SQL> GRANT exp_full_database TO scott;
Grant succeeded.
Then perform the full database export again.
V. IMPDP Parameters
IMPDP parameters that differ from EXPDP:
1. REMAP_DATAFILE
This option converts source datafile names to target datafile names, which may be needed when moving tablespaces between different platforms.
REMAP_DATAFILE=source_datafile:target_datafile
2. REMAP_SCHEMA
This option loads all objects of the source schema into the target schema.
REMAP_SCHEMA=source_schema:target_schema
3. REMAP_TABLESPACE
Imports all objects of the source tablespace into the target tablespace.
REMAP_TABLESPACE=source_tablespace:target_tablespace
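For example (a sketch; tbs_old and tbs_new are hypothetical tablespace names):
impdp system/manager directory=dump dumpfile=a.dmp remap_tablespace=tbs_old:tbs_new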
4. REUSE_DATAFILES
This option specifies whether existing datafiles are overwritten when tablespaces are created; the default is N.
REUSE_DATAFILES={Y | N}
5. SKIP_UNUSABLE_INDEXES
Specifies whether the import skips unusable indexes; the default is N.
6. SQLFILE
Specifies that the DDL the import would execute is written to a SQL script instead of being executed.
SQLFILE=[directory_object:]file_name
impdp scott/tiger directory=dump dumpfile=tab.dmp sqlfile=a.sql
7. STREAMS_CONFIGURATION
Specifies whether to import Streams metadata; the default is Y.
8. TABLE_EXISTS_ACTION
This option specifies the action the import job takes when a table already exists; the default is SKIP.
TABLE_EXISTS_ACTION={SKIP | APPEND | TRUNCATE | REPLACE}
When the option is set to SKIP, the import job skips the existing table and processes the next object; when set to APPEND, data is appended to the table; when set to TRUNCATE, the import job truncates the table and then appends the new data; when set to REPLACE, the import job drops the existing table, recreates it, and appends the data. Note that the TRUNCATE option is not applicable to clustered tables or to imports using the NETWORK_LINK option.
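For example, the following sketch appends the dumped rows to an existing emp table instead of skipping it:
impdp scott/tiger directory=dump dumpfile=tab.dmp tables=emp table_exists_action=append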
9. TRANSFORM
This option specifies whether to modify the DDL statements that create objects.
TRANSFORM=transform_name:value[:object_type]
transform_name specifies the transform name: SEGMENT_ATTRIBUTES identifies segment attributes (physical attributes, storage attributes, tablespace, logging, and so on) and STORAGE identifies segment storage attributes. value specifies whether to include the segment attributes or segment storage attributes, and object_type specifies the object type.
impdp scott/tiger directory=dump dumpfile=tab.dmp transform=segment_attributes:n:table
10. TRANSPORT_DATAFILES
This option specifies the datafiles to be imported into the target database when transporting tablespaces.
TRANSPORT_DATAFILES=datafile_name
datafile_name specifies the datafiles that were copied to the target database.
impdp system/manager directory=dump dumpfile=tts.dmp transport_datafiles='/user01/data/tbs1.f'
VI. Usage of IMPDP
1. Import a table
impdp hsiufo/hsiufo directory=dump_dir dumpfile=full.dmp tables=scott.emp remap_schema=scott:scott
Note: suppose a full database logical backup full.dmp exists and user scott's emp table has since been dropped; the command above imports emp from full.dmp back into the scott schema.
impdp hsiufo/hsiufo directory=dump_dir dumpfile=full.dmp tables=scott.test remap_schema=scott:system
The first command imports the emp table into the scott schema; the second imports the test table into the system schema.
Note that if you import a table into a different schema, you must specify the REMAP_SCHEMA option.
2. Import a schema
impdp hsiufo/hsiufo directory=dump_dir dumpfile=full.dmp schemas=scott
impdp system/manager directory=dump_dir dumpfile=schema.dmp schemas=scott remap_schema=scott:system
3. Import a tablespace
impdp system/manager directory=dump_dir dumpfile=tablespace.dmp tablespaces=user01
4. Import the database
impdp system/manager directory=dump_dir dumpfile=full.dmp full=y