Import (IMPDP) and Export (EXPDP) Using Oracle Data Pump


Data Pump is a technology introduced in Oracle Database 10g that can be 15-45 times faster than the original export/import utilities (exp/imp). The speed increase comes from its use of parallelism to read and write export dump files.

Using EXPDP

When using the EXPDP tool, dump files can only be written to the OS path mapped by a directory object; you cannot point EXPDP directly at an OS directory. So before using EXPDP you must create a directory object and grant database users permission to use it.

First, create the directory object:

SQL> CONNECT / AS SYSDBA

SQL> CREATE OR REPLACE DIRECTORY dir_dump AS '/u01/backup/';

SQL> GRANT read, write ON DIRECTORY dir_dump TO public;

1) Export the entire SCOTT schema

-- By default, EXPDP exports the schema of the login account

$ expdp scott/tiger@db_esuite parfile=/orahome/expdp.par

expdp.par contents:

directory=dir_dump

dumpfile=scott_full.dmp

logfile=scott_full.log

-- When logging in as another account, specify the schema with the SCHEMAS parameter

$ expdp system/oracle@db_esuite parfile=/orahome/expdp.par

expdp.par contents:

directory=dir_dump

dumpfile=scott_full.dmp

logfile=scott_full.log

schemas=scott

2) Export the DEPT and EMP tables in the SCOTT schema

$ expdp scott/tiger@db_esuite parfile=/orahome/expdp.par

expdp.par contents:

directory=dir_dump

dumpfile=scott.dmp

logfile=scott.log

tables=dept,emp

3) Export all tables in the SCOTT schema except EMP

$ expdp scott/tiger@db_esuite parfile=/orahome/expdp.par

expdp.par contents:

directory=dir_dump

dumpfile=scott.dmp

logfile=scott.log

exclude=table:"= 'EMP'"

4) Export the stored procedures in the SCOTT schema

$ expdp scott/tiger@db_esuite parfile=/orahome/expdp.par

expdp.par contents:

directory=dir_dump

dumpfile=scott.dmp

logfile=scott.log

include=procedure

5) Export the tables in the SCOTT schema whose names start with 'E'

$ expdp scott/tiger@db_esuite parfile=/orahome/expdp.par

expdp.par contents:

directory=dir_dump

dumpfile=scott.dmp

logfile=scott.log

include=table:"LIKE 'E%'"

(Change LIKE to NOT LIKE to export the tables that do not begin with E.)

6) Export with a query filter

$ expdp scott/tiger@db_esuite parfile=/orahome/expdp.par

expdp.par contents:

directory=dir_dump

dumpfile=scott.dmp

logfile=scott.log

tables=emp,dept

query=emp:"WHERE empno>=8000"

query=dept:"WHERE deptno>=10 and deptno<=40"

Note: when exporting multiple tables with query filters, check that the filtered data still satisfies the foreign key constraints between the tables. For example, if EMP has a DEPTNO column that references the primary key of DEPT, and a row with EMPNO>=8000 has DEPTNO=50, then the DEPT filter "WHERE deptno>=10 and deptno<=40" will not include the DEPTNO=50 row, and an error will occur at import time.
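For instance, a minimal parameter file (a sketch reusing the object names from the example above) that keeps the two filters consistent on DEPTNO could look like this:

directory=dir_dump

dumpfile=scott.dmp

logfile=scott.log

tables=emp,dept

query=emp:"WHERE deptno>=10 and deptno<=40"

query=dept:"WHERE deptno>=10 and deptno<=40"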

EXPDP Options

1. ATTACH

This option attaches a client session to an existing export job. The syntax is as follows:

ATTACH=[schema_name.]job_name

schema_name specifies the schema name and job_name specifies the export job name. Note that if you use the ATTACH option, you cannot specify any other option on the command line except the connection string, as shown in the following example:

expdp scott/tiger attach=scott.export_job

2. CONTENT

This option specifies what to export. The default value is ALL. The syntax is as follows:

CONTENT={ALL | DATA_ONLY | METADATA_ONLY}

When CONTENT is set to ALL, object definitions and all of their data are exported; with DATA_ONLY, only object data is exported; with METADATA_ONLY, only object definitions are exported, as shown in the following example:

expdp scott/tiger directory=dump dumpfile=a.dump content=metadata_only

3. DIRECTORY

Specifies the directory object in which the dump files and log files are created. The syntax is as follows:

DIRECTORY=directory_object

directory_object specifies the directory object name. Note that a directory object is created with the CREATE DIRECTORY statement; it is not an OS directory path, as shown in the following example:

expdp scott/tiger directory=dump dumpfile=a.dump

Create the directory object:

CREATE DIRECTORY dump AS 'd:\dump';

To list the directory objects that have been created:

SELECT * FROM dba_directories;

4. DUMPFILE

Specifies the names of the dump files; the default name is expdat.dmp. The syntax is as follows:

DUMPFILE=[directory_object:]file_name[,...]

directory_object specifies the directory object name and file_name specifies the dump file name. Note that if you do not specify directory_object, the export tool automatically uses the directory object given by the DIRECTORY option, as shown in the following example:

expdp scott/tiger directory=dump1 dumpfile=dump2:a.dmp

5. ESTIMATE

Specifies the method used to estimate the disk space that the exported tables will occupy. The default value is BLOCKS. The syntax is as follows:

ESTIMATE={BLOCKS | STATISTICS}

When set to BLOCKS, Oracle calculates the space by multiplying the number of data blocks the objects occupy by the block size; when set to STATISTICS, it estimates the space from the most recent statistics, as shown in the following example:

expdp scott/tiger tables=emp estimate=statistics directory=dump dumpfile=a.dump

In general, with the default value (BLOCKS) the estimated size reported in the log is larger than the actual EXPDP file, while STATISTICS gives an estimate closer to the actual size.

6. ESTIMATE_ONLY

Specifies whether to only estimate the disk space that the export job would occupy; the default value is N. The syntax is as follows:

ESTIMATE_ONLY={Y | N}

When set to Y, the export only estimates the disk space the objects would occupy and does not perform the export; when set to N, the space is estimated and the export is actually performed. An example is as follows:

expdp scott/tiger estimate_only=y nologfile=y

7. EXCLUDE

This option specifies the object types or specific objects to exclude when the operation is performed. The syntax is as follows:

EXCLUDE=object_type[:name_clause][,...]

object_type specifies the object type to exclude and name_clause specifies the particular objects to exclude. EXCLUDE and INCLUDE cannot be used at the same time, as shown in the following example:

expdp scott/tiger directory=dump dumpfile=a.dmp exclude=view

The EXPDP help text documents the EXCLUDE and INCLUDE parameters, but the command format described there is problematic; the correct usage is:

exclude=object_type[:name_clause][,...]

include=object_type[:name_clause][,...]

Example:

expdp <other_parameters> schemas=scott exclude=sequence,table:"IN ('EMP','DEPT')"

impdp <other_parameters> schemas=scott include=function,package,procedure,table:"= 'EMP'"

Even this is not quite enough, because the clause contains several special characters; on different operating systems you need escape characters for a command such as the following to run from the command line:

exclude=table:"IN ('Bigtale')"
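One way to avoid the shell-escaping problem altogether (a sketch, reusing the dir_dump directory object from the earlier examples) is to put the EXCLUDE clause in a parameter file, where no escape characters are needed:

directory=dir_dump

dumpfile=scott.dmp

logfile=scott.log

exclude=table:"IN ('EMP','DEPT')"

expdp scott/tiger parfile=/orahome/expdp.par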

8. FILESIZE

Specifies the maximum size of each exported dump file; the default is 0, which means the file size is unlimited.
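A sketch combining FILESIZE with the %U substitution variable so the export is split across several dump files (the 2G size and the file name pattern are assumptions for illustration):

expdp scott/tiger directory=dump dumpfile=scott_%U.dmp filesize=2G logfile=scott.log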

9. FLASHBACK_SCN

Specifies that table data is exported as of a particular SCN. The syntax is as follows:

FLASHBACK_SCN=scn_value

scn_value specifies the SCN value. FLASHBACK_SCN and FLASHBACK_TIME cannot be used at the same time, as shown in the following example:

expdp scott/tiger directory=dump dumpfile=a.dmp flashback_scn=358523

10. FLASHBACK_TIME

Specifies that table data is exported as of a particular point in time. The syntax is as follows:

FLASHBACK_TIME="TO_TIMESTAMP(time_value)"

An example is as follows:

expdp scott/tiger directory=dump dumpfile=a.dmp flashback_time="TO_TIMESTAMP('25-08-2004 14:35:00','DD-MM-YYYY HH24:MI:SS')"

11. FULL

Specifies a full database export; the default is N. The syntax is as follows:

FULL={Y | N}

When set to Y, a full database export is performed.
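A minimal example of a full export (reusing the dump directory object from the examples above; a full export normally requires the EXP_FULL_DATABASE role):

expdp system/manager directory=dump dumpfile=full.dmp full=y logfile=full.log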

12. HELP

Specifies whether to display help for the EXPDP command-line options; the default is N. When set to Y, help for the export options is displayed, as shown in the following example:

expdp help=y

13. INCLUDE

Specifies the object types and related objects to include in the export. The syntax is as follows:

include=object_type[:name_clause][,...]

An example is as follows:

expdp scott/tiger directory=dump dumpfile=a.dmp include=trigger


14. JOB_NAME

Specifies the name of the export job; the default is SYS_XXX. The syntax is as follows:

JOB_NAME=jobname_string

An example is as follows:

expdp scott/tiger directory=dump dumpfile=a.dmp include=trigger job_name=exp_trigger

When you want to temporarily leave an EXPDP task, you can press Ctrl+C to exit the current interactive output; the export does not stop when you do this, which is different from the old exp utility, where leaving interactive mode terminated the export with an error. In Oracle 10g, because an EXPDP job is a task defined inside the database, it is independent of the client. After exiting the interactive output you are placed at the export command prompt, where the STATUS command shows the job status:

Export> status

If you want to stop the job, issue the STOP_JOB command:

Export> stop_job

A prompt appears, such as "Are you sure you wish to stop this job ([yes]/no):". Answer yes or no; answering yes exits the current export session.

You can then attach to the job again from the command line:

expdp test/test@acf attach=expfull

and restart the export with the START_JOB command:

Export> start_job

Export> status

15. LOGFILE

Specifies the name of the export log file; the default name is export.log. The syntax is as follows:

LOGFILE=[directory_object:]file_name

directory_object specifies the directory object name and file_name specifies the export log file name. If directory_object is not specified, the export uses the value of the DIRECTORY option, as shown in the following example:

expdp scott/tiger directory=dump dumpfile=a.dmp logfile=a.log

16. NETWORK_LINK

Specifies a database link name. You must set this option if you want to export objects from a remote database to a dump file on the local instance.

The difference between using a connection string and using NETWORK_LINK in EXPDP:

EXPDP is a server-side tool, while exp is a client-side tool: the files generated by EXPDP are stored on the database server by default, whereas the files generated by exp are stored on the client.

expdp username/password@connect_string -- with this format, the directory object is created in the source database and the generated files are stored on that server.

To put the generated files on the target database server instead of the source, use NETWORK_LINK in EXPDP. For example, to run EXPDP locally against a remote server database, first create a database link on the local database pointing to the remote server, then create the directory object and grant privileges, and then run EXPDP.

A. Create a database link to the remote server

CONNECT aa/aacc

CREATE DATABASE LINK <link_name> CONNECT TO <username> IDENTIFIED BY <password> USING '<connect_string>'; -- username and password are those of the remote (source) server

B. Create the directory object

CONNECT / AS SYSDBA

CREATE OR REPLACE DIRECTORY dir AS '/home/oracle/dbbackup';

GRANT read, write ON DIRECTORY dir TO <username2>;

C. Export

expdp username2/password2 directory=dir network_link=<link_name> ... -- here username2 is the user AA that created the database link, and the directory object is created in the target database

In other words, to run EXPDP locally against the remote server database, first create the database link to the remote server on the local database, then create the directory object and grant privileges, and then run expdp username2/password2 ...

If you want to import directly into a database without generating a dump file, the principle is similar: run IMPDP with NETWORK_LINK so that the EXPDP step is bypassed entirely:

impdp network_link=tolink schemas=link remap_schema=link:link2

17. NOLOGFILE

This option specifies whether to suppress generation of the export log file; the default value is N.
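For example (a one-line sketch reusing the dump directory object from the examples above):

expdp scott/tiger directory=dump dumpfile=a.dmp nologfile=y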

18. PARALLEL

Specifies the number of parallel processes used to perform the export; the default value is 1.
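When raising PARALLEL it is usual to give each worker its own dump file, for example via the %U substitution variable (a sketch; the degree of 4 is an assumption):

expdp scott/tiger directory=dump dumpfile=scott_%U.dmp parallel=4 logfile=scott.log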

19. PARFILE

Specifies the name of a parameter file for the export. The syntax is as follows:

PARFILE=[directory_path:]file_name
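This is the mechanism used by the expdp.par examples in the "Using EXPDP" section above, for example:

expdp scott/tiger parfile=/orahome/expdp.par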

20. QUERY

Specifies a WHERE condition used to filter the exported data. The syntax is as follows:

QUERY=[schema.][table_name:]query_clause

schema specifies the schema name, table_name the table name, and query_clause the restricting condition. The QUERY option cannot be used together with CONTENT=METADATA_ONLY, ESTIMATE_ONLY, TRANSPORT_TABLESPACES, and similar options, as shown in the following example:

expdp scott/tiger directory=dump dumpfile=a.dmp tables=emp query='WHERE deptno=20'

21. SCHEMAS

Specifies a schema-mode export; the default is the schema of the current user.
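For example, to export the SCOTT schema (the same operation shown with a parameter file earlier):

expdp scott/tiger directory=dump dumpfile=scott.dmp schemas=scott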

22. STATUS

Specifies how frequently, in seconds, the detailed status of the export job is displayed; the default value is 0.
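For example, to have the job status refreshed every 30 seconds (the interval is an arbitrary choice for illustration):

expdp scott/tiger directory=dump dumpfile=a.dmp status=30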

23. TABLES

Specifies a table-mode export. The syntax is as follows:

TABLES=[schema_name.]table_name[:partition_name][,...]

schema_name specifies the schema name, table_name the name of the table to export, and partition_name the name of the partition to export.
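For example, to export a single partition of a table (SALES and its partition P1 are hypothetical names used only for illustration):

expdp scott/tiger directory=dump dumpfile=sales_p1.dmp tables=scott.sales:p1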

24. TABLESPACES

Specifies a list of tablespaces to export.
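For example (the USER01 tablespace name is taken from the import examples later in this article):

expdp system/manager directory=dump dumpfile=tbs.dmp tablespaces=user01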

25. TRANSPORT_FULL_CHECK

This option specifies how dependencies between the tablespaces being transported and the tablespaces not being transported are checked; the default is N.

When set to Y, the export checks dependencies in both directions: if a table is inside the transportable set but its index is not, or an index is inside the set but its table is not, an error message is displayed.

When set to N, the export only checks one-way dependencies: if an index is inside the set but its table is not, an error message is displayed; if a table is inside the set but its index is not, no error is reported.

26. TRANSPORT_TABLESPACES

Specifies a transportable-tablespace-mode export.
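A sketch of a transportable tablespace export (the tablespace must be made read only first; the USER01 tablespace and file names are assumptions):

expdp system/manager directory=dump dumpfile=tts.dmp transport_tablespaces=user01 transport_full_check=y logfile=tts.log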

27. VERSION

Specifies the database version of the objects to export; the default value is COMPATIBLE. The syntax is as follows:

VERSION={COMPATIBLE | LATEST | version_string}

With COMPATIBLE, object metadata is generated according to the COMPATIBLE initialization parameter; with LATEST, it is generated according to the actual version of the database. version_string specifies a database version string.
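For example, to produce a dump file that an older database release can import (the 10.1 value is an assumption used for illustration):

expdp scott/tiger directory=dump dumpfile=a.dmp version=10.1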

Using IMPDP

IMPDP accepts many of the same command-line options as EXPDP; the options below are specific to import:

1. REMAP_DATAFILE

This option renames source data file references to target data file names; it may be needed when moving tablespaces between different platforms.

REMAP_DATAFILE=source_datafile:target_datafile
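A sketch (the Windows and Unix file paths are hypothetical; on the command line the quotes may need escaping, so the option is usually placed in a parameter file):

impdp system/manager full=y directory=dump dumpfile=full.dmp remap_datafile='C:\oradata\tbs1.f':'/u01/oradata/tbs1.f'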

2. REMAP_SCHEMA

This option loads all objects of the source schema into the target schema.

REMAP_SCHEMA=source_schema:target_schema

3. REMAP_TABLESPACE

Imports all objects from the source tablespace into the target tablespace.

REMAP_TABLESPACE=source_tablespace:target_tablespace
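A sketch (the tablespace names USERS and USER01 are assumptions for illustration):

impdp scott/tiger directory=dump dumpfile=scott.dmp remap_tablespace=users:user01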

4. REUSE_DATAFILES

This option specifies whether existing data files are overwritten when tablespaces are created; the default is N.

REUSE_DATAFILES={Y | N}

5. SKIP_UNUSABLE_INDEXES

Specifies whether the import skips unusable indexes; the default is N.

6. SQLFILE

Specifies that the import writes the DDL it would execute to a SQL script instead of executing it.

SQLFILE=[directory_object:]file_name

impdp scott/tiger directory=dump dumpfile=tab.dmp sqlfile=a.sql

7. STREAMS_CONFIGURATION

Specifies whether to import Streams metadata; the default value is Y.

8. TABLE_EXISTS_ACTION

This option specifies the action the import job takes when a table being imported already exists; the default is SKIP.

TABLE_EXISTS_ACTION={SKIP | APPEND | TRUNCATE | REPLACE}

When set to SKIP, the import job skips the existing table and moves on to the next object; when set to APPEND, it appends the data to the existing table; when set to TRUNCATE, it truncates the table and then loads the new data; when set to REPLACE, it drops the existing table, recreates it, and then loads the data. Note that the TRUNCATE option cannot be used with clustered tables or with the NETWORK_LINK option.
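For example, to add rows from the dump file to an EMP table that already exists (reusing tab.dmp from the SQLFILE example above):

impdp scott/tiger directory=dump dumpfile=tab.dmp tables=emp table_exists_action=append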

9. TRANSFORM

This option specifies whether to modify the DDL statements that create the objects.

TRANSFORM=transform_name:value[:object_type]

transform_name specifies the name of the transform: SEGMENT_ATTRIBUTES controls segment attributes (physical attributes, storage attributes, tablespace, logging, and so on) and STORAGE controls the segment storage attributes. value specifies whether to include those attributes, and object_type specifies the object type.

impdp scott/tiger directory=dump dumpfile=tab.dmp transform=segment_attributes:n:table

10. TRANSPORT_DATAFILES

This option specifies the data files to be imported into the target database when moving tablespaces.

TRANSPORT_DATAFILES=datafile_name

datafile_name specifies the data files to be copied to the target database.

impdp system/manager directory=dump dumpfile=tts.dmp transport_datafiles='/user01/data/tbs1.f'

Calling IMPDP

1. Import tables

impdp scott/tiger directory=dump_dir dumpfile=tab.dmp tables=dept,emp

impdp system/manager directory=dump_dir dumpfile=tab.dmp tables=scott.dept,scott.emp remap_schema=scott:system

The first command imports the DEPT and EMP tables into the SCOTT schema; the second imports them into the SYSTEM schema.

Note that if you want to import tables into a different schema, you must specify the REMAP_SCHEMA option.

2. Import a schema

impdp scott/tiger directory=dump_dir dumpfile=schema.dmp schemas=scott

impdp system/manager directory=dump_dir dumpfile=schema.dmp schemas=scott remap_schema=scott:system

3. Import a tablespace

impdp system/manager directory=dump_dir dumpfile=tablespace.dmp tablespaces=user01

4. Import a database

impdp system/manager directory=dump_dir dumpfile=full.dmp full=y

-- Granting EXPDP/IMPDP privileges

SQL> GRANT exp_full_database, imp_full_database TO susan;

Appendix:

First, create a logical directory object; this does not create a real directory on the operating system. It is best to do this as an administrator such as SYSTEM.

CREATE DIRECTORY dpdata1 AS 'D:\test\dump';

Second, check the directory objects (and also check that the path exists on the operating system, because Oracle does not verify it when the directory object is created; if it does not exist, an error occurs later).

SELECT * FROM dba_directories;

Third, grant the SCOTT user permission to operate on the directory object, again preferably as SYSTEM or another administrator.

GRANT read, write ON DIRECTORY dpdata1 TO scott;

Fourth, export data.

1) By user (schema)

expdp scott/tiger@orcl schemas=scott dumpfile=expdp.dmp directory=dpdata1

2) With parallel processes

expdp scott/tiger@orcl directory=dpdata1 dumpfile=scott3.dmp parallel=40 job_name=scott3

3) By table name

expdp scott/tiger@orcl tables=emp,dept dumpfile=expdp.dmp directory=dpdata1

4) By query condition

expdp scott/tiger@orcl directory=dpdata1 dumpfile=expdp.dmp tables=emp query='WHERE deptno=20'

5) By tablespace

expdp system/manager directory=dpdata1 dumpfile=tablespace.dmp tablespaces=temp,example

6) Export the entire database

expdp system/manager directory=dpdata1 dumpfile=full.dmp full=y

Fifth, restore (import) data.

1) Into the specified user

impdp scott/tiger directory=dpdata1 dumpfile=expdp.dmp schemas=scott

2) Change the owner of the tables

impdp system/manager directory=dpdata1 dumpfile=expdp.dmp tables=scott.dept remap_schema=scott:system

3) Import a tablespace

impdp system/manager directory=dpdata1 dumpfile=tablespace.dmp tablespaces=example

4) Import a database

impdp system/manager directory=dump_dir dumpfile=full.dmp full=y

5) Append data

impdp system/manager directory=dpdata1 dumpfile=expdp.dmp schemas=system table_exists_action=append

Differences in usage between exp/imp and EXPDP/IMPDP

1: To import the objects of user UserA into user UserB: with imp the syntax is fromuser=usera touser=userb, while with impdp it is remap_schema='UserA':'UserB'. For example:

imp system/passwd fromuser=usera touser=userb file=/oracle/exp.dmp log=/oracle/exp.log

impdp system/passwd directory=expdp dumpfile=expdp.dmp remap_schema='UserA':'UserB' logfile=exp.log

2: To change the tablespace of the imported tables: with exp/imp you have to handle it manually,

with operations such as ALTER TABLE xxx MOVE TABLESPACE tablespace_new.

With impdp you only need remap_tablespace='tablespace_old':'tablespace_new'.

3: To export specific tables: with exp/imp the syntax is tables=('table1','table2','table3'),

while with expdp/impdp it is tables='table1','table2','table3'.

4: Whether to export rows of data:

exp uses rows (rows=y exports the data rows, rows=n does not export the data rows).

expdp uses content (ALL: object definitions plus data rows; DATA_ONLY: only the data rows; METADATA_ONLY: only the object definitions).
