PLSQL_ Data Pump: Importing and Exporting Data with impdp/expdp (Concepts) (Oracle Data Pump Import/Export Tools)

Source: Internet
Author: User

I. Summary

During routine maintenance and database migration, exporting a large database with exp can take hours. Starting with Oracle 10g, expdp can export the same database in far less time, and the resulting file is much smaller.

II. Differences between exp/imp and expdp/impdp

(1) To import user UserA's objects into user UserB: exp/imp uses fromuser=usera touser=userb, while impdp uses remap_schema='UserA':'UserB'.

For example: imp system/passwd fromuser=usera touser=userb file=/oracle/exp.dmp log=/oracle/exp.log;

impdp system/passwd directory=expdp dumpfile=expdp.dmp remap_schema='UserA':'UserB' logfile=exp.log;

(2) To change tablespaces with exp/imp, you have to handle it manually after the import,

with actions such as ALTER TABLE xxx MOVE TABLESPACE tablespace_new.

With impdp you only need remap_tablespace='tablespace_old':'tablespace_new'.

(3) When exporting specific tables, exp/imp uses tables=('table1','table2','table3'),

while expdp/impdp uses tables='table1','table2','table3'.

(4) Whether to export data rows

exp: rows=y exports the data rows, rows=n does not export data rows.

expdp: content (all: object definitions plus data rows; data_only: only the data rows; metadata_only: only the object definitions).
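As a sketch (directory, dump-file, and credential names are illustrative), the three settings look like this:

```shell
# Export object definitions plus all row data (the default)
expdp scott/tiger directory=dpdata1 dumpfile=all.dmp content=all

# Export only the row data, without object DDL
expdp scott/tiger directory=dpdata1 dumpfile=rows.dmp content=data_only

# Export only the object definitions, without any rows
expdp scott/tiger directory=dpdata1 dumpfile=ddl.dmp content=metadata_only
```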

(5) expdp/impdp are new in Oracle 10g and run only on the server side, while exp/imp can run on either client or server.

(6) Oracle 11g introduced deferred segment creation: when a table contains no data, no segment is allocated (to save space), so exp cannot export the empty table. The solution is to use expdp; alternatively, you can set the deferred_segment_creation parameter, or insert a row and then roll it back, but both are more cumbersome.
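If you must stay with exp, the workarounds mentioned above can be sketched as follows (the table name is illustrative):

```sql
-- Option 1: stop deferring segment creation for tables created from now on
ALTER SYSTEM SET deferred_segment_creation = FALSE;

-- Option 2: force a segment to be allocated for an existing empty table,
-- so that exp will pick it up
ALTER TABLE scott.empty_tab ALLOCATE EXTENT;
```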

III. Exporting Data

1. Export procedures (expdp)

(1). By schema (user)

expdp scott/tiger schemas=scott dumpfile=expdp.dmp directory=dpdata1;

(2). With parallel processes (parallel)

expdp scott/tiger directory=dpdata1 dumpfile=scott3.dmp parallel=40 job_name=scott3

(3). By table name

expdp scott/tiger tables=emp,dept dumpfile=expdp.dmp directory=dpdata1;

(4). By query condition

expdp scott/tiger directory=dpdata1 dumpfile=expdp.dmp tables=emp query='WHERE deptno=20';

(5). By tablespace

expdp system/manager directory=dpdata1 dumpfile=tablespace.dmp tablespaces=temp,example;

(6). Full database export

expdp system/manager directory=dpdata1 dumpfile=full.dmp full=y;

2. Parameter reference for export (expdp)

(1). content: specifies what to export. The default is all.

content={all | data_only | metadata_only}

With content=all, object definitions and all their data are exported; with data_only, only the row data is exported; with metadata_only, only the object definitions are exported.

(2). directory: specifies the directory object where the dump and log files are written: directory=directory_object

(3). exclude: specifies object types, or particular objects, to exclude from the operation.

exclude=object_type[:name_clause] [,....]

object_type specifies the type of object to exclude, and name_clause specifies the particular objects to exclude. exclude and include cannot be used at the same time.

expdp scott/tiger directory=dump dumpfile=a.dmp exclude=view

(4). include: include only the specified types in the export

(Examples: include=table_data,

include=table:"like 'tab%'"

include=table:"not like 'tab%'" ...)

exclude: object types excluded from the export (example: exclude=table:"= 'EMP'")

(5). filesize: specifies the maximum size of each dump file, in bytes by default; the default value 0 means no limit on file size.
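For example, to split a large export into pieces of at most 2 GB each (names are illustrative), combine filesize with a %U wildcard in dumpfile so that a new file is opened as each one fills:

```shell
expdp scott/tiger directory=dpdata1 dumpfile=scott_%U.dmp filesize=2GB schemas=scott
```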

(6). job_name: a name for the export job, which makes it easier to track and query (optional).
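A named job can then be monitored from another session, or re-attached to after a disconnect. A sketch, using the scott3 job from the earlier example:

```sql
-- List Data Pump jobs and their current state
SELECT owner_name, job_name, state FROM dba_datapump_jobs;
```

From the command line, expdp scott/tiger attach=scott3 reconnects to the running job.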

(7). flashback_scn: export the table data as of a specific SCN.

flashback_scn=scn_value, where scn_value identifies the SCN. flashback_scn and flashback_time cannot be used at the same time.

expdp scott/tiger directory=dump dumpfile=a.dmp flashback_scn=358523

(8). flashback_time: export the table data as of a specific point in time: flashback_time="to_timestamp(time_value)"

expdp scott/tiger directory=dump dumpfile=a.dmp flashback_time="to_timestamp('25-08-2004 14:35:00','DD-MM-YYYY HH24:MI:SS')"

(9). tablespaces: export the specified tablespaces.

(10). query=[schema.][table_name:]query_clause

schema specifies the schema name, table_name the table name, and query_clause a restricting condition clause. The query option cannot be combined with options such as content=metadata_only, estimate_only, or transport_tablespaces.

expdp scott/tiger directory=dump dumpfile=a.dmp tables=emp query='WHERE deptno=20'

(11). parallel: specifies the number of parallel processes performing the export; the default is 1.

Using the parallel parameter lets the export run with multiple threads, which can significantly speed up the job. Each thread creates a separate dump file, so the dumpfile parameter should list as many files as the degree of parallelism.

Instead of explicitly entering individual file names, you can specify wildcards as file names, for example:

EXPDP ananda/abc123 tables=cases directory=dpdata1 dumpfile=expcases_%u.dmp parallel=4 job_name=Cases_Export

Note: the dumpfile parameter contains the wildcard %u, which means files are created as needed, with names of the form expcases_nn.dmp, where nn starts at 01 and increases as required.

In parallel mode, the status screen displays four worker processes (in the default mode, only one process is visible). All worker processes extract data simultaneously and show their progress on the status screen.

It is important to separate the input/output channels that access the data files and dump the directory file system. Otherwise, the overhead associated with maintaining the data Pump job might outweigh the benefits of parallel threading and therefore degrade performance. Parallel mode is only valid if the number of tables is larger than the parallel value and the table is large.
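Long command lines like these are often easier to keep in a parameter file; a sketch (file name and contents are illustrative):

```shell
$ cat exp_cases.par
directory=dpdata1
dumpfile=expcases_%U.dmp
parallel=4
job_name=Cases_Export
tables=cases

$ expdp ananda/abc123 parfile=exp_cases.par
```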

IV. Importing Data

1. Import procedures (impdp)

(1). Into a specified schema

impdp scott/tiger directory=dpdata1 dumpfile=expdp.dmp schemas=scott;

(2). Change the owner of a table

impdp system/manager directory=dpdata1 dumpfile=expdp.dmp tables=scott.dept remap_schema=scott:system;

(3). Import tablespaces

impdp system/manager directory=dpdata1 dumpfile=tablespace.dmp tablespaces=example;

(4). Import the full database

impdp system/manager directory=dump_dir dumpfile=full.dmp full=y;

(5). Append data to existing tables

impdp system/manager directory=dpdata1 dumpfile=expdp.dmp schemas=system table_exists_action=append;

2. Parameter reference for import (impdp)

(1). table_exists_action={skip | append | truncate | replace}

When this option is set to skip, the import job skips an existing table and moves on to the next object;

when set to append, data is appended to the existing table;

when set to truncate, the import job truncates the table and appends the new data to it;

when set to replace, the import job drops the existing table, recreates it, and appends the data.

Note that the truncate option cannot be used with clustered tables or with the network_link option.

(2). remap_schema

Loads all objects from the source schema into the target schema: remap_schema=source_schema:target_schema

(3). remap_tablespace

Imports all objects from the source tablespace into the target tablespace: remap_tablespace=source_tablespace:target_tablespace

(4). remap_datafile

Converts source data file names to target data file names; this may be needed when moving tablespaces between different platforms.

remap_datafile=source_datafile:target_datafile
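For example, remapping a Windows-style file name to a Unix path during a full import (paths are illustrative; remap_datafile only takes effect when the dump contains DDL that references data files, such as CREATE TABLESPACE statements):

```shell
impdp system/manager directory=dump_dir dumpfile=full.dmp full=y \
  remap_datafile='C:\oracle\users01.dbf':'/u01/oradata/users01.dbf'
```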

V. Case Study: Export data from PROD, then import it into the SIT environment

Step1. Create a logical directory. This does not create a real directory on the operating system; it is best done by an administrative user such as SYSTEM.

Create directory Dpdata as '/home/oracle/expdb ';

Step2. Check the directory in the data dictionary (and verify that it exists on the operating system: Oracle does not check whether the path exists when the directory object is created, and an error occurs later if it does not).

SELECT * from Dba_directories;

Step3. Grant the Scott user read/write permission on the directory; this is best done by an administrative user such as SYSTEM.

Grant read, write on directory dpdata to scott;

Step4. Create a test table in PROD, insert test records, and export them.

CREATE TABLE scott.emp (
  emp_id  NUMBER,
  name    VARCHAR2(20),
  sex     VARCHAR2(2),
  age     NUMBER,
  country VARCHAR2(10),
  salary  NUMBER
);

BEGIN
  INSERT INTO scott.emp VALUES (1, 'baoxinjian1', 'M', 30, 'China', 10000);
  INSERT INTO scott.emp VALUES (1, 'baoxinjian1', 'M', 30, 'China', 20000);
  INSERT INTO scott.emp VALUES (1, 'baoxinjian1', 'M', 30, 'China', 30000);
  INSERT INTO scott.emp VALUES (1, 'baoxinjian1', 'M', 30, 'China', 40000);
  INSERT INTO scott.emp VALUES (1, 'baoxinjian1', 'M', 30, 'China', 50000);
  INSERT INTO scott.emp VALUES (1, 'baoxinjian1', 'M', 30, 'China', 60000);
  INSERT INTO scott.emp VALUES (1, 'baoxinjian1', 'M', 30, 'China', 70000);
  INSERT INTO scott.emp VALUES (1, 'baoxinjian1', 'M', 30, 'China', 80000);
  INSERT INTO scott.emp VALUES (1, 'baoxinjian1', 'M', 30, 'China', 90000);
END;
/
COMMIT;

Step5. Export to a dmp file, to be transferred to SIT

expdp sys/oracle tables=scott.emp dumpfile=empexpdp.dmp directory=dpdata;

Step6. View Export Log Content

Step7. Uploading data files to the SIT environment

Step8. Import the data

impdp sys/oracle directory=dpdata dumpfile=empexpdp.dmp schemas=sys;

Step9. View the import log contents

Step10. Verify that the table was imported successfully


