Oracle 11g Data Pump details


Oracle Database 10g introduced Data Pump technology. Data Pump Export and Import (EXPDP and IMPDP) are used for:

1. Logical backup and logical recovery

2. Moving objects between database users (schemas)

3. Moving objects between databases

4. Tablespace migration

Precautions when using the Data Pump tools EXPDP and IMPDP:

EXP and IMP are client tool programs that can be used either on the client or on the server.

EXPDP and IMPDP are server-side tool programs; they can only be used on the Oracle server, not on the client.

IMP can only read files exported by EXP, not files exported by EXPDP; IMPDP can only read files exported by EXPDP, not files exported by EXP.

1. You can run Data Pump over a database link (NETWORK_LINK), which removes the separate export step and simplifies the operation. Example:

impdp system/manager123 network_link=pai remap_tablespace=DIC_DATA:DICTEST remap_schema=DICTEST_ZH_CN:DICTEST_ZH_CN tables=td_pay_bank,td_common_content table_exists_action=replace
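The NETWORK_LINK parameter refers to an existing database link in the importing database. A minimal sketch of creating such a link is shown below; the link name matches the example above, but the connect user, password, and TNS alias are placeholders, not taken from the original setup:

SQL> create database link pai connect to dictest identified by some_password using 'source_db_tns_alias';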

2. Export modes

1) Export by schema (user)

expdp scott/tiger@orcl schemas=scott dumpfile=expdp.dmp directory=dir logfile=expdp.log

2) Parallel export (PARALLEL)

expdp scott/tiger@orcl directory=dir dumpfile=scott3.dmp parallel=40 job_name=scott3
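With PARALLEL greater than 1, each worker process needs its own dump file, so the DUMPFILE value is normally given with the %U substitution variable; a hedged variant of the command above (the file name pattern and degree of parallelism are illustrative, not from the original):

expdp scott/tiger@orcl directory=dir dumpfile=scott3_%U.dmp parallel=4 job_name=scott3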

3) Export by table name

expdp scott/tiger@orcl tables=emp,dept dumpfile=expdp.dmp directory=dir

4) Export by query condition (QUERY)

expdp scott/tiger@orcl directory=dir dumpfile=expdp.dmp tables=emp query='WHERE deptno=20'

5) Export by tablespace

expdp system/manager directory=dir dumpfile=tablespace.dmp tablespaces=temp,example

6) Export the entire database

expdp system/manager directory=dir dumpfile=full.dmp full=y

3. Export content (CONTENT):

CONTENT: specifies the content to be exported. The default value is ALL.
CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
When CONTENT is ALL, both the object definitions and all of their data are exported; when it is DATA_ONLY, only the object data is exported; when it is METADATA_ONLY, only the object definitions are exported.
Example: export only the table structure

Expdp system/"manager (*) 0202" directory = bak dumpfile = fig = test_nav_zh_tw, test_nav_zh_cn test_nav_en_us, test_nav_ar_sa CONTENT = METADATA_ONLY

4. Directory objects:

Check the created directory objects. If the DIRECTORY parameter is not specified, the default directory DATA_PUMP_DIR is used.

SQL> select * from dba_directories;

OWNER  DIRECTORY_NAME         DIRECTORY_PATH
-----  ---------------------  -------------------------------------------------------------------
SYS    SUBDIR                 /u01/app/oracle/product/11.2.0/db/demo/schema/order_entry/2002/Sep
SYS    SS_OE_XMLDIR           /u01/app/oracle/product/11.2.0/db/demo/schema/order_entry/
SYS    LOG_FILE_DIR           /u01/app/oracle/product/11.2.0/db/demo/schema/log/
SYS    MEDIA_DIR              /u01/app/oracle/product/11.2.0/db/demo/schema/product_media/
SYS    XMLDIR                 /u01/app/oracle/product/11.2.0/db/rdbms/xml
SYS    DATA_FILE_DIR          /u01/app/oracle/product/11.2.0/db/demo/schema/sales_history/
SYS    DATA_PUMP_DIR          /u01/app/oracle/admin/tj01/dpdump/
SYS    ORACLE_OCM_CONFIG_DIR  /u01/app/oracle/product/11.2.0/db/ccr/state
 

Note: The query shows that every directory object is owned by SYS, regardless of which user created it. The directory object DATA_PUMP_DIR is created in the database in advance; if you do not specify the DIRECTORY parameter when running expdp, Oracle uses this default directory. However, to use DATA_PUMP_DIR you must have the EXP_FULL_DATABASE privilege.
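If you prefer your own directory object instead of DATA_PUMP_DIR, a minimal sketch looks like the following; the directory name bak and the path /backup/expdp are assumptions for illustration (the OS directory must already exist and be writable by the Oracle software owner):

SQL> create or replace directory bak as '/backup/expdp';
SQL> grant read, write on directory bak to scott;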
 

5. Less common parameters
1) ESTIMATE_ONLY

Specifies whether to estimate only the disk space occupied by the Export job. The default value is N.

ESTIMATE_ONLY={Y | N}

When set to Y, the export job only estimates the disk space the objects will occupy and does not perform the export; when set to N, the space is estimated and the export is performed as well.

expdp scott/tiger estimate_only=y nologfile=y

2) EXCLUDE

This option specifies the object types or specific objects to be excluded when the operation is executed.

EXCLUDE=object_type[:name_clause] [, ...]

object_type specifies the object type to exclude, and name_clause specifies the particular objects to exclude. EXCLUDE and INCLUDE cannot be used at the same time.

expdp scott/tiger directory=dump dumpfile=a.dmp exclude=view
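To exclude particular objects rather than a whole object type, a name_clause can be added. A sketch with assumed table names (on the command line the double quotes usually have to be escaped, or the parameter placed in a parameter file):

expdp scott/tiger directory=dump dumpfile=a.dmp exclude=TABLE:"IN ('EMP','DEPT')"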

3) FILESIZE

Specifies the maximum size of each dump file. The default value is 0, which means the file size is unlimited.
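FILESIZE is typically combined with the %U substitution variable so that Data Pump can create additional files once the limit is reached. A sketch with assumed file and schema names:

expdp scott/tiger directory=dump dumpfile=a_%U.dmp filesize=2G schemas=scott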

4) FLASHBACK_SCN

Exports table data as of a specific SCN.

FLASHBACK_SCN=scn_value

scn_value identifies the SCN to use. FLASHBACK_SCN and FLASHBACK_TIME cannot be used at the same time.

expdp scott/tiger directory=dump dumpfile=a.dmp flashback_scn=358523
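The SCN to pass can be looked up beforehand, for example with the standard query below (shown here as a hint, not part of the original example):

SQL> select current_scn from v$database;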

5) FLASHBACK_TIME

Exports table data as of a specific point in time.

FLASHBACK_TIME = "TO_TIMESTAMP (time_value )"

expdp scott/tiger directory=dump dumpfile=a.dmp flashback_time="TO_TIMESTAMP('25-08-2004 14:35:00','DD-MM-YYYY HH24:MI:SS')"
 
6. Overwriting one schema with another:

[oracle@rac1 ~]$ impdp system/manager123 network_link=dblink_to_myself remap_tablespace=users:users remap_schema=liuwenhe:liuhe schemas=liuwenhe

If the liuhe user already exists, its privileges and password remain unchanged after the operation completes.

If the liuhe user does not exist, it is created by the operation without any privileges; notably, its password is the same as the liuwenhe user's password.

7. When importing a schema, all objects belonging to that user, including sequences, functions, procedures, primary keys, and indexes, are transferred as well.

[oracle@rac1 ~]$ impdp system/manager123 network_link=dblink_to_myself remap_tablespace=users:users remap_schema=liuwenhe:liuhe tables=liuwenhe.liuwenhe table_exists_action=replace

The primary key and indexes of the liuwenhe.liuwenhe table are transferred as well, with the same names as on liuwenhe.liuwenhe.

8. TABLE_EXISTS_ACTION: this option specifies the action to take when the table already exists. The default value is SKIP.
TABLE_EXISTS_ACTION={SKIP | APPEND | TRUNCATE | REPLACE}

When set to SKIP, the import job skips an existing table and processes the next object. When set to APPEND, data is appended to the existing table. When set to TRUNCATE, the import job truncates the table and then appends the new data. When set to REPLACE, the import job drops the existing table, recreates it, and loads the data. Note that the TRUNCATE option does not apply to cluster tables or to NETWORK_LINK imports.
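For instance, to append rows into tables that already exist in the target database, a sketch with assumed file and table names:

impdp scott/tiger directory=dump dumpfile=a.dmp tables=emp table_exists_action=append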
 
9. Note: an ordinary user needs the following privilege to perform a full database export:
SQL> grant exp_full_database to scott;

10. The REMAP_TABLE parameter

Maps source table data to a different target table.
Eg: impdp orcldev/oracle directory=backup_path dumpfile=maid.dmp remap_table=TAB_TEST:TEST_TB
The data is imported into the TEST_TB table, but the indexes and other dependent objects of the table are not created; you need to create them manually.
 
11. The REMAP_DATAFILE parameter

Syntax: REMAP_DATAFILE=source_datafile:target_datafile
From the Oracle documentation:
Remapping datafiles is useful when you move databases between platforms that have different file naming conventions. The source_datafile and target_datafile names should be exactly as you want them to appear in the SQL statements where they are referenced. Oracle recommends that you enclose datafile names in quotation marks to eliminate ambiguity on platforms for which a colon is a valid file specification character.
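A hedged example of the syntax in a full import; the paths and file names below are assumptions for illustration (on the command line the single quotes usually need escaping, or the parameter is placed in a parameter file):

impdp system/manager directory=dir dumpfile=full.dmp full=y remap_datafile='/u01/oradata/orcl/users01.dbf':'/u02/oradata/orcl/users01.dbf'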

13. The experiment below verifies that data exported earlier can still be restored even after the table structure has changed; the columns that still exist are restored:

1) SQL> desc liuwenhe.liuwenhe;

 Name                                      Null?    Type
 ----------------------------------------- -------- ------------
 X                                         NOT NULL NUMBER(38)
 Y                                                  NUMBER(38)

2) SQL> select * from liuwenhe.liuwenhe;

         X          Y
---------- ----------
         1          3
         3          3
         2          4
         4          5
         5          8
         6          9

6 rows selected.

3) [oracle@rac1 expdp]$ expdp system/manager123 directory=bak dumpfile=hhhf.dmp tables=liuwenhe.liuwenhe

Export: Release 11.2.0.3.0 - Production on Mon Jul 6 11:52:56 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_03":  system/******** directory=bak dumpfile=hhhf.dmp tables=liuwenhe.liuwenhe
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "LIUWENHE"."LIUWENHE"                       5.492 KB       6 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_03" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_03 is:
  /Backup/expdp/hhhf.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_03" successfully completed at 11:53:10

4) SQL> alter table liuwenhe.liuwenhe drop column y;

Table altered.

5) SQL> truncate table liuwenhe.liuwenhe;

Table truncated.

6) [oracle@rac1 expdp]$ impdp system/manager123 directory=bak dumpfile=hhhf.dmp tables=liuwenhe.liuwenhe content=data_only

Import: Release 11.2.0.3.0 - Production on Mon Jul 6 11:55:07 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TABLE_04" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TABLE_04":  system/******** directory=bak dumpfile=hhhf.dmp tables=liuwenhe.liuwenhe content=data_only
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "LIUWENHE"."LIUWENHE"                       5.492 KB       6 rows
Job "SYSTEM"."SYS_IMPORT_TABLE_04" successfully completed at 11:55:13
7) SQL> select * from liuwenhe.liuwenhe;

         X
----------
         1
         2
         3
         4
         5
         6

6 rows selected.

Summary: Oracle Data Pump is a logical backup and recovery tool that operates directly at the logical level; a data-only restore can be thought of as little more than an insert of the exported rows, so if you only export data and then restore it, you do not need to specify the remap_tablespace parameter.
