Expdp and impdp parameter usage


1. Create a logical directory. This command does not create an actual directory on the operating system; it only registers the name inside Oracle. It is best to run it as an administrator such as system.
create directory dpdata1 as '/opt';
2. Check the directories visible to the administrator (and verify that the path also exists on the operating system, because Oracle does not validate it; if the directory does not exist, an error occurs when it is used):
Select * from dba_directories;
3. Grant scott read and write permission on the directory. This grant is best performed as an administrator such as system:
grant read, write on directory dpdata1 to scott;

IV. Export data
1) Export by user (schema)
expdp scott/tiger@orcl schemas=scott dumpfile=expdp.dmp directory=dpdata1
2) Parallel export (PARALLEL)
expdp scott/tiger@orcl directory=dpdata1 dumpfile=scott3.dmp parallel=40 job_name=scott3
3) Export by table
expdp scott/tiger@orcl tables=emp,dept dumpfile=expdp.dmp directory=dpdata1
4) Export by query condition
expdp scott/tiger@orcl directory=dpdata1 dumpfile=expdp.dmp tables=emp query='where deptno=20'
5) Export by tablespace
expdp system/manager directory=dpdata1 dumpfile=tablespace.dmp tablespaces=temp,example
6) Export the entire database
expdp system/manager directory=dpdata1 dumpfile=full.dmp full=y

V. Restore data
1) Import to a specified user
impdp scott/tiger directory=dpdata1 dumpfile=expdp.dmp schemas=scott
2) Change the table owner
impdp system/manager directory=dpdata1 dumpfile=expdp.dmp tables=scott.dept remap_schema=scott:system
3) Import a tablespace
impdp system/manager directory=dpdata1 dumpfile=tablespace.dmp tablespaces=example
4) Import the entire database
impdp system/manager directory=dump_dir dumpfile=full.dmp full=y
5) Append data
impdp system/manager directory=dpdata1 dumpfile=expdp.dmp schemas=system table_exists_action=append
VI. Parameter description: impdp (import)
1. TABLE_EXISTS_ACTION={SKIP | APPEND | TRUNCATE | REPLACE}
When set to SKIP, the import job skips an existing table and moves on to the next object. When set to APPEND, new rows are appended to the existing table. When set to TRUNCATE, the import job truncates the table and then appends the new data. When set to REPLACE, the import job drops the existing table, re-creates it, and then loads the data. Note that TRUNCATE cannot be used with cluster tables or with the NETWORK_LINK option.
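For example, a minimal sketch (reusing the directory and dump file from the examples above) that truncates the existing emp table before reloading its rows:
impdp scott/tiger@orcl directory=dpdata1 dumpfile=expdp.dmp tables=emp table_exists_action=truncate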
2. REMAP_SCHEMA
This option loads all objects from the source schema into the target schema: REMAP_SCHEMA=source_schema:target_schema
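For example, a sketch that loads scott's objects into a hypothetical hr schema:
impdp system/manager directory=dpdata1 dumpfile=expdp.dmp remap_schema=scott:hr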
3. REMAP_TABLESPACE
Imports all objects from the source tablespace into the target tablespace: REMAP_TABLESPACE=source_tablespace:target_tablespace
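For example (the tablespace names users and users2 are illustrative):
impdp system/manager directory=dpdata1 dumpfile=tablespace.dmp remap_tablespace=users:users2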
4. REMAP_DATAFILE
This option converts a source data file name to a target data file name. It may be needed when moving tablespaces between platforms that use different file naming conventions.
REMAP_DATAFILE=source_datafile:target_datafile
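For example, a sketch remapping a Windows data file path to a Unix path during a full import (both paths are hypothetical; values like this are typically placed in a parameter file so the shell does not strip the quotes):
impdp system/manager directory=dpdata1 dumpfile=full.dmp full=y remap_datafile='C:\oradata\users01.dbf':'/u01/oradata/users01.dbf'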

VII. Parameter description: expdp (export)
1. CONTENT: This option specifies what to export. The default value is ALL.
CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
When CONTENT is set to ALL, both the object definitions and all of their data are exported. With DATA_ONLY, only the object data is exported; with METADATA_ONLY, only the object definitions are exported.
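For example, a sketch that exports only scott's object definitions, with no rows (the dump file name meta.dmp is illustrative):
expdp scott/tiger directory=dpdata1 dumpfile=meta.dmp content=metadata_only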
2. DIRECTORY: Specifies the directory where the dump and log files are written: DIRECTORY=directory_object
3. EXCLUDE: This option specifies object types or specific objects to be excluded from the export operation.
EXCLUDE=object_type[:name_clause] [, …]
object_type specifies the object type to exclude, and name_clause specifies the particular objects to exclude. EXCLUDE and INCLUDE cannot be used at the same time.
expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp EXCLUDE=VIEW
4. INCLUDE: Includes only the specified object types in the export.
(Examples: INCLUDE=TABLE_DATA,
INCLUDE=TABLE:"LIKE 'TAB%'",
INCLUDE=TABLE:"NOT LIKE 'TAB%'" …)
EXCLUDE: object types excluded during the export (for example, EXCLUDE=TABLE:"= 'EMP'")
5. FILESIZE: Specifies the maximum size of each dump file. The default value is 0, meaning the file size is unlimited.
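For example, to split the export into pieces of at most 2 GB each, FILESIZE can be combined with the %U wildcard described under PARALLEL below:
expdp scott/tiger directory=dpdata1 dumpfile=scott_%U.dmp filesize=2G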
6. JOB_NAME: The name of the export job, which makes the job easier to track and query (optional).
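A named job can later be re-attached for monitoring or control through the ATTACH parameter; for example, using the scott3 job name from the parallel export above:
expdp scott/tiger attach=scott3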
7. FLASHBACK_SCN: Exports the table data as of a specific SCN.
FLASHBACK_SCN=scn_value; scn_value identifies the SCN. FLASHBACK_SCN and FLASHBACK_TIME cannot be used at the same time.
expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp FLASHBACK_SCN=358523
8. FLASHBACK_TIME: Exports the table data as of a specific point in time: FLASHBACK_TIME="TO_TIMESTAMP(time_value)"
expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp FLASHBACK_TIME="TO_TIMESTAMP('25-08-2004 14:35:00', 'DD-MM-YYYY HH24:MI:SS')"
9. TABLESPACES: Specifies the tablespaces to export.
10. QUERY=[schema.][table_name:]query_clause
schema specifies the schema name, table_name specifies the table name, and query_clause specifies a condition (a WHERE clause). The QUERY option cannot be used together with CONTENT=METADATA_ONLY, ESTIMATE_ONLY, TRANSPORT_TABLESPACES, and certain other options.
expdp scott/tiger directory=dump dumpfile=a.dmp tables=emp query='where deptno=20'
 
11. PARALLEL: Specifies the number of parallel processes that execute the export operation. The default value is 1.
The PARALLEL parameter lets the export run with more than one thread, which can significantly speed up the job. Each thread creates a separate dump file, so the DUMPFILE parameter should name as many files as the degree of parallelism. Instead of entering each file name explicitly, you can specify a wildcard in the file name. For example:
expdp ananda/abc123 tables=CASES directory=DPDATA1 dumpfile=expCASES_%U.dmp parallel=4 job_name=Cases_Export
Note that the dumpfile parameter contains the wildcard %U, which means files are created as needed. They are named expCASES_nn.dmp, where nn starts at 01 and increases as required.
In parallel mode, the status screen shows four worker processes (in the default mode, only one process is visible). All worker processes retrieve data simultaneously and show their progress on the status screen.
It is important to keep the data files and the dump directory on separate I/O channels. Otherwise, the overhead of maintaining the Data Pump job may outweigh the benefit of the parallel threads and degrade performance. Parallelism is effective only when there are more tables than the PARALLEL value and the tables are large.

About the DIRECTORY parameter:
The directory must be created by a user with SYSDBA privileges (see the syntax at the beginning of this article). After creating the directory object, grant read and write permission on it to the other users, and create the corresponding physical directory on the operating system.
It is worth noting that after the database is created, the system already contains several default directories.

SQL> select * from dba_directories;

OWNER DIRECTORY_NAME        DIRECTORY_PATH
----- --------------------- ----------------------------------------------
SYS   ORACLE_OCM_CONFIG_DIR F:\app\Roman\product\11.2.0\dbhome_1/Cr/state
SYS   DATA_PUMP_DIR         F:\app\Roman/admin/orcl/dpdump/
SYS   XMLDIR                C:\ade\aime_dadvfh0169\oracle/rdbms/xml

The directory DATA_PUMP_DIR is the system default: if you do not specify a directory explicitly, the dmp file is placed there. By default, other users are not granted access to this directory.
