An error occurred while importing an Oracle database with impdp. How can this problem be solved?

Source: Internet
Author: User

Note: Some commands have not been verified.

The following problem occurs when imp is used to restore the database: data exported with expdp cannot be imported with the client tool imp. Import the data with impdp instead.

When importing with impdp (impdp EMMS/EMMS123 directory=expdir dumpfile=EMMS2.dmp), the import fails with errors ORA-39002 and ORA-39070; here the ORA-39070 error indicates that the directory object expdir does not exist.

The expdir directory object needs to be re-created as follows:

sqlplus /nolog
SQL> conn / as sysdba

SQL> create or replace directory expdir as 'e:/kxdb';

The directory has been created.
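
After re-creating the directory object, the importing user also needs read and write privileges on it (granting privileges is covered in step 3 below). A minimal sketch of the full fix, assuming the EMMS account and dump file from the failing command above:

SQL> grant read, write on directory expdir to EMMS;
SQL> exit

impdp EMMS/EMMS123 directory=expdir dumpfile=EMMS2.dmp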

Note the following differences between the Data Pump expdp/impdp commands and the classic exp/imp export/import tools:

Precautions when using EXPDP and IMPDP:

EXP and IMP are client tool programs that can be used either on the client or on the server.

EXPDP and IMPDP are server-side tools; they can only be used on the Oracle server, not on the client.

IMP can only read files exported by EXP, not files exported by EXPDP; IMPDP can only read files exported by EXPDP, not files exported by EXP.

When you run expdp or impdp, you can omit the username/password@instance_name credentials and enter them when prompted, for example:

expdp schemas=scott dumpfile=expdp.dmp DIRECTORY=dpdata1

1. Create a logical directory. This command does not create a real directory on the operating system; it is best to run it as an administrator such as SYSTEM.

create directory dpdata1 as 'd:\test\dump';

2. Check the directory in the data dictionary (and check that the path exists on the operating system: Oracle does not verify the path when the directory object is created, and an error occurs later if it does not exist).

Select * from dba_directories;
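
To check a single directory object and the operating-system path it points to, a query such as the following can be used (DPDATA1 is the directory created above; directory names are stored in upper case):

select directory_name, directory_path from dba_directories where directory_name = 'DPDATA1';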

3. Grant scott permission to operate on the specified directory; it is best to perform the grant as an administrator such as SYSTEM.

Grant read, write on directory dpdata1 to scott;
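
The grant can be checked afterwards: directory privileges show up in DBA_TAB_PRIVS with the directory name in the TABLE_NAME column, for example:

select grantee, privilege from dba_tab_privs where table_name = 'DPDATA1';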

4. Export data

1) Export by user (schema)

expdp scott/tiger@orcl schemas=scott dumpfile=expdp.dmp DIRECTORY=dpdata1

2) Export with parallel processes (PARALLEL)

expdp scott/tiger@orcl directory=dpdata1 dumpfile=scott3.dmp parallel=40 job_name=scott3
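
Note that when PARALLEL is greater than 1, each worker process needs its own dump file, so the DUMPFILE value is normally given the %U substitution variable; a sketch of the same job with an assumed degree of 4:

expdp scott/tiger@orcl directory=dpdata1 dumpfile=scott3_%U.dmp parallel=4 job_name=scott3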

3) Export by table name

expdp scott/tiger@orcl TABLES=emp,dept dumpfile=expdp.dmp DIRECTORY=dpdata1

4) Export by query condition

expdp scott/tiger@orcl directory=dpdata1 dumpfile=expdp.dmp tables=emp query='where deptno=20'
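
Operating-system shells often mangle the quotes in QUERY, so placing the options in a parameter file (PARFILE, described in the parameter list below) is a common workaround. A sketch assuming a hypothetical file named emp.par containing:

directory=dpdata1
dumpfile=expdp.dmp
tables=emp
query=emp:"where deptno=20"

The export is then run with:

expdp scott/tiger@orcl parfile=emp.par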

5) Export by tablespace

expdp system/manager DIRECTORY=dpdata1 DUMPFILE=tablespace.dmp TABLESPACES=temp,example

6) Export the entire database

expdp system/manager DIRECTORY=dpdata1 DUMPFILE=full.dmp FULL=y

5. Restore data

1) Import to a specified user (schema)

impdp scott/tiger DIRECTORY=dpdata1 DUMPFILE=expdp.dmp SCHEMAS=scott

2) Change the table owner.

impdp system/manager DIRECTORY=dpdata1 DUMPFILE=expdp.dmp TABLES=scott.dept REMAP_SCHEMA=scott:system

3) Import tablespace

impdp system/manager DIRECTORY=dpdata1 DUMPFILE=tablespace.dmp TABLESPACES=example

4) Import the entire database

impdp system/manager DIRECTORY=dump_dir DUMPFILE=full.dmp FULL=y

5) Append data

impdp system/manager DIRECTORY=dpdata1 DUMPFILE=expdp.dmp SCHEMAS=system TABLE_EXISTS_ACTION=APPEND
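
TABLE_EXISTS_ACTION=APPEND keeps the existing rows and adds the imported ones. Combined with CONTENT=DATA_ONLY (see the parameter list below), it can re-load just the data into tables whose definitions already exist; a sketch using the same dump file:

impdp system/manager DIRECTORY=dpdata1 DUMPFILE=expdp.dmp SCHEMAS=system CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=APPEND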

The parameters are as follows:

ATTACH connects to an existing job, for example ATTACH[=job name].

CONTENT specifies the data to load. Valid keywords: (ALL), DATA_ONLY, and METADATA_ONLY.

DATA_OPTIONS specifies data-layer option flags. The only valid value is SKIP_CONSTRAINT_ERRORS, which makes constraint errors non-fatal.

DIRECTORY is the directory object to use for dump files, log files, and SQL files.

DUMPFILE is the list of dump files to import from (expdat.dmp), for example DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.

ENCRYPTION_PASSWORD is the password used to access encrypted column data. This parameter is not valid for network import jobs.

ESTIMATE calculates job estimates. Valid keywords: (BLOCKS) and STATISTICS.

EXCLUDE excludes specific object types, for example EXCLUDE=TABLE:EMP.

FLASHBACK_SCN sets the session snapshot back to the specified SCN.

FLASHBACK_TIME obtains the SCN closest to the specified time.

FULL imports all objects from the source (Y).

HELP displays the help message (N).

INCLUDE includes specific object types, for example INCLUDE=TABLE_DATA.

JOB_NAME is the name of the import job to create.

LOGFILE is the log file name (import.log).

NETWORK_LINK is the name of the remote database link to the source system.

NOLOGFILE does not write log files.

PARALLEL changes the number of active workers of the current job.

PARFILE specifies the parameter file.

PARTITION_OPTIONS specifies how to transform partitions. Valid keywords: DEPARTITION, MERGE, and (NONE).

QUERY is the predicate clause used to import a subset of a table.

REMAP_DATA specifies a data conversion function, for example REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO.

REMAP_DATAFILE redefines data file references in all DDL statements.

REMAP_SCHEMA loads objects from one schema into another.

REMAP_TABLE remaps table names to another table, for example REMAP_TABLE=EMP.EMPNO:REMAPPKG.EMPNO.

REMAP_TABLESPACE remaps a tablespace object to another tablespace.

REUSE_DATAFILES initializes the tablespace's data files if the tablespace already exists (N).

SCHEMAS is the list of schemas to import.

SKIP_UNUSABLE_INDEXES skips indexes that are set to the Index Unusable state.

SQLFILE writes all SQL DDL statements to the specified file.

STATUS is the frequency (in seconds) at which job status is monitored; with the default value (0), new status is shown when available.

STREAMS_CONFIGURATION enables loading of Streams metadata.

TABLE_EXISTS_ACTION is the action taken when an imported object already exists. Valid keywords: (SKIP), APPEND, REPLACE, and TRUNCATE.

TABLES identifies the list of TABLES to be imported.

TABLESPACES identifies the list of TABLESPACES to be imported.

TRANSFORM applies metadata transformations to applicable objects. Valid transform keywords: SEGMENT_ATTRIBUTES, STORAGE, OID, and PCTSPACE.

TRANSPORTABLE selects whether transportable data movement is used. Valid keywords: ALWAYS and (NEVER). Valid only for NETWORK_LINK mode import operations.

TRANSPORT_DATAFILES is the list of data files to import in transportable mode.

TRANSPORT_FULL_CHECK verifies the storage segments of all tables (N).

TRANSPORT_TABLESPACES is the list of tablespaces from which metadata will be loaded. Valid only for NETWORK_LINK mode import operations.

VERSION is the version of objects to export. Valid keywords: (COMPATIBLE), LATEST, or any valid database version. Valid only for NETWORK_LINK and SQLFILE.
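
As one example from this list, SQLFILE extracts the DDL contained in a dump file without importing anything; a sketch assuming the earlier dump file and a hypothetical output file name ddl.sql (written to the same directory object):

impdp scott/tiger DIRECTORY=dpdata1 DUMPFILE=expdp.dmp SQLFILE=ddl.sql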

The following commands are valid in interactive mode. Note: abbreviations are allowed.

Command description (default)

CONTINUE_CLIENT returns to logging mode; if the job is idle, it is restarted.

EXIT_CLIENT exits the client session and leaves the job running.

HELP summarizes interactive commands.

KILL_JOB detaches from and deletes the job.

PARALLEL changes the number of active workers for the current job: PARALLEL=<number of workers>.

START_JOB starts or resumes the current job.

START_JOB=SKIP_CURRENT skips, before restarting, any action that was in progress when the job stopped.

STATUS[=interval] sets the frequency (in seconds) at which job status is monitored; with the default value (0), new status is shown when available.

STOP_JOB performs an orderly shutdown of the job and exits the client; STOP_JOB=IMMEDIATE shuts down the Data Pump operation immediately.
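
The interactive prompt is reached by pressing Ctrl+C in a running expdp or impdp session, or by re-attaching to a job by name with ATTACH. A minimal sketch, assuming the scott3 export job created above (the same commands work at the impdp Import> prompt):

expdp scott/tiger ATTACH=scott3

Export> STATUS
Export> CONTINUE_CLIENT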
