Oracle Data Pump (expdp/impdp) Error Collection

Source: Internet
Author: User
Tags: xsl


Oracle Data Pump often throws strange errors in day-to-day use. The following summarizes problems encountered while using Data Pump, together with their solutions; new problems will be added here as they come up.

Case 1:

ORA-39065: unexpected master process exception in DISPATCH; ORA-44002: invalid object name

The error appears on both Windows and Linux platforms (original screenshots omitted).

Solution:

[oracle@DB-Server admin]$ oerr ora 39065
 
39065, 00000, "unexpected master process exception in %s"
 
// *Cause: An unhandled exception was detected internally within the master
 
// control process for the Data Pump job. This is an internal error.
 
// messages will detail the problems.
 
// *Action: If problem persists, contact Oracle Customer Support.
 
 
 
 
[oracle@DB-Server admin]$ oerr ora 39097
 
39097, 00000, "Data Pump job encountered unexpected error %s"
 
// *Cause: An unexpected, potentially non-fatal error occurred while
 
// processing a Data Pump job.
 
// *Action: Contact Oracle Customer Support.
 

Run catmet2.sql and utlrp.sql in the $ORACLE_HOME/rdbms/admin directory to resolve this problem.

[oracle@DB-Server ~]$ cd $ORACLE_HOME/rdbms/admin
 
[oracle@DB-Server admin]$ sqlplus / as sysdba
 
SQL*Plus: Release 10.2.0.1.0 - Production on Sun Sep 7 08:24:23 2014
 
Copyright (c) 1982, 2005, Oracle. All rights reserved.
 
Connected to an idle instance.
 
SQL> @catmet2.sql 
 
SQL> @utlrp.sql 
 

Case 2:

ORA-39006: internal error and ORA-39213: Metadata processing is not available errors when using expdp/impdp

 

[oracle@DB-Server backup]$ impdp frnt/frnt directory=dum_dir dumpfile=20120420.FRNT_02.dmp tablespaces=TBS_TR_DATA;

Import: Release 10.2.0.1.0 - 64bit Production on Wednesday, 25 April, 2012 14:41:48

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORA-39006: internal error
ORA-39213: Metadata processing is not available.

 

Solution:

The cause is as follows: Data Pump cannot use the Metadata API because the XSL stylesheets are not set up correctly. Connect as SYSDBA and run dbms_metadata_util.load_stylesheets to reload them.

[oracle@DB-Server admin]$ oerr ora 39213
 
39213, 00000, "Metadata processing is not available"
 
// *Cause: The Data Pump could not use the Metadata API. Typically,
 
// this is caused by the XSL stylesheets not being set up properly.
 
// *Action: Connect AS SYSDBA and execute dbms_metadata_util.load_stylesheets
 
// to reload the stylesheets.
 

SQL> exec dbms_metadata_util.load_stylesheets

Case 3:

The error is as follows:

etl@NMS_ODS_NDB1 [/jkfile/klbtmp] # expdp userid=username/password@tnsname dumpfile=ref.dmp directory=DUMP_TEST schemas=ref content=all logfile=ref.log

Export: Release 10.2.0.3.0-64bit Production on Saturday, 02 March, 2013 10:28:25

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0-64bit Production

With the Partitioning, Real Application Clusters, OLAP and Data Mining options

ORA-39002: invalid operation.

ORA-39070: Unable to open the log file.

ORA-29283: invalid file operation

ORA-06512: at "SYS.UTL_FILE", line 475

ORA-29283: invalid file operation

Solution:

When this error occurs, check the DIRECTORY object. When you create a DIRECTORY, Oracle does not verify that the corresponding operating-system directory exists.

First, look up DUMP_TEST's DIRECTORY_PATH in DBA_DIRECTORIES, then check at the operating-system level whether that directory actually exists.

SQL> SELECT * FROM DBA_DIRECTORIES WHERE DIRECTORY_NAME='DUMP_TEST';
 
OWNER                            DIRECTORY_NAME           DIRECTORY_PATH
 
------------------------ ------------------------------ -----------------
 
SYS                                DUMP_TEST              /jkfile/klbtmp/
 
SQL> 
 

If the operating-system directory has been deleted, recreating it solves the problem.

In another case, a line break accidentally included in the path while creating the DIRECTORY also produces the error above; this is hard to spot, so be careful.

SQL> create directory DUMP_TEST as '/jkfile/klbtmp/

';
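For reference, a corrected recreation of the directory object with the path on a single line (path taken from the query above; the grant to the exporting user is an illustrative assumption):

```
-- Drop the broken directory object and recreate it on one line.
DROP DIRECTORY DUMP_TEST;
CREATE DIRECTORY DUMP_TEST AS '/jkfile/klbtmp/';
-- Grant access to the user running expdp/impdp (user name is an example).
GRANT READ, WRITE ON DIRECTORY DUMP_TEST TO etl;
```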

Case 4:

As follows:

etl@NMS_ODS_NDB1 [/jkfile/work/klb] # expdp userid=etl/sunrise@wgods1 dumpfile=ref.dmp directory=DUMP_DIR schemas=ref content=all logfile=ref.log

Export: Release 10.2.0.3.0-64bit Production on Saturday, 02 March, 2013 10:50:25

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0-64bit Production

With the Partitioning, Real Application Clusters, OLAP and Data Mining options

ORA-31631: privileges are required.

ORA-39109: Unprivileged users may not operate upon other users' schemas

Solution:

The cause is that the etl account lacks the EXP_FULL_DATABASE role; granting EXP_FULL_DATABASE to the etl account solves the problem.

# su - oracle

$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.3.0 - Production on Sat Mar 2 10:58:37 2013

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

Connected:

Oracle Database 10g Enterprise Edition Release 10.2.0.3.0-64bit Production

With the Partitioning, Real Application Clusters, OLAP and Data Mining options

SQL> grant exp_full_database to etl;

Grant succeeded.

Case 5:

etl@NMS_ODS_NDB1 [/etljavascx/etl] expdp etl/sunrise@wgods1 dumpfile=DM.TM_ALARM_LOG.dmp directory=DUMP_DIR tables=DM.TM_ALARM_LOG query='DATE_CD >= 20121201 AND DATE_CD <= 20130131';

LRM-00101: unknown parameter name '>'

Solution:

etl@NMS_ODS_NDB1 [/etljavascx/etl] expdp etl/sunrise@wgods1 dumpfile=DM.TM_ALARM_LOG.dmp directory=DUMP_DIR tables=DM.TM_ALARM_LOG query="DATE_CD \>\= 20121201 AND DATE_CD \<\= 20130131";
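The escaped form works because the shell parses the command line before expdp ever runs: unescaped, `>` is taken as output redirection, while a backslash before an ordinary character inside double quotes is kept verbatim for Data Pump's own parser. A minimal sketch of what the shell actually hands to the program:

```shell
#!/bin/sh
# Simulate the single argument expdp receives for the escaped QUERY clause.
# A POSIX shell keeps backslashes before ordinary characters inside double
# quotes, so \>\= reaches expdp intact and is unescaped there.
set -- query="DATE_CD \>\= 20121201 AND DATE_CD \<\= 20130131"
printf '%s\n' "$1"
# prints: query=DATE_CD \>\= 20121201 AND DATE_CD \<\= 20130131
```

Alternatively, putting the QUERY clause in a parameter file (the PARFILE option) avoids shell escaping altogether.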

Case 6:

LRM-00121: 'data_only' is not an allowable value for 'compression'. As follows:

etl@NMS_ODS_NDB1 [/etljavascx/etl] # expdp etl/sunrise@wgods1 directory=DUMP_DIR dumpfile=DM.TM_ALARM_LOG201212.dmp tables=DM.TM_ALARM_LOG COMPRESSION=DATA_ONLY query='DATE_CD >= 20121201 AND DATE_CD <= 20130131';

LRM-00121: 'data_only' is not an allowable value for 'compression'

Solution:

First, check the EXPDP tool version, as shown below:

etl@NMS_ODS_NDB1 [/etl1_cx/etl] # expdp version

Export: Release 10.2.0.3.0-64bit Production on Monday, 04 March, 2013 14:46:47

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Password:

Note that in Oracle 10g, COMPRESSION has only two options, METADATA_ONLY and NONE; DATA_ONLY only became available in Oracle 11g, which is why the error above is reported. Check the expdp tool version before using version-specific parameters.
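On 10g the same export can still compress metadata; a hedged rewrite of the failing command using a COMPRESSION value that 10g accepts (names carried over from the example above):

```
expdp etl/sunrise@wgods1 directory=DUMP_DIR dumpfile=DM.TM_ALARM_LOG201212.dmp tables=DM.TM_ALARM_LOG COMPRESSION=METADATA_ONLY
```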

Case 7:

[oracle@DB-Server]$ expdp system/***** TABLES=RTY.INV_STK_HD dumpfile=INV_STK_HD.dmp logfile=1.log DIRECTORY=CUR_DUMP_DIR

Export: Release 10.2.0.4.0-Production on Sunday, 14 July, 2013 8:27:16

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Release 10.2.0.4.0-Production

ORA-31626: job does not exist

ORA-31637: cannot create job SYS_EXPORT_TABLE_01 for user SYSTEM

ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95

ORA-06512: at "SYS.KUPV$FT_INT", line 600

ORA-39080: failed to create queues "KUPC$C_1_20130714082716" and "KUPC$S_1_20130714082716" for Data Pump job

ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95

ORA-06512: at "SYS.KUPC$QUE_INT", line 1606

ORA-00832: no streams pool created and cannot automatically create one

Solution:

I have covered this case before; for details, refer to my earlier blog post on the expdp error ORA-00832.
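In short, ORA-00832 means no Streams pool is configured and the instance cannot carve one out automatically (Data Pump uses Advanced Queuing, which needs it). A minimal sketch of the usual fix, assuming manual SGA management; the 48M size is an arbitrary example:

```
-- Size is illustrative; pick a value your SGA can spare.
ALTER SYSTEM SET streams_pool_size = 48M SCOPE = SPFILE;
-- Restart the instance so the new pool takes effect.
SHUTDOWN IMMEDIATE
STARTUP
```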

Case 8:

[oracle@testlnx01 u03]$ cd tmp/

[oracle@testlnx01 tmp]$ expdp system/***** directory=DUMPDIR dumpfile=ESCMUSER.dmp schemas=ESCMUSER logfile=ESCMUSER.log

Export: Release 10.2.0.4.0-64bit Production on Wednesday, 27 August, 2014 16:30:46

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Release 10.2.0.4.0-64bit Production

ORA-31626: job does not exist

ORA-31633: unable to create master table "SYSTEM.SYS_EXPORT_SCHEMA_05"

ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95

ORA-06512: at "SYS.KUPV$FT", line 871

ORA-00959: tablespace 'TOOLS' does not exist

Solution:

This case is a bit special and was confusing when I first ran into it. After some digging, I found that someone had set the SYSTEM user's default tablespace to TOOLS and then dropped that tablespace, which is exactly what produces this error.

SQL> ALTER USER SYSTEM DEFAULT TABLESPACE SYSTEM;

User altered.

After modifying the default tablespace of the user SYSTEM, the problem is solved.


Oracle Database Data Pump Usage Problems

exp hr/hr@ORCL owner=hr file=hr_db_20151117.dmp log=hr_db_20151117.log
imp hr/hr file=hr_db_20151117.dmp log=hr_db_20151117.log full=y

You do not need to export every account in the database; it is enough to export just the one schema you need.

How to import data with Oracle Data Pump

NNC_DATA01
NNC_DATA02
NNC_DATA03
NNC_INDEX01
NNC_INDEX02
NNC_INDEX03
NC uses six tablespaces in total. Create all six tablespaces before importing; you then do not need to worry about tablespace settings during the import.
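A minimal sketch of pre-creating one of the six tablespaces (the datafile path and size are assumptions; repeat with the matching names for the other five):

```
-- Datafile location and size are illustrative only.
CREATE TABLESPACE NNC_DATA01
  DATAFILE '/u01/app/oracle/oradata/orcl/nnc_data01.dbf'
  SIZE 512M AUTOEXTEND ON NEXT 64M;
```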
 
