Some time ago, I helped a friend recover a damaged dmp file of more than 100 GB. I recorded the recovery process and summarize it briefly here.

I. Description

The dmp file was created with exp from an Oracle 9i database. When importing it into an Oracle 11g database, two kinds of errors occurred, as shown below.

(1) During the import, imp hangs on one table for two or three days and the import cannot proceed:
. . importing table "D_xxxxxx"          0 rows imported
. . importing table "D_xxxxx2"          0 rows imported
. . importing table "D_xxxxx3"
................................................................
................................................................
246844 rows imported
. . importing table "D_xxxx4"           0 rows imported
. . importing table "D_xxx5"
................................................................
................................................................

The import has been stuck on table D_xxx5 for several days. (Note: the table names have been blurred.)

(2) During the import, the following error occurs:

IMP-00058: ORACLE error 1403 encountered
ORA-01403: no data found
II. Analysis

(1) Since a dmp file is laid out as "table definition + table data" for each table in sequence, once imp hangs on one table it is difficult to get at the content that follows in the file. I still hoped to salvage something and tried to extract only the table definitions from the dmp file, but unfortunately, just as in the problem description, the imp process hung at the same table:

imp dh/dh file=D:\xxxx.dmp log=D:\dh88888.log rows=n statistics=none indexes=n grants=n constraints=n full=y show=y ignore=y

(2) Other approaches, such as importing only the tables that come after the problem table in the dmp file, or setting event 10046 and similar traces to observe the session, did not work either; imp kept hanging at the same place. Moreover, in the database the imp session showed only the idle wait "SQL*Net message from client" (a quick way to check this is sketched after this list).

(3) Since the file was exported from a 9i database, a 9i database was installed and the import was retried there, but the problem persisted.

(4) It appeared that the dmp file was damaged and there was no other way to skip the hang point, so the only remaining option was to extract the data from the dmp file with a tool.
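The idle wait mentioned in (2) can be checked from another session. A minimal sketch, assuming the imp client appears in v$session with a program name starting with "imp" (the exact filter may differ on your system):

-- find the imp session and see what it is currently waiting on
SELECT sid, event, seconds_in_wait
  FROM v$session
 WHERE program LIKE 'imp%';

If the session sits in "SQL*Net message from client" with seconds_in_wait growing, the server is idle and the imp client itself is stuck, which is exactly what we observed.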
III. Solution

Since the dmp file is more than 100 GB, it could not be copied to my own machine; everything had to be done remotely, so many operations were not recorded. Here I briefly describe the solution, and afterwards reproduce and solve the problem in a simulated experiment.

(1) Import whatever can be imported from the dmp file with imp, as shown below:

imp xxx/xxxx file=D:\xxxx.dmp log=D:\dh.log full=y indexes=n feedback=1024 commit=y ignore=y buffer=1000000 statistics=none

Note: the commit=y parameter is essential here.

(2) With commit=y set, the rows loaded into the hanging table before the hang point are committed in batches and therefore preserved. Note: this is a small trick. In my actual case, the table on which imp hung (the one stuck for days) was very large; it turned out that more than 200 GB of it had already been imported, so we could conclude that not much data remained after the damaged part of the dmp file.

(3) Use a tool to scan all the table definitions in the dmp file and determine which tables were not imported properly.

(4) Extract the data of those tables from the dmp file, save it as text, and load it with sqlldr.

(5) Extract the data of the damaged table as well, compare it with the rows already imported in step (2), remove the duplicates, and import the remainder. (In fact, the damaged table's data was huge; fortunately most of it had already been loaded normally, so after removing the duplicates only a small portion was left, and sqlldr finished it quickly. See the dedup sketch below.)
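The duplicate removal in step (5) can be done inside the database. A minimal sketch, assuming the extracted rows were first loaded with sqlldr into a staging table (hypothetically named D_XXX5_STAGE here) and that the table has no LOB columns, so MINUS can compare whole rows:

-- insert only the extracted rows that are not already in the target table
INSERT INTO d_xxx5
SELECT * FROM d_xxx5_stage
MINUS
SELECT * FROM d_xxx5;

COMMIT;

This pushes the comparison work to the database instead of diffing huge text files by hand.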
IV. Experiment Simulation

(1) Export two tables with exp, as shown below:

E:\> exp dbmon/dbmon file=e:\test.dmp tables=dh_sql,dh_stat

Export: Release 11.2.0.1.0 - Production on Tue Jun 17 10:45:22 2014
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in ZHS16GBK character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path ...
. . exporting table                       DH_SQL      62613 rows exported
. . exporting table                      DH_STAT      72612 rows exported
Export terminated successfully without warnings.
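The article does not show how the two test tables were created. A hypothetical setup that matches the column layout later revealed by the DUL scan (ID NUMBER, NAME VARCHAR2(128), TYPE VARCHAR2(19)); in 11g, dba_objects happens to fit that layout:

-- hypothetical test data, not shown in the original article
CREATE TABLE dh_sql AS
SELECT object_id AS id, object_name AS name, object_type AS type
  FROM dba_objects;

CREATE TABLE dh_stat AS SELECT * FROM dh_sql;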
(2) Corrupt the dmp file (various tools can be used, e.g. dd or ue; a dd sketch follows the output below), then try an imp import to confirm the file is damaged, as shown below:

E:\> imp dbmon/dbmon file=e:\test.dmp fromuser=dbmon touser=dh

Import: Release 11.2.0.1.0 - Production on Tue Jun 17 10:48:57 2014
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Export file created by EXPORT:V11.02.00 via conventional path
import done in ZHS16GBK character set and AL16UTF16 NCHAR character set
. importing DBMON's objects into DH
. . importing table                     "DH_SQL"
illegal lob length marker 49920
bytesread = 00000000000
TABLE = DH_SQL
IMP-00098: INTERNAL ERROR: impgst2
IMP-00008: unrecognized statement in the export file:
IMP-00008: unrecognized statement in the export file:
IMP-00008: unrecognized statement in the export file:
IMP-00008: unrecognized statement in the export file:
IMP-00008: unrecognized statement in the export file:
... (subsequent output omitted) ...
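If a Unix-style dd is available, the damage can be simulated by overwriting a small region in the middle of the file. A sketch (the offset is arbitrary; conv=notrunc keeps the rest of the file intact):

dd if=/dev/zero of=test.dmp bs=4096 seek=700 count=1 conv=notrunc

This zeroes out one 4 KB block at byte offset 700*4096 without changing the file size.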
(3) Use a tool to extract the table definitions from the dmp file, as shown below:

DUL> scan dump file e:\test.dmp;
0: CSET: 852 (ZHS16GBK)
3: SEAL export:V11.02.00
20: USER DBMON
28: TYPE TABLES
2317: TABLE "DH_SQL"
2332: CREATE TABLE "DH_SQL" ("ID" NUMBER, "NAME" VARCHAR2(128), "TYPE" VARCHAR2(19)) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 4194304 NEXT 1048576 MINEXTENTS 1 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "USERS" LOGGING NOCOMPRESS
2597: INSERT INTO "DH_SQL" ("ID", "NAME", "TYPE") VALUES (:1, :2, :3)
2661: BIND information for 3 columns
 col[1] type 2 max length 22
 col[2] type 1 max length 128 cset 852 (ZHS16GBK) form 1
 col[3] type 1 max length 19 cset 852 (ZHS16GBK) form 1
Conventional export
2687: start of table data
2781632: TABLE "DH_STAT"
2781648: CREATE TABLE "DH_STAT" ("ID" NUMBER, "NAME" VARCHAR2(128), "TYPE" VARCHAR2(19)) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 4194304 NEXT 1048576 MINEXTENTS 1 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "USERS" LOGGING NOCOMPRESS
2781914: INSERT INTO "DH_STAT" ("ID", "NAME", "TYPE") VALUES (:1, :2, :3)
2781979: BIND information for 3 columns
 col[1] type 2 max length 22
 col[2] type 1 max length 128 cset 852 (ZHS16GBK) form 1
 col[3] type 1 max length 19 cset 852 (ZHS16GBK) form 1
Conventional export
2782005: start of table data
5898003: CREATE INDEX "IND111" ON "DH_STAT" ("NAME") PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 3145728 NEXT 1048576 MINEXTENTS 1 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "USERS" LOGGING
5898339: CREATE INDEX "IND_2" ON "DH_STAT" ("NAME", "ID") PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 4194304 NEXT 1048576 MINEXTENTS 1 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "USERS" LOGGING
5900617: TABLE "DH_STAT"
5900705: ENDTABLE
5900740: EXIT
5900745: EXIT
(4) Extract the table data from the dmp file, as shown below:

1. Extract the "DBMON"."DH_SQL" table data:

DUL> unexp table "DBMON"."DH_SQL" ("ID" NUMBER, "NAME" VARCHAR2(128), "TYPE" VARCHAR2(19)) dump file e:\test.dmp from 0;
DUL: Error: column 1 length 771 exceeds max bind size 22
0000000000 54455850 4f52543a 5631312e 30322e30  TEXP ORT: V11. 02.0
0000000016 300a5544 424d4f4e 0a525441 424c4553  0.UD BMON .RTA BLES
0000000032 0a323034 380a300a                    .204 8.0.
0: column 1 type NUMBER size 771 failed
0: row 1 failed
row conversion failure, retrying from offset 1
2687: succesful conversion
2687 bytes skipped due to conversion problems
DUL: Error: Bad digit in number 222 (0xdf)
0000000000 c3df03d2                             ....
2897: column 1 type NUMBER size 4 failed
2897: row 7 failed
row conversion failure, retrying from offset 2862
2925: row 7 partial success
2960: succesful conversion
63 bytes skipped due to conversion problems
DUL: Error: column 1 length 3844 exceeds max bind size 22
0000000000 ddf20406 13d05359 535f594f 49443030  .... ..SY S_YO ID00
0000000016 d0303f31 30333034 24040054 59504500  .0?1 0304 $..T YPE.
0000000032 000400c3 0204050c                    .... ....
3109: column 1 type NUMBER size 3844 failed
DUL: Error: column 1 length 21835 exceeds max bind size 22
0000000000 245f524d 47525f50 4c414e5f 54040054  $_RM GR_P LAN_ T..T
0000000016 59504500 000400c3 02050f13 00535953  YPE. .... .... .SYS
0000000032 5f594f49 44303030                    _YOI D000
3109: column 1 type NUMBER size 21835 failed
3109: row 12 failed
row conversion failure, retrying from offset 3085
3144: succesful conversion
35 bytes skipped due to conversion problems
DUL: Error: column 1 length 223 exceeds max bind size 22
0000000000 c302040a 13005359 535f594f 49443030  .... ..SY S_YO ID00
0000000016 3030f031 30333038 24040054 f9d04500  00.1 0308 $..T ..E.
0000000032 df04d0c3 0204f9df                    .... ....
3235: column 1 type NUMBER size 223 failed
3235: row 15 failed
row conversion failure, retrying from offset 3202
3236: row 15 partial success
3337: row 15 partial success
3454: succesful conversion
219 bytes skipped due to conversion problems
Unloaded 62604 rows, end of table marker at 2781631

2. Extract the "DBMON"."DH_STAT" table data:

DUL> unexp table "DBMON"."DH_STAT" ("ID" NUMBER, "NAME" VARCHAR2(128), "TYPE" VARCHAR2(19)) dump file e:\test.dmp from 2781632;
DUL: Error: column 1 length 16724 exceeds max bind size 22
0000000000 424c4520 2244485f 53544154 220a4352  BLE  "DH_ STAT ".CR
0000000016 45415445 20544142 4c452022 44485f53  EATE  TAB LE " DH_S
0000000032 54415422 20282249                    TAT"  ("I
2781632: column 1 type NUMBER size 16724 failed
2781632: row 1 failed
row conversion failure, retrying from offset 2781633
2782005: succesful conversion
373 bytes skipped due to conversion problems
Unloaded 72612 rows, end of table marker at 5898002
DUL: Warning: Recreating file "DBMON_DH_STAT.ctl"
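As the final warning shows, the tool writes the unloaded rows to a data file and generates a SQL*Loader control file per table. The article doesn't show the generated file's contents; a hypothetical sketch of what such a control file typically looks like (file names and delimiters are assumptions):

LOAD DATA
INFILE 'DBMON_DH_STAT.dat'
INTO TABLE "DBMON"."DH_STAT"
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
("ID", "NAME", "TYPE")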
(5) Import the data into the database through sqlldr, as shown below:

1. Delete the existing table data first:

SQL> conn dbmon/dbmon
Connected.
SQL> delete dh_sql;
62613 rows deleted.
SQL> commit;
2. sqlldr import:

E:\test> sqlldr dbmon/dbmon control=e:\test\DBMON_DH_SQL.ctl

SQL*Loader: Release 11.2.0.1.0 - Production on Tue May 1 11:16:43 2007
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Commit point reached - logical record count 64
Commit point reached - logical record count 128
Commit point reached - logical record count 192
Commit point reached - logical record count 256
Commit point reached - logical record count 320
Commit point reached - logical record count 384
Commit point reached - logical record count 448
Commit point reached - logical record count 512
Commit point reached - logical record count 576
Commit point reached - logical record count 640
Commit point reached - logical record count 704
Commit point reached - logical record count 768
Commit point reached - logical record count 832
Commit point reached - logical record count 896
Commit point reached - logical record count 960
Commit point reached - logical record count 1024
... (part omitted) ...
Commit point reached - logical record count 62426
Commit point reached - logical record count 62490
Commit point reached - logical record count 62554
Commit point reached - logical record count 62606

3. Confirm that the data has been imported:

SQL> select count(*) from dh_sql;

  COUNT(*)
----------
     62605

-- fewer than the 62613 rows exported: some data was lost to the corruption.

(6) Process the other tables by following steps (1)-(5).
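To decide which tables still need this treatment in step (6), the current row counts in the database can be compared against the counts in the exp log. A small SQL*Plus helper sketch that generates one COUNT(*) query per table to run and check by hand:

-- generate a count query for every table in the schema
SELECT 'select ''' || table_name || ''', count(*) from ' || table_name || ';' AS cmd
  FROM user_tables;

Any table whose count falls short of the value in the export log is a candidate for re-extraction from the dmp file.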