Common errors and usage of Oracle exp and imp

(1) Database object already exists

In general, you should drop existing tables, sequences, functions/procedures, triggers, and so on before importing the data.

If a database object already exists, an import with the default imp parameters will fail.

If the parameter ignore=y is used, the object-creation errors are ignored and the row data in the exp file is imported anyway.

If the table has a unique key constraint, rows that violate it will not be imported.

If the table has no unique key constraint, duplicate records will result.
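
For example, a minimal sketch of an import into objects that already exist, reusing the dump file name from the backup examples later in this article:

[Oracle@roy orcl]$ imp david/david file=exp_david_090101.dmp fromuser=david touser=david ignore=y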

(2) Database objects have primary/foreign key constraints

Data that does not satisfy the primary/foreign key constraints will fail to import.

Workaround: import the parent (primary) tables first, then the dependent (child) tables.

Alternatively, disable the primary/foreign key constraints on the target objects, and re-enable them after the data has been imported.
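
For example, assuming a child table child_t with a foreign key constraint fk_child_parent (both names are hypothetical):

Sql>alter table child_t disable constraint fk_child_parent;

Run the import, then re-enable the constraint:

Sql>alter table child_t enable constraint fk_child_parent;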

(3) Insufficient privileges

To import user A's data into user B, the user performing the import needs the IMP_FULL_DATABASE role.
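
For example, a DBA can grant the role like this (the user name usera is illustrative):

Sql>grant imp_full_database to usera;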

(4) Storage allocation fails when importing large tables (greater than 80 MB)

By default exp uses compress=y, which consolidates all of the table's data into a single initial extent.

On import, the allocation fails if there is no contiguous chunk of free space large enough for that extent.

When exporting a large table (above 80 MB), remember to specify compress=n and this error will not occur.

(5) imp and exp use different character sets

If the character sets differ, the import fails. You can change the NLS_LANG setting (a UNIX environment variable, or a registry entry on Windows NT) to match the export.

When the import is complete, change it back.
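
For example, on UNIX with bash (the character set value shown is only an illustration; use the one that matches the export):

[Oracle@roy orcl]$ export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
[Oracle@roy orcl]$ imp david/david file=exp_david_090101.dmp fromuser=david touser=david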

(6) imp and exp versions are not compatible

imp can successfully import files generated by a lower-version exp, but it cannot import files generated by a higher-version exp.

Table Mode Backup:

[Oracle@roy orcl]$ exp david/david rows=y indexes=n compress=n buffer=65536 file=exp_tianle_090101.dmp log=exp_tianle_090101.log "tables=(tianle)"

User-mode backup:

[Oracle@roy orcl]$ exp david/david owner=david rows=y indexes=n compress=n buffer=65536 file=exp_david_090101.dmp log=exp_david_090101.log

Full Mode backup:

[Oracle@roy orcl]$ exp david/david rows=y indexes=n compress=n buffer=65536 full=y file=exp_fulldatabase_090101.dmp log=exp_fulldatabase_090101.log

Table Mode Recovery:

[Oracle@roy orcl]$ imp david/david fromuser=david touser=david rows=y indexes=n commit=y buffer=65536 file=exp_tianle_090101.dmp log=imp_tianle_090101.log "tables=(tianle)"

User Mode recovery:

[Oracle@roy orcl]$ imp david/david fromuser=david touser=david rows=y indexes=n commit=y buffer=65536 file=exp_david_090101.dmp log=imp_david_090101.log

Full-Library Mode recovery:

[Oracle@roy orcl]$ imp david/david rows=y indexes=n commit=y full=y ignore=y buffer=65536 file=/tmp/exp_fulldatabase_090101.dmp log=/tmp/imp.log

1.4 Functional differences between exp/imp and expdp/impdp

(1) Importing user usera's objects into user userb: exp/imp uses fromuser=usera touser=userb, while impdp uses remap_schema=usera:userb. For example:

imp system/passwd fromuser=usera touser=userb file=/oracle/exp.dmp log=/oracle/imp.log

impdp system/passwd directory=expdp dumpfile=expdp.dmp remap_schema=usera:userb logfile=imp.log

(2) Remapping tablespaces: with exp/imp you have to handle the move manually, e.g. ALTER TABLE xxx MOVE TABLESPACE tablespace_new. With impdp you only need remap_tablespace=tablespace_old:tablespace_new.
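
For example, a sketch with illustrative tablespace names tbs_old and tbs_new:

[Oracle@roy orcl]$ impdp system/passwd directory=expdp dumpfile=expdp.dmp remap_tablespace=tbs_old:tbs_new logfile=imp.log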

(3) Specifying particular tables: exp/imp uses tables=(table1,table2,table3), while expdp/impdp uses tables='table1','table2','table3'.
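
For example, the two syntaxes side by side (table names are illustrative; the quotes keep the shell from interpreting the parentheses):

[Oracle@roy orcl]$ exp david/david "tables=(table1,table2)" file=exp_tabs.dmp
[Oracle@roy orcl]$ expdp david/david directory=expdp dumpfile=expdp_tabs.dmp tables=table1,table2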

(4) Whether to export row data

exp: rows=y exports row data, rows=n does not export row data.

expdp: the content parameter controls this (all: objects plus row data; data_only: row data only; metadata_only: object definitions only).
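
For example, to export only the object definitions without any row data (directory and file names are illustrative):

[Oracle@roy orcl]$ expdp david/david directory=expdp dumpfile=meta_only.dmp content=metadata_only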

2. Optimization issues in use

2.1 Exp

From the above analysis we know that direct path mode can improve export speed, so use it with exp where possible. This mode has two related parameters: DIRECT and RECORDLENGTH.

The DIRECT parameter defines whether the export uses direct path (direct=y) or conventional path (direct=n). A conventional path export uses SQL SELECT statements to extract data from tables; a direct path export reads data blocks directly from disk into the PGA and writes them straight to the export file, avoiding the data-conversion work in the SQL command-processing layer and greatly improving export efficiency. The larger the data volume, the more pronounced the advantage of direct path: it can be up to three times faster than the conventional method.

The RECORDLENGTH parameter is used in conjunction with direct=y. It defines the size of the export I/O buffer and is similar to the BUFFER parameter used for conventional path exports. It is recommended to set RECORDLENGTH to the maximum I/O buffer size, i.e. 65535 (64 KB). Usage is as follows:

exp userid=system/manager full=y direct=y recordlength=65535 file=exp_full.dmp log=exp_full.log

Some of the restrictions are as follows:

You cannot use the direct=y parameter when exporting in tablespace mode (i.e. when specifying the parameter transport_tablespace=y). You can use the direct=y parameter when exporting in full, user, or table mode (i.e. when specifying full=y, owner=scott, or tables=scott.emp).

--Direct path cannot be used in tablespace mode.

The QUERY parameter applies only to conventional path Export. It cannot be specified in a direct path export (direct=y).

--Direct path does not support the QUERY parameter; QUERY can only be used in conventional path mode.
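
For example, a conventional path export with a row filter, sketched against the familiar scott.emp table (adding direct=y to this command would be rejected):

[Oracle@roy orcl]$ exp scott/tiger tables=emp query=\"where deptno=10\" file=exp_emp10.dmp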

In versions of Export prior to 8.1.5, you could not use direct path Export for tables containing objects and LOBs.

--If the exp version is earlier than 8.1.5, direct path cannot be used to export tables with LOB columns. Databases still on version 8 are very rare now, so this can be ignored.

The BUFFER parameter applies only to conventional path Export and has no effect on a direct path Export. The BUFFER parameter specifies the size (in bytes) of the buffer used to fetch rows; it determines the maximum number of rows in an array fetched by Export. For a direct path Export, use the RECORDLENGTH parameter to specify the size of the buffer that Export uses for writing to the export file.

--The BUFFER option is only valid for conventional path exp and has no effect on a direct path export. For direct path, set the RECORDLENGTH parameter instead.

The RECORDLENGTH parameter specifies the length (in bytes) of the file record. You can use this parameter to specify the size of the Export I/O buffer (the highest value is 65535). Changing the RECORDLENGTH parameter affects only the amount of data that accumulates before being written to disk; it does not affect the operating system file block size. If you do not define this parameter, it defaults to your platform-dependent value for BUFSIZ (1024 bytes in most cases).

Invoking a direct path Export with a maximum I/O buffer of 64 KB can improve the performance of the export by almost 50%. This can be achieved by specifying the additional Export parameters DIRECT and RECORDLENGTH.

For direct path exports, it is recommended to set RECORDLENGTH to 64 KB (65535); this value performs much better. For example:

> exp system/manager file=exp_full.dmp log=exp_full.log full=y direct=y recordlength=65535

> imp system/manager file=exp_full.dmp log=imp_full.log full=y recordlength=65535

2.2 IMP

Importing data into the database with imp takes several times longer than exporting it with exp. In critical moments the import may be part of an emergency database recovery, so reducing downtime by speeding up the import matters. There is no silver bullet for importing a large amount of data quickly, but some appropriate settings can reduce the overall import time.

(1) Avoid I/O contention

Import is an I/O-intensive operation, so avoiding I/O contention speeds it up. If possible, do not import data during the system's peak hours, and do not run jobs or other tasks that may compete for system resources while the import is running.

(2) Increase the sort area

The import process loads the data first and then creates the indexes; the primary key index is created regardless of whether indexes is set to y or n. Creating an index uses the sort area, and when memory is insufficient the sort spills to the temporary tablespace on disk, where it is orders of magnitude slower than an in-memory sort. Increasing the sort area can greatly improve the efficiency of index creation and thereby speed up the import.
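
For example, a sketch assuming manual PGA management (the 100 MB value is illustrative; on 9i and later with automatic PGA management, raise pga_aggregate_target instead):

Sql>alter system set sort_area_size=104857600 deferred;

The DEFERRED clause makes the new value apply to sessions created after the change, such as the session the imp utility opens.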

(3) Adjust the BUFFER option

The imp BUFFER parameter defines the amount of data read from the export file at a time: the larger it is, the fewer reads the import process performs, and the more efficient the import. The appropriate buffer size depends on the application and the size of the database; in general, about 10 MB is sufficient. Usage is as follows:

imp user/pwd fromuser=user1 touser=user2 file=/tmp/imp_db_pipe1 commit=y feedback=10000 buffer=10240000

(4) Use the COMMIT=Y option

commit=y means a commit is performed each time the data buffer is filled, rather than once after each entire table. This greatly reduces the consumption of rollback segments and other resources and helps the import complete smoothly.

(5) Use the INDEXES=N option

As mentioned above under increasing the sort area, the imp process imports the data before creating the indexes. Building user-defined indexes during the import takes a long time, especially when a table has multiple indexes or is particularly large. When the data needs to be imported as quickly as possible and the indexes can be built later, use indexes=n to import only the data without creating indexes, which speeds up the import.

We can use the INDEXFILE option to have imp generate a DDL script for the indexes and then create them manually (see the sketch after the commands below), or we can import in two passes: the first pass imports the data, the second imports the indexes. Usage is as follows:

imp user/pwd fromuser=user1 touser=user2 file=/tmp/imp_db_pipe1 commit=y feedback=10000 buffer=10240000 ignore=y rows=y indexes=n

imp user/pwd fromuser=user1 touser=user2 file=/tmp/imp_index_pipe1 commit=y feedback=10000 buffer=10240000 ignore=y rows=n indexes=y
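
The INDEXFILE variant looks like this (the script name create_indexes.sql is illustrative). With indexfile specified, imp writes the index DDL into the script without importing anything; run the script manually after the data import:

imp user/pwd fromuser=user1 touser=user2 file=/tmp/imp_db_pipe1 indexfile=create_indexes.sql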

(6) Increase LARGE_POOL_SIZE

If parameters such as MTS_SERVICE and MTS_DISPATCHERS are configured in init.ora, and there is no (server=dedicated) setting in tnsnames.ora, the database uses shared server (MTS) mode. In MTS mode, exp/imp operations use the large pool, so it is recommended to increase LARGE_POOL_SIZE to 150 MB.
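
For example, a sketch assuming 9i or later with an spfile (on older versions, edit large_pool_size in init.ora and restart):

Sql>alter system set large_pool_size=150M scope=spfile;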

To check whether the database is in MTS mode:

Sql>select distinct server from v$session;

If the result contains NONE or SHARED, MTS is enabled.
