Performance Tuning of the Oracle exp/imp Export and Import Tools


1 exp Tuning

1.1 Use direct and recordlength options

The direct parameter controls whether the export uses the direct path (direct=y) or the conventional path (direct=n). A conventional path export extracts data from tables with SQL SELECT statements. A direct path export reads data blocks straight from disk into the PGA and writes them to the export file as-is, bypassing the data conversion work in the SQL command-processing layer and greatly improving export efficiency. For large data volumes the advantage is significant: direct path export can be roughly three times faster than the conventional method.

The recordlength parameter is used together with direct=y. It defines the size of the export I/O buffer and plays a role similar to the buffer parameter of a conventional path export. It is recommended to set recordlength to the maximum I/O buffer size, 65535 bytes (64 KB). Usage:

exp userid=system/manager full=y direct=y recordlength=65535 file=exp_full.dmp log=exp_full.log

Direct path export is subject to some restrictions that vary with the Oracle version. The important ones: 8i and earlier versions do not support character set conversion between the export client and the database, so make sure NLS_LANG is set correctly before exporting; 8.1.5 and later versions do not support direct path export of tables containing LOB objects; and the query parameter is not supported.

1.2 Pipe technique

A pipe provides one-way transfer of information from one process to another: the output of one process becomes the input of the next. When the volume of exported data is large, a named pipe lets you generate the final compressed file directly, in roughly the same time as an uncompressed direct export. This solves the problem of insufficient disk space and saves the separate file-compression step; if the export file must then be transferred, it also reduces network transfer time. Compressing a 10 GB file, for example, can easily take more than half an hour. Although the pipe technique cannot shorten the exp/imp run itself, the compression time it saves is considerable. A pipe can be combined with exp as follows:

Data export example:

% mknod /tmp/exp_pipe p # make the pipe

% compress < /tmp/exp_pipe > export.dmp.Z & # compress in the background

% exp file=/tmp/exp_pipe # export to the pipe
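The same pipe-and-compress pattern can be tried with generic tools, with mkfifo/gzip standing in for mknod/compress and a plain printf standing in for exp (file names here are illustrative):

```shell
set -e
mkfifo exp_pipe                         # create the named pipe (like mknod ... p)
gzip < exp_pipe > export.dmp.gz &       # compressor reads the pipe in the background
printf 'row1\nrow2\nrow3\n' > exp_pipe  # stand-in for exp writing to file=exp_pipe
wait                                    # let gzip drain the pipe and exit
gzip -dc export.dmp.gz                  # decompress to verify the round trip
```

The compressor never sees a full uncompressed file on disk; only the compressed output is written, which is exactly why the technique saves both space and a separate compression pass.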

2 imp Optimization

The Oracle import process typically takes several times longer than the corresponding export. In critical situations, an import is often part of emergency database recovery, so speeding it up is essential to reducing downtime. There is no single trick that makes a large import dramatically faster, but a few appropriate settings can shorten the overall import time.

2.1 Pipe technique

The preceding section described how to use a named pipe with exp; a pipe plays the same role during import. It solves the problem of insufficient disk space and saves the time of decompressing the file separately. Using pipes is recommended whenever large amounts of data are imported or exported.

Import data example:
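Mirroring the export case, the import side can read from a named pipe fed by a background decompressor; a sketch (file names such as /tmp/imp_pipe and exp_db.dmp.Z are illustrative):

```shell
% mknod /tmp/imp_pipe p                        # make the pipe
% uncompress < exp_db.dmp.Z > /tmp/imp_pipe &  # decompress into the pipe in the background
% imp file=/tmp/imp_pipe ...                   # import from the pipe
```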

2.2 Avoid I/O contention

Import is an I/O-intensive operation, so avoiding I/O contention speeds it up. If possible, do not import during system peak hours, and do not run jobs or other operations that compete for system resources while the import is running.

2.3 Enlarge the sort area

The Oracle import process loads the data first and then builds indexes. Whether the indexes parameter is set to y or n, the primary key index is always created. Index creation uses the sort area; when memory is insufficient, sorting spills to the temporary tablespace on disk, and disk sorts are orders of magnitude slower than memory sorts. Enlarging the sort area can therefore greatly improve index creation efficiency and speed up the import.

8i and earlier versions: increase sort_area_size before importing; it can be set to 5-10 times its normal value. This parameter affects all sessions, however, and setting it too high can cause paging and swapping if memory runs short. A safer approach for large tables, or tables with many indexes, is to import only the data without the indexes; after the data is in, open a new session, set sort_area_size for that session alone to a sufficiently large value, and create the indexes manually.
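The manual approach described above might look like this (the index and table names are illustrative):

```sql
-- In a fresh session, enlarge the sort area for this session only
ALTER SESSION SET sort_area_size = 100000000;  -- ~100 MB; does not affect other sessions

-- Then build the deferred indexes by hand
CREATE INDEX emp_name_idx ON emp (ename);
```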

9i: with workarea_size_policy=auto, the work areas of all sessions share the memory defined by pga_aggregate_target, and sort_area_size does not need to be set separately. Increase pga_aggregate_target before the import; if the machine has enough memory, raise it from its usual setting to 1-2 GB. pga_aggregate_target can be adjusted dynamically, so the original value can be restored online after the import completes.
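On a 9i database the adjustment can be made and reverted online, for example (the sizes shown are illustrative):

```sql
SHOW PARAMETER pga_aggregate_target;            -- note the current value (SQL*Plus command)

ALTER SYSTEM SET pga_aggregate_target = 1500M;  -- raise it for the import
-- ... run the import ...
ALTER SYSTEM SET pga_aggregate_target = 200M;   -- restore the original value afterwards
```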

2.4 Adjust the buffer option

The imp parameter buffer defines the size of each read from the export file. The larger it is, the more data the import process reads at a time and the more efficient the import. The right buffer size depends on the application and database size; around 10 MB, as in the example below, is generally enough. Usage:

imp user2/pwd fromuser=user1 touser=user2 file=/tmp/imp_db_pipe1 commit=y feedback=10000 buffer=10240000

2.5 Use the commit=y option

commit=y means a commit is issued after each buffer of data rather than once per table. This greatly reduces consumption of rollback segments and other resources, and helps the import complete smoothly.

2.6 Use the indexes=n option

As noted in the section on the sort area, the imp process imports the data first and then creates indexes. Creating user-defined indexes during the import can take a long time, especially when a table has multiple indexes or the table is large. When the data must be available as quickly as possible and the indexes can be built later, use indexes=n to import only the data without creating indexes, which speeds up the import.

You can use the indexfile option to generate a DDL script of the index creation statements and then create the indexes manually. Alternatively, you can run the import twice: the first pass imports only the data, and the second imports only the indexes. Usage:

imp user2/pwd fromuser=user1 touser=user2 file=/tmp/imp_db_pipe1 commit=y feedback=10000 buffer=10240000 ignore=y rows=y indexes=n

imp user2/pwd fromuser=user1 touser=user2 file=/tmp/imp_index_pipe1 commit=y feedback=10000 buffer=10240000 ignore=y rows=n indexes=y
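The indexfile alternative mentioned above writes the index DDL to a script without importing anything; the script can then be edited and run in SQL*Plus after the data load. A sketch (the script name create_idx.sql is illustrative):

```shell
% imp user2/pwd fromuser=user1 touser=user2 file=/tmp/imp_db_pipe1 rows=n indexfile=create_idx.sql
```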

2.7 Increase large_pool_size

If parameters such as mts_servers and mts_dispatchers are configured in init.ora, and tnsnames.ora has no (server=dedicated) entry, the database runs in shared server (MTS) mode. In MTS mode, exp/imp operations use the large pool, so it is advisable to increase large_pool_size, for example to 150 MB.

Check whether the database is in MTS mode:

SQL> select distinct server from v$session;

If the result includes NONE or SHARED, the database is running in MTS mode.
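If the instance is in MTS mode, the large pool can then be enlarged; on 9i with an spfile this can be done online, while on 8i the parameter must be set in init.ora and the instance restarted (the value shown is illustrative):

```sql
ALTER SYSTEM SET large_pool_size = 150M;  -- 9i/spfile; for 8i, edit init.ora and restart
```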
