Using UNIX Pipes to Solve Large-Volume Oracle exp/imp Problems

Source: Internet
Author: User
Tags: oracle, database

With exp, data is unloaded from the database into a .dmp file on the file system; when needed, imp loads the data back into the database. For a typical small or medium database, a full-database exp may produce a .dmp file smaller than 2GB, but for a slightly larger database exp can generate tens to hundreds of gigabytes of data. Most operating systems currently in use are 32-bit, and the largest file their file systems allow is 2GB, so the file system simply cannot hold the data exp generates. That is the first problem. The second is that as the database grows, exp takes longer and longer and eventually becomes impractical to run. This article discusses countermeasures for both problems.

Let's first take UNIX as an example and see how to get past the 2GB limit. This relies on common UNIX facilities: named pipes, the data-copying tool dd, and the compression tool compress. Each is discussed below.

• A pipe is a pseudo-file. It lives in memory, which makes its I/O fast. The pipe's buffer works first-in, first-out (FIFO): the writing process writes at the head of the buffer and the reading process reads from the tail. A named pipe is created with "mknod filename p".
• dd copies data from one device or file to another.
• compress is the standard UNIX data-compression tool.
Before running exp, we can check how large the resulting .dmp file would be. The steps are:
1. Create the pipe:
$ mknod /tmp/mypipe p
2. Run exp with its output directed to the pipe, in the background:
$ exp file=/tmp/mypipe ...other exp options... &
3. Read the pipe and throw the data away, just to count it:
$ dd if=/tmp/mypipe of=/dev/null
The figures dd reports give the size the exp (.dmp) file would have, counted in 512-byte blocks.
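The three steps above can be exercised end to end. Below is a runnable sketch of the same plumbing; since no Oracle client is assumed to be present, `head -c` stands in as a hypothetical data source in place of `exp file=<pipe> ...`:

```shell
#!/bin/sh
# Measure how big a dump would be without ever storing it: a writer fills
# the named pipe while dd drains it to /dev/null and counts the data.
# `head -c` is a stand-in for the real exp process (assumption).
set -e
dir=$(mktemp -d)
mkfifo "$dir/mypipe"                    # equivalent to: mknod "$dir/mypipe" p

# stand-in for: exp file="$dir/mypipe" ...other exp options... &
head -c 1048576 /dev/zero > "$dir/mypipe" &

# dd discards the data but reports how much came through the pipe
bytes=$(dd if="$dir/mypipe" of=/dev/null bs=512 2>&1 | tail -1)
echo "dd summary: $bytes"
```

The same idea works unchanged with a real exp in place of the `head -c` writer.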
Now we can look at the concrete methods:
1. File compression
2. Direct exp to tape
3. Integrated approach
First, the file-compression method. By compressing, we can usually keep the .dmp file from exceeding the 2GB limit. Using the UNIX pipe technique, the steps are as follows:
1. Create the pipe and start a compress process in the background that reads from the pipe and writes the compressed data to a disk file:
$ mknod /tmp/exp_pipe p
$ compress < /tmp/exp_pipe > export.dmp.Z &
2. exp to the pipe:
$ exp file=/tmp/exp_pipe ... &
imp is the mirror image: uncompress writes into the pipe and imp reads from it.
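Since the import side mirrors the export, here is a minimal runnable sketch of the import-side plumbing. Two assumptions: `gzip`/`gunzip` replace `compress`/`uncompress` (compress is absent on many modern systems), and `cat` stands in for the real `imp file=<pipe>` reader:

```shell
#!/bin/sh
# Import-side plumbing: the decompressor writes into the pipe while the
# consumer (imp in real life, cat here as a stand-in) reads from it.
set -e
dir=$(mktemp -d)
printf 'pretend this is dump data\n' > "$dir/export.dmp"
gzip -c "$dir/export.dmp" > "$dir/export.dmp.gz"

mkfifo "$dir/imp_pipe"                        # same as: mknod imp_pipe p
# real use: imp userid=... file="$dir/imp_pipe" &
cat "$dir/imp_pipe" > "$dir/restored.dmp" &
gunzip -c "$dir/export.dmp.gz" > "$dir/imp_pipe"
wait
cmp -s "$dir/export.dmp" "$dir/restored.dmp" && echo "pipe import OK"
```

The consumer must already be reading the pipe (in the background) before the decompressor starts writing, otherwise the writer blocks waiting for a reader.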
Next, exporting directly to tape:
$ exp file=/dev/rmt/0m volsize=4g ...
$ imp file=/dev/rmt/0m volsize=4g ...
Finally, exp to a raw device:
$ exp file=/dev/rdsk/c0t3d0s0 volsize=4g
imp is similar. All of these methods let us avoid the 2GB limit.
The following discusses how to shorten the time exp takes. Many sites exp straight to tape only because there is not enough free disk space, and most of the elapsed time of such an export is spent writing the tape. Using the UNIX tools discussed above, we can exp to a pipe first, compress the pipe's data, and only then write it to tape. This greatly reduces the amount of data written to tape and so shortens the tape-writing time considerably. On the Oracle side, starting with release 7.3 Oracle supports direct path export (exp direct=y), which skips Oracle's SQL evaluation layer and so runs faster. Oracle also provides single-task versions of the tools, which are built with:
$ make -f $ORACLE_HOME/rdbms/lib/oracle.mk expst
$ make -f $ORACLE_HOME/rdbms/lib/oracle.mk impst
The resulting executables, expst (exp single task) and impst, are used exactly the same way as exp/imp and can save up to 30% of the run time. However, while expst/impst are running, the Oracle database must not be used by any other user, or the database may behave unpredictably or even be damaged.
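The exp-to-pipe, compress, then write-to-tape idea described above can be sketched as follows. A regular file stands in for the tape device /dev/rmt/0m and `head -c` for exp (both assumptions, since neither a tape drive nor an Oracle client is assumed present):

```shell
#!/bin/sh
# exp -> pipe -> compress -> tape: only the compressed stream reaches the
# tape, so far less data is written and the tape pass is much shorter.
set -e
dir=$(mktemp -d)
tape="$dir/fake_tape"                 # real use: tape=/dev/rmt/0m
mkfifo "$dir/exp_pipe"

# background: compress whatever arrives in the pipe and dd it onto the "tape"
gzip -c < "$dir/exp_pipe" | dd of="$tape" bs=64k 2>/dev/null &

# stand-in for: exp userid=... full=y file="$dir/exp_pipe"
head -c 1048576 /dev/zero > "$dir/exp_pipe"
wait
echo "raw: 1048576 bytes, taped: $(wc -c < "$tape") bytes"
```

How much smaller the taped stream is depends entirely on how compressible the dump data is; the highly repetitive input here compresses far better than a typical dump would.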
A: Yesterday a friend on the Internet asked me how to import a file that had been exported and compressed through a pipe, so I ran a test.
Oracle's export and import cannot read from or write to standard input/output directly, but the same effect is achieved through a UNIX pipe, and the pipe also speeds up exp/imp.
Exporting data through a pipe:
1. Create the pipe with mknod:
$ mknod /home/exppipe p    # create the pipe under /home; note the p argument
2. Export through exp into the pipe while gzip compresses what comes out of it:
$ exp TEST/TEST@ORCL file=/home/exppipe &
$ gzip < /home/exppipe > exp.dmp.gz
3. Delete the pipe after the export succeeds:
$ rm -rf /home/exppipe
In my test, the compressed export file was only about one tenth of its normal size.
Importing the generated file through a pipe:
1. Create the pipe:
$ mknod /home/exppipe p
2. Import the compressed file:
$ imp SYSTEM/PASSWD@ORCL file=/home/exppipe fromuser=test touser=macro &
$ gunzip < exp.dmp.gz > /home/exppipe
3. Once the import succeeds, remove the pipe:
$ rm /home/exppipe
To sum up: the pipe really is useful here. It speeds things up and it also avoids the limit of a single file being larger than 2GB.
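The whole export-then-import cycle from the note above can be exercised as one script. As before, the stand-ins are assumptions: `cat` plays both exp and imp, since no Oracle instance is available, and the generated "table data" is just repetitive text:

```shell
#!/bin/sh
# Full round trip: export through a pipe into gzip, then gunzip back
# through a pipe into the importer, verifying the data survives intact.
set -e
dir=$(mktemp -d)
# some compressible "table data"
yes 'row: 0000000001' | head -1000 > "$dir/table.dat"

# --- export side: exp -> pipe -> gzip ---
mkfifo "$dir/exppipe"                       # mknod "$dir/exppipe" p
cat "$dir/table.dat" > "$dir/exppipe" &     # real: exp TEST/TEST@ORCL file=$dir/exppipe &
gzip < "$dir/exppipe" > "$dir/exp.dmp.gz"
wait
rm "$dir/exppipe"

# --- import side: gunzip -> pipe -> imp ---
mkfifo "$dir/exppipe"
cat "$dir/exppipe" > "$dir/table.restored" &   # real: imp SYSTEM/PASSWD@ORCL file=$dir/exppipe fromuser=test touser=macro &
gunzip < "$dir/exp.dmp.gz" > "$dir/exppipe"
wait
rm "$dir/exppipe"

cmp -s "$dir/table.dat" "$dir/table.restored" && echo "round trip OK"
```

Note that each side backgrounds the process that opens the pipe first and runs the other in the foreground, exactly as in the steps above; the `wait` ensures the background reader or writer finishes before the pipe is removed.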
