A few days ago I made a fool of myself out of sheer ignorance and inexperience, and today, because of that careless mistake, I had to make the long trip from faraway Changsha to the capital.
The process is as follows:
The Golden Tax Phase III pilot in Chongqing went live a little over a month ago. The National Audit Office needs to audit each vendor's data for the project, so my boss asked me to export a full dump of our most recent, most complete database.
The export turned into a fiasco.
After getting the task I logged on to the server and ran expdp without a second thought. Far from going as smoothly as I had imagined, the dump export was going to take at least 700 GB, and my heart sank.
A full dump export normally totals only about 500 GB, and to make matters worse the remaining disk space was only 300 GB. What to do? After trying everything, I finally had to shamelessly grab a few TB of space on the FTP server, install the Oracle 11g software there, and export with exp.
The data was finally exported, but the poor guy from the Audit Office suffered for it: downloading it from the FTP server took from the 25th until today, several days, with several failed transfers along the way.
He was still grumbling when he left today. Now I am worried that instead of being transferred back to Changsha, I will be sent back to the capital...
Back home in the evening, I looked into whether Data Pump can split a large export into smaller files.
From Oracle® Database Utilities 11g Release 2 (11.2):
Multiple files can be specified in the DUMPFILE parameter of an expdp job, separated by commas:
DUMPFILE=[directory_object:]file_name [, ...]
You can also generate the names of multiple dump files by using the substitution variable %U in the file-name template. Example:
expdp system/oracle123 dumpfile=test%U.dmp logfile=exp_test.log directory=dpump_dir schemas=scott content=data_only filesize=10m
When %U is used, the %U placeholder in each generated file name is replaced by a fixed-width, two-digit integer that increments from 01 to 99. For example, test%U.dmp is exported as
test01.dmp, test02.dmp, ...
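The numbering scheme can be sketched without a database: %U expands to a zero-padded two-digit counter, so the first few generated names look like this (plain shell, using the same test prefix as the example above):

```shell
# Print the file names test%U.dmp expands to: a fixed-width,
# two-digit counter starting at 01 (Oracle stops at 99).
for i in 1 2 3; do
  printf 'test%02d.dmp\n' "$i"
done
```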
If the FILESIZE parameter is also specified, each exported dump file has that maximum size and cannot grow beyond it. If the dump file set needs more space
and the %U substitution variable was used, then, as long as the storage device has enough room, a new dump file of the size given by FILESIZE is created automatically.
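One practical consequence of combining FILESIZE with %U: the counter only runs to 99, so FILESIZE must be large enough for the whole export to fit in at most 99 pieces. A quick estimate using the 700 GB figure from this export (the 10 GB per-file size is only an illustrative assumption):

```shell
# Ceiling division: how many pieces a 700 GB export needs at 10 GB each.
# 700 GB is the size mentioned above; 10 GB per file is an assumption.
total_gb=700
file_gb=10
echo $(( (total_gb + file_gb - 1) / file_gb ))
```

Since 70 is well under 99, a per-file size on that order would work for an export this large, whereas a tiny value like the filesize=10m in the small example above would overflow the two-digit counter.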
The expdp tool uses the files in the order they are listed in the DUMPFILE parameter. If the job needs more files, or
when a parallel job is run, the extra files are created automatically, provided the %U file template was specified.
Although multiple files can be specified in the DUMPFILE parameter, the export job may need only some of them to hold the exported data.
The dump file set listed at the end of the export log consists of the files actually used; only these files are required when the dump file set is later
imported. Any files that were never written to can be discarded.
impdp import example:
impdp system/oracle123 dumpfile=test%U.dmp directory=dpump_dir schemas=scott logfile=imp_test.log
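The same parameters can also be kept in a parameter file and passed with Data Pump's PARFILE option, which keeps the command line short and avoids any shell or batch interpretation of the % character on some platforms. A sketch of such a file, using only values from the examples above (the file name exp_test.par is an assumption):

```
# exp_test.par -- Data Pump export parameter file
directory=dpump_dir
dumpfile=test%U.dmp
logfile=exp_test.log
schemas=scott
content=data_only
filesize=10m
```

It would then be invoked as: expdp system/oracle123 parfile=exp_test.par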
For reprints, please credit the author and link to the original article:
http://blog.csdn.net/xiangsir/article/details/8729037