Export data remotely using EXPDP (non-local)

Source: Internet
Author: User
Tags: create directory

Background:

Some time ago I needed to import test data (one table) from a test database into the local database. The table holds about 5 million rows with more than 160 fields. I started with the exp/imp approach, but it was unbearably slow, so I switched to expdp/impdp.


EXPDP/IMPDP Introduction:

Starting with 10g, Oracle provides the expdp/impdp Data Pump export and import tools in addition to the traditional exp/imp tools.

According to the official documentation (http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_overview.htm#SUTIL802), Oracle Data Pump consists of three parts:

>The command-line clients, expdp and impdp

>The DBMS_DATAPUMP PL/SQL package (also known as the Data Pump API)

>The DBMS_METADATA PL/SQL package (also known as the Metadata API)


The Data Pump clients, expdp and impdp, invoke the Data Pump Export utility and the Data Pump Import utility, respectively.


The expdp and impdp clients use the procedures provided in the DBMS_DATAPUMP PL/SQL package to execute export and import commands, using the parameters entered at the command line. These parameters enable the exporting and importing of data and metadata for a complete database or for subsets of a database.

When metadata is moved, Data Pump uses functionality provided by the DBMS_METADATA PL/SQL package. The DBMS_METADATA package provides a centralized facility for the extraction, manipulation, and re-creation of dictionary metadata.


The DBMS_DATAPUMP and DBMS_METADATA PL/SQL packages can be used independently of the Data Pump clients.
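For example, a one-off call to the Metadata API from SQL*Plus can extract a table's DDL without starting any Data Pump job at all; a minimal illustration, reusing the schema and table that appear later in this article:

SET LONG 100000
SELECT dbms_metadata.get_ddl('TABLE', 'T_CP', 'DEPKF15') FROM dual;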


All Data Pump Export and Import processing, including the reading and writing of the dump files, is done on the system (server) selected by the specified database connect string. This means that, for unprivileged users, the database administrator (DBA) must create directory objects for the Data Pump files that are read and written on the server file system. (For security reasons, DBAs must ensure that only approved users are allowed access to directory objects.) For privileged users, a default directory object is available. See 'Default Locations for Dump, Log, and SQL Files' for more information about directory objects.


Summary:

When the Data Pump tools expdp/impdp run, they actually call these two PL/SQL packages (one for data, one for metadata), and the dump files they read and write are stored on the server, in the folder that the directory object points to.


This raises a problem: without an account on the server, even if you can export, how do you get hold of the exported dump file?

The requirement here is to use expdp from the local server to export a table's data from a remote server, and then use impdp to import it into the local database, all without a login account on the remote server.

This is a remote table import/export problem; of course exp/imp can handle it, but let's see how expdp/impdp does it.


Solution: import and export remote data via a DBLINK.

1. In the local database, create a DBLINK pointing to the remote database.

CREATE DATABASE LINK gf_local CONNECT TO username IDENTIFIED BY password USING 'tnsname';
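A quick sanity check over the new link before going further (assuming 'tnsname' resolves from the local server):

SELECT sysdate FROM dual@gf_local;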


2. In the local database, create the directory object.

CREATE DIRECTORY expdp_dir AS '/home/oracle11g/expdp_dir/';
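Note that CREATE DIRECTORY only records the path in the data dictionary; the operating-system directory itself must already exist and be writable by the oracle user. And, per the security note quoted above, if the export will run as a non-DBA user, a DBA also has to grant access to the directory object, along these lines:

GRANT READ, WRITE ON DIRECTORY expdp_dir TO username;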


3. In the local database, create the Data Pump export parameter file.

vi exp.par:

userid=username/password //local database user name/password; this user can use the DBLINK and directory created above.

directory=expdp_dir //directory object created in the local database.

dumpfile=gf_expdp.dump //dump file name.

logfile=gf_expdp.log //log file name.

tables=DEPKF15.T_CP //the table to export.

query='where rownum<1000001' //export condition: the first 1,000,000 rows.

network_link=gf_local //DBLINK name.

Note one small issue: in tables=DEPKF15.T_CP, if the table being exported belongs to a different user, you must prefix it with the schema (user) that owns it, otherwise an error is raised.
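For reference, the same export can also be run without a parameter file, though the QUERY clause then has to be escaped for the shell, which is exactly why a parfile is used here. A sketch (the escaping shown is for bash and may vary):

expdp username/password directory=expdp_dir dumpfile=gf_expdp.dump logfile=gf_expdp.log tables=DEPKF15.T_CP network_link=gf_local query=\"where rownum\<1000001\"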

4. Execute the export command.

expdp parfile=exp.par

It errored out at first with:

ORA-31631: privileges are required

ORA-39149: cannot link privileged session to non-privileged session

At this point you need to grant the EXP_FULL_DATABASE privilege to the remote database user:

GRANT EXP_FULL_DATABASE TO username;

Then it is just a matter of waiting for the dump file to land in the path that expdp_dir points to on the local server.
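While the export is running, the job can be watched from another local session (assuming access to the DBA views):

SELECT owner_name, job_name, operation, state FROM dba_datapump_jobs;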


Summary:

1. expdp/impdp are the Data Pump tools provided from 10g onward; at run time they call the two PL/SQL packages. Of course, you can also run those two packages directly. I have not tried it myself; if you have, feel free to share.
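For anyone who wants to try, below is a minimal, untested sketch of driving the same table export through DBMS_DATAPUMP directly. The names reuse the parfile above, and the calls follow the documented Data Pump API, but treat it as a starting point rather than a verified script:

SET SERVEROUTPUT ON
DECLARE
  h  NUMBER;          -- job handle
  st VARCHAR2(30);    -- final job state
BEGIN
  -- Table-mode export pulled over the dblink (the NETWORK_LINK equivalent)
  h := DBMS_DATAPUMP.OPEN(operation   => 'EXPORT',
                          job_mode    => 'TABLE',
                          remote_link => 'GF_LOCAL');
  -- Dump and log files go to the local directory object
  DBMS_DATAPUMP.ADD_FILE(h, 'gf_expdp.dump', 'EXPDP_DIR');
  DBMS_DATAPUMP.ADD_FILE(h, 'gf_expdp.log', 'EXPDP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  -- Same scope as TABLES=DEPKF15.T_CP in the parfile
  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''DEPKF15'')');
  DBMS_DATAPUMP.METADATA_FILTER(h, 'NAME_EXPR', 'IN (''T_CP'')');
  -- Same row restriction as QUERY= in the parfile
  DBMS_DATAPUMP.DATA_FILTER(h, 'SUBQUERY', 'where rownum<1000001');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, st);
  DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || st);
END;
/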

2. The importing database account needs the IMP_FULL_DATABASE privilege, and the exporting database account needs the EXP_FULL_DATABASE privilege.
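To complete the round trip, the import into the local database from the dump written above would look roughly like this (a sketch; REMAP_SCHEMA, here mapping DEPKF15 onto the hypothetical local user, is only needed if the local schema differs):

impdp username/password directory=expdp_dir dumpfile=gf_expdp.dump logfile=gf_impdp.log remap_schema=DEPKF15:username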

3. expdp improves on exp in a number of places. For example, help=y shows that expdp has more optional parameters, among them the COMPRESSION parameter, explained as follows:

Reduce the size of a dump file.

Valid keyword values are: ALL, DATA_ONLY, [METADATA_ONLY] and NONE.

Refer to Secooler's article (http://blog.itpub.net/519536/viewspace-630005/) for a comparison of the compression ratios of the four parameter values.
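As a sketch, the parameter can simply be appended to the parameter file above; note that on 11g, COMPRESSION=ALL and DATA_ONLY require the Advanced Compression option, while METADATA_ONLY is the default:

compression=all //compress both metadata and table data in the dump file.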

4. In my experiment the network between the remote and local databases was poor, so in practice both exp and expdp were quite slow, and because of the DBLINK approach I did not dig any deeper; the point here is just to show how expdp/impdp can export from a remote database.

According to a colleague's test, exporting an 800 MB file took about 1 hour with exp, while expdp needed less than 10 minutes; note that the COMPRESSION parameter was not used here.

Some people attribute this to the different export mechanisms of exp and expdp: the exp export process actually loads data with a SELECT, passing blocks through the buffer cache before sending them to the export client, which writes the dump file, whereas expdp uses direct path mode, reading straight from disk into the PGA and then sending the data to the export client to write the dump file. Whether or not the data has to pass through the buffer cache determines the export speed to a large extent.

5. Some people online say that "expdp/impdp is a server-side program, and the only thing that affects it is disk I/O". From the introduction above, that is not entirely accurate: expdp/impdp are client tools, and what they invoke are the two PL/SQL packages on the database server side. The experiment above also exported from a remote database via a DBLINK without ever touching the remote server's file system.
