Use the Oracle DataPump API to export data

Source: Internet
Author: User

The Oracle DataPump API is implemented in PL/SQL and complements the command-line interface. Applications can call the DataPump API to integrate its logical backup features, and the interface-based implementation simplifies management. This document describes how to use the DataPump API to export data in several different situations.

I. Demonstrating how to use the DataPump API to export data

-- 1. Export a schema (schema mode)

declare
  l_dp_handle      number;
  l_last_job_state varchar2(30) := 'UNDEFINED';
  l_job_state      varchar2(30) := 'UNDEFINED';
  l_sts            ku$_Status;
begin
  -- specify the operation and job mode
  l_dp_handle := dbms_datapump.open(operation   => 'EXPORT'
                                   ,job_mode    => 'SCHEMA'
                                   ,remote_link => null
                                   ,job_name    => 'JOB_EXP1'
                                   ,version     => 'LATEST');
  -- specify the dump file and dump directory
  dbms_datapump.add_file(handle    => l_dp_handle
                        ,filename  => 'scott_schema.dmp'
                        ,directory => 'DB_DUMP_DIR'
                        ,filetype  => dbms_datapump.ku$_file_type_dump_file);
  -- specify the log file and dump directory
  dbms_datapump.add_file(handle    => l_dp_handle
                        ,filename  => 'scott_schema.log'
                        ,directory => 'DB_DUMP_DIR'
                        ,filetype  => dbms_datapump.ku$_file_type_log_file);
  -- specify the schema filter
  dbms_datapump.metadata_filter(handle => l_dp_handle
                               ,name   => 'SCHEMA_EXPR'
                               ,value  => 'IN (''SCOTT'')');
  dbms_datapump.start_job(l_dp_handle);
  dbms_datapump.detach(l_dp_handle);
end;
/

-- 2. Export a specific table (table mode)

declare
  l_dp_handle      number;
  l_last_job_state varchar2(30) := 'UNDEFINED';
  l_job_state      varchar2(30) := 'UNDEFINED';
  l_sts            ku$_Status;
begin
  l_dp_handle := dbms_datapump.open(operation   => 'EXPORT'
                                   ,job_mode    => 'TABLE'
                                   ,remote_link => null
                                   ,job_name    => 'JOB_EXP2'
                                   ,version     => 'LATEST');
  dbms_datapump.add_file(handle    => l_dp_handle
                        ,filename  => 'emp_tbl.dmp'
                        ,directory => 'DB_DUMP_DIR'
                        ,filetype  => dbms_datapump.ku$_file_type_dump_file);
  dbms_datapump.add_file(handle    => l_dp_handle
                        ,filename  => 'emp_tbl.log'
                        ,directory => 'DB_DUMP_DIR'
                        ,filetype  => dbms_datapump.ku$_file_type_log_file);
  --> If exporting from an account other than the owner, add the schema filter
  --> below, i.e. a specific table in a specific schema. When exporting from the
  --> current account, this schema filter can be omitted.
  dbms_datapump.metadata_filter(handle => l_dp_handle
                               ,name   => 'SCHEMA_EXPR'
                               ,value  => 'IN (''SCOTT'')');
  dbms_datapump.metadata_filter(handle => l_dp_handle
                               ,name   => 'NAME_EXPR'
                               ,value  => 'IN (''EMP'')');
  dbms_datapump.start_job(l_dp_handle);
  dbms_datapump.detach(l_dp_handle);
end;
/

-- 3. Export a schema while filtering out a specific table (the schema must be filtered when exporting from a non-current account)

declare
  l_dp_handle number;
begin
  l_dp_handle := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA');
  dbms_datapump.add_file(handle    => l_dp_handle
                        ,filename  => 'scott_filter.dmp'
                        ,directory => 'DB_DUMP_DIR'
                        ,filetype  => dbms_datapump.ku$_file_type_dump_file);
  dbms_datapump.add_file(handle    => l_dp_handle
                        ,filename  => 'scott_filter.log'
                        ,directory => 'DB_DUMP_DIR'
                        ,filetype  => dbms_datapump.ku$_file_type_log_file);
  dbms_datapump.metadata_filter(handle => l_dp_handle
                               ,name   => 'SCHEMA_LIST'
                               ,value  => '''SCOTT''');
  dbms_datapump.metadata_filter(handle      => l_dp_handle
                               ,name        => 'NAME_EXPR'
                               ,value       => '!= ''EMP'''
                               ,object_type => 'TABLE');
  dbms_datapump.start_job(l_dp_handle);
end;
/

-- 4. Export all tables in the current schema while filtering out specific tables

declare
  l_dp_handle number;
begin
  l_dp_handle := dbms_datapump.open(operation => 'EXPORT', job_mode => 'TABLE');
  dbms_datapump.add_file(handle    => l_dp_handle
                        ,filename  => 'scott_filter_2.dmp'
                        ,directory => 'DB_DUMP_DIR'
                        ,filetype  => dbms_datapump.ku$_file_type_dump_file);
  dbms_datapump.add_file(handle    => l_dp_handle
                        ,filename  => 'scott_filter_2.log'
                        ,directory => 'DB_DUMP_DIR'
                        ,filetype  => dbms_datapump.ku$_file_type_log_file);
  dbms_datapump.metadata_filter(handle => l_dp_handle
                               ,name   => 'NAME_EXPR'
                               ,value  => '!= ''EMP''');
  dbms_datapump.metadata_filter(handle => l_dp_handle
                               ,name   => 'NAME_EXPR'
                               ,value  => '!= ''DEPT''');
  dbms_datapump.start_job(l_dp_handle);
  dbms_datapump.detach(l_dp_handle);
end;
/

-- 5. Filter out a batch of the current user's tables

declare
  l_dp_handle number;
begin
  l_dp_handle := dbms_datapump.open(operation => 'EXPORT', job_mode => 'TABLE');
  dbms_datapump.add_file(handle    => l_dp_handle
                        ,filename  => 'scott_filter_3.dmp'
                        ,directory => 'DB_DUMP_DIR'
                        ,filetype  => dbms_datapump.ku$_file_type_dump_file);
  dbms_datapump.add_file(handle    => l_dp_handle
                        ,filename  => 'scott_filter_3.log'
                        ,directory => 'DB_DUMP_DIR'
                        ,filetype  => dbms_datapump.ku$_file_type_log_file);
  dbms_datapump.metadata_filter(handle => l_dp_handle
                               ,name   => 'NAME_EXPR'
                               ,value  => 'NOT LIKE ''T%''');
  dbms_datapump.start_job(l_dp_handle);
  dbms_datapump.detach(l_dp_handle);
end;
/

/**************************************************/
/* Author : Robinson Cheng                        */
/* Blog   : http://blog.csdn.net/robinson_0612    */
/* MSN    : robinson_0612@hotmail.com             */
/* QQ     : 645746311                             */
/**************************************************/

-- 6. Filter specific rows in a specific table
-- hiredate on tb_emp is stored as a character string, so rows are filtered by the date string

scott@CNMMBO> desc tb_emp
 Name       Null?    Type
 ---------- -------- -------------
 EMPNO               NUMBER(4)
 ENAME               VARCHAR2(10)
 JOB                 VARCHAR2(9)
 MGR                 NUMBER(4)
 HIREDATE            VARCHAR2(10)
 SAL                 NUMBER(7,2)
 COMM                NUMBER(7,2)
 DEPTNO              NUMBER(2)

scott@CNMMBO> select empno, ename, hiredate from tb_emp;

     EMPNO ENAME      HIREDATE
---------- ---------- ----------
      9999 Ro.ch
      7369 SMITH      19801217
      7499 ALLEN      19810220
      7521 WARD       19810222
      7566 JONES      19810402
      7654 MARTIN     19810928
      7698 BLAKE      19810501
      7782 CLARK      19810609
      7788 SCOTT      19870419
      7839 KING       19811117
      7844 TURNER     19810908
      7876 ADAMS      19870523
      7900 JAMES      19811203
      7902 FORD       19811203
      7934 MILLER     19820123

15 rows selected.

scott@CNMMBO> select count(*) from tb_emp where hiredate >= '000000';

  COUNT(*)
----------
        11

declare
  l_dp_handle number;
begin
  l_dp_handle := dbms_datapump.open(operation => 'EXPORT', job_mode => 'TABLE');
  dbms_datapump.add_file(handle    => l_dp_handle
                        ,filename  => 'scott_tb_emp.dmp'
                        ,directory => 'DB_DUMP_DIR'
                        ,filetype  => dbms_datapump.ku$_file_type_dump_file);
  dbms_datapump.add_file(handle    => l_dp_handle
                        ,filename  => 'scott_tb_emp.log'
                        ,directory => 'DB_DUMP_DIR'
                        ,filetype  => dbms_datapump.ku$_file_type_log_file);
  dbms_datapump.metadata_filter(handle      => l_dp_handle
                               ,name        => 'NAME_EXPR'
                               ,value       => '= ''TB_EMP'''
                               ,object_type => 'TABLE');
  dbms_datapump.data_filter(handle     => l_dp_handle
                           ,name       => 'SUBQUERY'
                           ,value      => 'where hiredate >= ''000000'''
                           ,table_name => 'TB_EMP');
  dbms_datapump.start_job(l_dp_handle);
  dbms_datapump.detach(l_dp_handle);
end;
/
/*
oracle@SZDB:/u02/database/CNMMBO/BNR/dump> more scott_tb_emp.log
Starting "SCOTT"."SYS_EXPORT_TABLE_01":
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."TB_EMP"                7.695 KB      11 rows
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
  /u02/database/CNMMBO/BNR/dump/scott_tb_emp.dmp
Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 17:33:23
*/

-- 7. Filter specific rows of specific tables in batch
-- Include code like the following in a PL/SQL block and use a cursor loop to pass in
-- the names of the tables to be filtered, generating one filter condition per table.
-- The loop below restricts every table whose name contains 'ARC' to the rows dated
-- on or after v_split_date.

for tab_cur in (select table_name, num_rows
                  from dba_tables
                 where table_name like '%ARC%'
                   and owner = 'GOEX_ADMIN')
loop
  dbms_datapump.data_filter(handle     => hand
                           ,name       => 'SUBQUERY'
                           ,value      => 'where business_date >= ''' || v_split_date || ''''
                           ,table_name => tab_cur.table_name);
end loop;

-- 8. Error handling
-- If job_name is defined explicitly, the following error is often raised when the job
-- is run again after a failure. If job_name is not specified, a job name is generated
-- and managed automatically by the system.

declare
*
ERROR at line 1:
ORA-31634: job already exists
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 911
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4354
ORA-06512: at line 7

scott@CNMMBO> ho oerr ora 31634
/*
31634, 00000, "job already exists"
// *Cause:  Job creation or restart failed because a job having the selected
//          name is currently executing. This also generally indicates that
//          a Master Table with that job name exists in the user schema. Refer
//          to any following error messages for clarification.
// *Action: Select a different job name, or stop the currently executing job
//          and re-try the operation (may require a DROP on the Master Table).
*/

scott@CNMMBO> select table_name from user_tables where table_name like 'JOB%';

TABLE_NAME
------------------------------
JOB_EXP

scott@CNMMBO> drop table job_exp;
drop table job_exp
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified

scott@CNMMBO> select distinct object_name
                     || ' ' || locked_mode
                     || ' ' || ctime
                     || ' ' || c.sid
                     || ' ' || serial#
                from v$locked_object a, dba_objects b, v$lock c, v$session d
               where a.object_id = b.object_id
                 and c.sid = a.session_id
                 and c.sid = d.sid;

OBJECT_NAME||' '||LOCKED_MODE||' '||CTIME||' '||C.SID||' '||SERIAL#
-------------------------------------------------------------------
JOB_EXP 3 552 1075 799

scott@CNMMBO> alter system kill session '1075,799';

System altered.

scott@CNMMBO> drop table job_exp purge;   --> drop the master table, then export again

Table dropped.

-- 9. Use views to monitor the Data Pump job status

scott@CNMMBO> col owner_name format a15
scott@CNMMBO> col operation format a15
scott@CNMMBO> col state format a20
scott@CNMMBO> select owner_name, job_name, operation, job_mode, state, degree from dba_datapump_jobs;

OWNER_NAME      JOB_NAME   OPERATION       JOB_MODE   STATE                DEGREE
--------------- ---------- --------------- ---------- -------------------- ------
SCOTT           JOB_EXP1   EXPORT          SCHEMA     EXECUTING                 1

-- 10. Use the following procedure to set the degree of parallelism
dbms_datapump.set_parallel(hand, 1);

-- 11. Demo environment used for the preceding operations
scott@CNMMBO> select * from v$version where rownum < 2;

BANNER
----------------------------------------------------------------
Oracle Database 10g Release 10.2.0.3.0 - 64bit Production
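
-- As a complement to the error handling above, a job that causes ORA-31634 can also
-- be stopped through the API itself instead of dropping the master table by hand.
-- This sketch is not from the original article: it assumes a job created as JOB_EXP1
-- by SCOTT, attaches to it with dbms_datapump.attach, and stops it with
-- dbms_datapump.stop_job (keep_master => 0 drops the master table).

declare
  l_handle number;
begin
  -- attach to the existing job by name and owner (assumed values)
  l_handle := dbms_datapump.attach(job_name => 'JOB_EXP1', job_owner => 'SCOTT');
  -- stop the job immediately and do not keep the master table
  dbms_datapump.stop_job(handle => l_handle, immediate => 1, keep_master => 0);
end;
/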

II. Notes
1. When exporting in schema mode, if the schema being exported is the current schema, you do not need to specify a schema filter; otherwise, you must filter on the schema.
2. When exporting in table mode, if the tables being exported belong to the current schema, you do not need to specify a schema filter; otherwise, you must filter on the schema.
3. When filtering specific records in a table, you can use SQL expressions with operators such as LIKE, NOT LIKE, IN, NOT IN, =, and !=.
4. Pay attention to the use of single quotes, especially with character-type values: two consecutive single quotes represent one literal quote.
5. If a dump file or log file with the same name already exists at export time, the PL/SQL block fails. Delete the file beforehand, or write PL/SQL that checks whether the file exists and removes or overwrites it.
6. If job_name is specified and the job fails, running it again raises an error (ORA-31634). Letting the system generate job_name automatically is recommended for simpler management.
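
Note 5 can be handled inside the block itself. The following sketch is not from the original article; the directory object DB_DUMP_DIR and the file name scott_schema.dmp are the values assumed throughout this document. It uses utl_file.fgetattr to test whether the dump file already exists and utl_file.fremove to delete it before the Data Pump job is opened:

declare
  l_exists  boolean;
  l_length  number;
  l_blksize binary_integer;
begin
  -- check whether the dump file already exists in the directory object
  utl_file.fgetattr(location    => 'DB_DUMP_DIR'
                   ,filename    => 'scott_schema.dmp'
                   ,fexists     => l_exists
                   ,file_length => l_length
                   ,block_size  => l_blksize);
  if l_exists then
    -- remove the old file so dbms_datapump.add_file does not fail
    utl_file.fremove(location => 'DB_DUMP_DIR', filename => 'scott_schema.dmp');
  end if;
end;
/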

III. More references

Use of Data Pump expdp export tool

Use of Data Pump impdp import tool

Use of exclude/include in expdp impdp

http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_datpmp.htm
https://forums.oracle.com/forums/thread.jspa?threadid=837324
http://psoug.org/reference/dbms_datapump.html

 
