How to export/import data in Oracle databases


So far, the classic export/import toolset has been the utility of choice for transferring data across platforms with minimal effort, even though it is commonly criticized for being too slow. Import simply reads each record from the export dump file and inserts it into the target table with an ordinary INSERT INTO statement, so it is no surprise that the import process can be slow.

Enter Oracle Data Pump, the newer and much faster sibling of the export/import toolkit introduced in Oracle Database 10g, designed to speed up the process many times over.

Data Pump reflects a complete overhaul of the export/import process. Instead of using ordinary SQL statements, it uses dedicated APIs to load and unload data at much higher speed. In my tests, export performance was 10-15 times better than conventional export in direct mode, and import performance was about 5 times better. In addition, unlike the original export utility, Data Pump can extract only specific types of objects (such as procedures).

Data Pump Export

This new utility is called expdp, to distinguish it from the original export utility, exp. In this example, we will use Data Pump to export a large table, CASES, about 3 GB in size. Data Pump uses file handling on the server to create and read files, so directory objects are used as the file locations. Here we will use the file system /u02/dpdata1 to hold the dump file.


  
create directory dpdata1 as '/u02/dpdata1';
grant read, write on directory dpdata1 to ananda;
 
 

 
Next, we will export the data:


  
expdp ananda/abc123 tables=CASES directory=DPDATA1
dumpfile=expCASES.dmp job_name=CASES_EXPORT
 
Let's analyze each part of the command. The user ID/password combination, the table, and the dump file parameters are self-explanatory. Unlike the original export, the files are created on the server, not on the client. The location is specified by the DIRECTORY parameter value DPDATA1, which points to the /u02/dpdata1 directory created earlier. The process also creates a log file, again on the server, in the location specified by the DIRECTORY parameter. By default, the process uses a directory named DPUMP_DIR, so you can create that directory instead of DPDATA1.
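For illustration, here is a hedged sketch of the same export naming the log file explicitly through the standard LOGFILE parameter; the file name expCASES.log is only an example:

# expCASES.log is an illustrative log file name
expdp ananda/abc123 tables=CASES directory=DPDATA1
dumpfile=expCASES.dmp logfile=expCASES.log job_name=CASES_EXPORT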

Note the parameter job_name above, a special parameter that does not exist in the original export. All Data Pump work is performed through jobs. A Data Pump job, unlike a DBMS job, is simply a set of server processes that process the data on behalf of the main process. The main process, called the Master Control Process, coordinates this work through Advanced Queuing (AQ); it does so by means of a special table, known as the master table, created at runtime. In our example, if you check the ANANDA schema while expdp is running, you will notice a table named CASES_EXPORT (corresponding to job_name). The table is dropped when expdp finishes.
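A quick way to see the master table, as a minimal sketch, is to query your own tables while the job is running; the table name matches the job_name used above:

-- run while the expdp job is active; CASES_EXPORT is the job_name used above
select table_name from user_tables
where table_name = 'CASES_EXPORT';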

Export Monitoring

While Data Pump Export (DPE) is running, press Control-C; this stops the messages from being displayed on the screen, but it does not stop the export process itself. Instead, it displays the DPE prompt (shown below). The process is now said to be in "interactive" mode:

Export>

From this mode, you can enter several commands for the DPE job. To view a summary, issue the STATUS command at the prompt:


  
Export> status
Job: CASES_EXPORT
Operation: EXPORT
Mode: TABLE
State: EXECUTING
Degree: 1
Job Error Count: 0
Dump file: /u02/dpdata1/expCASES.dmp
Bytes written = 2048
  
Worker 1 Status:
State: EXECUTING
Object Schema: DWOWNER
Object Name: CASES
Object Type: TABLE_EXPORT/TBL_TABLE_DATA/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 1
Completed Rows: 4687818
 
 

 
Remember, this is only a status display; the export continues to run in the background. To resume viewing the messages on the screen, issue the command CONTINUE_CLIENT at the Export> prompt.
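You can also detach from the job (EXIT_CLIENT) and later re-attach to it from another session with the ATTACH parameter, after which interactive commands such as STATUS and CONTINUE_CLIENT are available again. A minimal sketch, assuming the job_name CASES_EXPORT used above:

# re-attach to the running job named CASES_EXPORT
expdp ananda/abc123 attach=CASES_EXPORT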

Parallel Operation

You can significantly speed up the job by using more than one thread for the export through the PARALLEL parameter. Each thread creates a separate dump file, so the dumpfile parameter should have as many entries as the degree of parallelism. Instead of entering each file name explicitly, you can specify a wildcard character in the file name. For example:


  
expdp ananda/abc123 tables=CASES directory=DPDATA1
dumpfile=expCASES_%U.dmp parallel=4 job_name=Cases_Export

 


Note that the dumpfile parameter has a wildcard, %U, which indicates that the files will be created as needed. The format is expCASES_nn.dmp, where nn starts at 01 and increases as needed.

In parallel mode, the status screen shows four worker processes. (In the default mode, only one process is visible.) All worker processes extract data simultaneously and show their progress on the status screen.

It is important to spread the access to the data files and to the dump directory file system across separate input/output channels. Otherwise, the overhead of maintaining the Data Pump job may outweigh the benefit of the parallel threads and actually reduce performance. Parallelism takes effect only if the number of tables is greater than the parallel value and the tables are large.
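One way to spread the dump files, as a hedged sketch, is to prefix each dumpfile entry with a directory object; this assumes a second directory, DPDATA2, created on a separate file system (the name is illustrative):

# DPDATA2 is an assumed second directory object on a separate file system
expdp ananda/abc123 tables=CASES directory=DPDATA1 parallel=4 job_name=CASES_EXPORT
dumpfile=dpdata1:expCASES_%U.dmp,dpdata2:expCASES_%U.dmp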

Database Monitoring

You can also get more information about running Data Pump jobs from database views. The main view for monitoring jobs is DBA_DATAPUMP_JOBS, which tells you, among other things, how many worker processes (column DEGREE) are working on the job. Another important view is DBA_DATAPUMP_SESSIONS; when joined with the preceding view and V$SESSION, it gives the SID of the session of the main foreground process.
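A minimal sketch of a quick look at the job itself, using columns of DBA_DATAPUMP_JOBS (the job name matches the one used above):

-- CASES_EXPORT is the job_name used above
select owner_name, job_name, operation, state, degree
from dba_datapump_jobs
where job_name = 'CASES_EXPORT';

The join of DBA_DATAPUMP_SESSIONS with V$SESSION mentioned above then identifies the foreground session: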


  
select sid, serial#
from v$session s, dba_datapump_sessions d
where s.saddr = d.saddr;
 
 

 
This query shows the session of the foreground process. More useful information can be obtained from the alert log. When the process starts, the MCP and the worker processes appear in the alert log as follows:


  
kupprdp: master process DM00 started with pid=23,
OS id=20530 to execute -
SYS.KUPM$MCP.MAIN('CASES_EXPORT', 'ANANDA');

kupprdp: worker process DW01 started with worker
id=1, pid=24, OS id=20532 to execute -
SYS.KUPW$WORKER.MAIN('CASES_EXPORT', 'ANANDA');

kupprdp: worker process DW03 started with worker
id=2, pid=25, OS id=20534 to execute -
SYS.KUPW$WORKER.MAIN('CASES_EXPORT', 'ANANDA');
 
 

 
The log shows the PIDs of the processes started for the Data Pump operation. You can find the actual SIDs with this query:


  
select sid, program from v$session where paddr in
(select addr from v$process where pid in (23, 24, 25));
 
 

 
The PROGRAM column shows the process name in the same format as in the alert log: DM for the master process and DW for the worker processes. If a worker process is using parallel query slaves, say the worker with SID 23, you can see them in the view V$PX_SESSION. It shows all the parallel query sessions running on behalf of the worker process identified by SID 23:


  
select sid from v$px_session where qcsid = 23;
 
 

 
Additional useful information, such as a prediction of the time needed to complete the job, can be obtained from the view V$SESSION_LONGOPS.


  
select sid, serial#, sofar, totalwork
from v$session_longops
where opname = 'CASES_EXPORT'
and sofar != totalwork;
 
 

 
The column totalwork shows the total amount of work, of which the amount in the column sofar has been completed up to the current time; you can compare the two to estimate how much longer the job will take.
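As a convenience, V$SESSION_LONGOPS also exposes a TIME_REMAINING column (in seconds); a minimal sketch that reports the percentage done alongside it, under the same opname assumption as above:

-- pct_done is a derived column alias; time_remaining is reported in seconds
select sid, serial#,
round(sofar/totalwork*100, 2) pct_done,
time_remaining
from v$session_longops
where opname = 'CASES_EXPORT'
and sofar != totalwork;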

Data Pump Import

It is in data import, however, that Data Pump really shines. To import the data exported earlier, we will use


  
impdp ananda/abc123 directory=dpdata1
dumpfile=expCASES.dmp job_name=cases_import

 


The default behavior of the import process is to create the table and all associated objects, and to raise an error if the table already exists. If you want to append the data to an existing table, you can add TABLE_EXISTS_ACTION=APPEND to the command line above.
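A minimal sketch of that variant, keeping the same parameters as the import command above:

impdp ananda/abc123 directory=dpdata1
dumpfile=expCASES.dmp job_name=cases_import
table_exists_action=append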

As with Data Pump Export, pressing Control-C while the process is running puts Data Pump Import (DPI) into interactive mode; likewise, the prompt is Import>.

Processing Specific Objects

Have you ever faced a situation where you needed to export only certain procedures from one user, in order to re-create them in a different database or schema? Unlike the traditional export utility, Data Pump allows you to export only particular types of objects. For example, the following command exports procedures and nothing else: no tables, no views, not even functions:


  
expdp ananda/iclaim directory=DPDATA1
dumpfile=expprocs.dmp include=PROCEDURE
 
 

 
To export only a few specific objects, say, function FUNC1 and procedure PROC1, you could use

 

 
expdp ananda/iclaim directory=DPDATA1 dumpfile=expprocs.dmp
include=PROCEDURE:\"=\'PROC1\'\",FUNCTION:\"=\'FUNC1\'\"
 

  
The dump file serves as a backup of the source objects. You can even use it to create DDL scripts for later use. A special parameter called SQLFILE allows the creation of such a DDL script file.

 

 
impdp ananda/iclaim directory=DPDATA1
dumpfile=expprocs.dmp sqlfile=procs.sql
 

  
This command creates a file named procs.sql in the directory specified by DPDATA1, containing the DDL scripts of the objects in the export dump file. This approach helps you quickly re-create the source objects in another schema.
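If you prefer to import the objects directly into a different schema instead of running the script, impdp's REMAP_SCHEMA parameter can do it; a minimal sketch, assuming a hypothetical target schema named ANANDA2:

# ANANDA2 is a hypothetical target schema
impdp ananda/iclaim directory=DPDATA1 dumpfile=expprocs.dmp
remap_schema=ANANDA:ANANDA2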

The INCLUDE parameter allows you to define which objects are included in or excluded from the dump file. You can use the clause INCLUDE=TABLE:"like 'TAB%'" to export only those tables whose names start with TAB. Similarly, you can use INCLUDE=TABLE:"not like 'TAB%'" to export all tables except those whose names start with TAB.
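Put into a full command, a minimal sketch; the dump file name exptabs.dmp is illustrative, and on most shells the quotes need escaping, as in the earlier example:

# exptabs.dmp is an illustrative dump file name; escape the quotes on the command line
expdp ananda/iclaim directory=DPDATA1 dumpfile=exptabs.dmp
include=TABLE:\"like 'TAB%'\"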
