Expdp/impdp Usage Details

Source: Internet
Author: User

I. Notes on using expdp and impdp:
exp and imp are client tool programs that can be used either on the client or on the server.
expdp and impdp are server-side tool programs: they can only be used on the Oracle server, not on a client.
imp applies only to files exported by exp, not to files exported by expdp; impdp applies only to files exported by expdp, not to files exported by exp.
When you run the expdp or impdp command, you can omit the username/password@instance_name credentials on the command line and enter them when prompted, for example:
expdp schemas=scott dumpfile=expdp.dmp directory=dpdata1
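A minimal interactive session might look like this (the exact prompt wording varies by version, and the password is not echoed):

```
$ expdp schemas=scott dumpfile=expdp.dmp directory=dpdata1

Username: scott
Password:
```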
1. Create a logical directory object. This command does not create a real directory in the operating system, and it is best to run it as an administrator such as SYSTEM:
create directory dpdata1 as 'd:\test\dump';
2. Check the directory objects as an administrator (and verify that the directory actually exists in the operating system: Oracle does not check this when the object is created, and an error occurs later if it is missing):
select * from dba_directories;
3. Grant scott read and write permission on the directory; this grant is best made by the system administrator:
grant read, write on directory dpdata1 to scott;
IV. Exporting data
1) By schema
expdp scott/tiger@orcl schemas=scott dumpfile=expdp.dmp directory=dpdata1
2) With parallel processes
expdp scott/tiger@orcl directory=dpdata1 dumpfile=scott3.dmp parallel=40 job_name=scott3
3) By table name
expdp scott/tiger@orcl tables=emp,dept dumpfile=expdp.dmp directory=dpdata1
4) By query condition
expdp scott/tiger@orcl directory=dpdata1 dumpfile=expdp.dmp tables=emp query='where deptno=20'
5) By tablespace
expdp system/manager directory=dpdata1 dumpfile=tablespace.dmp tablespaces=temp,example
6) The entire database
expdp system/manager directory=dpdata1 dumpfile=full.dmp full=y
V. Restoring data
1) Import into a specified schema
impdp scott/tiger directory=dpdata1 dumpfile=expdp.dmp schemas=scott
2) Change the table owner
impdp system/manager directory=dpdata1 dumpfile=expdp.dmp tables=scott.dept remap_schema=scott:system
3) Import a tablespace
impdp system/manager directory=dpdata1 dumpfile=tablespace.dmp tablespaces=example
4) Import the entire database
impdp system/manager directory=dump_dir dumpfile=full.dmp full=y
5) Append data
impdp system/manager directory=dpdata1 dumpfile=expdp.dmp schemas=system table_exists_action=append

VI. Additional notes on parallel operation (parallel)
The parallel parameter lets an export use more than one thread, which can significantly accelerate the job. Each thread creates a separate dump file, so the dumpfile parameter should list as many entries as the degree of parallelism. Instead of entering each file name explicitly, you can use a wildcard in the file name, for example:
expdp ananda/abc123 tables=cases directory=dpdata1 dumpfile=expcases_%u.dmp parallel=4 job_name=cases_export
Note the wildcard %u in the dumpfile parameter: files are created as needed with names of the form expcases_nn.dmp, where nn starts at 01 and increases.
In parallel mode the status screen shows four worker processes (in the default mode only one process is visible). All worker processes extract data concurrently and show their progress on the status screen.
It is important to place the data files and the dump directory on separate input/output channels; otherwise the overhead of maintaining the Data Pump job may exceed the benefit of the parallel threads and reduce performance. Parallelism is effective only when the number of tables exceeds the parallel value and the tables are large.
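The status screen can also be reached from a second session by attaching to the running job; a minimal sketch, reusing the job name from the example above:

```
$ expdp ananda/abc123 attach=cases_export

Export> status        (show job state and per-worker progress)
Export> stop_job      (suspend the job; resume it later with start_job)
```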
Database monitoring
You can also obtain more information about a running Data Pump job from database views. The main view for monitoring a job is dba_datapump_jobs, whose degree column tells you how many worker processes are working on the job.
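A minimal query against this view might look like the following (a sketch; the columns listed are the documented ones for dba_datapump_jobs):

```sql
select owner_name, job_name, operation, job_mode, state, degree
  from dba_datapump_jobs;
```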
Another important view is dba_datapump_sessions; joined with the preceding view and v$session, it gives the SID of the session of the main foreground process:
select sid, serial# from v$session s, dba_datapump_sessions d where s.saddr = d.saddr;
This query shows the sessions of the foreground processes. More useful information can be obtained from the alert log: when the job starts, the master control process (MCP) and the worker processes appear in the alert log as follows:
kupprdp: master process DM00 started with pid=23, OS id=20530 to execute - SYS.KUPM$MCP.MAIN('CASES_EXPORT', 'ANANDA');
kupprdp: worker process DW01 started with worker id=1, pid=24, OS id=20532 to execute - SYS.KUPW$WORKER.MAIN('CASES_EXPORT', 'ANANDA');
kupprdp: worker process DW03 started with worker id=2, pid=25, OS id=20534 to execute - SYS.KUPW$WORKER.MAIN('CASES_EXPORT', 'ANANDA');
The pid shown is that of the processes started for the Data Pump operation. You can find the actual SIDs with the following query:
select sid, program from v$session where paddr in (select addr from v$process where pid in (23, 24, 25));
The program column shows the process name, matching DM (master process) or DW (worker process) in the corresponding alert log entry. If a worker process uses parallel queries, for example the one with SID 23, you can look in the view v$px_session, which shows all parallel query sessions running for the worker represented by SID 23:
select sid from v$px_session where qcsid = 23;
Other useful information can be obtained from the view v$session_longops, which helps predict how long the job will take to complete:
select sid, serial#, sofar, totalwork from v$session_longops where opname = 'CASES_EXPORT' and sofar != totalwork;
The totalwork column shows the total amount of work, and sofar shows how much has been done so far at the current point in time; comparing the two lets you estimate how much longer the job will take.
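For example, the completed percentage can be computed directly in the query (a sketch; opname must match your own job name):

```sql
select sid, serial#,
       round(sofar / totalwork * 100, 2) as pct_done
  from v$session_longops
 where opname = 'CASES_EXPORT'
   and totalwork > 0
   and sofar <> totalwork;
```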
VII. Import and export between Oracle 10g and 11g
1) You can use a 10g client to connect to an 11g database and export it, then import the dump into 10g.
2) Use expdp and impdp with the version parameter, for example:
Back up the data on the 11g server with the expdp command:

expdp userid='sys/password@orcl as sysdba' schemas=sybj directory=data_pump_dir dumpfile=aa.dmp logfile=aa.log version=10.2.0.1.0

Restore the data on the 10g server with the impdp command:

Preparation: 1. create the database; 2. create the tablespace; 3. create the user and grant privileges; 4. copy aa.dmp into the 10g server's dpdump directory.

impdp userid='sys/password@orcl as sysdba' schemas=sybj directory=data_pump_dir dumpfile=aa.dmp logfile=aa.log version=10.2.0.1.0
