Oracle 10g Data Pump (Part I)


Author: fenng
Source: http://www.dbanotes.net

Oracle 10g Data Pump technology enables high-speed movement of data and metadata from one database to another. The technology is built on two data-movement tools: Data Pump Export and Data Pump Import.

Oracle's Data Pump is implemented through a PL/SQL package, DBMS_DATAPUMP (also known as the Data Pump API). Data Pump moves data using direct path loading and the external table mechanism, and it uses the DBMS_METADATA PL/SQL package for all operations on database objects during unload and load.
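
As a minimal sketch of the Data Pump API (the job name, schema, and directory object below are assumptions for illustration, not from the original article), a schema-mode export could be started like this:

DECLARE
  h NUMBER;
BEGIN
  -- open a schema-mode export job; the job name is illustrative
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                          job_mode  => 'SCHEMA',
                          job_name  => 'DEMO_EXP');
  -- write to a dump file in an existing directory object
  DBMS_DATAPUMP.ADD_FILE(handle    => h,
                         filename  => 'demo.dmp',
                         directory => 'DATA_PUMP_DIR');
  -- limit the job to a single schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h,
                                name   => 'SCHEMA_EXPR',
                                value  => 'IN (''FOO'')');
  -- start the job and return control to the caller
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/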

Data Pump is also the foundation for several other key Oracle features, such as Streams replication and logical standby.

The Data Pump feature is integrated into Oracle Database 10g, but in the Standard Edition of 10g the degree of parallelism is limited to 1.


Key Concepts

Master Table
The master table (MT) is the core of Data Pump technology. It holds the details (metadata) of the entire job, and it is what makes restarting a stopped export or import possible. These details include the information needed to generate job status reports, to restart a job, and to locate all objects in the dump file set.
The master table is created in the schema of the user running the export or import, so that user must have sufficient space. The master table has the same name as the job that created it, which means you cannot give a Data Pump job the same name as an existing table or view.

During an export, the master table is built up as the job runs and is written to the dump file set at the end. When an import starts, the master table is first loaded into the database from the dump file set and is then used to control the sequence of operations; it can also be adjusted by job parameters. Note that the master table cannot span dump files, so the specified dump file size must be at least large enough to hold the MT.

The master table is retained or deleted according to the following rules:

1. If the job completes successfully, the MT is deleted.
2. If the job is stopped with the STOP_JOB interactive command, the MT is retained so that the job can be restarted.
3. If the job is killed with the KILL_JOB interactive command, the MT is dropped and the job cannot be restarted.
4. If the job terminates unexpectedly, the MT is always retained.
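
As a sketch of stopping and restarting a job (the user, password, and names are illustrative assumptions):

expdp foo/password DIRECTORY=data_pump_dir DUMPFILE=demo.dmp JOB_NAME=demo_exp

Pressing Ctrl+C during the run enters interactive mode:

Export> STOP_JOB=IMMEDIATE

Later, reattach to the job by name and resume it; the retained master table is what makes this possible:

expdp foo/password ATTACH=demo_exp
Export> START_JOB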
Master Process
Each export or import job spawns one master process. The master process controls the entire job: it communicates with clients, creates and controls the worker processes, and performs logging operations.

Worker Process
Worker processes perform the actual unloading and loading of metadata and table data, in parallel. The number of worker processes equals the value specified by the PARALLEL parameter and can be adjusted throughout the life of the job. As mentioned earlier, Standard Edition has a parallelism of 1.

Network Export and Import
When an import operation runs over the network, the source is another database system rather than a dump file set.
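
A hedged sketch of a network-mode import, assuming a database link named src_db already exists (all names here are illustrative):

impdp foo/password DIRECTORY=data_pump_dir NETWORK_LINK=src_db SCHEMAS=foo LOGFILE=net_imp.log

With NETWORK_LINK, rows are read from the source database over the link and loaded directly into the target, so no dump file is written.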

Server-based processing
Unlike the original export/import utilities, Data Pump export and import run essentially on the server. This means:

1. Because most file reads and writes happen on the server, file paths and directory objects are resolved on the server side, not the client side.
2. Because files are accessed under the operating-system identity of the database server, there can be data security implications.
3. If you rely on the default directory settings, file name conflicts are possible.
Default location for dump, log, and SQL files
The default locations for dump, log, and SQL files are server-based. Before you run Data Pump Export or Data Pump Import, a directory object must be created by a DBA or by a user with the privilege to create any directory. Once the object exists, the users who will use it must be granted READ/WRITE access.
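
For example, a DBA might first create the directory object (the operating-system path here is an assumption):

CREATE DIRECTORY data_pump_dir AS '/u01/app/oracle/dpdump';

and then grant access: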

GRANT READ, WRITE ON DIRECTORY data_pump_dir TO foo;

Note that the directory is usable only if the database server's own operating-system account has the corresponding permissions on the underlying path.

Data Pump determines the location of a file in the following order:

1. The directory object specified as part of the file name itself;
2. If no directory object is attached to the file name, the object named by the DIRECTORY parameter is used by default.
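
For example (directory and file names are illustrative), a directory object attached to a file name overrides the DIRECTORY parameter for that file:

expdp foo/password DIRECTORY=data_pump_dir DUMPFILE=other_dir:demo.dmp LOGFILE=demo.log

Here demo.dmp is written to the location named by the directory object other_dir, while the log file still goes to data_pump_dir.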

Privileged and non-privileged users
Data Pump distinguishes two types of users: privileged and non-privileged. Privileged users hold the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles; non-privileged users do not.

Privileged users can do the following:

1. Export and import database objects owned by other users;
2. Export and import objects that are not schema-based (such as tablespace and schema definitions);
3. Perform certain operations on jobs started by other users.
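
As an illustration (the user name is an assumption), a DBA can make a user privileged by granting these roles:

GRANT EXP_FULL_DATABASE, IMP_FULL_DATABASE TO foo;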

How does Data Pump access data?

Two mechanisms are used: direct path (DP) and external tables (ET). The ET mechanism uses the ORACLE_DATAPUMP access driver and shares the direct path API's internal stream format. Oracle states that Data Pump automatically chooses the faster method.
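
As a sketch of the ET mechanism used on its own (the table, directory, and file names are assumptions), the ORACLE_DATAPUMP driver can unload a table into a Data Pump format file:

CREATE TABLE emp_unload
ORGANIZATION EXTERNAL (
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY data_pump_dir
    LOCATION ('emp_unload.dmp')
)
AS SELECT * FROM emp;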


File allocation

A Data Pump job manages three types of files:

1. Dump files, which hold the data and metadata being moved;
2. Log files, which record information about the operation;
3. SQL files, which record the output of a SQLFILE operation (see the sketch below).
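
For instance (file names are illustrative), a SQLFILE operation writes out the DDL that an import would run, without actually importing anything:

impdp foo/password DIRECTORY=data_pump_dir DUMPFILE=demo.dmp SQLFILE=demo_ddl.sql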


Specifying files and adding additional files
For an export operation, you can specify the dump files up front; if you find there is not enough file space as the operation progresses, you can add more files with the interactive-mode ADD_FILE command. For an import operation, all dump files must be specified when the job is defined. Log files and SQL files overwrite existing files; dump files never overwrite files that already exist.
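
As a sketch (the file name is illustrative), adding a dump file from interactive mode looks like this:

Export> ADD_FILE=extra01.dmp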


Setting the degree of parallelism
For export and import, the degree of parallelism (specified by the PARALLEL parameter) should be less than or equal to the number of dump files. If there are too few files, performance can degrade because multiple threads must access the same dump file at the same time. As mentioned earlier, the parallelism of the 10g Standard Edition is fixed at 1, so this parameter is effectively meaningful only for the 10g Enterprise Edition.

Using substitution variables
In export and import operations you can use the %U substitution variable in file names to have matching file names generated automatically, which is much more convenient in some cases.
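
A combined sketch (all names are illustrative) that pairs PARALLEL with %U so each worker has its own file to write to:

expdp foo/password DIRECTORY=data_pump_dir SCHEMAS=foo DUMPFILE=demo%U.dmp FILESIZE=2G PARALLEL=4

%U expands to a two-digit sequence, so files are created as demo01.dmp, demo02.dmp, and so on as they are needed.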


New features of export and import in Data Pump

Compared with the old exp/imp utilities, Data Pump export and import add many new capabilities, including:

1. The ability to specify the maximum number of worker threads for a job, and to adjust it while the job executes.
2. Data Pump jobs are fully restartable.
3. You can detach from a long-running job, or reattach to it, without affecting the job itself.
4. Export and import operations are supported over the network.
5. Data files can be remapped during import (REMAP_DATAFILE).
6. Fine-grained object selection is supported (see the sketch after this list).
7. Jobs can be monitored and controlled through an interactive command-line mode.
8. The space that the exported objects will consume can be estimated.
9. A database version can be specified for object portability between releases.
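
A hedged sketch of fine-grained selection combined with a target version (all names are illustrative):

expdp foo/password DIRECTORY=data_pump_dir DUMPFILE=part.dmp SCHEMAS=foo INCLUDE=TABLE:"IN ('EMP','DEPT')" VERSION=10.1

On most operating-system shells the INCLUDE clause needs quoting or escaping; putting it in a parameter file avoids the escaping issues.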


To be continued.

Attached: a description of the MT structure



SQL> desc sys_export_schema_01
 Name                  Null?    Type
 --------------------- -------- --------------------
 PROCESS_ORDER                  NUMBER
 DUPLICATE                      NUMBER
 DUMP_FILEID                    NUMBER
 DUMP_POSITION                  NUMBER
 DUMP_LENGTH                    NUMBER
 DUMP_ALLOCATION                NUMBER
 COMPLETED_ROWS                 NUMBER
 ERROR_COUNT                    NUMBER
 ELAPSED_TIME                   NUMBER
 OBJECT_TYPE_PATH               VARCHAR2(200)
 OBJECT_PATH_SEQNO              NUMBER
 OBJECT_TYPE                    VARCHAR2(30)
 IN_PROGRESS                    CHAR(1)
 OBJECT_NAME                    VARCHAR2(500)
 OBJECT_SCHEMA                  VARCHAR2(30)
 PARTITION_NAME                 VARCHAR2(30)
 FLAGS                          NUMBER
 COMPLETION_TIME                DATE
 OBJECT_TABLESPACE              VARCHAR2(30)
 SIZE_ESTIMATE                  NUMBER
 OBJECT_ROW                     NUMBER
 PROCESSING_STATE               CHAR(1)
 PROCESSING_STATUS              CHAR(1)
 BASE_OBJECT_TYPE               VARCHAR2(30)
 BASE_OBJECT_NAME               VARCHAR2(30)
 BASE_OBJECT_SCHEMA             VARCHAR2(30)
 PARALLELIZATION                NUMBER
 UNLOAD_METHOD                  NUMBER
 GRANULES                       NUMBER
 SCN                            NUMBER
 DOMAIN_INDEX                   VARCHAR2(30)
 DOMAIN_INDEX_SCHEMA            VARCHAR2(30)
 GRANTOR                        VARCHAR2(30)
 NAME                           VARCHAR2(30)
 VALUE_T                        VARCHAR2(4000)
 VALUE_N                        NUMBER
 IS_DEFAULT                     NUMBER
 FILE_TYPE                      NUMBER
 USER_DIRECTORY                 VARCHAR2(4000)
 USER_FILE_NAME                 VARCHAR2(4000)
 FILE_NAME                      VARCHAR2(4000)
 EXTEND_SIZE                    NUMBER
 FILE_MAX_SIZE                  NUMBER
 EXTEND_ACTIVE                  NUMBER
 OVERFLOW_TO                    NUMBER
 PROCESS_NAME                   VARCHAR2(30)
 LAST_UPDATE                    DATE
 WORK_ITEM                      VARCHAR2(30)
 NON_TRANSACTIONAL              CHAR(1)
 OBJECT_NUMBER                  NUMBER
 COMPLETED_BYTES                NUMBER
 TOTAL_BYTES                    NUMBER
 METADATA_IO                    NUMBER
 DATA_IO                        NUMBER
 CUMULATIVE_TIME                NUMBER
 OLD_VALUE                      VARCHAR2(4000)
 SEED                           NUMBER
 LAST_FILE                      NUMBER
 USER_NAME                      VARCHAR2(30)
 OPERATION                      VARCHAR2(30)
 JOB_MODE                       VARCHAR2(30)
 VERSION                        NUMBER
 DB_VERSION                     VARCHAR2(30)
 STATE                          VARCHAR2(30)
 PHASE                          NUMBER
 GUID                           RAW(16)
 START_TIME                     DATE
 BLOCK_SIZE                     NUMBER
 METADATA_BUFFER_SIZE           NUMBER
 DATA_BUFFER_SIZE               NUMBER
 DEGREE                         NUMBER
 LANGUAGE                       VARCHAR2(30)
 PLATFORM                       VARCHAR2(100)
 ABORT_STEP                     NUMBER
 INSTANCE                       VARCHAR2(16)












Original source: http://www.dbanotes.net/Oracle/10G-Data-Pump-PartI.htm


