Oracle Data Pump: impdp and expdp

Source: Internet
Author: User


1. Create a logical directory. This command does not create a real directory on the operating system, and it is best run as a system administrator:

    CREATE DIRECTORY dpdata1 AS '/opt';

2. Check the directory in the administrator view. Verify that the operating-system path actually exists, because Oracle does not check it when the directory object is created; if it does not exist, an error occurs at run time:

    SELECT * FROM dba_directories;

3. Grant the scott user read/write permission on the directory, again preferably as a system administrator:

    GRANT READ, WRITE ON DIRECTORY dpdata1 TO scott;

4. Export data

1) Export by user:
    expdp scott/tiger@orcl schemas=scott dumpfile=expdp_user.dmp DIRECTORY=dpdata1

2) Export with parallel processes:
    expdp scott/tiger@orcl directory=dpdata1 dumpfile=scott3.dmp parallel=40 job_name=scott3

3) Export by table name:
    expdp scott/tiger@orcl TABLES=emp,dept dumpfile=expdp_tables.dmp DIRECTORY=dpdata1

4) Export by query condition:
    expdp scott/tiger@orcl directory=dpdata1 dumpfile=expdp_query.dmp Tables=emp query='WHERE deptno=20'

5) Export by tablespace:
    expdp system/manager DIRECTORY=dpdata1 DUMPFILE=tablespace.dmp TABLESPACES=temp,example

6) Export the entire database:
    expdp system/manager DIRECTORY=dpdata1 DUMPFILE=full.dmp FULL=y

5. Restore data

1) Import to the specified user:
    impdp scott/tiger DIRECTORY=dpdata1 DUMPFILE=expdp_user.dmp SCHEMAS=scott

2) Change the table owner:
    impdp system/manager DIRECTORY=dpdata1 DUMPFILE=expdp.dmp TABLES=scott.dept REMAP_SCHEMA=scott:system

3) Import a tablespace:
    impdp system/manager DIRECTORY=dpdata1 DUMPFILE=tablespace.dmp TABLESPACES=example

4) Import the entire database:
    impdp system/manager DIRECTORY=dump_dir DUMPFILE=full.dmp FULL=y

5) Append data:
    impdp system/manager DIRECTORY=dpdata1 DUMPFILE=expdp.dmp SCHEMAS=system TABLE_EXISTS_ACTION=APPEND
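Put together, a minimal round trip looks like the following sketch; the connect strings and directory are the examples used in this article, while the dump-file and log-file names are illustrative:

```
# export the scott schema to /opt (the path behind the dpdata1 directory object)
expdp scott/tiger@orcl SCHEMAS=scott DIRECTORY=dpdata1 DUMPFILE=scott_full.dmp LOGFILE=scott_exp.log

# import it under another owner, appending to any tables that already exist
impdp system/manager@orcl DIRECTORY=dpdata1 DUMPFILE=scott_full.dmp \
    REMAP_SCHEMA=scott:system TABLE_EXISTS_ACTION=APPEND
```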
6. Parameter description: import (impdp)

1. TABLE_EXISTS_ACTION={SKIP | APPEND | TRUNCATE | REPLACE}. When set to SKIP, the import job skips an existing table and processes the next object; APPEND appends rows to the existing table; TRUNCATE truncates the table and then loads the new rows; REPLACE drops the existing table, re-creates it, and then loads the rows. Note that the TRUNCATE option is not applicable to cluster tables or to the NETWORK_LINK option.

2. REMAP_SCHEMA: loads all objects from the source schema into the target schema: REMAP_SCHEMA=source_schema:target_schema

3. REMAP_TABLESPACE: loads all objects from the source tablespace into the target tablespace: REMAP_TABLESPACE=source_tablespace:target_tablespace
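As a sketch of how these import options combine on one command line (the schema and tablespace names here are illustrative, not from the article):

```
impdp system/manager DIRECTORY=dpdata1 DUMPFILE=expdp.dmp \
    REMAP_SCHEMA=scott:hr \
    REMAP_TABLESPACE=users:example \
    TABLE_EXISTS_ACTION=TRUNCATE   # truncate existing tables, then reload the rows
```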
4. REMAP_DATAFILE: converts a source data-file name to a target data-file name. This option may be required when moving a tablespace between different platforms: REMAP_DATAFILE=source_datafile:target_datafile

7. Parameter description: export (expdp)

1. CONTENT: specifies the content to export. The default is ALL. CONTENT={ALL | DATA_ONLY | METADATA_ONLY}. With ALL, both the object definitions and all their data are exported; DATA_ONLY exports only the object data; METADATA_ONLY exports only the object definitions.
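A common use of CONTENT is to ship the structure and the data separately; a sketch, with illustrative dump-file names:

```
# export only the object definitions (DDL), no rows
expdp scott/tiger DIRECTORY=dpdata1 DUMPFILE=meta.dmp CONTENT=METADATA_ONLY

# export only the rows
expdp scott/tiger DIRECTORY=dpdata1 DUMPFILE=rows.dmp CONTENT=DATA_ONLY

# on the target: create the empty tables first, then load the rows into them
impdp scott/tiger DIRECTORY=dpdata1 DUMPFILE=meta.dmp
impdp scott/tiger DIRECTORY=dpdata1 DUMPFILE=rows.dmp TABLE_EXISTS_ACTION=APPEND
```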
2. DIRECTORY: specifies the directory where the dump and log files are written: DIRECTORY=directory_object

3. EXCLUDE: specifies object types, or specific objects, to exclude when the operation is executed: EXCLUDE=object_type[:name_clause] [, ...]. object_type specifies the object type to exclude; name_clause specifies the particular objects to exclude. EXCLUDE and INCLUDE cannot be used together.
    expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dup EXCLUDE=VIEW

4. INCLUDE: includes only the specified types during export (for example, INCLUDE=TABLE_DATA; INCLUDE=TABLE:"LIKE 'TAB%'"; INCLUDE=TABLE:"NOT LIKE 'TAB%'" ...). EXCLUDE: object types excluded during export (for example, EXCLUDE=TABLE:EMP).

5. FILESIZE: specifies the maximum size of each exported file, in bytes. The default is 0, meaning the file size is unlimited.

6. JOB_NAME: the name used by the export job, to make it easier to track and query (optional).

7. FLASHBACK_SCN: exports table data as of a specific SCN: FLASHBACK_SCN=scn_value, where scn_value is the SCN. FLASHBACK_SCN and FLASHBACK_TIME cannot be used together.
    expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp FLASHBACK_SCN=358523

8. FLASHBACK_TIME: exports table data as of a specific point in time: FLASHBACK_TIME="TO_TIMESTAMP(time_value)"
    expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp FLASHBACK_TIME="TO_TIMESTAMP('25-08-2004 14:35:00','DD-MM-YYYY HH24:MI:SS')"

9. TABLESPACES: specifies the tablespaces to export.

10. QUERY=[schema.][table_name:]query_clause. schema specifies the schema name, table_name the table name, and query_clause the WHERE-clause restriction. QUERY cannot be used with options such as CONTENT=METADATA_ONLY, ESTIMATE_ONLY, and TRANSPORT_TABLESPACES.
    expdp scott/tiger directory=dump dumpfile=a.dmp Tables=emp query='WHERE deptno=20'

11. PARALLEL: specifies the number of parallel processes that execute the export; the default is 1. Using PARALLEL with more than one thread can significantly speed up the job. Each thread creates a separate dump file, so the DUMPFILE parameter should name as many files as the degree of parallelism. Instead of explicitly entering each file name, you can use a wildcard in the file name, for example:
    expdp ananda/abc123 tables=CASES directory=DPDATA1 dumpfile=expCASES_%U.dmp parallel=4 job_name=Cases_Export
Note: the dumpfile parameter contains the wildcard %U, meaning files are created as needed, named expCASES_nn.dmp, where nn starts at 01 and increments as required. In parallel mode the status screen shows four worker processes (in the default mode only one process is visible); all worker processes extract data simultaneously and show their progress on the status screen.
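Because the quotes in an INCLUDE or EXCLUDE name clause are easily stripped by the operating-system shell, one practical approach is to place such options in a parameter file and pass it with expdp's PARFILE parameter; a sketch, with an illustrative file name:

```
# write the parameter file; the quoting survives because the shell never parses it
cat > exp_tabs.par <<'EOF'
DIRECTORY=dump
DUMPFILE=tabs.dmp
INCLUDE=TABLE:"LIKE 'TAB%'"
EOF

expdp scott/tiger PARFILE=exp_tabs.par
```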
It is also important that the I/O channels used to read the data files and to write the dump directory be separate. Otherwise, the overhead of maintaining the Data Pump job may outweigh the benefit of the parallel threads and actually reduce performance. Parallelism is effective only when the number of tables exceeds the parallel value and the tables are large.
