Oracle backup and recovery with Data Pump (expdp/impdp)


1. Create a directory

SQL> create directory dump_file as '/db/backup';

2. Grant Directory Permissions

SQL> grant read, write on directory dump_file to bam;

View the defined directories:

SQL> select * from dba_directories;

3. Backup

$ expdp user/pwd directory=dump_file dumpfile=expdp_%U.dmp logfile=expdp.log filesize=20G

4. Restore

$ impdp user/pwd directory=dump_file dumpfile=expdp_%U.dmp logfile=impdp.log

 

Backup content (CONTENT):

(1) Back up everything (object definitions and data): content=all (the default)

(2) Metadata-only backup (object definitions only): content=metadata_only

(3) Data-only backup (rows only): content=data_only (the object definitions must already exist in the target database at import time; in that case it is best to also specify content=data_only on the import)
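As a sketch of the three CONTENT settings (reusing the dump_file directory from step 1; user/pwd and the dump file names are placeholders):

```shell
# Default: export object definitions and data together
expdp user/pwd directory=dump_file dumpfile=full.dmp logfile=full.log content=all

# Export object definitions only, no rows
expdp user/pwd directory=dump_file dumpfile=meta.dmp logfile=meta.log content=metadata_only

# Export rows only; the target must already have the definitions when importing
expdp user/pwd directory=dump_file dumpfile=data.dmp logfile=data.log content=data_only
```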

 

Handling tables that already exist on import (TABLE_EXISTS_ACTION):

(1) table_exists_action=skip: skip the table and move on to the next one. (Invalid when CONTENT=data_only is also specified; the default then becomes append.)

(2) table_exists_action=append: keep the existing rows and append the imported data.

(3) table_exists_action=truncate: truncate the table, then load the imported data.

(4) table_exists_action=replace: drop and re-create the table, then load the data. (Invalid when CONTENT=data_only is specified.)
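A hedged example of the two most common choices, reusing the dump set produced in the backup step (user/pwd is a placeholder):

```shell
# Keep existing rows and append the imported ones
impdp user/pwd directory=dump_file dumpfile=expdp_%U.dmp logfile=impdp.log table_exists_action=append

# Truncate each existing table first, then load the imported rows
impdp user/pwd directory=dump_file dumpfile=expdp_%U.dmp logfile=impdp.log table_exists_action=truncate
```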

 

Remapping the schema that owns the objects (REMAP_SCHEMA):

Syntax: REMAP_SCHEMA=source_schema:target_schema[,source_schema:target_schema]

(Multiple mappings are allowed, e.g. remap_schema=a:b,c:d, but the same source schema cannot be mapped to two targets: remap_schema=a:b,a:c is invalid.)
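For instance (scott here is a hypothetical source schema; bam is the user granted directory access in step 2):

```shell
# Import objects that were exported from schema scott into schema bam
impdp user/pwd directory=dump_file dumpfile=expdp_%U.dmp logfile=impdp.log remap_schema=scott:bam
```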

 

Remapping tablespaces (REMAP_TABLESPACE):

REMAP_TABLESPACE=source_tablespace:target_tablespace[,source_tablespace:target_tablespace]
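A sketch with hypothetical tablespace names tbs_old and tbs_new:

```shell
# Re-create imported objects in tbs_new instead of their original tablespace tbs_old
impdp user/pwd directory=dump_file dumpfile=expdp_%U.dmp logfile=impdp.log remap_tablespace=tbs_old:tbs_new
```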

 

Optimizing export/import performance (PARALLEL):

(1) For export, each dump file can be written by only one worker (including its I/O), so if there is only one output dump file, only one worker actually does the work no matter how high the parallelism, and an ORA-39095 error can be raised. It is therefore recommended to set PARALLEL no larger than the number of dump files that will be produced. How do we control the number of dump files generated?

The expdp command provides a FILESIZE parameter that caps the size of a single dump file; to use PARALLEL effectively, FILESIZE should be set as well.

For example, if a user's objects occupy about 4 GB and the exported dump comes to about 3 GB, then to export with a degree of parallelism of 4, cap each file at 500 MB: specify parallel=4 together with filesize=500M, and use %U in the dump file name so multiple files can be created.
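The parallel export described above can be sketched as (user/pwd is a placeholder):

```shell
# ~3 GB dump with parallel=4: cap each file at 500 MB so at least 4 files exist;
# %U makes Data Pump number the files (expdp_01.dmp, expdp_02.dmp, ...)
expdp user/pwd directory=dump_file dumpfile=expdp_%U.dmp logfile=expdp.log parallel=4 filesize=500M
```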

(2) For import, PARALLEL is much simpler to use, and import is where it shows its advantage best: whatever the parameter is set to, that many tables are loaded into the database concurrently.

For example, if a dump file set contains 200 tables and we want to import with a degree of parallelism of 10, just set parallel=10.
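As a sketch of that parallel import (user/pwd is a placeholder):

```shell
# Load up to 10 tables' worth of work concurrently from the same dump set
impdp user/pwd directory=dump_file dumpfile=expdp_%U.dmp logfile=impdp.log parallel=10
```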
