1. Create a Directory
SQL> CREATE DIRECTORY dump_file AS '/db/backup';
2. Grant Permissions on the Directory
SQL> GRANT READ, WRITE ON DIRECTORY dump_file TO BAM;
To view existing directories:
SQL> SELECT * FROM dba_directories;
3. Backup (Export)
Note that expdp is an OS-level utility, run from the shell rather than from SQL*Plus:
$ expdp user/pwd directory=dump_file dumpfile=expdp_%U.dmp logfile=expdp.log filesize=20G
4. Restore (Import)
$ impdp user/pwd directory=dump_file dumpfile=expdp_%U.dmp logfile=impdp.log
Backup contents (CONTENT parameter):
(1) content=all: back up everything, both object definitions and data (the default)
(2) content=metadata_only: object definitions only (a structure-only, logical backup)
(3) content=data_only: data only (on import, the object definitions must already exist in the target database; when they do, it is best to also specify content=data_only on the import)
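As a sketch of the CONTENT options above, and assuming the dump_file directory and the user/pwd credentials from the earlier steps (the dump and log file names here are illustrative):

```shell
# Structure-only export: object definitions, no rows
expdp user/pwd directory=dump_file dumpfile=meta.dmp logfile=meta.log content=metadata_only

# Data-only export: rows only; on import the tables must already exist
expdp user/pwd directory=dump_file dumpfile=data.dmp logfile=data.log content=data_only
```

Both commands require a running Oracle instance and a valid dump_file directory object, so they are shown here as a template rather than something to copy verbatim.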
Handling tables that already exist (TABLE_EXISTS_ACTION parameter):
(1) table_exists_action=skip: skip the table and continue with the next one (if content=data_only is also specified, skip is invalid and the behavior defaults to append)
(2) table_exists_action=append: keep the existing rows and append the new data
(3) table_exists_action=truncate: truncate the table, then load the data
(4) table_exists_action=replace: drop and re-create the table, then load the data (replace is invalid if content=data_only is also specified)
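A hedged example of re-loading data into tables that already contain rows, reusing the dump file name from the backup step above:

```shell
# Truncate each existing table before loading, so the imported rows replace
# the old ones; content=data_only keeps the existing definitions untouched
impdp user/pwd directory=dump_file dumpfile=expdp_%U.dmp logfile=impdp.log \
      content=data_only table_exists_action=truncate
```

Note that truncate (rather than replace) is the safe companion to content=data_only, since replace is invalid in that combination.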
Remapping the owning schema (REMAP_SCHEMA parameter):
Syntax: remap_schema=source_schema:target_schema[,source_schema:target_schema]
(remap_schema=a:b,c:d is allowed, but the same source schema cannot be mapped twice, so remap_schema=a:b,a:c is not valid within a single impdp command.)
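A minimal sketch of the multi-mapping form, assuming schemas a, b, c, and d exist (the names are illustrative):

```shell
# Import objects exported from schema a into schema b, and from c into d
impdp user/pwd directory=dump_file dumpfile=expdp_%U.dmp logfile=impdp.log \
      remap_schema=a:b,c:d
```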
Remapping the object's tablespace (REMAP_TABLESPACE parameter):
Syntax: remap_tablespace=source_tablespace:target_tablespace[,source_tablespace:target_tablespace]
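For example, to relocate segments during import (the tablespace names users and app_data are assumptions for illustration):

```shell
# Create imported objects in tablespace app_data instead of users
impdp user/pwd directory=dump_file dumpfile=expdp_%U.dmp logfile=impdp.log \
      remap_tablespace=users:app_data
```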
Improving import/export throughput (PARALLEL parameter):
(1) For export, a single dump file can be written by only one process at a time (including its I/O). If only one dump file is produced, then no matter how high PARALLEL is set, only one worker actually does the writing, and an ORA-39095 error can be triggered. It is therefore recommended to set PARALLEL less than or equal to the number of dump files generated. So how do you control the number of dump files?
The expdp command provides the FILESIZE parameter to cap the size of a single dump file; setting FILESIZE (together with a %U substitution variable in DUMPFILE, so that multiple files can be created) is necessary to make effective use of PARALLEL.
Example: a user's objects occupy about 4 GB of space and the exported dump is about 3 GB. To export that user with a degree of parallelism of 4 and a per-file cap of 500 MB, specify parallel=4 filesize=500M.
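The parallel export described above might be sketched as follows; %U lets expdp create as many files as needed, so the 500 MB cap turns the roughly 3 GB dump into several files that the 4 workers can write concurrently:

```shell
# PARALLEL only helps when DUMPFILE can expand into multiple files (%U)
# and FILESIZE caps each one; otherwise ORA-39095 may be raised
expdp user/pwd directory=dump_file dumpfile=expdp_%U.dmp logfile=expdp.log \
      parallel=4 filesize=500M
```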
(2) For import, using PARALLEL is much simpler, and import is where the parameter shows its advantage most clearly: whatever value it is set to, that many tables are loaded into the database concurrently.
Example: a dump file contains 200 tables; to import it with a degree of parallelism of 10, specify parallel=10.
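The import example above, written out as a command sketch under the same assumed credentials and directory:

```shell
# Up to 10 of the 200 tables are loaded into the database at the same time
impdp user/pwd directory=dump_file dumpfile=expdp_%U.dmp logfile=impdp.log parallel=10
```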