Export Template
userid="/ as sysdba"
directory=fwy
dumpfile=expdp%U.dmp
logfile=expdp.log
tables=(
A.table1,
A.table2
)
query=(
GOLAPDATA.EPAY_PA_PAYMENT_INFO:"where PAYMENT_DATE >= date '2017-01-01'",
GOLAPDATA.NSHOP_APPLY_BASE_INFO_N:"where APPLY_TIME >= date '2017-01-01'"
)
filesize=4096M
parallel=4
job_name=expdp_fwy
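Saved as a parameter file, the template is invoked as below; the file name export.par is an assumption. Because userid is set inside the parfile, no credentials are needed on the command line.
expdp parfile=export.par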
Import Template
Userid = "/as sysdba"
Directory = fwy
Dumpfile = impdp % U. dmp
Logfile = impdp. log
Tables = (
A. table1
A. table2
B. table3
)
Query = (
GOLAPDATA. EPAY_PA_PAYMENT_INFO: "where PAYMENT_DATE> = date '2017-01-01 '"
GOLAPDATA. NSHOP_APPLY_BASE_INFO_N: "where APPLY_TIME> = date '2017-01-01 '"
GOLAPDATA. NSHOP_POLICY_BASE_INFO_N: "where CALCU_DATE> = date '2017-01-01 '"
)
Parallel = 4
Job_name = impdp_fwy
TABLE_EXISTS_ACTION = REPLACE
NETWORK_LINK Import Template
First, as the sys user, create a database link in the target database that points to the source database.
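A minimal sketch of that step; the TNS alias source_db and the remote credentials are placeholders, and connecting to the remote system account is only one workable choice:
-- run as sys in the target database
create database link to_t1lass connect to system identified by "***" using 'source_db';
-- quick sanity check
select sysdate from dual@to_t1lass;
The import template itself: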
Userid = "/as sysdba"
Directory = fwy
Logfile = impdp. log
Tables = (
A. table1
A. table2
B. table3
)
Query = (
GOLAPDATA. EPAY_PA_PAYMENT_INFO: "where PAYMENT_DATE> = date '2017-01-01 '"
GOLAPDATA. NSHOP_APPLY_BASE_INFO_N: "where APPLY_TIME> = date '2017-01-01 '"
GOLAPDATA. NSHOP_POLICY_BASE_INFO_N: "where CALCU_DATE> = date '2017-01-01 '"
)
Job_name = impdp_fwy
Network_link = to_t1lass
Parallel = 4
TABLE_EXISTS_ACTION = REPLACE
# NETWORK_LINK works with partitioned tables, but it cannot be used for tables that contain LONG columns.
# Remember to drop the dblink that connects to the production database once the import is done.
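For example, once the import has finished:
drop database link to_t1lass;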
Parameters
1. What to export or import?
Full = (N) | Y -- whether to export the entire database; on import, everything in the dump file is imported.
Tablespaces = (tbs1, tbs2)
Schemas = (a, b, c)
Content = (ALL) | DATA_ONLY | METADATA_ONLY -- ALL exports both table rows and metadata; this corresponds to rows=y|n in classic exp.
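For example, a minimal sketch that captures only DDL; the schema name A and the file name ddl_only.dmp are assumptions:
expdp \"/ as sysdba\" directory=fwy schemas=A content=metadata_only dumpfile=ddl_only.dmp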
# EXCLUDE removes the matching objects and exports everything else within the selected scope.
Exclude = (
TABLE:"LIKE 'TMP%'", -- exclude every table whose name matches TMP%
TABLE:"IN ('EMPLOYEES', 'DEMO')", -- exclude exactly these two tables
INDEX -- exclude all indexes
)
Schemas = hr
Exclude = (
SCHEMA:"= 'HR'" -- exclude the HR schema
)
Full = y
Include = TABLE:"IN ('TAB1', 'TAB2')" -- include only tables TAB1 and TAB2
Include = procedure -- include only stored procedures
Include = function
Include = package
Include = view:"LIKE 'PRODUCT%'"
Include = index:"LIKE 'EMP%'" -- include only indexes whose names start with EMP
Include = (
package,
function
)
# INCLUDE: only the specified group of objects is exported or imported.
-- export the following objects under the hr and oe schemas
Include = (
function,
procedure,
package,
view:"LIKE 'PRODUCT%'",
TABLE:"= 'T'",
TABLE:"LIKE 'SAM%'"
)
Only one of INCLUDE and EXCLUDE may be specified in a single job.
INCLUDE and EXCLUDE filters can be applied at full-database level, schema level, or table level.
You can exclude specific database objects from an export or import. EXCLUDE and INCLUDE operate within the scope the job defines: if SCHEMAS is set, objects in those schemas that match the filter are skipped and the rest is exported; if TABLES is set, matching objects among those tables are skipped and the others are exported. Some objects, such as packages, belong to the schema level rather than to any one table. If CONTENT=DATA_ONLY is specified, nothing but row data is exported, and EXCLUDE and INCLUDE cannot be used. When filtering metadata with EXCLUDE and INCLUDE, remember that objects dependent on a filtered object are filtered along with it: excluding a table automatically excludes its indexes, constraints, and triggers as well.
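To illustrate the dependency rule, a sketch; the table name T1 is hypothetical:
exclude=TABLE:"='T1'" -- the indexes, constraints, and triggers on T1 are filtered out with it; no separate exclude=INDEX entry is needed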
2. How to export data?
Directory = fwy -- created beforehand with: create directory fwy as '/paic/app/oracle/base/fwy'; the same directory object serves both export and import.
COMPRESSION = ALL | DATA_ONLY | (METADATA_ONLY) | NONE -- ALL and DATA_ONLY require 11g.
Estimate = (blocks) | statistics -- estimate the space needed using data blocks or table statistics.
Estimate_only = (N) | Y -- only estimate the space; do not actually run the export.
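For instance, a sketch that sizes a dump before running it; the schema name GOLAPDATA is reused from the templates above. No dumpfile parameter is given, since estimate_only=y does not write one:
expdp \"/ as sysdba\" directory=fwy schemas=GOLAPDATA estimate_only=y estimate=statistics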
# Sample Parameters
SAMPLE = hr.tab:30 -- only 30% of hr.tab is exported; all other hr tables are exported in full.
Schemas = hr
SAMPLE = 30 -- only 30% of every table's rows is exported.
TABLE_EXISTS_ACTION = (SKIP) | APPEND | REPLACE | TRUNCATE
SKIP: if the object already exists, skip it;
APPEND: add rows on top of the existing data; no metadata for the table is imported, only the appended rows;
REPLACE: drop the table, re-create it, then insert the data; in testing, REPLACE does not work for SEQUENCEs;
TRUNCATE: truncate the table first, then insert the data.
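For instance, a sketch that reloads one table from the templates above without dropping it; the dump and directory names are reused from those templates and should be treated as assumptions:
impdp \"/ as sysdba\" directory=fwy dumpfile=expdp%U.dmp tables=A.table1 table_exists_action=truncate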
Remap functions (usually used on import)
Remap_table = hr.emp1:emp2 -- rename table emp1 to emp2
Remap_schema = A:X, B:Y
Remap_tablespace = A:B -- on import, every object that lived in tablespace A is placed in tablespace B; usually combined with remap_schema above, since users' default tablespaces differ.
Remap_datafile = '/data/oradata/jssweb':'/data/oradata/newdb' -- change the datafile path when importing a tablespace; also usable between platforms with different file systems.
Remap_datafile = 'DB2$:[HRDATA.PAYROLL]users.f':'/db2/hrdata/payroll/users.f' -- convert a VMS file-system path to a unix path.
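Putting the two common remaps together, a sketch; the schema and tablespace names A, X, and B come from the examples above:
impdp \"/ as sysdba\" directory=fwy dumpfile=expdp%U.dmp remap_schema=A:X remap_tablespace=A:B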
Export/import considerations
According to MOS Doc ID 780465.1, with 10g expdp do not add spaces anywhere in the QUERY clause, especially at its end or in the middle.
Impdp first creates all table structures and imports all table data, then processes grants, creates indexes and constraints, and imports index statistics, comments, and table statistics.
When running impdp, check that the undo tablespace is large enough.
If impdp is run with PARALLEL, check whether db_writer_processes is sized to match.
On a RAC database, export/import either without parallelism or with cluster=n.
Watch this on 11g RAC when enabling parallelism: even with cluster=n, an error is raised if the degree of parallelism is too high.
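A sketch of the safe combination described above on 11g RAC; the parfile name import.par is an assumption. cluster=n keeps the worker processes on the instance where the job was started:
impdp \"/ as sysdba\" parfile=import.par cluster=n parallel=4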
Monitor datapump jobs
1. select * from dba_datapump_jobs;
2. dba_datapump_sessions
select b.sid, b.username, b.event
from dba_datapump_sessions a join v$session b on a.saddr = b.saddr;
3. View long-running operations
select * from v$session_longops;
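A more targeted sketch of the same view; the job name EXPDP_FWY is taken from the export template above and is an assumption:
select sid, opname, sofar, totalwork, round(sofar / totalwork * 100, 1) pct_done
from v$session_longops
where opname = 'EXPDP_FWY' and totalwork > 0 and sofar <> totalwork;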
Interactive mode: use the dba_datapump_jobs view to find the job's name and state, then attach to it:
impdp attach=sys.impdp_fwy
/ as sysdba
status -- show job and worker-process status
stop_job -- stop the job in an orderly fashion
stop_job=immediate -- stop the job immediately
start_job -- resume a stopped job
kill_job -- detach all sessions and delete the job
parallel -- change the degree of parallelism of the current job
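Putting it together, a sketch of an interactive session; parallel=8 is illustrative:
$ impdp attach=sys.impdp_fwy
Username: / as sysdba
Import> status
Import> parallel=8
Import> continue_client -- return to logging mode; exit_client leaves the job running in the background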