Detailed understanding and hands-on practice of the IMP/EXP commands


Exp/imp backup (export/import backup)
exp help=y output:
USERID       username/password
FULL         export entire file (N)
BUFFER       size of data buffer
OWNER        list of owner usernames
FILE         output files (EXPDAT.DMP)
TABLES       list of table names
COMPRESS     import into one extent (Y)
RECORDLENGTH length of IO record
GRANTS       export grants (Y)
INCTYPE      incremental export type
INDEXES      export indexes (Y)
RECORD       track incremental export (Y)
ROWS         export data rows (Y)
PARFILE      parameter filename
CONSTRAINTS  export constraints (Y)
CONSISTENT   cross-table consistency
LOG          log file of screen output
STATISTICS   analyze objects (ESTIMATE)
DIRECT       direct path (N)
TRIGGERS     export triggers (Y)
FEEDBACK     display progress every x rows (0)
FILESIZE     maximum size of each dump file
QUERY        select clause used to export a subset of a table

The following keywords only apply to transportable tablespaces:
TRANSPORT_TABLESPACE export transportable tablespace metadata (N)
TABLESPACES          list of tablespaces to transport
imp help=y output:
USERID       username/password
FULL         import entire file (N)
BUFFER       size of data buffer
FROMUSER     list of owner usernames
FILE         input files (EXPDAT.DMP)
TOUSER       list of usernames
SHOW         just list file contents (N)
TABLES       list of table names
IGNORE       ignore create errors (N)
RECORDLENGTH length of IO record
GRANTS       import grants (Y)
INCTYPE      incremental import type
INDEXES      import indexes (Y)
COMMIT       commit array insert (N)
ROWS         import data rows (Y)
PARFILE      parameter filename
LOG          log file of screen output
CONSTRAINTS  import constraints (Y)
DESTROY      overwrite tablespace data file (N)
INDEXFILE    write table/index info to specified file
SKIP_UNUSABLE_INDEXES  skip maintenance of unusable indexes (N)
ANALYZE      execute ANALYZE statements in dump file (Y)
FEEDBACK     display progress every x rows (0)
TOID_NOVALIDATE        skip validation of specified type ids
FILESIZE     maximum size of each dump file
RECALCULATE_STATISTICS recalculate statistics (N)

The following keywords only apply to transportable tablespaces:
TRANSPORT_TABLESPACE import transportable tablespace metadata (N)
TABLESPACES tablespaces to be transported into the database
DATAFILES   datafiles to be transported into the database
TTS_OWNERS  users that own data in the transportable tablespace set
Import considerations:
(1) The database object already exists
In general, tables, sequences, functions/procedures, triggers, and so on should be dropped before the data is imported.
If the objects already exist, imp with its default parameters will fail on them.
If the parameter ignore=y is used, the data in the exp file is imported anyway:
if the table has a unique key constraint, rows that violate it are not imported;
if the table has no unique key constraint, duplicate records will result.
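For example (a minimal sketch; scott/tiger and emp.dmp are placeholder credentials and file names):
$ imp scott/tiger file=emp.dmp tables=emp ignore=y log=emp_imp.log --re-import into an existing table, skipping "already exists" errors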
(2) The database objects have primary/foreign key constraints
The import will fail for data that does not satisfy a primary/foreign key constraint.
Workarounds: import the parent table first, then the dependent (child) tables;
or disable the primary/foreign key constraints on the target objects, import the data, and then re-enable them, as sketched below.
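A minimal sketch of the disable/enable approach (emp and fk_emp_dept are hypothetical table and constraint names):
SQL> ALTER TABLE emp DISABLE CONSTRAINT fk_emp_dept;
-- run the imp command here, then:
SQL> ALTER TABLE emp ENABLE CONSTRAINT fk_emp_dept;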
(3) Insufficient privileges
To import user A's data into user B's schema, the importing user needs the IMP_FULL_DATABASE role.
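For example (a sketch; b is a placeholder username, and the grant must be issued by a suitably privileged user such as SYS):
SQL> GRANT imp_full_database TO b;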
(4) Importing a large table (greater than 80 MB) fails with a storage allocation error
The exp default, compress=y, consolidates all of the table's data into one initial extent.
On import, the import fails if no contiguous block of free space that large is available.
When exporting a large table above 80 MB, remember to set compress=n and this error will not occur.
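For example (a sketch; bigtab is a placeholder table name):
$ exp user/pwd tables=bigtab file=bigtab.dmp compress=n --keep the original extent layout instead of one large initial extent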
(5) imp and exp use different character sets
If the character sets differ, the import fails. You can change the NLS_LANG setting (a UNIX environment variable, or a registry entry on NT) to match,
and change it back once the import is complete.
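For example, on UNIX (a sketch; the character set value must be chosen to match the export):
$ export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK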
(6) imp and exp versions are incompatible
imp can successfully import files generated by a lower-version exp, but cannot import files generated by a higher-version exp.

How to use:
Example formats and notes:
1. General database export and import
$ exp user/password@dbname file=path.dmp full=y --fill in other parameters as needed
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log full=y --export everything (commit and ignore are imp parameters, not exp)
$ imp user/pwd file=/dir/xxx.dmp log=xxx.log fromuser=dbuser touser=dbuser2 commit=y ignore=y --import everything

2. Export everything owned by a specified user
$ /home/oracle/product/9.2.0.4/bin/exp userid=user/password owner=username file=/dir/xxx.dmp log=xxx.log
--Note: this logs in to the local database (you can target a different database by appending @dbname).
--Note: full=y is not allowed together with owner, or an error results; owner alone exports everything belonging to that user.

3. Export driven by a parameter file
$ exp parfile=username.par --supply the required parameters in the parameter file
Contents of the parameter file username.par:
userid=username/userpassword
buffer=8192000
compress=n
grants=y
file=/oracle/test.dmp
full=y
4. Table-level export (partitioned-table export and conditional export)
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log tables=table1,table2 --or tables=(table1,table2,...)
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log tables=(table1:p1,table2:p2,...) --p1 is a partition of table1
$ exp scott/tiger tables=emp query=\"where job=\'SALESMAN\' and sal\<1600\" file=/directory/scott2.dmp --or use a parameter file, as sketched below
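The same conditional export can be driven from a parameter file, which avoids the shell escaping entirely (a sketch; emp_query.par is a hypothetical file name):
$ exp scott/tiger parfile=emp_query.par
Contents of the parameter file emp_query.par:
tables=emp
query="where job='SALESMAN' and sal<1600"
file=/directory/scott2.dmp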

5. Import (one or more sheets)
$ imp user/pwd file=/dir/xxx.dmp log=xxx.log tables= (table1,table2) fromuser=dbuser
Touser=dbuser2 commit=y Ignore=y
$ imp user/pwd file=/dir/xxx.dmp log=xxx.log fromuser=dbuser touser=dbuser2
Commit=y Ignore=y

6. Export object definitions only, without data
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log owner=user rows=n --rows=n/y controls whether data rows are exported
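Conversely, imp can bring in only the object definitions from a dump that contains data, using its ROWS keyword listed above (a sketch with placeholder names):
$ imp user/pwd file=/dir/xxx.dmp log=xxx.log fromuser=dbuser touser=dbuser2 rows=n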

7. Export and import split across multiple files
$ exp user/pwd file=1.dmp,2.dmp,3.dmp,... filesize=1000m log=xxx.log full=y
$ imp user/pwd file=1.dmp,2.dmp,3.dmp,... filesize=1000m tables=xxx fromuser=dbuser touser=dbuser2 commit=y ignore=y

8. Incremental export and import
A. Complete incremental export (inctype=complete) --backs up the entire database
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log inctype=complete
B. Incremental export (inctype=incremental) --exports only the data changed since the last backup of any type
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log inctype=incremental
C. Cumulative incremental export (inctype=cumulative) --exports only the information changed since the last complete export
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log inctype=cumulative
D. Incremental import (see the restore sequence sketched below):
$ imp user/pwd full=y inctype=system --imports system objects
$ imp user/pwd full=y inctype=restore --imports all user objects
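A typical restore sequence replays the backups in order (a sketch; the .dmp file names are placeholders):
$ imp user/pwd full=y inctype=system file=incremental.dmp --system objects come from the most recent backup
$ imp user/pwd full=y inctype=restore file=complete.dmp --then user objects, oldest backup first
$ imp user/pwd full=y inctype=restore file=cumulative.dmp
$ imp user/pwd full=y inctype=restore file=incremental.dmp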

9. Export and import as SYSDBA
1. Command-line mode:
A. Windows platform:
C:\> exp 'sys/sys@instance as sysdba' tables=scott.emp file=e:\emp.dmp
B. UNIX and Linux platforms (here the quote character ' requires the escape character \):
$ exp \'sys/change_on_install@instance as sysdba\' tables=scott.emp file=/home/oracle/emp.dmp
C. Tablespace import and export:
$ imp \'usr/pwd@instance as sysdba\' tablespaces=xx transport_tablespace=y file=xxx.dmp datafiles=xxx.dbf
2. Interactive input mode:
$ exp tables=scott.emp --do not give a connect string; just press Enter
Export: Release 10.2.0.3.0 - Production on Fri Jun ... Copyright (c) 1982, 2005, Oracle. All rights reserved.
Username: sys/change_on_install@instance as sysdba --enter the connect string here
3. If the connect string is written in a parameter file, it needs double quotes: userid="sys/change_on_install@instance as sysdba"
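Putting the SYSDBA connect string into a parameter file keeps the quoting simple (a sketch; sys.par is a hypothetical file name):
$ exp parfile=sys.par
Contents of the parameter file sys.par:
userid="sys/change_on_install@instance as sysdba"
tables=scott.emp
file=/home/oracle/emp.dmp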

10. Tablespace transport (recommended on 10g and above; I tried it in 9i but did not find the corresponding self-containment check statements there, and 10g adds support for cross-platform tablespace transport)
Attention (the transport set must be self-contained):
1. An index in the transport set whose table is not in the set violates self-containment. (Note that the reverse, a table in the transport set whose index is not, does not violate the self-contained principle, but if you proceed that way the table's index will be missing in the target database.)
2. Including only some partitions of a partitioned table in the transport set violates self-containment (for a partitioned table, either all of its tablespaces are transported or none are).
3. For referential integrity constraints, a constraint pointing to a table outside the transport set violates self-containment; this does not matter if constraints are not transported.
4. For tables containing LOB columns, a table in the transport set whose LOB segments are not violates the self-contained principle.
A. Check which tablespaces contain XML tables:
SQL> SELECT DISTINCT p.tablespace_name
     FROM dba_tablespaces p, dba_xml_tables x, dba_users u, all_all_tables t
     WHERE t.table_name = x.table_name
       AND t.tablespace_name = p.tablespace_name
       AND x.owner = u.username;
B. Check whether a tablespace meets the transport criteria:
SQL> EXEC sys.dbms_tts.transport_set_check('TABLESPACE_NAME', TRUE);
SQL> SELECT * FROM sys.transport_set_violations; --no rows returned means the set is self-contained
C. Summary of the steps
1. Set the tablespaces to read-only (assuming the tablespace names are APP_DATA and APP_INDEX):
SQL> ALTER TABLESPACE app_data READ ONLY;
SQL> ALTER TABLESPACE app_index READ ONLY;
2. Issue the exp command:
SQL> host exp userid="""sys/password as sysdba""" transport_tablespace=y tablespaces=(app_data,app_index)
Note the following (or refer to the tablespace import/export examples above):
• To run exp from within SQL*Plus, userid must use triple quotes; on UNIX also take care to escape them with \.
• In 8.1.6 and later, this operation must be performed as SYSDBA.
• The command must be on one line (it is wrapped here only for display reasons).
3. Copy the .dbf data files (and the .dmp file) to the target location: use cp (UNIX) or copy (Windows), or transfer the files via FTP (be sure to use binary mode).
4. Set the local tablespaces back to read-write:
SQL> ALTER TABLESPACE app_data READ WRITE;
SQL> ALTER TABLESPACE app_index READ WRITE;
5. Attach the data files to the target database (specify the data file names directly):
(The tablespaces must not already exist there; the owning usernames must be created first, or use fromuser/touser.)
$ imp file=expdat.dmp userid="""sys/password as sysdba""" transport_tablespace=y datafiles=(c:\app_data.dbf,c:\app_index.dbf) tablespaces=app_data,app_index tts_owners=hr,oe
6. Set the target database tablespaces to read-write:
SQL> ALTER TABLESPACE app_data READ WRITE;
SQL> ALTER TABLESPACE app_index READ WRITE;
11. Optimizing imp/exp speed (parameter and configuration changes)
exp:
Increase LARGE_POOL_SIZE; this can speed up exp.
Use the direct path (direct=y): data then bypasses the in-memory evaluation and checking layer.
Set a larger BUFFER; when exporting large objects, a small buffer will cause failures.
Keep the export file off the drives Oracle uses, and do not export to an NFS file system.
UNIX environments: pipe the export/import directly (see section 12) to improve imp/exp performance.
imp:
Create an INDEXFILE and build the indexes after the data import completes (see the sketch after this list).
Put the import file on a different drive.
Increase DB_BLOCK_BUFFERS.
Increase LOG_BUFFER.
Run Oracle in non-archived mode: ALTER DATABASE NOARCHIVELOG;
Create large tablespaces and rollback segments, take the other rollback segments OFFLINE, and size the rollback segment at about 1/2 of the largest table.
Use commit=n.
Use analyze=n.
Import in single-user mode.
UNIX environments: pipe the export/import directly (see section 12) to improve imp/exp performance.
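A sketch of the INDEXFILE approach (file names are placeholders): extract the index DDL, import the data without indexes, then build the indexes:
$ imp user/pwd file=xxx.dmp fromuser=dbuser touser=dbuser2 indexfile=create_index.sql --writes the CREATE statements to create_index.sql; nothing is imported
$ imp user/pwd file=xxx.dmp fromuser=dbuser touser=dbuser2 indexes=n --import the data without building indexes
$ sqlplus user/pwd @create_index.sql --build the indexes afterwards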

12. Speeding up exp/imp with a UNIX/Linux pipe
The steps are as follows.
Exporting data through a pipe:
1. Create the pipe with mknod:
$ mknod /home/exppipe p --creates a pipe named exppipe under /home; note the parameter p
2. Export through the pipe with exp while gzip compresses the stream:
$ gzip < /home/exppipe > /home/exp.dmp.gz &
$ exp test/test tables=bitmap file=/home/exppipe
3. Remove the pipe after the export succeeds:
$ rm -rf /home/exppipe
4. A shell script can be written like this (only the essentials are shown).
Under UNIX:
mkfifo /home/exp.pipe
chmod a+rw /home/exp.pipe
compress < /home/exp.pipe > exp.dmp.Z &
su - oracle -c "exp userid=ll/ll file=/home/exp.pipe full=y buffer=20000000"
rm /home/exp.pipe
Under Linux (importing through a pipe):
mknod /home/exppipe p
gunzip < exp.dmp.gz > /home/exppipe &
imp test/test file=/home/exppipe fromuser=test touser=macro
rm -rf /home/exppipe

### Copyright: found2008 --- email: hiho1128@126.com
