1. exp/imp (the Export/Import load and unload utilities)
1.1 Basic commands
1. Get help
$ exp help=y
$ imp help=y
2. Three ways of working
(1) Interactive mode
$ exp    // then follow the prompts to enter the required parameters
(2) Command-line mode
$ exp user/pwd@dbname file=/oracle/test.dmp full=y    // enter the required parameters on the command line
(3) Parameter file mode
$ exp parfile=username.par    // enter the required parameters in the parameter file
Contents of the parameter file username.par:
userid=username/userpassword
buffer=8192000
compress=n
grants=y
file=/oracle/test.dmp
full=y
3. Three modes
(1) Table mode, export/import the data of the specified table.
Export:
Export one or several tables:
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log tables=table1,table2
Export part of a table's data:
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log tables=table1 query=\"where col1=\'...\' and col2 \<...\"
Import:
Import one or several tables
$ imp user/pwd file=/dir/xxx.dmp log=xxx.log tables=table1,table2 fromuser=dbuser touser=dbuser2 commit=y ignore=y
(2) User mode, all objects and data of the specified user are exported/imported.
Export:
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log owner=(xx,yy)
Export the object definitions only, without the data (rows=n):
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log owner=user rows=n
Import:
$ imp user/pwd file=/dir/xxx.dmp log=xxx.log fromuser=dbuser touser=dbuser2 commit=y ignore=y
(3) Full database mode, export/import all objects in the database.
Export:
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log full=y
Import:
$ imp user/pwd file=/dir/xxx.dmp log=xxx.log full=y ignore=y commit=y
1.2 Advanced options
1. Split into multiple files
Export to multiple fixed-size files. This is usually done when the table data volume is large and a single dump file might exceed the file system's limits.
$ exp user/pwd file=1.dmp,2.dmp,3.dmp,... filesize=1000m log=xxx.log full=y
Import from multiple fixed-size files:
$ imp user/pwd file=1.dmp,2.dmp,3.dmp,... filesize=1000m tables=xxx fromuser=dbuser touser=dbuser2 commit=y ignore=y
2. Incremental Export/Import
Starting with Oracle 9i, exp no longer supports inctype.
You must be SYS or SYSTEM to perform an incremental export or import.
Incremental exports come in three types:
(1) "Complete" incremental export (complete): backs up the entire database.
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log inctype=complete
(2) "Incremental" incremental export (incremental): exports only the data changed since the last backup of any type.
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log inctype=incremental
(3) "Cumulative" incremental export (cumulative): exports only the information changed since the last "complete" export.
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log inctype=cumulative
Incremental import:
$ imp usr/pwd full=y inctype=system    // or inctype=restore
where:
system: imports system objects
restore: imports all user objects
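In practice the two inctype values are combined when recovering from a set of incremental backups. A sketch of the documented order, with hypothetical dump file names:
$ imp system/manager full=y inctype=system file=inc_latest.dmp    // system definitions from the most recent incremental export
$ imp system/manager full=y inctype=restore file=complete.dmp     // user objects from the last complete export
$ imp system/manager full=y inctype=restore file=cum1.dmp         // then each later cumulative export, in order
$ imp system/manager full=y inctype=restore file=inc1.dmp         // then each later incremental export, in order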
3. Export/import as SYSDBA
1. Used when requested by Oracle Technical Support
2. Used for tablespace transport
Examples:
$ imp \'usr/pwd@instance as sysdba\' tablespaces=xx transport_tablespace=y file=xxx.dmp datafiles=xxx.dbf
$ imp file=expdat.dmp userid="""sys/password as sysdba""" transport_tablespace=y "datafiles=(c:\temp\app_data,c:\temp\app_index)"
4. Tablespace transport (fast)
Tablespace transport is a feature added in 8i for moving data quickly between databases. It attaches a database's data files to another database, rather than exporting the data to a dmp file. In some scenarios it works very well, because transporting a tablespace moves the data as fast as copying files.
1. There are some rules governing tablespace transport (before 10g):
(a) The source and target databases must run on the same hardware platform.
(b) The source and target databases must use the same character set.
(c) The source and target databases must have the same data block size.
(d) The target database must not already have a tablespace with the same name as the transported tablespace.
(e) Objects owned by SYS cannot be transported.
(f) The set of objects transported must be self-contained.
(g) Some objects, such as materialized views and function-based indexes, cannot be transported.
(Across platforms with the same byte order, the method of replacing the data file header can be used instead.)
(10g supports cross-platform tablespace transport; it works as long as the operating systems share the same byte order. RMAN is needed to convert the file format; details omitted.)
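For the 10g cross-platform case, the file format conversion is done with RMAN's CONVERT command. A minimal sketch (the tablespace and platform names are illustrative):
RMAN> CONVERT TABLESPACE app_data TO PLATFORM 'Linux IA (32-bit)' FORMAT '/tmp/%U';
The platform names and byte orders Oracle recognizes can be listed with: select * from v$transportable_platform;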
2. To check whether a tablespace meets the transport criteria:
SQL > exec sys.dbms_tts.transport_set_check('tablespace_name', true);
SQL > select * from sys.transport_set_violations;
If no rows are returned, the tablespace contains only table data and is self-contained. Some tablespaces that are not self-contained on their own, such as a data tablespace and its index tablespace, can be transported together.
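To transport a data tablespace together with its index tablespace, both names can be passed to the check in a single call. A sketch (the tablespace names are examples):
SQL > exec sys.dbms_tts.transport_set_check('APP_DATA,APP_INDEX', true);
SQL > select * from sys.transport_set_violations;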
3. Outline of the steps (refer to the Oracle online documentation for detailed usage):
1. Set the tablespaces to read-only (assuming the tablespace names are app_data and app_index)
SQL > alter tablespace app_data read only;
SQL > alter tablespace app_index read only;
2. Issue the exp command
SQL > host exp userid="""sys/password as sysdba""" transport_tablespace=y
tablespaces=(app_data,app_index)
Note the following:
• To execute exp inside SQL*Plus, userid must be wrapped in three quotation marks; on Unix you must also take care to escape the "/"
• On 816 and later you must operate as SYSDBA
• In SQL*Plus this command must be placed on one line (it is split across two lines here only for display reasons)
3. Copy the .dbf data files (and the .dmp file) to the target database's location;
you can use cp (Unix) or copy (Windows), or transfer the files via FTP (must be in binary mode)
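A sketch of this copy step (hosts and paths are illustrative):
$ cp /u01/oradata/app_data.dbf /stage/    // local copy on Unix
$ ftp target_host                         // or transfer by FTP
ftp> binary                               // binary mode is required for .dbf and .dmp files
ftp> put app_data.dbf
ftp> put app_index.dbf
ftp> bye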
4. Set the local tablespaces back to read-write
SQL > alter tablespace app_data read write;
SQL > alter tablespace app_index read write;
5. Attach the data files to the target database (specify the data file names directly)
(The tablespaces must not already exist there; create the corresponding user names first or use fromuser/touser)
$ imp file=expdat.dmp userid="""sys/password as sysdba"""
transport_tablespace=y datafiles=("c:\app_data.dbf","c:\app_index.dbf")
tablespaces=app_data,app_index tts_owners=hr,oe
6. Set the target database's tablespaces to read-write
SQL > alter tablespace app_data read write;
SQL > alter tablespace app_index read write;
1.3 Optimization
1. Speeding up exp
Increase large_pool_size, which can improve exp speed.
Use the direct path (direct=y); the data then does not need to be assembled and checked in memory (see the sketch after this list).
Set a larger buffer; when exporting large objects, a small buffer will cause failures.
Keep the export file off the drives used by Oracle.
Do not export to an NFS file system.
Unix environments: export and import directly through a pipe to improve exp/imp performance.
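A minimal direct-path export sketch (file names are illustrative; note that query= cannot be combined with direct=y, and recordlength sets the I/O buffer size):
$ exp user/pwd file=/dir/full.dmp log=full.log full=y direct=y recordlength=65535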
2. Speeding up imp
Generate an indexfile and build the indexes after the data import completes (see the sketch after this list).
Place the import file on a different drive.
Increase db_block_buffers.
Increase log_buffer.
Run Oracle in noarchivelog mode: alter database noarchivelog;
Create a large tablespace and large rollback segments, and take the other rollback segments offline; the rollback segment size should be about 1/2 of the largest table.
Use commit=n.
Use analyze=n.
Import in single-user mode.
Unix environments: export and import directly through a pipe to improve exp/imp performance.
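A sketch of the indexfile approach (file names are illustrative). When indexfile is specified, imp only writes the DDL into the script and imports nothing, so the data is loaded in a second pass without indexes and the indexes are built at the end:
$ imp user/pwd file=xxx.dmp full=y indexfile=create_idx.sql    // extract the index DDL into create_idx.sql
$ imp user/pwd file=xxx.dmp full=y indexes=n ignore=y          // import the data without building indexes
SQL > @create_idx.sql                                          // build the indexes after the data is in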
3. Speeding up exp/imp with Unix/Linux pipes
Exporting data through a pipe:
1. Create the pipe with mknod
$ mknod /home/exppipe p    // create the pipe /home/exppipe; note the type argument p
2. Export through exp to the pipe while gzip compresses from it
$ exp test/test tables=bitmap file=/home/exppipe & gzip < /home/exppipe > /home/exp.dmp.gz
3. Remove the pipe after the export has completed successfully
$ rm -rf /home/exppipe
Export script:
### Backing up an Oracle database through a pipe on UNIX
###### using "export" and "tar" commands to back up an Oracle database #######
trap "" 1    # ignore SIGHUP, as nohup does
LOGFILE=/opt/bakup/log/bakup_ora.log
export LOGFILE
DUMPDIR=/archlog_node1
export DUMPDIR
exec > $LOGFILE 2>&1
echo
echo "begin at `date`"
echo
# clear old result file
cd $DUMPDIR
if [ -f exp.dmp.Z ]
then
  echo "clear old result file"
  rm exp.dmp.Z
fi
# make pipe
mkfifo exp.pipe
chmod a+rw exp.pipe
# gain the dmp.Z file
compress < exp.pipe > exp.dmp.Z &
su - oracle -c "exp userid=ll/ll file=$DUMPDIR/exp.pipe full=y buffer=20000000"
echo
echo "exp end at `date`"
echo
# rm pipe
rm exp.pipe
# tar the dmp.Z file to tape
mt -f /dev/rmt/0 rew
tar cvf /dev/rmt/0 exp.dmp.Z
echo
echo "tar end at `date`"
echo
Importing a generated file through a pipe:
1. Create the pipe with mknod
$ mknod /home/exppipe p
2. Import the generated compressed file
$ imp test/test file=/home/exppipe fromuser=test touser=macro &
$ gunzip < exp.dmp.gz > /home/exppipe
3. Remove the pipe
$ rm -rf /home/exppipe
4. General steps for a full-database import
Note: before the export, use Toad or another tool to extract from the source database the scripts that create the primary keys and indexes
1. First import the structure of the whole database with rows=n
$ imp system/manager file=exp.dmp log=imp.log full=y rows=n indexes=n
2. Disable the business users' triggers and drop their primary keys and unique indexes
spool drop_pk_u.sql
select 'alter table '||table_name||' drop constraint '||constraint_name||';'
from user_constraints
where constraint_type in ('P','U')
/
spool off
spool disable_trigger.sql
select 'alter trigger '||trigger_name||' disable;'
from user_triggers
/
spool off
@drop_pk_u.sql
@disable_trigger.sql
3. Import the full database with ignore=y
$ imp system/manager file=exp.dmp log=imp.log full=y ignore=y
4. Using the primary key and index creation scripts extracted from the source database with Toad or another tool, create the primary keys and indexes in the target database, and re-enable the triggers (a sketch of the enable script follows).
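A sketch of the re-enable step, mirroring the disable script above:
spool enable_trigger.sql
select 'alter trigger '||trigger_name||' enable;'
from user_triggers
/
spool off
@enable_trigger.sql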
1.4 FAQs
1. Character Set issues
Oracle's multi-language support is designed to cover languages and character sets worldwide; it generally affects language prompts, currency formats, sorting rules, and the display of data in CHAR, VARCHAR2, CLOB, and LONG columns.
Its two most important aspects are the national language setting and the character set setting: the national language setting determines the language used by the interface and prompts, while the character set determines the encoding rules for character data (such as text) saved in the database.
Oracle's character set settings divide into the database character set and the client character set environment. On the database side, the character set is set when the database is created and is saved in the database's props$ table.
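The database character set can be checked with a quick query; a sketch using the dictionary view (querying props$ directly requires SYS):
SQL > select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';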
The client-side character set environment is simpler: it is mainly the environment variable or registry key NLS_LANG. Note that the priority order for NLS_LANG is: parameter file < registry < environment variable < alter session. If the client character set differs from the server character set and the character set conversion is incompatible, character-set-dependent data will be garbled when displayed on the client or when exported/imported.
With a little finesse, you can make export/import convert data between databases with different character sets. This requires a binary file editor such as uedit32. Open the exported dmp file in hex-edit mode and read the 2nd and 3rd bytes, for example 00 01; convert them to decimal, giving 1, then use the function nls_charset_name to obtain the character set:
SQL > select nls_charset_name(1) from dual;
NLS_CHARSET_NAME(1)
-------------------
US7ASCII
This shows the dmp file's character set is US7ASCII. If you need to change the dmp file's character set to ZHS16GBK, use nls_charset_id to get that character set's number:
SQL > select nls_charset_id('ZHS16GBK') from dual;
NLS_CHARSET_ID('ZHS16GBK')
--------------------------
852
Convert 852 to hexadecimal, giving 354, and replace the 2nd and 3rd bytes 00 01 with 03 54. This converts the dmp file's character set from US7ASCII to ZHS16GBK, and the dmp file can then be imported into a database whose character set is ZHS16GBK.
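The byte inspection and the decimal/hexadecimal conversions can also be scripted; a sketch (the dump file name is illustrative, and od options vary slightly between Unix flavors):
$ od -A d -t x1 -N 4 exp.dmp    // show the first four bytes of the dmp file in hex
SQL > select to_number('0001', 'XXXX') from dual;                     // hex 00 01 as decimal: 1
SQL > select to_char(nls_charset_id('ZHS16GBK'), 'XXXX') from dual;   // 852 as hex: 354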
2. Version issues
Exp/imp can often be used across versions, for example to export and import data between version 7 and version 8, but to do so you must choose the correct versions. The rules are:
• Always use the imp version that matches the database you are importing into: to import into 816, use the 816 import utility.
• Always use the exp version that matches the lower of the two databases: between 815 and 816, use the 815 exp tool.
imp and exp versions are otherwise incompatible: imp can import files produced by a lower-version exp, but cannot import files generated by a higher-version exp.