Reprint: Oracle Database Backup and Recovery Summary: exp/imp (the export and import utilities)
1.1 Basic Commands
1. Getting help

$ exp help=y
$ imp help=y
2. Three ways of invoking the tools

(1) Interactive mode:
$ exp
Then follow the prompts and enter the required parameters.

(2) Command-line mode (connect string shown as a placeholder):
$ exp user/pwd@dbname file=/oracle/test.dmp full=y
All required parameters are given on the command line.

(3) Parameter-file mode:
$ exp parfile=username.par
The required parameters are read from the parameter file. Contents of username.par:
userid=username/userpassword
buffer=8192000
compress=n
grants=y
file=/oracle/test.dmp
full=y
3. Three export/import modes

(1) Table mode: export/import the data of specified tables.
Export one or several tables:
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log tables=table1,table2
Export part of a table's data:
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log tables=table1 query=\"where col1=\'...\' and col2 \<...\"
Import one or several tables:
$ imp user/pwd file=/dir/xxx.dmp log=xxx.log tables=table1,table2 fromuser=dbuser touser=dbuser2 commit=y ignore=y
(2) User mode: export/import all objects and data of a specified user.
Export:
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log owner=(xx,yy)
Export object definitions only, without the data (rows=n):
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log owner=user rows=n
Import:
$ imp user/pwd file=/dir/xxx.dmp log=xxx.log fromuser=dbuser touser=dbuser2 commit=y ignore=y
(3) Full-database mode: export/import all objects in the database.
Export:
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log full=y
Import:
$ imp user/pwd file=/dir/xxx.dmp log=xxx.log full=y ignore=y
1.2 Advanced Options
1. Splitting into multiple files

Export into multiple fixed-size files. This is typically used when the table data volume is large and a single dump file might exceed file-system limits:
$ exp user/pwd file=1.dmp,2.dmp,3.dmp,... filesize=1000m log=xxx.log full=y
Import from multiple fixed-size files:
$ imp user/pwd file=1.dmp,2.dmp,3.dmp,... filesize=1000m tables=xxx fromuser=dbuser touser=dbuser2 commit=y ignore=y
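The fixed-size chunking that the filesize parameter performs inside exp can be illustrated, outside Oracle, with the standard split utility; a minimal sketch (file names and sizes here are made up for the demonstration, and exp of course does its own splitting rather than calling split):

```shell
# Create a 100 KB stand-in for a dump file.
dd if=/dev/zero of=/tmp/bigdump.dmp bs=1024 count=100 2>/dev/null
# Cut it into fixed-size 40 KB pieces, like filesize=... does for dump files.
split -b 40k /tmp/bigdump.dmp /tmp/chunk.
ls /tmp/chunk.*
# The pieces concatenate back losslessly, which is why multi-file dumps work.
cat /tmp/chunk.* > /tmp/rejoined.dmp
cmp /tmp/bigdump.dmp /tmp/rejoined.dmp && echo "identical"
```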
2. Incremental export/import

As of Oracle 9i, exp no longer supports inctype. You must be SYS or SYSTEM to perform an incremental export or import.

Incremental export includes three types:
(1) "Complete" incremental export (complete): backs up the entire database.
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log inctype=complete
(2) "Incremental" incremental export (incremental): exports only the data changed since the last backup of any type.
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log inctype=incremental
(3) "Cumulative" incremental export (cumulative): exports only the data changed since the last "complete" export.
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log inctype=cumulative

Incremental import:
$ imp usr/pwd full=y inctype=system/restore
where system imports system objects and restore imports all user objects.
3. Export/import as SYSDBA

Used: 1. for Oracle technical support; 2. for tablespace transport.
Examples (connect strings shown as placeholders):
$ imp \'usr/pwd@instance as sysdba\' tablespaces=xx transport_tablespace=y file=xxx.dmp datafiles=xxx.dbf
$ imp file=expdat.dmp userid="""sys/password as sysdba""" transport_tablespace=y datafiles=(c:\temp\app_data,c:\temp\app_index)
4. Tablespace transport (fast)

Tablespace transport is a newer way to move data quickly between databases: a properly prepared data file belonging to one database is attached to another database, instead of exporting the data to a dmp file. This is very useful in some cases, because transporting a tablespace moves data only as slowly as the files can be copied.
1. Rules for transportable tablespaces (before 10g):
• The source and target databases must run on the same hardware platform.
• The source and target databases must use the same character set.
• The source and target databases must have the same data block size.
• The target database must not already have a tablespace with the same name as the transported tablespace.
• SYS objects cannot be transported.
• A self-contained set of objects must be transported.
• Some objects, such as materialized views and function-based indexes, cannot be transported.
(With the same byte order, the files can be used across platforms by replacing the data file header. 10g supports cross-platform tablespace transport: as long as the operating-system byte order is the same, the tablespace can be transported; RMAN is needed to convert the file format, details omitted here.)
2. Checking whether a tablespace meets the transport criteria:
SQL> exec sys.dbms_tts.transport_set_check('tablespace_name', true);
SQL> select * from sys.transport_set_violations;
If no rows are selected, the tablespace contains only table data and is self-contained. Some non-self-contained tablespaces, such as a data tablespace and its index tablespace, can be transported together.
3. Brief usage steps (see the Oracle online help for the detailed usage):
1. Set the tablespaces to read-only (assuming the tablespace names are app_data and app_index):
SQL> alter tablespace app_data read only;
SQL> alter tablespace app_index read only;
2. Issue the exp command:
SQL> host exp userid="""sys/password as sysdba""" transport_tablespace=y tablespaces=(app_data,app_index)
Note that in order to run exp inside SQL*Plus, userid must be written with triple quotation marks, and on UNIX the "/" must also be escaped. In 8.1.6 and later you must operate as SYSDBA, and in SQL*Plus this command must be placed on one line (it appears on two lines here only because of display width).
3. Copy the .dbf data files (and the .dmp file) to the target location, with cp (UNIX), copy (Windows), or FTP (must be in binary mode).
4. Set the local tablespaces back to read/write:
SQL> alter tablespace app_data read write;
SQL> alter tablespace app_index read write;
5. Attach the data files to the target database (specify the data file names directly; the tablespaces must not already exist, and the corresponding user names must exist or be mapped with fromuser/touser):
$ imp file=expdat.dmp userid="""sys/password as sysdba""" transport_tablespace=y datafiles=("c:\app_data.dbf,c:\app_index.dbf") tablespaces=app_data,app_index tts_owners=hr,oe
6. Set the target-database tablespaces to read/write:
SQL> alter tablespace app_data read write;
SQL> alter tablespace app_index read write;
1.3 Optimization
1. Speeding up exp

• Increase large_pool_size to improve exp speed.
• Use the direct path (direct=y), so the data does not have to pass through the SQL evaluation buffer for conversion and checking.
• Set a larger buffer; if you export large objects, a small buffer will fail.
• Keep the export file off the drives that Oracle uses.
• Do not export to an NFS file system.
• In a UNIX environment, export and import directly through a pipe to improve exp/imp performance.
2. Speeding up imp

• Use indexfile so the indexes are created on a different drive after the data import completes.
• Increase db_block_buffers and log_buffer.
• Run Oracle in noarchivelog mode: alter database noarchivelog;
• Build large tablespaces and rollback segments, take the other rollback segments offline, and size the rollback segment at about 1/2 of the largest table.
• Use commit=n.
• Use analyze=n.
• Import in single-user mode.
• In a UNIX environment, export and import directly through a pipe to improve exp/imp performance.
3. Speeding up exp/imp through UNIX/Linux pipes

Exporting data through a pipe:
1. Create a pipe with mknod:
$ mknod /home/exppipe p
(creates a pipe in the directory /home; note the parameter p)
2. Export data into the pipe and compress it with exp and gzip:
$ exp test/test file=/home/exppipe & gzip < /home/exppipe > exp.dmp.gz
Export script:

###### Back up an Oracle database under UNIX through a pipe ######
###### using the "export" and "tar" commands to back up an Oracle database ######
trap "" 1                              # nohup
LOGFILE=/opt/bakup/log/bakup_ora.log
export LOGFILE
DUMPDIR=/archlog_node1
export DUMPDIR
exec > $LOGFILE 2>&1
echo
echo "Begin at `date`"
echo
# clear old result file
cd $DUMPDIR
if [ -f exp.dmp.Z ]
then
    echo "Clear old result file"
    rm exp.dmp.Z
fi
# make pipe
mkfifo exp.pipe
chmod a+rw exp.pipe
# gain the dmp.Z file
compress < exp.pipe > exp.dmp.Z &
su - oracle -c "exp userid=ll/ll file=$DUMPDIR/exp.pipe full=y buffer=20000000"
echo
echo "exp end at `date`"
echo
# rm pipe
rm exp.pipe
# tar the dmp.Z file to tape
mt -f /dev/rmt/0 rew
tar cvf /dev/rmt/0 exp.dmp.Z
echo
echo "tar end at `date`"
echo
Importing a generated file through a pipe:
1. Create the pipe with mknod:
$ mknod /home/exppipe p
2. Import the generated compressed file:
$ imp test/test file=/home/exppipe fromuser=test touser=macro & gunzip < exp.dmp.gz > /home/exppipe
3. Remove the pipe:
$ rm -fr /home/exppipe
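The pipe mechanism itself does not depend on Oracle, so the pattern can be sketched self-contained; in this sketch a printf writer stands in for exp and /tmp paths are arbitrary. The point is that the uncompressed stream never touches the disk:

```shell
PIPE=/tmp/demo.pipe
rm -f "$PIPE" /tmp/demo.dmp.gz
mkfifo "$PIPE"                              # same role as: mknod /home/exppipe p
gzip < "$PIPE" > /tmp/demo.dmp.gz &         # compressor reads the pipe in the background
printf 'pretend this is dump data\n' > "$PIPE"   # writer, standing in for exp
wait                                        # let gzip finish
gunzip -c /tmp/demo.dmp.gz                  # recovers the original stream
rm -f "$PIPE"
```

When the writer closes its end of the FIFO, gzip sees end-of-file and finalizes the compressed file, which is exactly why the `exp ... & gzip < pipe` pairing above terminates cleanly.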
4. General steps for a full-database import

Note: before the export, use Toad or another tool to extract from the source database the scripts that create the primary keys and indexes.
1. First import only the structure of the whole database, with rows=n:
$ imp system/manager file=exp.dmp log=imp.log full=y rows=n indexes=n
2. Drop the primary keys and unique indexes of the business users, and disable their triggers:
SQL> spool drop_pk_u.sql
SQL> select 'alter table '||table_name||' drop constraint '||constraint_name||';'
     from user_constraints where constraint_type in ('P','U');
SQL> spool off
SQL> spool disable_trigger.sql
SQL> select 'alter trigger '||trigger_name||' disable;' from user_triggers;
SQL> spool off
SQL> @drop_pk_u.sql
SQL> @disable_trigger.sql
3. Import the full database with ignore=y:
$ imp system/manager file=exp.dmp log=imp.log full=y ignore=y
4. Recreate the primary keys and indexes in the target database from the scripts extracted in the preparation step, and re-enable the triggers.
1.4 FAQ
1. Character set issues

Oracle's multi-language support is designed to handle languages and character sets worldwide; it generally affects language prompts, currency formats, sort order, and the display of char, varchar2, clob, and long column data. Its two most important aspects are the national-language setting and the character-set setting: the national-language setting determines the language used by the interface and prompts, while the character set determines the encoding rules under which the database stores character data such as text.

Oracle's character-set configuration is divided into the database character set and the client character-set environment. On the database side, the character set is fixed when the database is created and is saved in the database's props$ table. On the client side, the character-set environment is comparatively simple: it is mainly the environment variable or registry key NLS_LANG. Note that the NLS_LANG priority order is: parameter file < registry < environment variable < alter session. If the client character set differs from the server character set and the character-set conversion is incompatible, the data the client displays will be garbage, as will exported/imported data.

With a little finesse, export/import can be made to convert data between databases with different character sets. You need a binary file editor, such as UltraEdit (uedit32). Open the exported dmp file in binary mode and read bytes 2 and 3, for example 00 01. Convert this to a decimal number, here 1, and look up the character set with the function nls_charset_name:
SQL> select nls_charset_name(1) from dual;
NLS_CHARSET_NAME(1)
-------------------
US7ASCII
So the dmp file's character set is US7ASCII. If you need to change the dmp file's character set to ZHS16GBK, use nls_charset_id to get the number of that character set:
SQL> select nls_charset_id('ZHS16GBK') from dual;
NLS_CHARSET_ID('ZHS16GBK')
--------------------------
852
Convert 852 into hexadecimal, giving 354, and change bytes 2 and 3 from 00 01 to 03 54. This completes the conversion of the dmp file's character set from US7ASCII to ZHS16GBK, and the dmp file can then be imported into a database whose character set is ZHS16GBK.
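The decimal/hex arithmetic above can be double-checked from any shell; a small sketch (the byte values are the ones from the example, not read from a real dmp file):

```shell
# Bytes 00 01 from the dmp header, read as one big-endian number:
printf '%d\n' 0x0001        # decimal 1 -> US7ASCII per nls_charset_name
# Target character set id 852 (ZHS16GBK per nls_charset_id), as hex for patching:
printf '%04X\n' 852         # 0354 -> write bytes 03 54
```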
2. Version issues

Exp/imp can often be used across versions, for example to move data between version 7 and version 8 databases, but you must choose the correct versions of the tools. The rules are:
• Always use the version of imp that matches the target database: if you are importing into 8.1.6, use the 8.1.6 import tool.
• Always use the version of exp that matches the lower of the two databases: for example, between 8.1.5 and 8.1.6, use the 8.1.5 exp tool.
Imp and exp dump files are compatible downward only: imp can import files produced by a lower-version exp, but cannot import files produced by a higher-version exp.