Oracle database backup and recovery (1): exp and imp


I. Basic commands

1. Get help

$ exp help=y

$ imp help=y

2. Three ways of working

(1) Interactive mode

$ exp    // enter the required parameters as prompted

(2) Command-line mode

$ exp user/pwd@dbname file=/oracle/test.dmp full=y    // enter the required parameters on the command line

(3) Parameter-file mode

$ exp parfile=username.par    // enter the required parameters in the parameter file

Contents of the parameter file username.par:

userid=username/userpassword
buffer=8192000
compress=n
grants=y
file=/oracle/test.dmp
full=y

3. Three Modes

(1) Table mode: export/import the data of specified tables.

Export: export one or more tables:

$ exp user/pwd file=/dir/xxx.dmp log=xxx.log tables=table1,table2

Export part of a table's rows (on UNIX shells the quotes in QUERY must be escaped):

$ exp user/pwd file=/dir/xxx.dmp log=xxx.log tables=table1 query=\"where col1='...' and col2 <...\"
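The QUERY clause is the most quoting-sensitive part of exp: the predicate must reach the program as a single argument with its inner single quotes intact, which is why the backslash escapes appear above. A small sketch of the shell mechanics (column names and values are placeholders, and `echo` stands in for exp so the line can be tried without a database):

```shell
# The predicate, with inner single quotes protected by the outer double quotes.
QUERY="where col1='X' and col2 < 100"

# What exp would actually receive as its query= argument; echo stands in
# for exp here. On a bare command line this is typically written with
# escaped quotes: query=\"where col1='X' and col2 < 100\"
echo query="$QUERY"
```

The same escaping applies to any exp parameter whose value contains spaces or quotes; a parameter file sidesteps the problem entirely, since no shell is involved.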

Import: import one or more tables:

$ imp user/pwd file=/dir/xxx.dmp log=xxx.log tables=table1,table2 fromuser=dbuser touser=dbuser2 commit=y ignore=y

(2) User mode: export/import all objects and data of a specified user.

Export: $ exp user/pwd file=/dir/xxx.dmp log=xxx.log owner=(xx,yy)

Export only the object definitions, not the data (rows=n):

$ exp user/pwd file=/dir/xxx.dmp log=xxx.log owner=user rows=n

Import: $ imp user/pwd file=/dir/xxx.dmp log=xxx.log fromuser=dbuser touser=dbuser2 commit=y ignore=y

(3) Full database mode: export/import all objects in the database.

Export: $ exp user/pwd file=/dir/xxx.dmp log=xxx.log full=y

Import: $ imp user/pwd file=/dir/xxx.dmp log=xxx.log full=y commit=y ignore=y

(COMMIT and IGNORE are imp parameters, not exp parameters.)

II. Advanced options

1. Splitting the dump into multiple files

Export with multiple fixed-size files: this is usually used when the data volume is large and a single dump file might exceed the file-system size limit.

$ exp user/pwd file=1.dmp,2.dmp,3.dmp,... filesize=1000m log=xxx.log full=y

Import from multiple fixed-size files:

$ imp user/pwd file=1.dmp,2.dmp,3.dmp,... filesize=1000m tables=xxx fromuser=dbuser touser=dbuser2 commit=y ignore=y

2. Incremental export/import

// After Oracle 9i, exp no longer supports inctype

Incremental export and import must be performed as SYS or SYSTEM.

Incremental export comes in three types:

(1) Complete incremental export (inctype=complete): backs up the entire database

$ exp user/pwd file=/dir/xxx.dmp log=xxx.log inctype=complete

(2) Incremental export: exports only the data changed since the last backup of any type

$ exp user/pwd file=/dir/xxx.dmp log=xxx.log inctype=incremental

(3) Cumulative export: exports only the data changed since the last complete export

$ exp user/pwd file=/dir/xxx.dmp log=xxx.log inctype=cumulative

Incremental import: $ imp usr/pwd full=y inctype=system/restore

Where:

SYSTEM: imports system objects

RESTORE: imports all user objects

3. Export/import as SYSDBA

(1) Used by Oracle technical support

(2) Used for tablespace transport

Examples:

$ imp usr/pwd@instance as sysdba tablespaces=xx transport_tablespace=y file=xxx.dmp datafiles=xxx.dbf

$ imp file=expdat.dmp userid="sys/password as sysdba" transport_tablespace=y datafiles=(c:\temp\app_data,c:\temp\app_index)

4. Tablespace transport (fast)

Tablespace transport, added in 8i, is a fast way to move data between databases: the formatted data files of one database are attached directly to another database, instead of the data being exported to a dmp file. In some cases this is very useful, because moving data by transporting a tablespace is as fast as copying files.

(1) There are some restrictions on the tablespaces to be transported (before 10g):

The source and target databases must run on the same hardware platform.

The source and target databases must use the same character set.

The source and target databases must have the same data block size.

The target database must not already have a tablespace with the same name as the one being migrated.

Objects owned by SYS cannot be migrated.

The set of objects transported must be self-contained.

Some objects, such as materialized views and function-based indexes, cannot be transported (for files with the same byte order, cross-platform moves can be done by patching the data file headers).

(10g supports cross-platform tablespace transport: as long as the operating systems have the same byte order, the tablespace can be transported directly; otherwise the file format must be converted with RMAN, omitted here.)

(2) Check whether a tablespace meets the transport criteria:

SQL> exec sys.dbms_tts.transport_set_check('tablespace_name', true);
SQL> select * from sys.transport_set_violations;

If no rows are selected, the tablespace contains only table data and is self-contained. Some tablespaces that are not self-contained on their own, such as a data tablespace and its index tablespace, can be transported together.

(3) Procedure (for details, see the ORACLE online help):

1). Set the tablespaces to read-only (assume they are named APP_DATA and APP_INDEX):

SQL> alter tablespace app_data read only;
SQL> alter tablespace app_index read only;

2). Issue the exp command:

SQL> host exp userid="""sys/password as sysdba""" transport_tablespace=y tablespaces=(app_data,app_index)

Note: to run exp from within SQL*Plus, USERID must be enclosed in triple quotation marks; on UNIX the "/" must also be escaped. From 8.1.6 onward the operation must be performed as sysdba. The command must be entered on a single line.

3). Copy the .dbf data files (and the .dmp file) to the target location; use cp (UNIX) or copy (Windows), or transfer them with ftp (binary mode is required).

4). Set the local tablespaces back to read/write:

SQL> alter tablespace app_data read write;
SQL> alter tablespace app_index read write;

5). Attach the data files to the target database (specify the data file names directly).

(The tablespaces must not already exist in the target database; the owning users must be created first, or use fromuser/touser.)

$ imp file=expdat.dmp userid="sys/password as sysdba" transport_tablespace=y datafiles=(c:\app_data.dbf,c:\app_index.dbf) tablespaces=app_data,app_index tts_owners=hr,oe

6). Set the target database tablespaces to read/write:

SQL> alter tablespace app_data read write;
SQL> alter tablespace app_index read write;


III. Optimization

1. Speeding up exp

Increase LARGE_POOL_SIZE and use the direct path (direct=y): the data then bypasses the SQL evaluation buffer and is not checked in memory. Set a large BUFFER; exporting large objects with a small buffer will fail.

Keep the export file off the drives used by ORACLE, and do not export to an NFS file system.

UNIX: export through a named pipe to improve exp performance (see section 3 below).

2. Speeding up imp

Use an index file: generate the CREATE INDEX statements with indexfile, import the data with index creation suppressed, and create the indexes only after the data import completes. Increase DB_BLOCK_BUFFERS and LOG_BUFFER. Put the import file and the database files on different drives.

Run ORACLE in noarchivelog mode (alter database noarchivelog;). Create large tablespaces and rollback segments, take the other rollback segments OFFLINE, and use COMMIT=N.

Use ANALYZE=N.

Import in single-user mode.

UNIX: import through a named pipe to improve imp performance.
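The index-file tip above is a two-pass import: pass one writes the CREATE INDEX statements to a script without importing any rows, pass two imports the data with index creation suppressed, and the generated script is run last. A dry-run sketch of the command lines (file names and credentials are placeholders; the commands are only assembled and printed here, nothing touches a database):

```shell
# Pass 1: extract index DDL only -- imp writes the CREATE INDEX statements
# to create_idx.sql and imports no rows.
PASS1="imp user/pwd file=/dir/xxx.dmp full=y indexfile=create_idx.sql"

# Pass 2: import the data but skip index creation (indexes=n).
PASS2="imp user/pwd file=/dir/xxx.dmp full=y indexes=n commit=y ignore=y"

# Finally, build the indexes from the generated script, after raising
# DB_BLOCK_BUFFERS and LOG_BUFFER.
PASS3="sqlplus user/pwd @create_idx.sql"

printf '%s\n' "$PASS1" "$PASS2" "$PASS3"
```

Building the indexes in one batch at the end avoids maintaining them row by row during the load, which is where most of the import time usually goes.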

3. Speeding up exp/imp with UNIX/Linux pipes

Exporting data through a pipe:

(1) Create a named pipe with mknod:

$ mknod /home/exppipe p    // create a named pipe under /home; note the type argument p

(2) Export to the pipe while gzip compresses the data; gzip reads from the pipe in the background and exp writes to it:

$ gzip < /home/exppipe > /home/exp.dmp.gz &
$ exp test/test tables=bitmap file=/home/exppipe

(3) Remove the pipe when the export has completed:

$ rm -f /home/exppipe

A complete export script simply combines these steps: create the pipe, start gzip in the background, run exp, and remove the pipe.
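The pipe steps can be combined into a single script. The sketch below is runnable anywhere because a `printf` of sample data stands in for the `exp` call (which needs a database); in practice, replace that one line with the real `exp user/pwd file="$PIPE" ...` invocation:

```shell
#!/bin/sh
# Sketch of export-through-a-compressed-pipe; printf stands in for exp.
PIPE=/tmp/exppipe.$$
DUMP=/tmp/exp_test.dmp.gz

mkfifo "$PIPE"                # same effect as: mknod "$PIPE" p

# Start the reader first: gzip drains the pipe in the background
# and writes the compressed dump file.
gzip < "$PIPE" > "$DUMP" &
GZPID=$!

# The writer: in real use this line is  exp user/pwd file="$PIPE" tables=...
printf 'pretend dump data\n' > "$PIPE"

wait "$GZPID"                 # wait for the compression to finish
rm -f "$PIPE"                 # remove the pipe when done

gunzip -c "$DUMP"             # round-trip check: prints the original data
```

The export never lands on disk uncompressed, which both saves space and overlaps the exp and gzip work.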
