A Detailed Introduction to Oracle Logical Export (exp) and Import (imp), with a Parameter Reference


exp and imp are the two oldest surviving Oracle command-line tools. In fact, I have never considered exp/imp a good backup method; more accurately, exp/imp is a good dump tool, especially for dumping small databases, migrating tablespaces, extracting individual tables, and detecting logical and physical corruption. Of course, using it as a logical secondary backup after a physical backup of a small database is also a good idea. exp/imp becomes increasingly inadequate as databases grow, especially for TB-scale databases and data warehouses; at that point backups move to RMAN and third-party tools. The following describes the use of exp/imp.

How to display the exp help in different languages: the NLS_LANG environment variable controls the language of the help text. For example, set nls_lang=simplified chinese_china.zhs16gbk displays the exp help in Chinese, while set nls_lang=American_america.<character set> displays it in English.

All exp parameters (default values shown in parentheses):

USERID username/password, for example USERID=duanl/duanl

FULL export the entire database (N)

BUFFER size of the data buffer

OWNER list of owner user names; to export a user's objects, use OWNER=username

FILE output file (EXPDAT.DMP)

TABLES list of table names specifying the tables to export, for example TABLES=table1,table2

COMPRESS coalesce extents into one initial extent on import (Y)

RECORDLENGTH length of the I/O record

GRANTS export grants (Y)

INCTYPE incremental export type

INDEXES export indexes (Y)

RECORD track incremental export (Y)

ROWS export data rows (Y)

PARFILE parameter file name; if you have many exp parameters, you can save them in a parameter file

CONSTRAINTS export constraints (Y)

CONSISTENT cross-table consistency

LOG log file of screen output

STATISTICS analyze objects (ESTIMATE)

DIRECT direct path export (N)

TRIGGERS export triggers (Y)

FEEDBACK show progress every x rows (0)

FILESIZE maximum size of each dump file

QUERY SELECT clause used to export a subset of a table

The following keywords apply only to transportable tablespaces.

TRANSPORT_TABLESPACE export transportable tablespace metadata (N)

TABLESPACES list of tablespaces to transport

All imp parameters (default values shown in parentheses):

USERID username/password

FULL import the entire file (N)

BUFFER size of the data buffer

FROMUSER list of owner user names

FILE input file (EXPDAT.DMP)

TOUSER list of target user names

SHOW only list file contents (N)

TABLES list of table names

IGNORE ignore creation errors (N)

RECORDLENGTH length of the I/O record

GRANTS import grants (Y)

INCTYPE incremental import type

INDEXES import indexes (Y)

COMMIT commit after each array insert (N)

ROWS import data rows (Y)

PARFILE parameter file name

LOG log file of screen output

CONSTRAINTS import constraints (Y)

DESTROY overwrite tablespace data files (N)

INDEXFILE write table/index information to the specified file

SKIP_UNUSABLE_INDEXES skip maintenance of unusable indexes (N)

ANALYZE execute the ANALYZE statements in the dump file (Y)

FEEDBACK show progress every x rows (0)

TOID_NOVALIDATE skip validation of specified type IDs

FILESIZE maximum size of each dump file

RECALCULATE_STATISTICS recalculate statistics (N)

The following keywords apply only to transportable tablespaces.

TRANSPORT_TABLESPACE import transportable tablespace metadata (N)

TABLESPACES tablespaces to transport into the database

DATAFILES data files to transport into the database

TTS_OWNERS users owning data in the transportable tablespace set

A note on the incremental parameters: exp/imp incremental exports are not true incrementals, so it is best not to use them.

Usage:

exp parameter_name=value or exp parameter_name=(value1,value2,...)

Enter exp help=y to view all the help information.
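As an illustration of the PARFILE option mentioned above, a parameter file is simply one parameter=value pair per line. The file name and credentials below are hypothetical:

```text
# exp_full.par - hypothetical exp parameter file
userid=test/test
file=./db_str.dmp
log=./db_str.log
full=y
rows=n
```

It would then be invoked as exp parfile=exp_full.par, which keeps long option lists out of the command line and shell history.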

Common EXP options

1. FULL. This exports the entire database. Combined with ROWS=N, it exports only the structure of the entire database. For example:

exp userid=test/test file=./db_str.dmp log=./db_str.log full=y rows=n compress=y direct=y

Note: in Oracle 10g the username/password can be given directly, without the USERID keyword, as in the following:

exp test/test file=./db_str.dmp log=./db_str.log full=y rows=n compress=y direct=y

2. OWNER and TABLES. These two options define what exp exports: OWNER exports the objects belonging to the specified users, and TABLES names the specific tables to export. For example:

Exp userid = test/test file =./db_str.dmp log =./db_str.log owner = duanl

Note: sometimes you need to export only one or a few tables. The following form handles that:

exp userid=test/test file=./db_str.dmp log=./db_str.log tables=nc_data,fi_arap

3. BUFFER and FEEDBACK. When exporting a large amount of data, I will consider setting these two parameters. For example:

exp userid=test/test file=yw97_2003.dmp log=yw97_2003_3.log feedback=10000 buffer=100000000 tables=WO4,OK_YT

4. FILE and LOG. These two parameters specify the dump file name and log file name of the backup, including the file name and directory; see the preceding examples.

5. COMPRESS. This parameter does not compress the exported data; it controls how the storage clause of exported objects is generated. With the default value Y, the INITIAL extent in the object's storage clause is set equal to the total size of the object's current extents. COMPRESS=N is recommended.

6. FILESIZE. This option appeared in 8i. If the exported dmp file would be too large, use FILESIZE to cap each file, for example at 2 GB:

exp userid=duanl/duanl file=f1,f2,f3,f4,f5 filesize=2G owner=scott

This creates a series of files f1.dmp, f2.dmp, and so on, each 2 GB in size. If the total export is less than 10 GB, exp does not need to create f5.dmp.

Common IMP options

1. FROMUSER and TOUSER. Use these to import data from one schema into another. For example, if the export was taken from user test1, the objects can be imported into a target user as follows:

imp userid=test1/test1 file=expdat.dmp fromuser=test1 touser=test1

2. IGNORE, GRANTS, and INDEXES. The IGNORE parameter makes imp ignore "object already exists" errors and continue importing. This is useful when you need to adjust a table's storage parameters: pre-create the table with storage parameters that suit your situation, then import the data directly into it. GRANTS and INDEXES control whether grants and indexes are imported; to re-create indexes with new storage parameters, or simply to speed up the import, consider INDEXES=N, while GRANTS is generally Y. Example: imp userid=test1/test1 file=expdat.dmp fromuser=test1 touser=test1 indexes=n
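As a sketch of the pre-create-then-import pattern described above (the table, columns, and storage values here are hypothetical, not from the original examples):

```sql
-- Pre-create the table with the storage parameters you actually want;
-- the subsequent import then only loads rows into it.
CREATE TABLE nc_data (
    id   NUMBER,
    info VARCHAR2(100)
) STORAGE (INITIAL 10M NEXT 10M);
```

Then import with IGNORE=Y so the existing table is reused instead of raising a creation error:

imp userid=test1/test1 file=expdat.dmp fromuser=test1 touser=test1 tables=nc_data ignore=y indexes=n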

Tablespace Transmission

Tablespace transport is a feature introduced in 8i for moving data quickly between databases. It attaches one database's formatted data files to another database, instead of exporting the data to a dmp file. This is very useful in some scenarios, because transporting a tablespace moves data as fast as the files can be copied.

A tablespace must satisfy some rules to be transportable, namely:

· The source database and target database must run on the same hardware platform.

· The source database and target database must use the same character set.

· The source database and target database must have the same data block size.

· The target database cannot have a tablespace with the same name as the migrated tablespace.

· SYS objects cannot be migrated.

· The set of objects being transported must be self-contained.

· Some objects, such as materialized views and function-based indexes, cannot be transmitted.

You can check whether a tablespace, or a set of tablespaces, meets the transport criteria as follows:

exec sys.dbms_tts.transport_set_check('tablespace_name', true);

select * from sys.transport_set_violations;

If no rows are returned, the tablespace contains only table data and is self-contained. Some tablespaces that are not individually self-contained, such as a data tablespace and its corresponding index tablespace, can be transported together as a set.

The following is a brief procedure. For more information, see ORACLE online help.

1. Set the tablespaces to read-only (assume they are named APP_DATA and APP_INDEX):

Alter tablespace app_data read only;

Alter tablespace app_index read only;

2. Issue the EXP command

SQL> host exp userid="""sys/password as sysdba""" transport_tablespace=y tablespaces=(app_data,app_index)

Note that:

· To execute exp from within SQL*Plus, USERID must be enclosed in triple quotation marks; in UNIX the quotation marks must additionally be escaped.

· In 8.1.6 and later, this operation must be performed as SYSDBA.

· The command must be entered on a single line.

3. copy the data file to another location, that is, the target database.

Use cp (UNIX) or copy (Windows), or transfer the files via ftp (which must be in binary mode).

4. Set the source tablespaces back to read/write.

5. append the data file to the target database.

imp file=expdat.dmp userid="sys/password as sysdba" transport_tablespace=y datafiles=(c:\temp\app_data,c:\temp\app_index)

6. Set the tablespace of the target database to read/write.

Alter tablespace app_data read write;

Alter tablespace app_index read write;

Methods to optimize exp/imp:

When the amount of data to export or import is large, the process can take a long time. The following techniques can optimize exp/imp operations.

exp: use the direct path (DIRECT=Y)

Oracle then bypasses the SQL statement-processing engine, reads data directly from the database files, and writes it to the export file.

If a table cannot be exported via the direct path, the export log shows: EXP-00067: table xxx will be exported in conventional path.

If the direct path cannot be used, make sure the value of the BUFFER parameter is large enough.

Some parameters are incompatible with DIRECT=Y: you cannot use the direct path to export transportable tablespaces, or combine it with the QUERY parameter to export a subset of a table.

When the exporting and importing databases run on different operating systems, the RECORDLENGTH parameter values must be identical.
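Putting the above together, a direct-path export might look like the following sketch (the user name, file names, and RECORDLENGTH value are illustrative, not from the original text):

```shell
# Direct-path export of one schema; RECORDLENGTH is set explicitly because
# the dump will be imported on a different operating system.
exp userid=test/test owner=test file=./test_owner.dmp log=./test_owner.log \
    direct=y recordlength=65535 feedback=100000
```

If the log reports EXP-00067 for a table, that table fell back to the conventional path, and a larger BUFFER value helps instead.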

imp: optimize through the following avenues

1. Avoid disk sorting

Set sort_area_size to a large value, such as 100 M.

2. Avoid waiting for log Switching

Increase the number of redo log groups and the size of log files.

3. Optimize the log buffer

For example, increase the log_buffer capacity by 10 times (the maximum is 5 MB)

4. Use array inserts with commits

Commit = y

Note: array processing cannot handle tables with LOB or LONG columns; for such tables, COMMIT=Y causes a commit after every row inserted.

5. Use NOLOGGING to reduce the size of redo logs

Specify INDEXES=N during the import so that only the data is imported and indexes are skipped. After the data is loaded, create the indexes with a script, specifying the NOLOGGING option.
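A sketch of the post-import index build under this approach (the index, table, and column names, and storage values, are hypothetical):

```sql
-- Build the index after the data-only import (imp ... indexes=n);
-- NOLOGGING keeps the redo generated by the index build to a minimum.
CREATE INDEX idx_nc_data_id ON nc_data (id)
    NOLOGGING
    STORAGE (INITIAL 10M NEXT 10M);
```

Building indexes once, after all rows are loaded, also avoids the per-row index maintenance that slows a conventional import.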

Export/import and Character Set

When importing and exporting data, pay attention to character sets. Four character sets are involved in the exp/imp process: the export client's character set, the export database's character set, the import client's character set, and the import database's character set.

First, we need to view the parameters of these four character sets.

View the character set information of the database:

SQL> select * from nls_database_parameters;

PARAMETER VALUE

--------------------------------------------------------------------------------------------------------------

NLS_LANGUAGE AMERICAN

NLS_TERRITORY AMERICA

NLS_CURRENCY $

NLS_ISO_CURRENCY AMERICA

NLS_NUMERIC_CHARACTERS .,

NLS_CHARACTERSET ZHS16GBK

NLS_CALENDAR GREGORIAN

NLS_DATE_FORMAT DD-MON-RR

NLS_DATE_LANGUAGE AMERICAN

NLS_SORT BINARY

NLS_TIME_FORMAT HH.MI.SSXFF AM

NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM

NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZH:TZM

NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZH:TZM

NLS_DUAL_CURRENCY $

NLS_COMP BINARY

NLS_NCHAR_CHARACTERSET ZHS16GBK

NLS_RDBMS_VERSION 8.1.7.4.1

NLS_CHARACTERSET: ZHS16GBK is the character set of the current database.

Let's check the character set information of the client:

The parameter format is NLS_LANG=<language>_<territory>.<characterset>.

language: specifies the language in which Oracle messages are displayed, including day and month names.

territory: specifies the formats of currency and numbers, and the territory's conventions for calculating weeks and dates.

characterset: controls the character set used by client applications; usually set it to the client's code page, or to UTF8 for Unicode applications.

In windows, you can query and modify NLS_LANG in the registry:

HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\HOMExx

where xx is the home number when multiple ORACLE_HOMEs exist.

In unix:

$ env | grep NLS_LANG

NLS_LANG = simplified chinese_china.ZHS16GBK

To change it:

$ export NLS_LANG=AMERICAN_AMERICA.UTF8

Generally, it is best to set the character set of the client to the same as that of the database during export. When importing data, there are two main situations:

(1) The source database and target database have the same character set settings.

In this case, you only need to set the Export and Import client NLS_LANG to be equal to the database character set.

(2) The character sets of the source and target databases are different.

First set the export client's NLS_LANG to match the export database's character set and export the data; then set the import client's NLS_LANG to match the export client and import the data. This way the conversion happens only on the importing database side, and only once.

In this case, data can be completely imported only when the character set of the importing database is a strict superset of the character set of the exporting database. Otherwise, data inconsistency or garbled characters may occur.
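As an illustration of case (2), suppose the export database uses ZHS16GBK and the import database uses UTF8 (a hypothetical pairing chosen because UTF8's repertoire covers ZHS16GBK, so the single conversion is lossless):

```shell
# On the export side: client NLS_LANG matches the export database (ZHS16GBK)
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
exp test/test file=./app.dmp log=./app_exp.log owner=test

# On the import side: keep NLS_LANG identical to the export client, so the
# only conversion (ZHS16GBK -> UTF8) happens once, inside the target database
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
imp test/test file=./app.dmp log=./app_imp.log fromuser=test touser=test
```

The user and file names are made up; the point is that NLS_LANG is identical on both clients, so no client-side conversion is triggered.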

EXP/IMP problems of different versions

Generally, it is not a problem to import data from a lower version to a later version. The trouble is to import data from a later version to a lower version. Before Oracle9i, EXP/IMP between Oracle versions can be solved through the following methods:

1. Run the lower version's catexp.sql on the higher-version database;

2. Use the lower version's exp to export the higher-version database's data;

3. Use the lower version's imp to import the data into the lower-version database;

4. Re-run the higher version's catexp.sql on the higher-version database.

But in 9i, the above method cannot solve the problem. If you use a lower version of EXP/IMP directly, the following error occurs:

EXP-00008: ORACLE error %lu encountered

ORA-00904: invalid column name

This is a published bug, fixed only in Oracle 10.0; the bug number is 2261722. You can check Metalink for the details.

A bug is a bug, but the work still has to get done; until Oracle supplies a fix, we can work around it ourselves. Execute the following SQL in Oracle9i to recreate the exu81rls view:

create or replace view exu81rls
    (objown, objnam, policy, polown, polsch, polfun, stmttype, chkopt, enabled, spolicy)
as select u.name, o.name, r.pname, r.pfschma, r.ppname, r.pfname,
    decode(bitand(r.stmt_type, 1), 0, '', 'SELECT,')
    || decode(bitand(r.stmt_type, 2), 0, '', 'INSERT,')
    || decode(bitand(r.stmt_type, 4), 0, '', 'UPDATE,')
    || decode(bitand(r.stmt_type, 8), 0, '', 'DELETE,'),
    r.check_opt, r.enable_flag,
    decode(bitand(r.stmt_type, 16), 0, 0, 1)
from user$ u, obj$ o, rls$ r
where u.user# = o.owner#
  and r.obj# = o.obj#
  and (uid = 0 or
       uid = o.owner# or
       exists (select * from session_roles where role = 'SELECT_CATALOG_ROLE')
      )
/

grant select on sys.exu81rls to public;
/

You can use EXP/IMP across versions, but you must use the EXP and IMP versions correctly:

1. Always use the imp version that matches the target database. For example, to import data into an 817 database, use the 817 imp tool.

2. Always use the exp version that matches the lower of the two database versions. For example, when moving data from 9201 to 817, use the 817 exp tool.
