All about Oracle IMP/EXP

Source: Internet
Author: User

Export (exp) and Import (imp) are the two oldest command-line tools that ship with Oracle. Exp/Imp is not really a backup method; it is better described as a good dump tool, especially for dumping small databases, migrating tablespaces, extracting tables, and detecting logical and physical corruption. Of course, it can also serve as a secondary logical backup after a physical backup of a small database, which is a good practice. EXP/IMP becomes increasingly inadequate for larger databases, especially TB-scale databases and data warehouses; at that point, database backup switches to RMAN and third-party tools. The following describes the use of EXP/IMP.

How to display exp help in different character sets: the NLS_LANG environment variable controls the language of the help text. With set NLS_LANG=SIMPLIFIED CHINESE_CHINA.ZHS16GBK, exp displays its help in Chinese; with set NLS_LANG=AMERICAN_AMERICA.<characterset>, the help appears in English.

EXP parameters (default values in parentheses):

USERID        username/password, e.g. USERID=duanl/duanl
FULL          export the entire database (N)
BUFFER        size of the data buffer
OWNER         list of owner usernames; exports the objects of the given users, e.g. OWNER=username
FILE          output file (EXPDAT.DMP)
TABLES        list of table names to export, e.g. TABLES=table1,table2
COMPRESS      consolidate extents into one initial extent on import (Y)
RECORDLENGTH  length of an IO record
GRANTS        export grants (Y)
INCTYPE       incremental export type
INDEXES       export indexes (Y)
RECORD        track incremental export (Y)
ROWS          export data rows (Y)
PARFILE       parameter file name; if you use many exp parameters, they can be saved in a parameter file
CONSTRAINTS   export constraints (Y)
CONSISTENT    cross-table consistency
LOG           log file of screen output
STATISTICS    analyze objects (ESTIMATE)
DIRECT        direct path (N)
TRIGGERS      export triggers (Y)
FEEDBACK      show progress every x rows (0)
FILESIZE      maximum size of each dump file
QUERY         clause used to select a subset of a table's rows

The following keywords apply only to transportable tablespaces:
TRANSPORT_TABLESPACE  export transportable tablespace metadata (N)
TABLESPACES           list of tablespaces to transport

IMP parameters (default values in parentheses):

USERID        username/password
FULL          import the entire file (N)
BUFFER        size of the data buffer
FROMUSER      list of owner usernames
FILE          input file (EXPDAT.DMP)
TOUSER        list of target usernames
SHOW          only list file contents (N)
TABLES        list of table names
IGNORE        ignore creation errors (N)
RECORDLENGTH  length of an IO record
GRANTS        import grants (Y)
INCTYPE       incremental import type
INDEXES       import indexes (Y)
COMMIT        commit after each array insert (N)
ROWS          import data rows (Y)
PARFILE       parameter file name
LOG           log file of screen output
CONSTRAINTS   import constraints (Y)
DESTROY       overwrite tablespace data files (N)
INDEXFILE     write table/index information to the specified file
SKIP_UNUSABLE_INDEXES  skip maintenance of unusable indexes (N)
ANALYZE       execute ANALYZE statements in the dump file (Y)
FEEDBACK      show progress every x rows (0)
TOID_NOVALIDATE  skip validation of specified type IDs
FILESIZE      maximum size of each dump file
RECALCULATE_STATISTICS  recalculate statistics (N)

The following keywords apply only to transportable tablespaces:
TRANSPORT_TABLESPACE  import transportable tablespace metadata (N)
TABLESPACES   tablespaces to be transported into the database
DATAFILES     data files to be transported into the database
TTS_OWNERS    users who own data in the transportable tablespace set

A note about the incremental parameters: the "increment" of exp/imp is not a true incremental backup, so it is best not to use it.
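As noted above, a long list of exp parameters can be saved in a parameter file instead of being typed on the command line. A minimal sketch of such a file (the file name, user, and table names are hypothetical):

```
# exp_scott.par -- hypothetical exp parameter file
userid=scott/tiger
file=scott_tables.dmp
log=scott_tables.log
tables=(emp,dept)
buffer=1000000
feedback=10000
```

It would then be invoked as: exp parfile=exp_scott.par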
Usage: exp parameter_name=value, or exp parameter_name=(value1,value2,...). If you pass the help=y parameter, you can see all the common help.

Common EXP options:

1. FULL: exports the entire database. Combined with ROWS=N, it exports the structure of the entire database. Example:
exp userid=test/test file=./db_str.dmp log=./db_str.log full=y rows=n compress=y direct=y

2. OWNER and TABLES: these two options define what EXP exports. OWNER exports the objects of the specified users; TABLES specifies the names of the tables to export. For example:
exp userid=test/test file=./db_str.dmp log=./db_str.log owner=duanl
exp userid=test/test file=./db_str.dmp log=./db_str.log tables=nc_data,fi_arap

3. BUFFER and FEEDBACK: when exporting a large amount of data, consider setting these two parameters. Example:
exp userid=test/test file=yw97_2003.dmp log=yw97_2003_3.log feedback=10000 buffer=100000000 tables=WO4,OK_YT

4. FILE and LOG: specify the dump file name and log file name for the export, including the directory. See the preceding examples.

5. COMPRESS: controls how the storage clauses of exported objects are generated. The default is Y, in which case the initial extent in the generated storage clause equals the total size of the object's current extents. COMPRESS=N is recommended. (COMPRESS=Y causes Export to flag table data for consolidation into one initial extent upon Import. The Export utility, not the Import utility, generates the data definitions, including the storage parameter definitions.)

6. FILESIZE: available from 8i. If the exported dmp file would be too large, use FILESIZE to limit each file's size, for example to 2 GB:
exp userid=duanl/duanl file=f1,f2,f3,f4,f5 filesize=2G owner=scott
In this case f1.dmp, f2.dmp, and so on will be created as needed.
7. CONSISTENT: if you use CONSISTENT=N, each table is usually exported in a single transaction. However, if a table contains nested tables, the outer table and each inner table are exported as separate transactions, and if a table is partitioned, each partition is exported as a separate transaction.

Common IMP options:

1. FROMUSER and TOUSER: used to import data from one schema into another. For example, suppose we exported the objects of user test during exp; to import those objects into user test1:
imp userid=test1/test1 file=expdat.dmp fromuser=test touser=test1

2. IGNORE, GRANTS, and INDEXES: the IGNORE parameter ignores "table already exists" errors and continues the import. This is useful when you need to adjust a table's storage parameters: create the table in advance with reasonable storage parameters for the actual situation, then import the data directly into it. GRANTS and INDEXES control whether grants and indexes are imported. If you want to re-create the indexes with new storage parameters, or want to speed up the import, consider setting INDEXES=N; GRANTS is generally Y. For example:
imp userid=test1/test1 file=expdat.dmp fromuser=test1 touser=test1 indexes=n

Tablespace transport is a feature added in 8i for quickly moving data between databases. It attaches a database's formatted data files to another database, rather than exporting the data to a dmp file. This is very useful in some cases, because moving a tablespace's data is as fast as copying a file. There are some rules for a tablespace to be transportable:
· The source and target databases must run on the same hardware platform.
· The source and target databases must use the same character set.
· The source and target databases must have the same data block size.
· The target database must not already have a tablespace with the same name as the migrated tablespace.
· SYS objects cannot be migrated.
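The IGNORE=Y workflow described above can be sketched end to end as follows; the table definition, storage figures, and user names are hypothetical, not from the original export:

```shell
# Hypothetical sketch: pre-create the table with the storage parameters
# you actually want, then let imp load only the data into it.
sqlplus test1/test1 <<'EOF'
CREATE TABLE nc_data (
  id  NUMBER,
  txt VARCHAR2(100)
) STORAGE (INITIAL 10M NEXT 10M);
EOF

# ignore=y skips the "table already exists" error so the data still loads;
# indexes=n skips index creation so they can be rebuilt afterward.
imp userid=test1/test1 file=expdat.dmp fromuser=test1 touser=test1 ignore=y indexes=n
```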
· The set of objects being transported must be self-contained.
· Some objects, such as materialized views and function-based indexes, cannot be transported.

You can use the following to check whether a tablespace (or set of tablespaces) meets the transport criteria:
exec sys.dbms_tts.transport_set_check('tablespace_name', true);
select * from sys.transport_set_violations;
If no rows are selected, the tablespace contains only table data and is self-contained. Some tablespaces that are not individually self-contained, such as a data tablespace and its index tablespace, can be transported together.

The following is a brief procedure; for details, see the Oracle online documentation.

1. Set the tablespaces to read-only (assume the tablespace names are APP_DATA and APP_INDEX):
alter tablespace app_data read only;
alter tablespace app_index read only;

2. Issue the EXP command:
SQL> host exp userid="""sys/password as sysdba""" transport_tablespace=y tablespaces=(app_data,app_index)
Note that in order to run EXP from within SQL*Plus, the USERID value must be enclosed in triple quotation marks. On UNIX you must also escape the "/". From 8.1.6 onward you must connect as sysdba to run this command. The command must be written on a single line.

3. Copy the data files to the other location, that is, to the target database. You can use cp (unix), copy (windows), or transfer the files via ftp (which must be in binary mode).

4. Set the source tablespaces back to read/write.

5. Attach the data files to the target database:
imp file=expdat.dmp userid="""sys/password as sysdba""" transport_tablespace=y datafiles=(c:\temp\app_data,c:\temp\app_index)

6. Set the target database tablespaces to read/write:
alter tablespace app_data read write;
alter tablespace app_index read write;

Optimizing EXP/IMP: when the amount of data to exp/imp is large, the process takes a long time. We can use some methods to speed up exp/imp operations.
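Under the assumptions of the procedure above, the whole transport can be sketched as a single session; the tablespace names, SYS password, and data file names are all hypothetical:

```shell
# --- on the source database (hypothetical names throughout) ---
sqlplus "sys/password as sysdba" <<'EOF'
exec sys.dbms_tts.transport_set_check('APP_DATA,APP_INDEX', true);
select * from sys.transport_set_violations;   -- should return no rows
alter tablespace app_data read only;
alter tablespace app_index read only;
EOF

# export only the tablespace metadata (must be one line)
exp userid=\"sys/password as sysdba\" transport_tablespace=y tablespaces=app_data,app_index file=tts.dmp

# copy the datafiles and tts.dmp to the target host (binary mode if using ftp)

# --- on the target database ---
imp userid=\"sys/password as sysdba\" transport_tablespace=y file=tts.dmp datafiles=app_data01.dbf,app_index01.dbf

sqlplus "sys/password as sysdba" <<'EOF'
alter tablespace app_data read write;
alter tablespace app_index read write;
EOF
```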
Exp: use the direct path, direct=y. Oracle then bypasses the SQL statement processing engine, reads data directly from the database files, and writes it to the export file. You can check in the export log: if you see "EXP-00067: table xxx will be exported in conventional path", the direct path was not used. If you do not use the direct path, you must ensure that the value of the BUFFER parameter is large enough. Some options are incompatible with direct=y: you cannot use the direct path to export transportable tablespaces or to export a database subset with the QUERY parameter. When the exporting and importing databases run on different operating systems, the value of the RECORDLENGTH parameter must be the same on both sides.

Imp: optimize in the following ways:
1. Avoid disk sorts: set sort_area_size to a large value, for example 100M.
2. Avoid waits on log switches: increase the number of redo log groups and the size of the log files.
3. Tune the log buffer: for example, increase log_buffer by a factor of ten (up to 5 MB).
4. Use array inserts and commits: commit=y. Note that the array method cannot handle tables containing LOB and LONG columns; for such tables, if commit=y is used, a commit is executed for each row inserted.
5. Use NOLOGGING to reduce redo generation: during the import, specify indexes=n so that only the data is imported and the indexes are skipped, then create the indexes afterward with a script that specifies the NOLOGGING option.

Exporting, importing, and character sets: when importing and exporting data, we must pay attention to character set issues. In the EXP/IMP process, four character sets matter: the client character set on the export side, the database character set on the export side, the client character set on the import side, and the database character set on the import side. First, let us look at how to view these four character sets.
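The imp tuning options above can be combined in one parameter file; this is a hypothetical sketch (user, file name, and buffer size are assumptions):

```
# imp_fast.par -- hypothetical imp parameter file for a bulk load
userid=test1/test1
file=expdat.dmp
fromuser=test
touser=test1
buffer=100000000
commit=y        # commit per array insert (per row for LOB/LONG tables)
indexes=n       # rebuild indexes afterward with NOLOGGING
feedback=10000
```

It would then be invoked as: imp parfile=imp_fast.par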
View the database's character set information:
SQL> select parameter, value from nls_database_parameters;
(The full output lists NLS_LANGUAGE, NLS_TERRITORY, the date and time formats, and so on; the line that matters here is NLS_CHARACTERSET = ZHS16GBK, i.e. ZHS16GBK is the character set of the current database. NLS_NCHAR_CHARACTERSET shows the national character set.)

Now let us look at the client character set information. The client character set parameter is NLS_LANG=<language>_<territory>.<characterset>:
language: specifies the language in which Oracle messages are displayed, including the names of days and months.
territory: specifies the format of currency and numbers, and the regional conventions for calculating weeks and dates.
characterset: controls the character set used by client applications; usually set to match the client's code page, or to UTF8 for Unicode applications.

On Windows, you can query and modify NLS_LANG in the registry under:
HKEY_LOCAL_MACHINE\SOFTWARE\Oracle\HOMExx (xx is the home number when multiple ORACLE_HOMEs exist)
On unix:
$ env | grep NLS_LANG
NLS_LANG=simplified chinese_china.ZHS16GBK
It can be changed with:
$ export NLS_LANG=AMERICAN_AMERICA.UTF8

During export, it is best to set the client character set the same as the database's. When importing data, there are two main situations:
(1) The source and target databases have the same character set. In this case, just set NLS_LANG on both the export and import clients equal to the database character set.
(2) The character sets of the source and target databases are different.
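Case (1) above can be sketched as follows; the character set, users, and file name are hypothetical:

```shell
# Hypothetical sketch: both databases use ZHS16GBK, so set the client
# NLS_LANG to the database character set before both exp and imp.
export NLS_LANG="SIMPLIFIED CHINESE_CHINA.ZHS16GBK"   # export client
exp userid=test/test file=expdat.dmp owner=test

export NLS_LANG="SIMPLIFIED CHINESE_CHINA.ZHS16GBK"   # import client
imp userid=test1/test1 file=expdat.dmp fromuser=test touser=test1
```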
First, set the export client's NLS_LANG to match the export database's character set and export the data; then set the import client's NLS_LANG to match the export client's and import the data. That way, character set conversion happens only on the importing database side, and only once. In this case, the data can be imported completely only when the character set of the importing database is a strict superset of the character set of the exporting database; otherwise, data loss or garbled characters may occur.

EXP/IMP between different versions: importing data from a lower version into a higher version is generally not a big problem; the trouble is importing data from a higher version into a lower version. Before Oracle9i, EXP/IMP between Oracle versions could be handled as follows:
1. Run the lower version's catexp.sql on the higher-version database;
2. Use the lower version's EXP to export the data from the higher-version database;
3. Use the lower version's IMP to import the data into the lower-version database;
4. Re-run the higher version's catexp.sql script on the higher-version database.
In 9i, however, the above method no longer solves the problem. If you use a lower version's EXP/IMP directly, the following error occurs:
EXP-00008: ORACLE error %lu encountered
ORA-00904: invalid column name
This is a published bug that will only be fixed in Oracle 10.0; the bug number is 2261722. You can look up the details of the bug on METALINK. A bug is a bug, but we still have work to do, so before Oracle Support fixes it we can work around it ourselves. Execute the following SQL in Oracle9i to re-create the exu81rls view:

create or replace view exu81rls
    (objown, objnam, policy, polown, polsch, polfun,
     stmts, chkopt, enabled, spolicy)
as select u.name, o.name, r.pname, r.pfschma, r.ppname, r.pfname,
          decode(bitand(r.stmt_type,1), 0, '', 'SELECT,') ||
          decode(bitand(r.stmt_type,2), 0, '', 'INSERT,') ||
          decode(bitand(r.stmt_type,4), 0, '', 'UPDATE,') ||
          decode(bitand(r.stmt_type,8), 0, '', 'DELETE,'),
          r.check_opt, r.enable_flag,
          decode(bitand(r.stmt_type,16), 0, 0, 1)
   from user$ u, obj$ o, rls$ r
   where u.user# = o.owner#
     and r.obj# = o.obj#
     and (uid = 0 or uid = o.owner# or
          exists (select * from session_roles
                  where role = 'SELECT_CATALOG_ROLE'))
/
grant select on sys.exu81rls to public;

EXP/IMP can be used across versions, but the correct EXP and IMP versions must be chosen:
1. Always use the IMP version that matches the target database. For example, to import data into 817, use the 817 IMP tool.
2. Always use the EXP version that matches the lower of the two database versions. For example, when importing data from 9201 into 817, use the 817 EXP tool.
