Exporting from Oracle 11g

Source: Internet
Author: User
Tags: create directory, empty file, copy, backup

When exporting from Oracle 11g with the classic exp utility, empty tables are not exported. Oracle 11g R2 introduced a new feature, deferred segment creation: when a table has no data, no segment is allocated for it, which saves space, but exp skips tables that have no segment.

Workaround 1: insert a row into each empty table and then roll back (or insert and then delete). The insert forces Oracle to allocate a segment, and the segment remains afterwards, so exp can then export the empty table.

Workaround 2: set the DEFERRED_SEGMENT_CREATION parameter. It defaults to TRUE; when changed to FALSE, Oracle allocates a segment immediately for every new table, empty or not. The SQL statement to modify it:

alter system set deferred_segment_creation=false scope=both;

Note that setting this value has no effect on empty tables created before the change; those still cannot be exported. It only helps for tables created afterwards. To export empty tables that already exist, you can only use the first method, or generate the statements that allocate space for them:

select 'alter table ' || table_name || ' allocate extent' from user_tables where num_rows=0;
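The DDL-generating query above can also be scripted outside the database; here is a minimal Python sketch that builds the same ALLOCATE EXTENT statements from a list of table names (the names used below are hypothetical stand-ins for the num_rows=0 result set):

```python
def allocate_extent_ddl(empty_tables):
    """Build one ALTER TABLE ... ALLOCATE EXTENT statement per table name,
    mirroring the user_tables query above."""
    return ["ALTER TABLE %s ALLOCATE EXTENT" % t for t in empty_tables]

# Hypothetical table names standing in for the num_rows = 0 result set.
for stmt in allocate_extent_ddl(["EMP_EMPTY", "DEPT_EMPTY"]):
    print(stmt + ";")
```

Spooling the query output inside SQL*Plus achieves the same thing; the sketch just shows the string shape the statements must have.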

Spool the query results, execute the generated statements to force segment allocation, and then exp can export the empty tables. Note: on 11g R2, modify the parameter before inserting any data if you want later empty tables to be exportable.

To find the empty tables: select table_name from user_tables where num_rows=0; Alternatively, Oracle 10g added the EXPDP and IMPDP (Data Pump) utilities, which can also export empty tables.

Oracle EXPDP/IMPDP usage. First create a logical directory object. This does not create a real directory on the operating system, and is best done as an administrative user such as SYSTEM.

create directory db_bak as 'd:\test\dump'; To view the defined directories (and check that the operating-system directory actually exists, because Oracle does not verify this when creating the directory object): select * from dba_directories;

Grant the user read/write permission on the directory, again as an administrative user such as SYSTEM: grant read, write on directory db_bak to system;
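Because Oracle never checks that the operating-system path behind a directory object exists, it can help to create it up front; a small sketch (the path below is an example, not the one from the article):

```python
import os

def ensure_dump_dir(path):
    """Create the OS directory behind an Oracle DIRECTORY object if it is
    missing; Oracle itself never checks that the path exists."""
    os.makedirs(path, exist_ok=True)
    return os.path.isdir(path)

# Example path; on the Windows host above it would be r"d:\test\dump".
print(ensure_dump_dir(os.path.join("/tmp", "db_bak_demo")))
```

If the OS directory is missing, expdp fails at run time with an error rather than at CREATE DIRECTORY time, which is why pre-creating it is worthwhile.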

3.10 REMAP_TABLESPACE. This option imports all objects from the source tablespace into the target tablespace. Here we import the objects from the DAVE tablespace into the BL tablespace.

If you need to move all of a user's data from its current tablespace into another one, for example to organize users by tablespace, you can use the IMPDP REMAP_TABLESPACE parameter. The following experiment demonstrates this.

$ expdp test5/test3@book directory=backup dumpfile=tbs.dmp logfile=tbs.log tablespaces=dave

$ impdp test5/test3@book directory=backup dumpfile=user.dmp logfile=user.log remap_tablespace=test1:test3 table_exists_action=replace
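Command lines like the pair above can be assembled programmatically when many exports are scripted; a hedged sketch that only builds the command strings (the credentials and file names are the ones from the example, not real ones):

```python
def datapump_cmd(tool, connect, **params):
    """Assemble an expdp/impdp command line from keyword parameters
    (each becomes name=value, in the order given)."""
    parts = [tool, connect] + ["%s=%s" % (k, v) for k, v in params.items()]
    return " ".join(parts)

print(datapump_cmd("impdp", "test5/test3@book",
                   directory="backup", dumpfile="user.dmp",
                   logfile="user.log", remap_tablespace="test1:test3",
                   table_exists_action="replace"))
```

The sketch produces the same string shown above; running it against a database would of course require the expdp/impdp binaries and a reachable instance.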

3.9 REMAP_SCHEMA. This option loads all objects of the source schema into the target schema. Here we export the tables under the DAVE user and import them under the BL user.

IMPDP can migrate data between different users via the REMAP_SCHEMA parameter.

$ expdp test5/test3@book directory=backup dumpfile=user.dmp logfile=user.log schemas=dave

$ impdp test5/test3@book directory=backup dumpfile=user.dmp logfile=user.log remap_schema=dave:bl

3.12 TRANSPORT_DATAFILES. This option is used for transportable tablespaces: it specifies the data files to be attached to the target database when moving a tablespace.

The steps for doing this are as follows:

(1) Set the tablespace to READ ONLY and copy all data files of the tablespace to be transported to the target database. The files can be renamed during the copy.

(2) Export the tablespace in transportable mode. Note: this step writes only metadata, that is, the object definitions and no data, to the dump file. The actual data was already copied in step (1).

(3) Import the metadata on the target. (4) Set the tablespace back to READ WRITE.

Data Pump imports the metadata (the object definitions) from the dump file and attaches the actual data from the data files we specify (workers.dat in Oracle's documentation example). Absolute paths must be used here. Let's look at an example:

1. First add a data file to the tablespace TEST4: SQL> alter tablespace test4 add datafile 'd:\test44.dbf' size 10m;

2. Copy the data files to the corresponding location on the other instance. Before moving the tablespace, set it to READ ONLY: SQL> alter tablespace test4 read only;


Move all the data files belonging to the tablespace to the other instance; the copy command can rename them at the same time. In this example I stay on the same machine, so I simply copy the data file test44.dbf that we just added to 'e:\test44.dbf'. After EXPDP finishes, we drop the tablespace and proceed with the import: $ cp d:\test44.dbf e:\test44.dbf (in the original example, dave01.dbf was copied to bl03.dbf).

3. Export the metadata with EXPDP: $ expdp test4/test3@book directory=backup dumpfile=test4.dmp transport_tablespaces=test4

4. Import the data. First drop the tablespace on the importing side: SQL> drop tablespace test4 including contents and datafiles;

$ impdp test4/test3@book directory=backup dumpfile=test4.dmp transport_datafiles='d:\test4.dbf','e:\test44.dbf'

Note that the transported tablespace must not already exist on the target instance; otherwise it cannot be imported. If there are many data files, they can also be listed in a parameter file and passed to the import with the PARFILE parameter.
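When many data files are involved, a parameter file saves retyping; a minimal sketch that writes such a PARFILE (the directory name and file paths below are the illustrative ones from the example):

```python
def write_parfile(path, dumpfile, datafiles):
    """Write an impdp parameter file: one parameter per line, with
    transport_datafiles listing the quoted paths, comma-separated."""
    lines = [
        "directory=backup",
        "dumpfile=%s" % dumpfile,
        "transport_datafiles=" + ",".join("'%s'" % f for f in datafiles),
    ]
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")
    return lines

# Illustrative file names taken from the example above.
for line in write_parfile("/tmp/tts_demo.par", "test4.dmp",
                          [r"d:\test4.dbf", r"e:\test44.dbf"]):
    print(line)
```

The resulting file would be passed as impdp ... parfile=tts_demo.par instead of spelling every data file on the command line.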

(5) Set the tablespace back to READ WRITE mode:

SQL> alter tablespace test4 read write;

SQL> select tablespace_name, status from dba_tablespaces;

Several points to note about TRANSPORT_DATAFILES:

(1) All data files of the tablespace must be copied to the target database, e.g.: copy d:\test_test.dbf e:\test_test1.dbf

(2) Before copying, set the tablespace to READ ONLY: alter tablespace test_test read only;

(3) The data files can be renamed during the copy, so TRANSPORT_DATAFILES can also be used to rename and relocate data files. On the target, drop the old tablespace first: drop tablespace test_test including contents and datafiles;

(4) After TRANSPORT_DATAFILES completes, do not forget to set the tablespace back to read-write mode.

$ impdp test5/test3@book directory=backup dumpfile=test222.dmp transport_datafiles='d:\test_test.dbf','e:\test_test1.dbf';

The equivalent with the classic exp utility:

C:\> exp \"sys/ymh as sysdba\" file=d:/tbs.dmp tablespaces=(data_tbs,idx_tbs) transport_tablespace=y tts_full_check=y

This exports the two tablespaces together, without data (tablespaces=(data_tbs,idx_tbs): the tablespaces to export; transport_tablespace=y: export only metadata; tts_full_check=y: perform a full self-containment check).

Common errors: (1) ORA-29335: tablespace 'DATA_TBS' is not read only. The exported tablespace must be read-only, or this error is raised. (2) ORA-29341: the transportable set is not self-contained. Two ways to resolve the "transportable set is not self-contained" error:

1. Transport all dependent tablespaces together. In the case above, checking one tablespace alone reports a self-containment violation, while checking the two tablespaces together does not:

SQL> execute sys.dbms_tts.transport_set_check('DATA_TBS', true, true);

SQL> select * from sys.transport_set_violations;

2. Make the tablespace self-contained: for example, drop the offending index and rebuild it on the target database afterwards, or rebuild the index into the same tablespace as the table's data, and then export the single tablespace.

Note: if a table is created under the SYS or SYSTEM user, transportable tablespace export also reports "transportable set is not self-contained", and the tablespace cannot be transported together with the other. So it is best not to create tables as SYS or SYSTEM in the tablespaces to be exported, whether the system tablespace or a newly created one. Otherwise: EXP-00008: ORACLE error 29341 encountered; ORA-29341: The transportable set is not self-contained; ORA-06512: at "SYS.DBMS_PLUGTS", line 1387.

(3) IMP-00053: import mode incompatible with export dump file. 1. May occur across platforms with different storage formats (not tested). 2. On the same platform, check that the export and import parameters match: if transport_tablespace=y is forgotten on export, the data is exported as well, and adding the parameter on import then raises: IMP-00053: import mode incompatible with export dump file; IMP-00000: Import terminated unsuccessfully.

(4) ORA-27041: unable to open file; O/S-Error: (OS 2) the system cannot find the file specified. (5) ORA-19722: data file D:\oracle\oradata\DATA_TBS is an incorrect version.

The tablespace is kept read-only to guarantee data consistency, so restore it to read-write in the source database only after the copy of its data files has completed. Otherwise, the ORA-19722 version error above occurs when importing into the target database.

(6) PLS-00201: identifier 'DBMS_PLUGTS.NEWTABLESPACE' must be declared. Import as 'sys/ymh as sysdba'. If this error appears when importing as an ordinary user or as SYSTEM, you can add the parameter tts_owners=scott (tts_owners not tested in detail). (7) OSD-04002: unable to open file; O/S-Error: (OS 123) the filename, directory name, or volume label syntax is incorrect. With multiple data files, write datafile=xxx,xxx separated by commas, and do not wrap them in quotation marks, otherwise this error occurs.
