Summary of exporting empty tables in Oracle 11g with exp/expdp

Source: Internet
Author: User

When Oracle 11g exports data with the classic exp utility, empty tables are not exported. The cause is a new feature in 11g R2, deferred segment creation: when a table contains no data, no segment is allocated for it, in order to save space.
Solution 1: insert a row and then roll back. Inserting data into the empty table and then removing it forces a segment to be allocated, after which the empty table can be exported.
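A minimal sketch of this workaround in SQL*Plus, assuming a hypothetical empty table t_empty with a single numeric column:
SQL> insert into t_empty values (1);  -- triggers segment creation
SQL> rollback;                        -- removes the row; the segment stays
SQL> select segment_name from user_segments where segment_name = 'T_EMPTY';  -- confirm the segment exists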
Solution 2: set the deferred_segment_creation parameter. Its default value is TRUE; when it is set to FALSE, a segment is allocated for empty and non-empty tables alike. The SQL statement is:
alter system set deferred_segment_creation = false scope = both;
Note that this setting has no effect on empty tables created before the change; those still cannot be exported, and the parameter only helps tables created afterwards. To export the existing empty tables, either use the first method, or generate statements that allocate space for them:
select 'alter table ' || table_name || ' allocate extent;' from user_tables where num_rows = 0;
Spool the query result, execute the generated statements to forcibly allocate segments, and then run the export; the empty tables will be included. Note: on 11g R2, change the parameter before creating tables and inserting data if you want new empty tables to be exportable.
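A hedged SQL*Plus sketch of that flow, assuming the output is spooled to d:\allocate.sql (an example path, not from the original text):
SQL> set heading off feedback off
SQL> spool d:\allocate.sql
SQL> select 'alter table ' || table_name || ' allocate extent;' from user_tables where num_rows = 0;
SQL> spool off
SQL> @d:\allocate.sql
Once the generated ALTER TABLE statements have run, every empty table owns a segment and exp picks it up.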
Query the empty tables: select table_name from user_tables where num_rows = 0;
The expdp and impdp tools were added in Oracle 10g; they can export empty tables directly.
To use oracle expdp/impdp, first create a logical directory. The CREATE DIRECTORY command does not create a real directory in the operating system, and it is best executed as a system administrator.
create directory db_bak as 'd:\test\dump';
Check the directory as an administrator (and verify that the path exists in the operating system: Oracle does not check it at creation time, and an error is raised later if it is missing): select * from dba_directories;
Grant the user the permission to operate on the specified directory; it is best granted by a system administrator: grant read, write on directory db_bak to system;
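Putting the steps together, a hedged example export through that directory (the password and the scott schema are placeholders, not taken from the original text):
$ expdp system/password directory=db_bak dumpfile=scott.dmp logfile=scott.log schemas=scott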
3.10 REMAP_TABLESPACE imports all objects from a source tablespace into a target tablespace; here we move objects from the dave tablespace into the bl tablespace.
When reorganizing users by tablespace, you need to transfer all of a user's current data into another tablespace; the remap_tablespace parameter of impdp does exactly this. The following experiment demonstrates it.
$ expdp test5/test3@Book directory=backup dumpfile=tbs.dmp logfile=tbs.log tablespaces=dave
$ impdp test5/test3@Book directory=backup dumpfile=user.dmp logfile=user.log remap_tablespace=test1:test3 table_exists_action=replace
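To verify the remap, a hedged check such as the following can be run after the import (the owner name TEST5 is an assumption for illustration):
SQL> select table_name, tablespace_name from dba_tables where owner = 'TEST5';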
3.9 REMAP_SCHEMA: this option loads all objects from the source schema into the target schema. Here we export the tables under dave and import them into bl.
[IMPDP] migrating data between different users: the REMAP_SCHEMA parameter
$ expdp test5/test3@Book directory=backup dumpfile=user.dmp logfile=user.log schemas=dave
$ impdp test5/test3@Book directory=backup dumpfile=user.dmp logfile=user.log remap_schema=dave:bl
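A hedged confirmation that the objects landed under the target schema:
SQL> select object_name, object_type from dba_objects where owner = 'BL';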
3.12 TRANSPORT_DATAFILES: this option is used for transportable tablespaces; it specifies the data files to be plugged into the target database when a tablespace is transported.
The procedure for this method is as follows:
(1) Set the tablespace to read only, then copy all of its data files to the target database. The files can be renamed during the copy.
(2) Export the tablespace in transport mode. Note: in this step only metadata, that is, object definitions, is written to the dump file; no table data goes into it. The actual data travels with the files copied in step (1).
(3) Import the metadata. (4) Set the tablespace back to read write.
The metadata is imported from the dump file, and Data Pump reads the actual data from the data files we specify; their paths must be absolute. Let's look at an example:
1. Add a data file to the tablespace: SQL> alter tablespace test4 add datafile 'd:\test44.dbf' size 10m;
2. Copy the files to the corresponding location of the other instance. Set the tablespace to read only before moving them: SQL> alter tablespace test4 read only;
Move all data files of the tablespace to the other instance; they can be renamed along the way (for example, copying dave01.dbf to bl03.dbf). I am using the same instance here, so I simply copy the data file test44.dbf just added to 'e:\test44.dbf'; after the expdp completes, the tablespace is dropped and then imported. $ cp d:\test44.dbf e:\test44.dbf
3. Export the metadata with expdp: $ expdp test4/test3@Book directory=backup dumpfile=test4.dmp transport_tablespaces=test4
4. Import the data. First drop the tablespace so that the import can plug it back in: SQL> drop tablespace test4 including contents and datafiles;
$ impdp test4/test3@Book directory=backup dumpfile=test4.dmp transport_datafiles='d:\test4.dbf','e:\test44.dbf'
Note: the tablespace being transported must not already exist in the target instance, or the data cannot be imported. If there are many data files, you can also write them into a parameter file and pass it with PARFILE during the import.
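A hedged sketch of such a parameter file; the file name tts.par is an assumed example:
$ cat tts.par
directory=backup
dumpfile=test4.dmp
transport_datafiles='d:\test4.dbf','e:\test44.dbf'
$ impdp test4/test3@Book parfile=tts.par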
5. Set the tablespace back to read write mode:
SQL> alter tablespace test4 read write;
SQL> select tablespace_name, status from dba_tablespaces;
Notes for transport_datafiles:
(1) Copy all data files of the tablespace to the target database: copy d:\test_test.dbf e:\test_test1.dbf;
(2) Set the tablespace to read only before copying: alter tablespace test_test read only;
(3) The data file can be renamed during the copy, so transport_datafiles can also be used to rename and move data files. On the target, drop the existing tablespace first: drop tablespace test_test including contents and datafiles;
(4) After the transport_datafiles import completes, do not forget to set the tablespace back to read/write mode. impdp test5/test3@Book directory=backup dumpfile=TEST222.DMP transport_datafiles='d:\test_test.dbf','e:\test_test1.dbf';
With the classic exp utility: C:\> exp \"sys/ymh as sysdba\" file=d:/tbs.dmp tablespaces=(data_tbs,idx_tbs) transport_tablespace=y tts_full_check=y => exports the two tablespaces together without row data (tablespaces=(data_tbs,idx_tbs): the tablespaces to export; transport_tablespace=y: export metadata only; tts_full_check=y: perform a full self-containment check).
Common errors: (1) ORA-29335: tablespace 'DATA_TBS' is not read only. The tablespace being exported must be read only, or this error occurs. (2) ORA-29341: transportable set is not self-contained. Solutions:
1. Transport all dependent tablespaces together. In this example, either tablespace alone fails the self-containment check, but the two tablespaces checked together pass. Run the check with:
SQL> execute sys.dbms_tts.transport_set_check('data_tbs', TRUE, TRUE);
SQL> SELECT * FROM sys.transport_set_violations;
2. Make the tablespace self-contained: for example, drop an index that lives in another tablespace and re-create it in the target database after the import, or rebuild the index into the same tablespace as its table's data, and then export the tablespace.
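As a hedged illustration of this fix (the index and tablespace names are hypothetical):
SQL> alter index scott.emp_idx rebuild tablespace data_tbs;  -- move the index into the table's tablespace
or drop it and re-create it in the target database after the import:
SQL> drop index scott.emp_idx;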
Note: if a table was created under the sys or system user, transporting its tablespace raises the error "transportable set is not self-contained", and the two tablespaces cannot be transported together. It is therefore best not to create tables under sys or system in any tablespace you intend to export, whether it is a built-in or a newly created tablespace. EXP-00008: ORACLE error 29341 encountered; ORA-29341: transportable set is not self-contained; ORA-06512: at "SYS.DBMS_PLUGTS", line 1387
(3) IMP-00053: import mode incompatible with export dump file. Possible causes: 1. the storage format may differ across platforms (not tested); 2. on the same platform, make sure the export and import parameters match. If you forget to add transport_tablespace=y during the export, adding the parameter only at import time does not help; the result is: IMP-00053: import mode incompatible with export dump file, IMP-00000: Import terminated unsuccessfully.
(4) ORA-27041: unable to open file; O/S-Error: (OS 2) The system cannot find the file specified. (5) ORA-19722: data file D:\oracle\oradata\DATA_TBS is an incorrect version.
The tablespace is set to read only to guarantee data consistency, so it may be set back to read write in the source database only after its data files have been copied; otherwise errors (4) and (5) occur when importing into the target database.
(6) PLS-00201: identifier 'DBMS_PLUGTS.NEWTABLESPACE' must be declared. If this error occurs when importing as a normal user or as system, import as 'sys/ymh as sysdba' instead; you can also add the parameter TTS_OWNERS=scott (TTS_OWNERS has not been tested in detail). (7) OSD-04002: unable to open file; O/S-Error: (OS 123) The filename, directory name, or volume label syntax is incorrect. When there are multiple data files, list them separated by commas, datafile=XXX,XXX; do not wrap the whole list in quotation marks, or this error occurs.
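Combining notes (6) and (7), a hedged sketch of the classic imp invocation for plugging the tablespaces back in (the data file paths are assumed placeholders):
C:\> imp \"sys/ymh as sysdba\" file=d:/tbs.dmp transport_tablespace=y datafiles=(d:\data_tbs.dbf,d:\idx_tbs.dbf) tts_owners=scott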
 
 
This article is from the "technical knowledge exchange" blog
