Importing an Oracle 11g Database into an Oracle 10g Database

Source: Internet
Author: User
Tags: create directory

expdp userid='scott/[email protected] as sysdba' schemas=scott directory=data_pump_dir dumpfile=dp.dmp logfile=dp.log version=10.2.0.1.0

CREATE TABLESPACE houqin_tablespace DATAFILE 'D:\dzj\orcl\TEST01111.DBF' SIZE 100M AUTOEXTEND ON NEXT 5M MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

CREATE TEMPORARY TABLESPACE test11_temp TEMPFILE 'D:\dzj\orcl\TEST01111_temp.dbf' SIZE 100M AUTOEXTEND ON NEXT 3M MAXSIZE 2048M EXTENT MANAGEMENT LOCAL;

CREATE USER hous IDENTIFIED BY hous DEFAULT TABLESPACE test11 TEMPORARY TABLESPACE test11_temp;

impdp userid='scott11/[email protected] as sysdba' schemas=scott directory=data_pump_dir dumpfile=dp.dmp logfile=lg.log version=10.2.0.1.0

impdp userid='houqin/[email protected] as sysdba' schemas=houqin directory=data_pump_dir dumpfile=dp.dmp logfile=lg.log version=10.2.0.1.0

--View the number of tables

SELECT COUNT(*) FROM dba_tables WHERE owner = 'ES2012';

impdp userid='es2012/[email protected] as sysdba' schemas=es2012 directory=data_pump_dir dumpfile=haiyang10g.dmp logfile=dp.log version=10.2.0.1.0

Because of compatibility issues when importing data exported on 11g into a 10g database, the expdp and impdp commands are used as follows:

1. On 11g, export to the directory data_pump_dir using expdp, specifying the version number as the 10g release.

View the local location of data_pump_dir with: SELECT * FROM dba_directories;

For example, execute in cmd: expdp userid='scott/[email protected] as sysdba' schemas=scott directory=data_pump_dir dumpfile=dp.dmp logfile=dp.log version=10.2.0.1.0

2. Copy the .dmp file from the data_pump_dir directory on the 11g server to the data_pump_dir directory of the 10g database.

3. On the target database, create a user with the same username and password as the exported schema, and grant it DBA authority.
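A minimal sketch of this step, assuming the exported schema was scott with password tiger (both names are placeholders; substitute your own):

```sql
-- Create the target user with the same name/password as the exported schema
CREATE USER scott IDENTIFIED BY tiger;

-- Grant DBA authority so the import can create all object types
GRANT CONNECT, RESOURCE, DBA TO scott;
```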

For example, execute in cmd:

impdp userid='scott/[email protected] as sysdba' schemas=scott directory=data_pump_dir dumpfile=dp.dmp logfile=lg.log version=10.2.0.1.0

If the import reports an error that tablespace ** does not exist,

you need to create that tablespace first. Look up where the existing datafiles are stored; the new .DBF file can be placed alongside them.

Execute in PL/SQL:

CREATE TABLESPACE houqin_tablespace DATAFILE 'D:\dzj\orcl\TEST01111.DBF' SIZE 100M AUTOEXTEND ON NEXT 5M MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

4. Run the import again in cmd. The import is only truly complete when it finishes with a success message and no errors or warnings.

Reference 1:

Notes on importing an Oracle 11g database into an Oracle 10g database

A backup taken on 11g will throw errors when imported into 10g, blocking the import outright.

But sometimes you have to import an 11g database into 10g, which is the situation I ran into today.

At first I simply backed up all the tables on 11g and imported them into 10g successfully, but views, stored procedures, and so on were missing. Recreating them one SQL statement at a time is exhausting, and the method is far too clumsy. After consulting a senior DBA, I re-backed up and re-imported, and it worked.

Here I record the whole process of the operation, in the hope that it will be convenient for others.

1. On the 11g server, use the expdp command to back up the data

expdp userid='sys/[email protected] as sysdba' schemas=sybj directory=data_pump_dir dumpfile=aa.dmp logfile=aa.log version=10.2.0.1.0

The highlighted parts should be changed as needed. For example, my sys password is cuc2009, the database SID is CUC, the schema to export is sybj, and the target is a 10.2.0.1.0 Oracle database. aa.dmp and aa.log will be generated in the 11g dpdump directory; my 11g is installed on the E: drive, so aa.dmp is generated in the E:\app\Administrator\admin\cuc\dpdump directory.

2. On the 10g server, use the impdp command to restore the data

Preparation: 1. Create the database. 2. Create the tablespace. 3. Create the user and grant privileges. 4. Copy aa.dmp to the 10g dpdump directory.

For steps 1-3, refer to the blogger's earlier post "Oracle database Transplant Full-step", which explains them in detail, so I won't repeat them here. As for step 4: my 10g was installed under E:\tools, so I copied the aa.dmp file to the E:\tools\admin\cucf\dpdump directory.
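A minimal sketch of preparation steps 2 and 3, assuming a tablespace named sybj_data for the sybj user (all names, paths, and sizes below are placeholders, not values from the referenced post):

```sql
-- 2. Create a tablespace for the imported schema
CREATE TABLESPACE sybj_data
  DATAFILE 'E:\tools\oradata\cucf\SYBJ_DATA01.DBF' SIZE 100M
  AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED;

-- 3. Create the user and grant privileges
CREATE USER sybj IDENTIFIED BY sybj DEFAULT TABLESPACE sybj_data;
GRANT CONNECT, RESOURCE, DBA TO sybj;
```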

impdp userid='sys/[email protected] as sysdba' schemas=sybj directory=data_pump_dir dumpfile=aa.dmp logfile=aa.log version=10.2.0.1.0

The highlighted parts should be changed as needed. For example, my sys password is cuc2009, the database SID is CUCF, the schema to import is sybj, and the dump was exported for the 10.2.0.1.0 version of the Oracle database. aa.log will be generated in the 10g dpdump directory.

Every step here was carried out by hand, so it should work without problems. If friends have better approaches, please share them; in an open-source atmosphere, technology grows fastest!

Reference 2:

EXPDP and IMPDP

September 26, 2010

17:46

Today I learned about the Oracle 10g export and import tools expdp and impdp; here is a summary for easy reference.

Using EXPDP

1. The data pump includes the following parts:

The command-line clients, EXPDP and IMPDP

The DBMS_DATAPUMP PL/SQL package (also known as the Data Pump API)

The DBMS_METADATA PL/SQL package (also known as the Metadata API)

2. expdp and impdp are incompatible with the original exp and imp; that is, a file exported with exp cannot be imported with impdp.

3. Data pump does not support XML schemas

4. You must first create a directory before using it:

SQL> CREATE OR REPLACE DIRECTORY dpump_dir AS '/oracle/oradata/orcl10/pumpdata';

Query directory: dba_directories

To improve performance, Oracle Data Pump reads and writes data by a direct path through directory objects defined in the database.

This is a major difference between Data Pump and imp: with imp, the import runs on the client and the dmp file can sit on the client, whereas Data Pump files always reside on the database server side.

5. Grant read and write permission on the directory to the user performing the export or import:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir TO scott;

6. Data Pump runs as a server-side job, so after starting it you can leave the client with EXIT_CLIENT. The DBA_DATAPUMP_JOBS view lets you check the status of a Data Pump job, and you can use ATTACH to reconnect to a job that is still in progress. Each job can be named with the JOB_NAME parameter; if none is given, a default name is used, such as SYS_EXPORT_TABLE_01 in the example above. V$SESSION_LONGOPS also shows the details of a long-running Data Pump job.
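A sketch of the detach/re-attach workflow described above (the job name and credentials are placeholders):

```shell
# Start a named export job
expdp scott/tiger schemas=scott directory=dpump_dir dumpfile=scott.dmp job_name=my_exp

# At the Export> prompt (reached with Ctrl+C):
#   Export> exit_client        -- detach; the job keeps running on the server

# Later, reconnect to the still-running job and check it:
expdp scott/tiger attach=my_exp
#   Export> status
```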

7. The following parameters affect Data Pump performance:

DISK_ASYNCH_IO = TRUE

DB_BLOCK_CHECKING = FALSE

DB_BLOCK_CHECKSUM = FALSE

8. Set the following parameters high enough to allow the maximum degree of parallelism:

PROCESSES

SESSIONS

PARALLEL_MAX_SERVERS

9. The following should be sized generously:

SHARED_POOL_SIZE

UNDO_TABLESPACE
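A hedged sketch of how such parameters might be adjusted before a large Data Pump job (the values are illustrative only, not recommendations; re-enable the checks afterwards):

```sql
-- Reduce per-block overhead during the load
ALTER SYSTEM SET db_block_checking = FALSE;
ALTER SYSTEM SET db_block_checksum = FALSE;

-- Static parameter: takes effect only after a restart
ALTER SYSTEM SET disk_asynch_io = TRUE SCOPE = SPFILE;
```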

Experiments and Examples:

==================================================================

Create a directory; dumpdir is a name you choose:

SQL> CREATE DIRECTORY dumpdir AS 'E:\datadump';

Drop the directory:

SQL> DROP DIRECTORY dumpdir;

SQL> GRANT READ, WRITE ON DIRECTORY dumpdir TO scott;

Export a schema (scott):

C:\>expdp scott/tiger dumpfile=scott.dmp directory=dumpdir schemas=scott

SQL> CREATE USER lihui IDENTIFIED BY lihui;

SQL> GRANT CREATE SESSION, RESOURCE TO lihui;

SQL> GRANT READ, WRITE ON DIRECTORY dumpdir TO lihui;

SQL> GRANT CREATE DATABASE LINK, CREATE SYNONYM, CREATE VIEW TO lihui;

SQL> GRANT IMP_FULL_DATABASE TO scott;

Import into the lihui schema:

C:\>impdp scott/tiger directory=dumpdir dumpfile=scott.dmp logfile=scott.log remap_schema=scott:lihui

-- The remap_schema parameter imports the contents of scott into the lihui schema

The speed of impdp imports is related to parallel operation:

SQL> show parameter cpu

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cpu_count                            integer     2
parallel_threads_per_cpu             integer     2

Exporting with multiple threads via the parallel parameter significantly accelerates the job. Each thread creates a separate dump file, so dumpfile should list as many files as the degree of parallelism. You can use a wildcard in the filename instead of entering each filename explicitly, e.g.:

expdp lihui/lihui tables=(emp,dept) directory=dumpdir dumpfile=test_%u.dmp parallel=2 job_name=test

-- %u is a wildcard; the parallel parameter sets the degree of parallelism

Database monitoring:

The main views for monitoring jobs:

DBA_DATAPUMP_SESSIONS: monitors the sessions attached to a running job.

DBA_DATAPUMP_JOBS: monitors how many worker processes (the DEGREE column) are working on each job.

You can also view information about the process through the alert log file.
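The monitoring views above can be queried directly; a minimal sketch:

```sql
-- List Data Pump jobs and their degree of parallelism
SELECT owner_name, job_name, operation, state, degree
  FROM dba_datapump_jobs;

-- Sessions currently attached to Data Pump jobs
SELECT * FROM dba_datapump_sessions;
```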

Working with specific objects:

e.g

To export only stored procedures without exporting other objects:

expdp scott/tiger directory=dumpdir dumpfile=pros.dmp include=procedure

Export only some specific objects (such as function FUNC1 and procedure PROC1)

expdp scott/tiger directory=dumpdir dumpfile=pro_fun.dmp include=procedure:"='PROC1'",function:"='FUNC1'"

Full database export:

expdp system/sys directory=dumpdir dumpfile=full.dmp full=y job_name=expdpfull logfile=full.log

Pressing Ctrl+C enters interactive mode, but the expdp operation does not stop, because expdp is a job defined inside the database and is independent of the client. After detaching you can view the export status:

export> status

To actually stop the job, use the stop_job command.
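A sketch of the interactive commands involved, run at the Export> prompt (the job name test matches the earlier example; adjust to your own):

```shell
#   Export> status                 -- show job progress
#   Export> stop_job=immediate     -- stop the job now

# Re-attach later and resume the stopped job:
expdp lihui/lihui attach=test
#   Export> start_job
```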

expdp appears to use the client's time for the export while the import uses the server-side time, which should probably be considered a bug.

Problem:

If the statement itself is correct but the export still fails, with errors like the following:

C:\>expdp scott/[email protected] dumpfile=scott.dp directory=dumpdir schemas=scott logfile=scott.log

Export: Release 10.2.0.3.0 - Production on Thursday, 18 December, 2008 14:05:46

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production

With the Partitioning, OLAP, Data Mining and Real Application Testing options

ORA-39002: Invalid operation

ORA-39070: Unable to open log file.

ORA-29283: Invalid file operation

ORA-06512: at "SYS.UTL_FILE", line 475

ORA-29283: Invalid file operation

The problem lies with the directory object: the path behind dumpdir does not exist on the physical filesystem. Check SELECT * FROM dba_directories and verify that the path dumpdir points to actually exists.

This article from: http://blog.sina.com.cn/s/blog_a94efaf801018mwe.html

