Oracle 11g data export and import operations, and issues that may arise when importing higher-version data into a lower-version database

Source: Internet
Author: User

1. Preface

Prior to 10g, export and import were performed with the traditional exp and imp tools. Starting with 10g, Oracle not only retained exp and imp but also introduced the Data Pump export and import tools, expdp and impdp. Therefore, in 11g we have two ways to export and import a database: the traditional mode and the Data Pump mode.

Traditional mode is further divided into conventional export/import and direct-path export/import.

The following sections take export as an example and explain the principle behind each mode.

1.1 Brief introduction to the export modes

1.1.1 Conventional export

The conventional path mode uses a SQL SELECT statement to extract table data. Data blocks are read from disk into the buffer cache, and rows are transferred to an evaluation buffer. The records matching the SQL expression are then returned to the export client and written to the dump file.

Example of a conventional export:

exp zhejiang/zhejiang file=/data1/zj_regular.dmp buffer=20480000

1.1.2 Direct-path export

In direct-path export mode, data is read from disk directly into the PGA of the export session, and rows are transferred straight to the session's private buffer, skipping the SQL command-processing layer and avoiding unnecessary data conversion. The records are then returned to the export client and written to the dump file.

Example of a direct-path export:

exp zhejiang/zhejiang file=/data1/zj_direct.dmp buffer=20480000 recordlength=65535 direct=y

1.1.3 Data Pump export

The Data Pump method is an enhanced version of exp. Its principle is similar, but the export relies on a directory object defined on the database server.
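For comparison with the exp examples above, a minimal Data Pump equivalent might look as follows; this is a sketch that assumes a directory object named expdp_dir has already been created and granted (as shown in section 2.1), and the file names are placeholders:

```shell
# Data Pump export of the same user's data; the dump lands in the
# server-side path mapped by the expdp_dir directory object
expdp zhejiang/zhejiang directory=expdp_dir dumpfile=zj_pump.dmp logfile=zj_pump.log
```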

1.2 Comparison of traditional mode and Data Pump mode

1.2.1 Advantages of traditional mode

exp and imp are client-side tools that can be run either on the client or on the server.

expdp and impdp are server-side utilities; they can only be used on the Oracle server, not on the client side.

1.2.2 Advantages of Data Pump mode

Data Pump exports data in a shorter time, and the exported dump file is smaller.

The following table shows a benchmark shared by an online acquaintance:

Mode                               Result (exporting about 40 GB of data)
Traditional mode (direct export)   18 minutes
Data Pump mode                     14 minutes; dump file about 1.5 GB smaller than traditional mode

1.3 Note

imp can only read dump files produced by exp, not those produced by expdp; likewise, impdp can only read dump files produced by expdp, not those produced by exp.
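These pairings can be illustrated with a minimal sketch; the credentials and file names are placeholders, and the expdp/impdp lines assume a directory object named expdp_dir already exists on the server:

```shell
# exp dumps are read back only with imp
exp scott/tiger file=classic.dmp
imp scott/tiger file=classic.dmp

# expdp dumps are read back only with impdp
expdp scott/tiger directory=expdp_dir dumpfile=pump.dmp
impdp scott/tiger directory=expdp_dir dumpfile=pump.dmp
```

Mixing the tools (e.g. feeding classic.dmp to impdp) fails, because the two dump formats are incompatible.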

1.4 Summary

As the above shows, in practice we choose the server-side Data Pump to import data faster. The rest of this article focuses on the Data Pump method of import and export.

2. Data Pump export workflow

2.1 Create a logical directory and grant privileges

Create the logical directory expdp_dir and map it to the physical path D:\DBbak by executing the following statement in PL/SQL:

create or replace directory expdp_dir as 'D:\DBbak';

Note that this statement only creates the directory object; you must then create the corresponding folder on the hard disk yourself.

You can verify that the logical directory was created successfully with the following query:

select * from dba_directories;

Finally, grant read and write privileges on the created directory:

grant read, write on directory expdp_dir to public;

2.2 Performing an export operation

Create an Expdp.bat file with the following content, then run it:

expdp sccot/[email protected] directory=expdp_dir dumpfile=STDCG_201501.dmp logfile=STDCG_201501.log

Because the entire database is exported, none of the tables, schemas, or tablespaces parameters is specified.
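For reference, a scoped export would add one of those parameters. A sketch, assuming the same directory object; the password, schema, and table names here are placeholders:

```shell
rem export a single schema
expdp sccot/password@STDCG directory=expdp_dir schemas=dlgis dumpfile=dlgis.dmp

rem export specific tables only
expdp sccot/password@STDCG directory=expdp_dir tables=dlgis.some_table dumpfile=some_table.dmp
```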

After the export succeeds, the dump file is automatically placed in the expdp_dir folder you configured earlier.

3. Data Pump import workflow

3.1 Creating the database instance and other basic operations

A. Create the database STDCG, which includes the two users SYS and SYSTEM.

B. Log in as SYS and, in PL/SQL (File -> Open -> SQL script), run the script that creates the tablespaces. Note that the paths in the script must be modified:

datafile 'F:\app\DELL\oradata\stdcg\DLGIS.DBF' (the highlighted part is the path on the local database server and should be adjusted accordingly)

C. Run the create-user script, which creates the users Dlsys, Dlmis, Dlinit, Dlgis, SDE, MMS, Umstat, and Ulog.

3.2 Creating a logical directory and granting privileges

This is the same as described in section 2.1; the statements are repeated here:

create or replace directory expdp_dir as 'D:\DBbak';

grant read, write on directory expdp_dir to public;

There is one additional step: after creating the DBbak folder on the D drive, copy the exported database files STDCG_201501.dmp and STDCG_201501.log into the D:\DBbak directory.

3.3 Performing an import operation

Create an Impdp.bat file with the following content, then run it:

impdp dlsys/[email protected] directory=expdp_dir dumpfile=STDCG_201501.dmp logfile=STDCG_201501_exp.log

Again, because it is a full-database import, none of the tables, schemas, or tablespaces parameters is used to restrict what is imported.
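If only part of the dump were needed, the same parameters could scope the import. A sketch under the same directory setup; the password and schema name are placeholders:

```shell
rem import only one schema from the full-database dump
impdp dlsys/password@STDCG directory=expdp_dir dumpfile=STDCG_201501.dmp schemas=dlgis logfile=dlgis_imp.log
```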

4. Importing 11g data into a 10g database

4.1 Export at a reduced version, then import

First perform the same steps as a normal Data Pump export, i.e. create the logical directory:

create or replace directory expdp_dir as 'D:\DBbak';

Then, when exporting the data, add the version parameter to the command:

expdp sccot/[email protected] dumpfile=stdcg10g.dmp directory=expdp_dir version=10.0.2

After exporting, import the resulting dump file into the 10g database.
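The import on the 10g side follows the same pattern as section 3; a sketch, assuming a directory object has been created and granted on the 10g server and the dump file copied into its path (the credentials and connect string are placeholders):

```shell
rem on the 10g server, import the version-limited dump produced above
impdp username/password@STDCG10G directory=expdp_dir dumpfile=stdcg10g.dmp logfile=stdcg10g_imp.log
```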

4.2 Problems that tend to arise

So far, both the Jiangbei project and the Yantai project have gone through the following: SHP data was first imported into an 11g database; because the on-site database suddenly had to be downgraded, the 11g data was then exported and re-imported into 10g. However, different problems appeared at the two sites.

The problem at the Jiangbei site: the 10g database could be connected directly in ArcCatalog, and the spatial data was visible. However, when adding spatial data from the database in ArcMap, rendering the layer raised an SDE internal error, which prevented the layer from being displayed.

At the Yantai site, the direct database connection itself failed: the connection test succeeds, but after clicking OK the connection cannot be established.

4.3 Solution and analysis

4.3.1 Solution

Drop the Dlgis and SDE users that hold the spatial data, recreate the two users via SQL, and build the table structures first. Then connect to the database manually through ArcCatalog and import the layers manually; this resolves the problem.

4.3.2 Problem analysis

It is assumed that when 11g data is converted to 10g data, some spatial-information tables may be changed or corrupted so that they no longer conform to the ArcGIS 10.0 connection specification, causing the SDE to be unstable or to raise internal errors.
