Oracle Database logical backup shell script tutorial


This tutorial covers the shell scripts used for logical backups of an Oracle database.

Full backup script: exp_comp.sh

rq=`date +"%m%d"`
su - oracle -c "exp system/manager full=y inctype=complete file=/oracle/export/db_comp$rq.dmp"

Cumulative backup script: exp_cumu.sh

rq=`date +"%m%d"`
su - oracle -c "exp system/manager full=y inctype=cumulative file=/oracle/export/db_cumu$rq.dmp"

Incremental backup script: exp_incr.sh

rq=`date +"%m%d"`
su - oracle -c "exp system/manager full=y inctype=incremental file=/oracle/export/db_incr$rq.dmp"

Root user crontab file

Add the following lines to /var/spool/cron/crontabs/root:

0 2 1 * * /oracle/exp_comp.sh
30 2 * * 0-5 /oracle/exp_incr.sh
45 2 * * 6 /oracle/exp_cumu.sh

This schedule is only an example, of course; adjust it to your own requirements.
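As a sketch of how such a script can be extended, the date-stamped filename used by the scripts above can be combined with a simple retention policy (the /oracle/export path and credentials are the tutorial's examples; the export line itself is left commented out, since it needs a real Oracle installation):

```shell
# Build the date-stamped dump name used by the backup scripts above.
rq=`date +"%m%d"`
dumpfile="/oracle/export/db_comp${rq}.dmp"
echo "$dumpfile"

# The export itself would then run as the oracle user, e.g.:
# su - oracle -c "exp system/manager full=y inctype=complete file=$dumpfile"

# Optional retention: remove dumps older than 31 days.
# find /oracle/export -name 'db_*.dmp' -mtime +31 -exec rm -f {} \;
```

Because the date stamp only encodes month and day, a dump is overwritten after one year unless old files are pruned or the stamp is extended (e.g. `date +"%Y%m%d"`).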


Summary of logical backup export and import methods

[EXP common parameters]

USERID: username/password; must be the first parameter;

FILE: path of the output dump file;

LOG: path of the log file;

TABLES: export only the listed tables;

FULL: export the entire database (default N);

ROWS: export data rows (default Y);

QUERY: select clause used to export a subset of a table;

[Example]

1. [full database mode] export the entire database orcl as user scott (password scott); dump file D:/orcl/scott.dmp, log file D:/orcl/scott.log:

exp scott/scott@orcl file=D:/orcl/scott.dmp log=D:/orcl/scott.log full=y

2. [table mode] export the emp and dept tables to D:/orcl/scott_empdept.dmp:

exp scott/scott@orcl file=D:/orcl/scott_empdept.dmp tables=(emp,dept)

3. [user mode] export all data of users ng_lxj1 and ng_lxj2 in database 243 to D:/orcltest/ng_lxj_user.dmp:

exp system/manager@243 file=D:/orcltest/ng_lxj_user.dmp owner=(ng_lxj1,ng_lxj2)

4. Export the rows of table emp whose SAL column is greater than 1000 to D:/orcl/scott_emp.dmp:

exp scott/scott@orcl file=D:/orcl/scott_emp.dmp tables=(emp) query=\"where sal>1000\"

5. Export only the table-creation DDL of table emp, without data rows, to D:/orcl/scott_empddl.dmp:

exp scott/scott@orcl file=D:/orcl/scott_empddl.dmp tables=emp rows=n

 

[All parameters]

USERID               username/password
FULL                 export entire file (N)
BUFFER               size of data buffer
OWNER                list of owner usernames
FILE                 output file (EXPDAT.DMP)
TABLES               list of table names
COMPRESS             import into one extent (Y)
RECORDLENGTH         length of IO record
GRANTS               export grants (Y)
INCTYPE              incremental export type
INDEXES              export indexes (Y)
RECORD               track incremental export (Y)
DIRECT               direct path (N)
TRIGGERS             export triggers (Y)
LOG                  log file of screen output
STATISTICS           analyze objects (ESTIMATE)
ROWS                 export data rows (Y)
PARFILE              parameter filename
CONSISTENT           cross-table consistency (N)
CONSTRAINTS          export constraints (Y)
OBJECT_CONSISTENT    transaction set to read only during object export (N)
FEEDBACK             display progress every x rows (0)
FILESIZE             maximum size of each dump file
FLASHBACK_SCN        SCN used to set session snapshot back to
FLASHBACK_TIME       time used to get the SCN closest to the specified time
QUERY                select clause used to export a subset of a table
RESUMABLE            suspend when a space-related error is encountered (N)
RESUMABLE_NAME       text string used to identify resumable statement
RESUMABLE_TIMEOUT    wait time for RESUMABLE
TTS_FULL_CHECK       perform full or partial dependency check for TTS
TABLESPACES          list of tablespaces to export
TRANSPORT_TABLESPACE export transportable tablespace metadata (N)
TEMPLATE             template name which invokes iAS mode export

 

[IMP common parameters]

USERID: username/password; must be the first parameter;

FILE: path of the input dump file;

LOG: path of the log file;

IGNORE: ignore create errors (default N). If a table to be imported already exists and IGNORE=Y, no error is reported and only the data is inserted; with IGNORE=N the import of that table fails. If the table does not exist, it is created automatically. Note that the table is created in the same tablespace it was exported from; if that tablespace does not exist in the target database, an error is returned;

FROMUSER: list of owner usernames;

TOUSER: list of target usernames;

TABLES: import only the listed tables;

FULL: import the entire file (default N);

ROWS: import data rows (default Y);

 

[Example]

1. [full database mode] import the backup file into database orcl as user scott (password scott); dump file D:/orcl/scott.dmp, log file D:/orcl/scott.log:

imp scott/scott@orcl file=D:/orcl/scott.dmp log=D:/orcl/scott.log full=y ignore=y

2. [table mode] import the emp and dept tables from the backup file D:/orcl/scott.dmp into database orcl:

imp scott/scott@orcl file=D:/orcl/scott.dmp log=D:/orcl/scott.log ignore=y tables=(emp,dept)

If the file was exported in table mode and all of its tables should be imported, full=y also works, for example:

imp scott/scott@orcl file=D:/orcl/scott_empdept.dmp ignore=y full=y

3. [user mode] the backup file contains the data of users ng_lxj1 and ng_lxj2; its path is D:/orcltest/ng_lxj_user.dmp.

Import the data of ng_lxj1 into ng_lxj:

imp system/manager@243 file=D:/orcltest/ng_lxj_user.dmp fromuser=ng_lxj1 touser=ng_lxj

Import the data of ng_lxj1 and ng_lxj2 into ng_lxj:

imp system/manager@243 file=D:/orcltest/ng_lxj_user.dmp fromuser=(ng_lxj1,ng_lxj2) touser=(ng_lxj,ng_lxj)

Note that ng_lxj must be written twice, so that each FROMUSER entry has a matching TOUSER entry. If a FROMUSER entry has no matching TOUSER entry, Oracle does not remap it and imports its data back into the original schema, for example:

imp system/manager@243 file=D:/orcltest/ng_lxj_user.dmp fromuser=(ng_lxj1,ng_lxj2)

This command imports the data of ng_lxj1 into ng_lxj1 and the data of ng_lxj2 into ng_lxj2.

To import the data of ng_lxj1 into both ng_lxj1 and ng_lxj2, the command cannot be written as:

imp system/manager@243 file=D:/orcltest/ng_lxj_user.dmp fromuser=(ng_lxj1,ng_lxj1) touser=(ng_lxj1,ng_lxj2)

Otherwise, an error is reported:

IMP-00034: Warning: FromUser "NG_LXJ1" not found in export file

Presumably Oracle reads each user's data from the dump file only once and cannot rewind to it for a second mapping.

Therefore, to import one user's data into two users, write two separate command statements.
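The two separate runs described above can be sketched as follows (system/manager and the @243 alias are the tutorial's example credentials; the commands are only assembled here as strings, not executed, since they need a live database):

```shell
# One imp run per target user; the dump file is read from the
# beginning on each run, so the second mapping succeeds.
dmp="D:/orcltest/ng_lxj_user.dmp"
cmd1="imp system/manager@243 file=$dmp fromuser=ng_lxj1 touser=ng_lxj1"
cmd2="imp system/manager@243 file=$dmp fromuser=ng_lxj1 touser=ng_lxj2"
echo "$cmd1"
echo "$cmd2"
```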

Import table t1 of user ng_lxj1 into user ng_lxj2:

imp system/manager@243 file=D:/orcltest/ng_lxj_user.dmp fromuser=ng_lxj1 touser=ng_lxj2 tables=t1

4. Import only the table-creation DDL, not the data rows, from the backup file D:/orcl/scott.dmp:

imp scott/scott@orcl file=D:/orcl/scott.dmp full=y ignore=y rows=n

5. Use a parameter file:

imp system/manager@243 parfile=bible_tables.par

bible_tables.par contains:

file=D:/orcltest/ng_lxj_user.dmp fromuser=ng_lxj1 touser=ng_lxj
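A parameter file is plain text, one parameter per line, so it can be generated from a script before calling imp. A minimal sketch using the tutorial's example values (the imp call itself is left commented out):

```shell
# Write the parameter file, then hand it to imp with parfile=.
cat > bible_tables.par <<'EOF'
file=D:/orcltest/ng_lxj_user.dmp
fromuser=ng_lxj1
touser=ng_lxj
EOF

# imp system/manager@243 parfile=bible_tables.par
cat bible_tables.par
```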

 

[All parameters]

USERID               username/password
FULL                 import entire file (N)
BUFFER               size of data buffer
FROMUSER             list of owner usernames
FILE                 input file (EXPDAT.DMP)
TOUSER               list of usernames
SHOW                 just list file contents (N)
TABLES               list of table names
IGNORE               ignore create errors (N)
RECORDLENGTH         length of IO record
GRANTS               import grants (Y)
INCTYPE              incremental import type
INDEXES              import indexes (Y)
COMMIT               commit array insert (N)
ROWS                 import data rows (Y)
PARFILE              parameter filename
LOG                  log file of screen output
CONSTRAINTS          import constraints (Y)
DESTROY              overwrite tablespace data file (N)
INDEXFILE            write table/index info to specified file
SKIP_UNUSABLE_INDEXES  skip maintenance of unusable indexes (N)
FEEDBACK             display progress every x rows (0)
TOID_NOVALIDATE      skip validation of specified type ids
FILESIZE             maximum size of each dump file
STATISTICS           always import precomputed statistics
RESUMABLE            suspend when a space-related error is encountered (N)
RESUMABLE_NAME       text string used to identify resumable statement
RESUMABLE_TIMEOUT    wait time for RESUMABLE
COMPILE              compile procedures, packages, and functions (Y)
STREAMS_CONFIGURATION  import streams general metadata (Y)
STREAMS_INSTANTIATION  import streams instantiation metadata (N)
DATA_ONLY            import only data (N)

The following keywords only apply to transportable tablespaces:

TRANSPORT_TABLESPACE import transportable tablespace metadata (N)
TABLESPACES          tablespaces to be transported into database
DATAFILES            datafiles to be transported into database
TTS_OWNERS           users that own data in the transportable tablespace set

[P.S.]

1. A dump exported by a higher exp version cannot be read by a lower-version imp, but a dump exported by a lower version can be read by a higher version.

2. Data from an earlier Oracle release can be imported into a later release, but compatibility is only guaranteed between adjacent releases; to move between two non-adjacent releases, convert through an intermediate release.

3. When a dump exported from Oracle 10g contains LOB ("large field") columns and is imported into a 10g database through an Oracle 11g client, "IMP-00058" or "ORA-00942: table does not exist" is reported once per LOB table, but the data is still imported successfully in the end. To avoid the errors, switch to a 10g client, or use the 11g client to import into an 11g server database.

4. exp/imp can migrate data across Oracle versions and databases. When migrating between databases, a "tablespace does not exist" error may occur. Workarounds: create the tables in advance in a new tablespace and run imp with ignore=y so that only the data is imported; or, best, before exporting move the tables to be backed up into the USERS tablespace (the system default); or create a very small tablespace in the target database with the same name as the source tablespace.

5. If both the parent (primary key) table and the child (foreign key) table are present in the backup file, imp automatically recreates the constraints after importing the data. If only the child table is imported without the parent table, the data is imported successfully, but an error is reported and the foreign key constraint is lost.

6. Multi-table, multi-condition export: tables accepts multiple tables, but query cannot apply different conditions to different tables. Either write a separate exp command for each table, or use expdp, which can export multiple tables with different conditions.

7. Storage allocation may fail when importing a large table: exp defaults to compress=y, which coalesces the table's data into a single initial extent, and the import fails if no contiguous block of that size is available. When exporting a table larger than about 80 MB, remember to set compress=n to avoid this error.
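For example, a large-table export with extent compression turned off could look like this (the table and file names are illustrative; the command is only assembled as a string here, not run):

```shell
bigtab=emp
# compress=n keeps the original extent layout instead of coalescing
# all data into one initial extent, avoiding the allocation failure.
cmd="exp scott/scott@orcl file=D:/orcl/${bigtab}_big.dmp tables=$bigtab compress=n"
echo "$cmd"
```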

8. There are three modes for import and export (full database, user, table); each exp/imp run uses exactly one of them, and a file exported in full mode can only be imported in full mode;

9. When importing and exporting in user mode, USERID must be a high-privilege user, such as system/manager;

10. When a table has no LOB columns, the PL/SQL Developer export tool can also be used to move data. Note that the default date format of one database may differ from the other's, and the exported SQL insert statements cannot be used directly in DB2, because the handling of null and LOB columns differs.

11. Oracle 10g introduced the new Data Pump tools expdp/impdp, which provide high-speed, parallel migration of large volumes of data. imp/exp can be invoked from a client, but expdp/impdp can only be used on the server, because a directory object must be created in the database before they can be used.

 

Differences between Data Pump and traditional export/import

1. EXP and IMP are client tool programs that can be used either on the client or on the server. EXPDP and IMPDP are server-side tools; they can only be used on the Oracle server, not on a client.

2. EXP and IMP are less efficient; EXPDP and IMPDP are much faster.

3. Data Pump adds powerful features: parallelism, filtering, transformation, compression, encryption, and interactive job control.

4. Data Pump is not available in releases before Oracle 10g, so for older databases EXP/IMP will remain in use for some time.

5. Data Pump export has four modes: table, schema, tablespace, and full database, while exp has three (table, user, full database).

6. IMP only reads files exported by EXP, not by EXPDP; IMPDP only reads files exported by EXPDP, not by EXP.

[Expdp/impdp]

These commands can only be used on the server! That is, they must be run on the machine hosting the database you export from or import into!

Before using them, both the logical directory object and the physical operating-system directory must exist!

For example, to export the emp and dept tables of user scott from the local orcl database, follow these steps:

1) Create a logical directory

This statement does not create a real directory in the operating system; it is best executed as a system administrator.

Log on with SQL*Plus:

sqlplus system/manager@orcl as sysdba

Create the logical directory:

create directory dptest as 'D:\dptest';

2) Grant the exporting user access to the directory

grant read, write on directory dptest to scott;

3) Create the physical directory

Create a directory named dptest on drive D.
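On Linux the same step can be scripted. A minimal sketch (using a temporary path here instead of the tutorial's D:\dptest) that makes sure the operating-system directory exists before expdp is run:

```shell
# The directory object only records a path; expdp fails if the
# operating-system directory behind it does not actually exist.
dpdir="${TMPDIR:-/tmp}/dptest"
mkdir -p "$dpdir"
test -d "$dpdir" && echo "physical directory ready: $dpdir"
```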

4) Export the emp and dept tables

expdp scott/scott@orcl tables=emp,dept dumpfile=empdept_dp.dmp directory=dptest

[Common expdp parameters]

DIRECTORY: directory object used for dump and log files; must be created before the export;

DUMPFILE: name of the exported dump file. Without a directory prefix it defaults to the directory object given by DIRECTORY; another created directory object can also be specified;

CONTENT: what to export: all (default; DDL plus data), data_only (data only), or metadata_only (DDL only);

EXCLUDE: exclude specific object types, e.g. EXCLUDE=TABLE:EMP;

INCLUDE: include specific object types, e.g. INCLUDE=TABLE_DATA;

JOB_NAME: name of the export job to create; if omitted, a default name such as SYS_EXPORT_SCHEMA_01 is used;

LOGFILE: log file name (default export.log);

NOLOGFILE: do not write a log file (default N);

FULL: export the entire database (default N);

SCHEMAS: list of schemas to export (defaults to the logon schema);

TABLES: list of tables to export;

TABLESPACES: list of tablespaces to export;

QUERY: predicate clause used to export a subset of a table;

[Example]

1. [full database mode] export the local database to the created directory object dptest:

expdp system/manager@orcl dumpfile=orcl_dp.dmp directory=dptest full=y

2. [table mode] export the emp and dept tables of scott from the local database to the created directory object dptest:

expdp scott/scott@orcl tables=emp,dept dumpfile=empdept_dp.dmp directory=dptest

3. [user mode] export the scott schema of the local database to the created directory object dptest:

expdp scott/scott@orcl schemas=scott dumpfile=scott_dp.dmp directory=dptest

4. [tablespace mode] export the USERS and TEMP tablespaces of the local database to the created directory object dptest:

expdp system/manager@orcl dumpfile=users_temp_dp.dmp directory=dptest tablespaces=users,temp

5. Export the rows of table emp whose SAL column is greater than 1000 (multiple tables with different conditions are possible):

expdp scott/scott@orcl directory=dptest dumpfile=expdp.dmp tables=emp query='where sal>1000'

6. Parallel export:

expdp scott/scott@orcl directory=dptest dumpfile=scott1.dmp parallel=40 job_name=scott1

[Impdp common parameters]

DIRECTORY: directory object used for dump and log files; must be created before the import;

DUMPFILE: name of the dump file to import. Without a directory prefix it defaults to the directory object given by DIRECTORY; another created directory object can also be specified;

CONTENT: what to import: all (default; DDL plus data), data_only (data only), or metadata_only (DDL only);

EXCLUDE: exclude specific object types, e.g. EXCLUDE=TABLE:EMP;

INCLUDE: include specific object types, e.g. INCLUDE=TABLE_DATA;

JOB_NAME: name of the import job to create;

LOGFILE: log file name (default import.log);

NOLOGFILE: do not write a log file (default N);

FULL: import the entire file (default N);

SCHEMAS: list of schemas to import (defaults to the logon schema);

TABLES: list of tables to import;

TABLESPACES: list of tablespaces to import;

QUERY: predicate clause used to import a subset of a table;

REMAP_SCHEMA: load objects from one schema into another schema;

REMAP_TABLESPACE: remap objects from one tablespace into another; used to change tablespaces when moving data between databases;

[Example]

1. [full database mode] import the whole dump file, stored in the created directory object dptest, into the local database:

impdp system/manager@orcl dumpfile=orcl_dp.dmp directory=dptest full=y

2. [table mode] import the emp and dept tables from the dump file into schema scott of the local database:

impdp scott/scott@orcl tables=emp,dept dumpfile=empdept_dp.dmp directory=dptest

Import the dept table from the dump file into the local database and change its owner to system:

impdp system/manager dumpfile=empdept_dp.dmp directory=dptest tables=scott.dept remap_schema=scott:system

3. [user mode] import the dump file into the scott schema of the local database:

impdp scott/scott@orcl dumpfile=scott_dp.dmp directory=dptest schemas=scott

4. Change the tablespace:

impdp scott/scott@orcl dumpfile=scott_dp.dmp directory=dptest remap_tablespace=users:temp

[Example]

Export with expdp from one user and import with impdp into another user.

For example: server A runs Windows, server B runs Linux, the database user on both sides is test, and the data on server A is to be migrated to server B.

On server A:

1. Create the directory object and grant access:

SQL> create directory expdp_dir as 'D:\mzl\backup';

SQL> grant read, write on directory expdp_dir to test;

2. Create the directory D:\mzl\backup in Windows.

3. Export in a DOS command window:

expdp test/test directory=expdp_dir dumpfile=test.dmp logfile=testexpdp.log

On server B:

4. Create the directory object and grant access:

SQL> create directory impdp_dir as '/home/oracle/impdp_dir';

SQL> grant read, write on directory impdp_dir to test;

The directory /home/oracle/impdp_dir must exist in the file system, and impdp_dir must be readable and writable (chmod 777 impdp_dir).

5. Use ftp to upload the dump exported on server A to /home/oracle/impdp_dir on server B. (Alternatively, configure the net service name of database B on server A and run the import from there.)

6. Import in a command window:

impdp test/test@B_database directory=impdp_dir dumpfile=test.dmp logfile=testimpdp.log

(Mind case sensitivity: Linux file names are case-sensitive, so if the uploaded file arrives as TEST.DMP, the dumpfile value must match that case or the file must be renamed.)
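The case-sensitivity pitfall in the note above can be avoided by normalizing the uploaded file name before the import. A small sketch (TEST.DMP is a hypothetical uploaded name):

```shell
# Compute the lowercase name that impdp will be given; in the import
# directory one would then run: mv "$f" "$lower"
f="TEST.DMP"
lower=`echo "$f" | tr 'A-Z' 'a-z'`
echo "$lower"   # prints test.dmp
```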
