Oracle Basics: Database Backup and Recovery


First, why data backups are needed

The main reasons for data loss are:

1. Media failure.

2. User error.

3. Complete server collapse.

4. Computer viruses.

5. Unforeseen factors.

Failures in Oracle fall into the following four categories.

  1. Statement failure:

A statement failure occurs when a SQL statement fails logically during execution, for example when a user issues an invalid SQL statement. Oracle recovers from statement failures automatically: it undoes any effects the failed statement produced and returns control to the application.

  

 2. User Process failure

A user process failure occurs when a user program fails while accessing the Oracle database. It prevents only the current user from working with the database and does not affect other user processes; the Process Monitor (PMON) performs process recovery automatically when a user process fails.

  3. Instance failure

An instance failure occurs when an Oracle database instance cannot continue running because of a hardware or software problem. Hardware issues include accidental power outages; software issues include a crash of the server's operating system. When an instance failure is detected, Oracle completes instance recovery automatically: it restores the database to the transactionally consistent state it had before the failure and rolls back uncommitted data.

  4. Media failure

A media failure occurs when a database file, or the disk holding it, cannot be read or written.

Second, backup

A backup is a copy of the database written to disk or other media. Backups can be classified from several angles:

1. Classification by physical vs. logical backup:

(1) Physical backup: a backup of the database's physical operating-system files (data files, control files, log files, etc.). Physical backups are further divided into offline backups (cold backups), which are performed while the database is shut down, and online backups (hot backups), which are performed while the database is running in archive log mode.
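As an illustration, an offline (cold) backup amounts to shutting the database down cleanly and copying its files at the operating-system level. A minimal SQL*Plus sketch, assuming a Windows host and purely hypothetical paths:

```sql
-- Cold-backup sketch (run in SQL*Plus as SYSDBA; all paths are hypothetical)
SHUTDOWN IMMEDIATE
-- copy the physical files (data files, control files, redo logs) at the OS level
HOST copy C:\oracle\oradata\orcl\*.dbf C:\backup\
HOST copy C:\oracle\oradata\orcl\*.ctl C:\backup\
HOST copy C:\oracle\oradata\orcl\*.log C:\backup\
STARTUP
```

Because the database is closed during the copy, the resulting file set is guaranteed to be consistent without archive log mode.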

(2) Logical backup: a backup of the database's logical components, such as tables and stored procedures.

2. Classification by backup strategy:

(1) Full backup: backs up all of the data every time.

(2) Incremental backup: backs up only the files that have changed since the last full or incremental backup. The advantage is that the amount of backed-up data is small and each backup takes little time; the drawback is that recovery depends on the whole chain of previous backups, so the risk of a failed restore is higher. For example, if a full backup is performed on Monday and incremental backups from Tuesday to Friday, and the data is damaged on Friday, then recovery requires Monday's full backup plus all of the incremental backups from Tuesday to Friday.

(3) Differential backup: backs up the files that have been modified since the last full backup. Recovering from a differential backup therefore requires only two backups (the last full backup and the last differential backup); the disadvantage is that each backup takes longer. For example, with a full backup on Monday and differential backups from Tuesday to Friday, if the data is damaged on Friday, recovery requires only Monday's full backup and Thursday's differential backup.

  Difference between incremental and differential backups: recovery from incremental backups requires the data from every incremental backup since the last full backup, while recovery from differential backups requires only the most recent differential backup.

Third, recovery

Recovery means rebuilding a complete database after a failure, using the backed-up data files or control files. There are two types of recovery:

1. Instance recovery: Oracle performs recovery automatically when an Oracle instance fails.

2. Media recovery: performed when the media holding the database fails. Media recovery is further divided into full recovery and incomplete recovery.

Full recovery: restores the database to its state at the time of the failure.

Incomplete recovery: restores the database to its state at some point in time before the failure.
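For illustration, incomplete (point-in-time) media recovery can be sketched in SQL*Plus as follows. This assumes the damaged data files have already been restored from a backup, and the timestamp is purely illustrative:

```sql
-- Point-in-time (incomplete) recovery sketch: recover to a moment before the failure
STARTUP MOUNT
RECOVER DATABASE UNTIL TIME '2024-05-10:09:30:00';
-- after incomplete recovery the database must be opened with RESETLOGS
ALTER DATABASE OPEN RESETLOGS;
```

Opening with RESETLOGS discards the redo generated after the recovery point, which is why changes made after that moment are lost.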

Fourth, export

  (i) Using Data Pump technology:

  1. EXPDP export modes:

1) Database mode: exports the entire database to operating-system files.

2) User mode: exports all data and metadata belonging to one or more users.

3) Table mode: exports all data and metadata for a set of tables.

4) Tablespace mode: extracts all the data and metadata in a tablespace, along with any objects that depend on the objects in the specified tablespace list.

    Dump files: files created by the Data Pump export program are called dump files, and all the dump files created during a single Data Pump export job are called a dump file set.

  2. Export from the command line

Syntax:

C:\> expdp system/password directory=pbdir dumpfile=pb.dmp full=y tables=table_list tablespaces=tablespace_list schemas=schema_list remap_schema=user1:user2 nologfile=y sqlfile=pb.sql

system/password: user name and password

directory: database directory object

dumpfile: specifies the dump file

full=y: performs a full database export

tables=table_list: the list of tables to export

schemas=schema_list: the list of schemas (users) to export

tablespaces=tablespace_list: the list of tablespaces to export

remap_schema=user1:user2: remaps objects from user1 to user2

nologfile=y: suppresses the log file

sqlfile: writes the metadata (DDL statements) to the specified file

  Note: If you need to export a full database, you must have Exp_full_database permissions.
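The directory parameter names a database directory object, which must already exist and be accessible to the exporting user. A sketch of the setup, run as a privileged user; the path and user names are illustrative:

```sql
-- Create a directory object that maps to an OS path (path is hypothetical)
CREATE OR REPLACE DIRECTORY dump_dir AS 'C:\oracle\dump';
-- Let scott read and write dump files through it
GRANT READ, WRITE ON DIRECTORY dump_dir TO scott;
-- Required for full-database exports, as noted above
GRANT EXP_FULL_DATABASE TO scott;
```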

  1) Database mode:    


expdp scott/[email protected] directory=dump_dir dumpfile=full.dmp full=y

    

  2) User mode:    

expdp scott/[email protected] directory=dump_dir dumpfile=scottschema.dmp schemas=scott

  3) Table Export method:    

  expdp scott/[email protected] directory=dump_dir dumpfile=tables.dmp tables=emp,dept,bonus,salgrade content=data_only

Description:

content=data_only exports only the data in the tables, not the metadata; content=metadata_only exports only the metadata, not the data. If the content parameter is omitted, both are exported.

  4) Table Space Export method:    

expdp scott/[email protected] directory=dump_dir dumpfile=tablespace.dmp tablespaces=users
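Long expdp command lines can also be kept in a parameter file and passed with parfile=. A sketch, where the file name and contents are illustrative:

```
# exp_scott.par -- invoked as: expdp scott/password parfile=exp_scott.par
directory=dump_dir
dumpfile=scottschema.dmp
logfile=scottschema.log
schemas=scott
```

This keeps the command line short and makes the export settings reusable and versionable.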

    

  3. DBMS_DATAPUMP for Data Pump export

Using this package is more cumbersome than using the command line directly, but it makes it easy to schedule Data Pump export jobs through the database job scheduler, and it offers better functionality and finer control over the export.

DECLARE
  -- data pump job handle
  h1 NUMBER;
BEGIN
  -- create a user-defined schema-mode export job
  h1 := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
  -- define the dump file
  DBMS_DATAPUMP.ADD_FILE(handle => h1, filename => 'es_shop.dmp');
  -- define the filter condition: export only the SHOP_USER schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h1, name => 'SCHEMA_EXPR', value => 'IN (''SHOP_USER'')');
  -- start the data pump job
  DBMS_DATAPUMP.START_JOB(handle => h1);
  -- detach from the data pump job
  DBMS_DATAPUMP.DETACH(handle => h1);
END;
-- default save path: C:\Oracle11g\admin\orcl\dpdump

  

Fifth, import

  (i) Data Pump import modes

1. IMPDP import modes:

1) Full import mode: loads the entire contents of an export file set; the file set need not have been exported in full-database mode.

2) User mode: imports the entire contents belonging to a list of users in the specified file set.

3) Table mode: imports the specified tables and their dependent objects from the export file.

4) Tablespace mode: imports everything in the specified file set that belongs to the listed tablespaces.

  2. Import using the command line

Syntax:

C:\> impdp system/password directory=pbdir dumpfile=pb.dmp full=y tables=table_list tablespaces=tablespace_list schemas=schema_list remap_schema=user1:user2 nologfile=y sqlfile=pb.sql

system/password: user name and password

directory: database directory object

dumpfile: specifies the dump file

full=y: performs a full import

tables=table_list: the list of tables to import

schemas=schema_list: the list of schemas (users) to import

tablespaces=tablespace_list: the list of tablespaces to import

remap_schema=user1:user2: imports objects from user1 into user2

nologfile=y: suppresses the log file

sqlfile: writes the metadata (DDL statements) to the specified file instead of performing the import

  

Examples:

  1) Import the entire database:  

impdp scott/scott@accp directory=dump_dir dumpfile=full.dmp full=y

  2) Import Table space: 

impdp scott/scott@accp directory=dump_dir dumpfile=tablespace.dmp tablespaces=mytest

  3) Import all the tables under the Scott User: 

impdp scott/scott@accp directory=dump_dir dumpfile=tables.dmp tables=emp,dept,bonus,salgrade

  4) Import the dept and emp tables exported from the scott user into the mytest user:

  impdp scott/[email protected] directory=dump_dir dumpfile=schema.dmp tables=dept,emp remap_schema=scott:mytest

  

  3. DBMS_DATAPUMP for Data Pump import

  

DECLARE
  -- data pump job handle
  h1 NUMBER;
BEGIN
  -- create a user-defined schema-mode import job that pulls data over the ORCLLIB database link
  h1 := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA', remote_link => 'ORCLLIB');
  -- remap objects from the SHOP_DEV_DATA schema into the SHOP_BACK schema
  DBMS_DATAPUMP.METADATA_REMAP(handle => h1, name => 'REMAP_SCHEMA', old_value => 'SHOP_DEV_DATA', value => 'SHOP_BACK');
  -- write the log to the shop.log file
  DBMS_DATAPUMP.ADD_FILE(handle => h1, filename => 'shop.log', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  -- start the data pump job
  DBMS_DATAPUMP.START_JOB(handle => h1);
  -- detach from the data pump job
  DBMS_DATAPUMP.DETACH(handle => h1);
END;
