1.2 Backup and Recovery Basics


A backup is a copy of data. Its purpose is to guard against unexpected data loss and application errors.

A restore refers to the process of copying a backup file back over the original database file when a data file or control file is damaged.

Recovery refers to applying archived logs and redo log transactions to bring the restored copy forward to the state the data file was in before the failure.

1.2.1 Backup and Recovery Methods

 

1. User-managed backup and recovery

User-managed backup and recovery (also known as OS backup and recovery) refers to backing up and recovering a database by combining SQL commands with OS commands.

DB file ──(copy or cp)──→ backup file

User-managed backup refers to backing up physical database files by executing OS commands such as copy (Windows) or cp (UNIX).

User-managed recovery refers to restoring the backup files over the damaged database files when the database fails, and then recovering the database by executing SQL commands (such as RECOVER TABLESPACE).

USERS tablespace backup file ──(copy or cp)──→ restored USERS tablespace files ──(RECOVER)──→ recovered USERS tablespace
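As an illustration, a user-managed online backup and recovery of the USERS tablespace might look like the following SQL*Plus sketch (ARCHIVELOG mode is assumed, and all file paths are illustrative):

```sql
-- Online (hot) backup of the USERS tablespace:
ALTER TABLESPACE users BEGIN BACKUP;
-- At the OS level: cp /u01/oradata/orcl/users01.dbf /backup/users01.dbf
ALTER TABLESPACE users END BACKUP;

-- After a media failure: restore the copy at the OS level, then recover.
ALTER TABLESPACE users OFFLINE IMMEDIATE;
-- At the OS level: cp /backup/users01.dbf /u01/oradata/orcl/users01.dbf
RECOVER TABLESPACE users;
ALTER TABLESPACE users ONLINE;
```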

 

2. RMAN-managed backup and recovery

RMAN-managed backup and recovery refers to using RMAN (Recovery Manager) commands to back up and restore databases. Because this method relies on server processes of the target database, it is also called server-managed backup and recovery.

RMAN backup refers to backing up physical database files by executing the RMAN BACKUP command.

01.dbf, 02.dbf, 03.dbf, 04.dbf ──(BACKUP command)──→ demo_1.bak

RMAN recovery refers to restoring backup files and performing media recovery with RMAN commands after the database fails.

Latest USERS tablespace backup file ──(RESTORE)──→ restored USERS tablespace files ──(RECOVER)──→ recovered USERS tablespace
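For comparison, the same tablespace backup and recovery might be sketched at the RMAN prompt as follows (run after connecting to the target database; the tablespace name is illustrative):

```
RMAN> BACKUP TABLESPACE users;

# After a media failure on the USERS tablespace:
RMAN> SQL 'ALTER TABLESPACE users OFFLINE IMMEDIATE';
RMAN> RESTORE TABLESPACE users;
RMAN> RECOVER TABLESPACE users;
RMAN> SQL 'ALTER TABLESPACE users ONLINE';
```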

 

3. Logical backup and recovery

Logical backup refers to using the Oracle utilities expdp or exp to export some or all database object definitions and their data into an OS file. This process is also called export (the database must be open).

DEPT, EMP ──(Export)──→ OS file

Logical recovery refers to using the Oracle utilities impdp or imp to import object definitions and data from a logical backup file back into the database after an object has been accidentally dropped or truncated. This process is also called import.

DEPT, EMP ←──(Import)── OS file

The Data Pump export and import utilities expdp and impdp are provided starting with Oracle Database 10g; they are faster than exp and imp.

  • exp and imp are client-side utilities; they can be run either on the client or on the server.
  • expdp and impdp are server-side utilities; they can only be used on the Oracle server, not on an Oracle client.
  • imp can only import files exported by exp, not files exported by expdp; impdp can only import files exported by expdp, not files exported by exp.
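As a hedged sketch, an export and partial re-import with Data Pump might look like this on the server's command line (the SCOTT schema, the DUMP_DIR directory object, and the credentials are illustrative assumptions):

```shell
# Export the whole SCOTT schema (DUMP_DIR must already exist and be granted):
expdp scott/tiger DIRECTORY=dump_dir DUMPFILE=scott.dmp SCHEMAS=scott

# After DEPT is accidentally dropped, import just that table:
impdp scott/tiger DIRECTORY=dump_dir DUMPFILE=scott.dmp TABLES=dept
```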

 

1.2.2 Formulating Recovery Policies

Backup and recovery are a DBA's most important management responsibilities.

When formulating a recovery policy, a DBA should consider how to handle user errors, media failures, and data block corruption.

 

1. Create a recovery policy for user errors

When formulating backup and recovery policies, you should consider how to handle unforeseen mistakes by users or applications, such as accidentally dropping tables, loading a table with bad data, or batch-updating the wrong rows.

You can use the following methods to handle user errors:

  • If data from the affected table was previously exported with a logical backup, you can in some cases import it back. This technique presumes the table data is exported regularly and that the changes made between exports are not critical.
  • You can perform point-in-time incomplete recovery to restore the tablespace or database to a state before the failure. This avoids losing the table data, but requires a backup taken before the failure point plus all the archived logs and redo logs generated between the backup and the failure point.
  • You can use Flashback to quickly restore table data. When using FLASHBACK TABLE to restore a dropped table, make sure the table still exists in the database recycle bin; when using it to undo DML mistakes, make sure row movement is enabled on the table.
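The Flashback options above can be sketched in SQL as follows (the EMP table name and the 15-minute window are illustrative):

```sql
-- Restore a dropped table from the recycle bin:
FLASHBACK TABLE emp TO BEFORE DROP;

-- Undo a DML mistake (row movement must be enabled first):
ALTER TABLE emp ENABLE ROW MOVEMENT;
FLASHBACK TABLE emp TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);
```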

 

2. Create a recovery policy for media failures

A media failure occurs when an external factor prevents Oracle from reading or writing database files while the database is running.

Typical media failures include physical failures (such as a damaged disk head) and database files being overwritten or destroyed.

During normal database operation, media failures are much rarer than user or application errors, but backup and recovery policies must still prepare for them. The type of media failure determines the recovery technique to use: for example, recovering data files differs from recovering control files, and recovering the SYSTEM tablespace differs from recovering ordinary data tablespaces.

 

3. Create a recovery policy for data block corruption

If only a small number of blocks in one or more data files are corrupted, you can perform block media recovery and avoid restoring the entire data file. The PL/SQL package DBMS_REPAIR can be used to handle corrupted data, and the RMAN BLOCKRECOVER command can recover damaged data blocks.
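A minimal sketch of block media recovery with RMAN (the file and block numbers are illustrative; in practice they come from V$DATABASE_BLOCK_CORRUPTION or an ORA-01578 error message):

```
RMAN> BLOCKRECOVER DATAFILE 4 BLOCK 121, 199;
```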

 

1.2.3 Creating Backup Policies

Data recovery policies are the basis of data backup policies. When creating a backup policy, in addition to providing the backup types each recovery policy requires, DBAs also need to take business, operational, technical, software, and hardware requirements into account. Keep all of these in mind when preparing backup and recovery plans.

1. Multiplex the redo logs

The purpose of multiplexing the redo logs is to protect against losing a log member and to increase the database's mean time between failures (MTBF). When multiplexing, different members of the same log group should be placed on different disks so that a single disk failure cannot destroy the whole group. If a log group contains only one member and that member suffers a media failure, the database stops running when it switches to that group, and media recovery is required. If a log group contains multiple members and one of them suffers a media failure, the database keeps running normally; the DBA only needs to drop the damaged member.
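Multiplexing a log group onto a second disk, and dropping a damaged member afterwards, might be sketched as follows (paths and the group number are illustrative):

```sql
-- Add a second member on another disk to log group 1:
ALTER DATABASE ADD LOGFILE MEMBER '/disk2/oradata/orcl/redo01b.log' TO GROUP 1;

-- After a disk failure, drop only the damaged member:
ALTER DATABASE DROP LOGFILE MEMBER '/disk1/oradata/orcl/redo01a.log';
```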

2. Multiplex the control files

The purpose of multiplexing the control file is to protect against control file corruption and to reduce its mean time to recover (MTTR). When multiplexing, the control file copies should be placed on different disks to guard against disk failure. If the database has only one control file and it suffers a media failure, the database cannot be mounted, and the control file must be re-created or restored. If the database has multiple control file copies and one of them suffers a media failure, the DBA only needs to adjust the CONTROL_FILES initialization parameter, with no need to re-create or restore the control file.
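Assuming an spfile is in use, the CONTROL_FILES parameter might be adjusted as in this sketch (the paths are illustrative; the second copy must also be created at the OS level and the instance restarted for the change to take effect):

```sql
ALTER SYSTEM SET control_files =
  '/disk1/oradata/orcl/control01.ctl',
  '/disk2/oradata/orcl/control02.ctl'
  SCOPE = SPFILE;
```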

3. Determine the log operation mode

Redo logs record all transaction changes in an Oracle database. The database has two log operation modes: NOARCHIVELOG and ARCHIVELOG. In ARCHIVELOG mode, a redo log can be overwritten only after it has been archived, so all transaction changes are preserved in the archived logs. In NOARCHIVELOG mode, redo logs can be overwritten directly, and earlier transaction changes are lost.

(1) Features of NOARCHIVELOG mode

  • Online backup cannot be performed. To back up the database, you must shut it down first.
  • None of the recovery techniques that rely on archived logs can be used (complete recovery, Flashback Database, DBPITR, and TSPITR).
  • When a data file suffers a media failure, there are two options. The first is to drop all objects contained in the data file and then drop the data file itself; the rest of the database keeps working, but everything in the damaged file is lost. The second is to restore the most recent full backup, but all changes made since that backup are lost.

(2) Features of ARCHIVELOG mode

  • Dedicated space must be allocated for archived logs, and the generated archived logs must be managed.
  • Online backup can be performed while the database is open, without interrupting business operations.
  • Multiple recovery techniques are available (complete recovery, Flashback Database, DBPITR, and TSPITR).
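Switching the database into ARCHIVELOG mode can be sketched as the following SQL*Plus session (SYSDBA privileges are assumed, and the database is briefly unavailable during the switch):

```sql
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST
```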

4. Select a backup retention policy

A backup retention policy specifies how long backup files are kept to satisfy recovery and other requirements. It can be defined either by redundancy (a number of copies) or by a recovery window (a number of days). Backups that fall outside the retention policy are considered obsolete and can be deleted. Retention policies are implemented with RMAN:

RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;

RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 3;
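Once a retention policy is set, the backups it has made obsolete can be listed and removed; a sketch:

```
RMAN> REPORT OBSOLETE;
RMAN> DELETE OBSOLETE;
```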

5. Retain older backups

There are several reasons to retain older data file and archived log backups:

  • To restore the database to a point in time before the most recent backup, you must use the data files and archived logs from an older backup.
  • If the most recent backup is corrupted when a media failure occurs, you can still completely recover the database using an older data file backup plus all the archived logs generated since that backup.

6. Determine the backup cycle

The backup cycle is also an essential part of a backup policy. A reasonable backup cycle reduces the mean time to recover (MTTR). It should be determined by how frequently the database changes: the more frequently the database changes, the shorter the backup cycle should be.

7. Perform a backup after the physical structure of the database changes

The physical structure of the database changes when a tablespace is created or dropped, a data file is added, or a data file is renamed. After such a change, the control file should be backed up in ARCHIVELOG mode, and a full database backup should be performed in NOARCHIVELOG mode.

8. Back up frequently used tablespaces

An Oracle database usually contains many tablespaces, but often only a few of them undergo frequent DML activity. Back up frequently changing tablespaces more often to reduce recovery time (MTTR), and slowly changing tablespaces less often. A read-only tablespace needs to be backed up only once, because its data never changes.

9. Back up data after NOLOGGING operations

When loading data, creating tables, or creating indexes, you can specify the NOLOGGING option to speed the operation up. With NOLOGGING, the data changes are not recorded in the redo log, so they cannot be recovered from earlier backups. To ensure the data can be recovered if the tablespace is later damaged, back up the affected tablespace again immediately afterwards.
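For example, a NOLOGGING bulk operation followed by a fresh backup might be sketched as follows (the table and tablespace names are illustrative):

```sql
-- CTAS with NOLOGGING generates almost no redo for the new table:
CREATE TABLE emp_copy TABLESPACE users NOLOGGING
  AS SELECT * FROM emp;

-- The change cannot be recovered from older backups, so back up again:
-- RMAN> BACKUP TABLESPACE users;
```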

10. Use exp and expdp to export data

To guard against objects being accidentally dropped or truncated, you can use exp or expdp to perform logical backups. After an object is accidentally dropped or truncated, you can use imp or impdp to import its structure and data. Logical backup and recovery add flexibility to a backup and recovery policy, but they cannot replace physical backups of database files or provide complete recovery.

11. Do not back up the redo logs

Unlike archived logs, redo logs should not be backed up; backing them up provides no benefit. In ARCHIVELOG mode, when a redo log fills, its contents are automatically copied to an archived log. In NOARCHIVELOG mode, a full backup can be taken only after the database is shut down, so the backed-up data files and control files are completely consistent, and redo logs are not needed when the backup is restored. The most effective protection against redo log corruption is to multiplex the logs and distribute the members of each log group across different disks.
