Oracle Data Guard Theoretical Knowledge

RAC, Data Guard, and Streams are three tools in the Oracle high-availability stack. Each can be used independently or in combination with the others, but their emphases differ, and so do their application scenarios.

RAC's strength lies in eliminating instance-level single points of failure and in load balancing, so RAC is often used for core 7*24 systems. However, a RAC configuration holds only one copy of the data; although storage failures can be mitigated by mechanisms such as RAID, the data itself is not redundant and can still become a single point of failure.

Data Guard provides protection through data redundancy: a log synchronization mechanism keeps the redundant copy in step with the primary data, and this synchronization can be real-time or delayed, synchronous or asynchronous. Data Guard is typically used for remote disaster recovery and for high availability of smaller systems. Although some of the primary database's load can be offloaded by running read-only queries on the standby, Data Guard is by no means a performance solution.

Streams is data synchronization built on Oracle Advanced Queuing. It offers flexible configuration at multiple levels, and Oracle provides a rich API and other development support, so Streams is better suited to data sharing at the application level.

In a Data Guard environment there are at least two databases. One is open and providing services; this is called the primary database. The other is in a recovery state and is called the standby database. At run time the primary database serves users, whose operations are recorded in the online and archived redo logs. These logs are shipped over the network to the standby database and replayed there, keeping the standby synchronized with the primary.
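
The current role of each database can be checked directly from SQL; a minimal check, assuming a SYSDBA connection (the columns below belong to the v$database view):

select database_role, protection_mode from v$database;  -- shows PRIMARY or PHYSICAL/LOGICAL STANDBY plus the configured protection mode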

Oracle Data Guard further refines this process, making log shipping and recovery more automated and intelligent, and it provides a range of parameters and commands that simplify the DBA's work.

If it is known in advance that the primary database must be shut down, for example for a hardware or software upgrade, the standby database can be switched to the primary role to continue serving users; this shortens the outage, and no data is lost. If an unexpected failure makes the primary database unavailable, the standby database can also be forcibly switched to the primary role; in that case, the amount of data loss depends on the configured data protection level. Primary and standby are therefore just roles, not fixed to a particular database.
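
As a rough sketch of how these role changes are carried out for a physical standby (the exact sequence of restarts and checks varies by version and configuration, so treat this only as an outline):

-- planned switchover, no data loss
-- on the primary:
alter database commit to switchover to physical standby;
-- on the old standby, once the primary has completed its switchover:
alter database commit to switchover to primary;
-- both databases are then restarted in their new roles

-- forced failover when the primary is lost; data loss depends on the protection level
-- on the standby:
alter database recover managed standby database finish;
alter database commit to switchover to primary;
-- (alter database activate standby database; is the last-resort alternative if the finish step cannot complete)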

Data Guard Architecture

The Data Guard architecture can be divided into three parts by function:

1) Log send (Redo send)

2) Log receive (Redo receive)

3) Log apply (Redo apply)

1. Log send (Redo send)

While the primary database is running, it continuously generates redo that must be sent to the standby database. This sending can be performed by either the LGWR or the ARCH process of the primary database. Different archive destinations can use different methods, but only one method can be chosen for any given destination. The choice of process makes a significant difference to data protection and system availability.
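
Which process and transmit mode each destination currently uses can be seen in the v$archive_dest view; a simple illustrative query (dest_id 1 and 2 match the local and remote destinations used in the examples below):

select dest_id, status, archiver, transmit_mode from v$archive_dest where dest_id in (1, 2);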

1.1 Using the ARCH process

1) The primary database continuously generates redo, which the LGWR process writes to the online redo logs.

2) When a group of online logs fills up, a log switch occurs and a local archive is triggered. The local archive location is defined with a parameter of the form log_archive_dest_1='location=/path'.

For example: alter system set log_archive_dest_1='location=/u01/arch' scope=both;

3) After the local archive completes, the online logs can be overwritten and reused.

4) The ARCH process sends the archived log over Oracle Net to the RFS (Remote File Server) process on the standby database.

5) The RFS process on the standby database writes the received redo to an archived log.

6) The MRP (Managed Recovery Process, for Redo Apply) or LSP (Logical Standby Process, for SQL Apply) on the standby database applies these logs to the standby database, synchronizing the data.

With ARCH-mode transmission, the redo is stored directly in an archived log file on the standby side.

Description

A logical standby receives the redo, converts it into SQL statements, and executes those statements on the standby database to achieve synchronization; this is called SQL Apply.

A physical standby, after receiving the redo generated by the primary database, achieves synchronization through media recovery; this is called Redo Apply.

Note: To create a logical standby database, you first create a physical standby database and then convert it to a logical standby database.
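
As a minimal sketch of how the two apply modes are started (assuming the physical standby is mounted and the logical standby is open):

-- physical standby: start Redo Apply in the background
alter database recover managed standby database disconnect from session;

-- logical standby: start SQL Apply
alter database start logical standby apply;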

The biggest problem with ARCH-process delivery is that the primary database sends logs to the standby database only when an archive occurs. If the primary database goes down unexpectedly, the redo still in the online logs is lost, so the ARCH process cannot prevent data loss. To avoid data loss you must use LGWR, which in turn can work in either SYNC (synchronous) or ASYNC (asynchronous) mode.

By default, the primary database uses the ARCH process, and the parameter is set as follows:

alter system set log_archive_dest_2='service=st' scope=both;
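
A remote destination can also be switched off and on without removing its definition, for example during maintenance:

alter system set log_archive_dest_state_2=defer scope=both;   -- temporarily stop shipping to this destination
alter system set log_archive_dest_state_2=enable scope=both;  -- resume shipping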

1.2 Using the LGWR process in SYNC mode

1) As the primary database generates redo, LGWR writes it both to the local log file and to the network. That is, the LGWR process writes the redo to the local online redo log and also passes it to the local LNSn process (LGWR Network Server process), which sends it over the network to the remote destination. Each remote destination has its own LNS process, and multiple LNS processes work in parallel.

2) LGWR must wait for both the write to the local log file and the network transfer through the LNSn process to succeed before the primary database transaction can commit; this is what SYNC means.

3) The RFS process on the standby database writes the received redo to the standby redo logs (creating these logs is illustrated after this list).

4) A log switch on the primary database also triggers a log switch on the standby database, that is, the standby database archives its standby redo log, which in turn triggers the standby database's MRP or LSP process to recover the archived log.
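
LGWR transport writes into standby redo logs, so these must exist on the standby side. A hypothetical example (the group number, file path, and size are placeholders; the size should match the online redo logs):

alter database add standby logfile group 4 ('/u01/oradata/stdby_redo04.log') size 50m;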

Because the primary database's redo is delivered in real time, two recovery methods are available on the standby side (the corresponding commands are sketched after this list):

Real-time apply: as soon as RFS writes redo to a standby redo log, it is applied immediately;

Archived-log apply: recovery is triggered only after the standby redo log has been archived.
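
For a physical standby, the difference between the two methods is a single clause of the recovery command; a sketch, assuming a mounted standby:

-- real-time apply, reading directly from the standby redo logs
alter database recover managed standby database using current logfile disconnect from session;

-- archived-log apply, waiting for each standby redo log to be archived first
alter database recover managed standby database disconnect from session;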

The primary database uses the ARCH process by default; to use the LGWR process it must be specified explicitly. When using LGWR SYNC, you can also set the NET_TIMEOUT attribute (in seconds), which tells the LGWR process to raise an error if the network send gets no response within that many seconds. For example:

alter system set log_archive_dest_2='service=st lgwr sync net_timeout=30' scope=both;

1.3 Using the LGWR process in ASYNC mode

A possible problem with LGWR SYNC is that if sending the redo to the standby database fails, the LGWR process reports an error. In other words, the primary database's LGWR process depends on the state of the network, which can sometimes be too strict a requirement; in that case LGWR ASYNC can be used. Its working mechanism is as follows:

1) As the primary database generates redo, LGWR passes it to the log file and to the local LNSn process, but only the write to the log file must succeed; LGWR does not wait for the LNSn process's network transfer to complete.

2) The LNSn process sends the redo to the standby database asynchronously. Multiple LNSn processes can send concurrently.

3) After a log switch of the online redo log on the primary database, archiving is triggered; this also triggers the standby database to archive its standby redo logs, after which the MRP or LSP process recovers the archived log.

Because the LGWR process does not wait for a response from the LNSn process, the NET_TIMEOUT attribute is not needed when configuring LGWR ASYNC mode. For example:

alter system set log_archive_dest_2='service=st lgwr async' scope=both;
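
Whichever transport mode is chosen, a quick way to confirm that redo is arriving and being applied is to compare log sequence numbers on both sides; a rough check for a physical standby:

-- on the primary: highest sequence archived
select max(sequence#) from v$archived_log;

-- on the standby: highest sequence already applied
select max(sequence#) from v$archived_log where applied = 'YES';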

2. Log receive (Redo receive)

After the RFS (Remote File Server) process on the standby database receives the redo, it writes it either to a standby redo log or to an archived log file. Which file it writes to depends on the primary's log transport mode and the standby database's configuration. If it writes to a standby redo log, then when a log switch occurs on the primary database, a log switch of the standby redo logs is also triggered on the standby database, and the standby redo log is archived. If it writes directly to an archived log, that write can itself be regarded as an archive operation.
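
The standby-side processes described here can be observed in the v$managed_standby view; for example:

select process, status, thread#, sequence# from v$managed_standby;  -- shows RFS, MRP0, ARCH, etc. and what each is working on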

When receiving logs, you need to be aware of where the archived logs will be placed; the rules are as follows (an example configuration is sketched after this list):

1) If the STANDBY_ARCHIVE_DEST parameter is configured, the directory specified by that parameter is used.

2) If a LOG_ARCHIVE_DEST_n parameter explicitly defines the VALID_FOR=(STANDBY_LOGFILE,*) option, the directory specified by that parameter is used.

3) If the database's COMPATIBLE parameter is 10.0 or higher, any LOG_ARCHIVE_DEST_n value may be selected.

4) If neither STANDBY_ARCHIVE_DEST nor LOG_ARCHIVE_DEST_n is configured, the default value of STANDBY_ARCHIVE_DEST is used, which is $ORACLE_HOME/dbs/arc.
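
A hypothetical standby-side configuration matching rules 1) and 2) above (the paths are placeholders):

alter system set standby_archive_dest='/u01/stdby_arch' scope=both;

alter system set log_archive_dest_1='location=/u01/stdby_arch valid_for=(standby_logfile,all_roles)' scope=both;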
