Oracle Data Guard Theoretical Knowledge


RAC, Data Guard, and Streams are three tools in the Oracle high-availability family. Each can be used on its own or in combination with the others, but they have different emphases and different application scenarios.

RAC's strength lies in eliminating single points of failure at the instance level and in load balancing, so RAC is often used for core 7x24 systems. However, a RAC configuration holds only a single copy of the data; although storage failures can be mitigated by mechanisms such as RAID, the data itself is not redundant and therefore remains a single point of failure.

Data Guard protects the data through redundancy: a log synchronization mechanism keeps the redundant copy consistent with the primary data, and this synchronization can be real-time or delayed, synchronous or asynchronous. Data Guard is commonly used for remote disaster recovery and for high availability in smaller environments. Although read-only queries can be run against the standby to offload some of the primary database's workload, Data Guard is by no means a performance solution.

Streams is data synchronization based on Oracle Advanced Queuing. It can be configured flexibly at multiple levels, and Oracle provides a rich API and other development support, so Streams is better suited to data sharing at the application level.

A Data Guard environment contains at least two databases. One is open and providing service; it is called the primary database. The other is in a recovery state and is called the standby database. At run time, users work against the primary database; their operations are recorded in the online redo logs and archived logs, and these logs are shipped over the network to the standby database, where they are replayed so that the standby stays synchronized with the primary.
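
To confirm which role a given database is currently playing, a minimal check (a sketch, assuming Oracle 10g or later) is to query V$DATABASE:

-- Run on either database; DATABASE_ROLE shows PRIMARY, PHYSICAL STANDBY or LOGICAL STANDBY
SELECT database_role, open_mode, protection_mode FROM v$database;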

Oracle Data Guard further refines this process, making log shipping and recovery more automated and intelligent, and it provides a range of parameters and commands that simplify the DBA's work.

If a primary database outage can be foreseen, for example a hardware or software upgrade, the standby database can be switched to the primary role and continue serving clients, which shortens the outage and loses no data. If an unexpected failure makes the primary database unavailable, the standby database can also be forcibly converted into the primary database to keep the service running; in that case the amount of data lost depends on the configured data protection level. Primary and standby are therefore just roles, not fixed attributes of a particular database.
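
As an illustration, a planned switchover of a physical standby roughly follows the sequence below. This is only a hedged sketch of the commonly documented command sequence, not a complete procedure (checks of SWITCHOVER_STATUS, session handling, and restarting instances are omitted):

-- On the current primary: convert it to a physical standby
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
-- On the current standby: convert it to the new primary, then open it
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
ALTER DATABASE OPEN;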

Data Guard Architecture

The Data Guard architecture can be divided into three functional parts:

1) Log send (redo send)

2) Log receive (redo receive)

3) Log application (redo apply)

1. Log send (Redo send)

While the primary database is running, it continuously generates redo that must be sent to the standby database. This sending can be done by the primary database's LGWR process or by its ARCH process; different destinations can use different methods, but any single destination can use only one of them. Which process is chosen makes a significant difference to data protection and system availability.
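
Which method a destination currently uses can be checked on the primary. A hedged example, assuming 10g/11g where V$ARCHIVE_DEST exposes these columns:

-- ARCHIVER shows ARCH or LGWR; TRANSMIT_MODE shows SYNCHRONOUS or ASYNCHRONOUS
SELECT dest_id, destination, archiver, transmit_mode, status
FROM v$archive_dest
WHERE destination IS NOT NULL;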

1.1 Using the ARCH process

1) The primary database continuously produces redo, which the LGWR process writes into the online redo logs.

2) When a group of online redo logs fills up, a log switch occurs and a local archive is triggered. The local archive location is defined in the format log_archive_dest_1='LOCATION=/path'.

For example: ALTER SYSTEM SET log_archive_dest_1='LOCATION=/u01/arch' SCOPE=BOTH;

3) After the local archive completes, the online redo logs can be overwritten and reused.

4) The ARCH process sends the archived logs over Oracle Net to the RFS (Remote File Server) process on the standby database.

5) The RFS process on the standby database writes the received redo into archived logs.

6) The MRP (Managed Recovery Process, redo apply) or LSP (Logical Standby Process, SQL apply) process on the standby database applies these logs to the standby database to synchronize the data.

When redo is shipped in ARCH mode, it is stored directly into archived log files at the standby end rather than into standby redo logs.
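
On the standby side, the archived logs that have been received and applied can be checked with a query such as the one below (a sketch; the APPLIED column is assumed to be available as in 10g and later):

-- APPLIED = YES means the MRP/LSP process has already replayed that archived log
SELECT sequence#, first_time, next_time, applied
FROM v$archived_log
ORDER BY sequence#;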

Notes:

A logical standby receives the redo, converts it into SQL statements, and executes those statements on the standby database to keep it synchronized; this is called SQL apply.

A physical standby applies the redo generated by the primary database through media recovery; this is called redo apply.

Note: To create a logical standby database, you first create a physical standby database and then convert it to a logical standby database.

The biggest problem with ARCH-based shipping is that the primary database sends logs to the standby database only when an archive operation occurs. If the primary database goes down unexpectedly, the redo still in the online logs is lost, so ARCH shipping cannot avoid data loss. To avoid data loss, LGWR must be used instead, and LGWR shipping comes in two forms: SYNC (synchronous) and ASYNC (asynchronous).

By default the primary database uses the ARCH process, and the parameter is set as follows:

ALTER SYSTEM SET log_archive_dest_2='SERVICE=st' SCOPE=BOTH;
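
To verify that the remote destination is working, its state can be queried on the primary. A sketch, assuming log_archive_dest_2 (dest_id = 2) is the standby destination as in the example above:

-- STATUS of VALID means logs are being shipped; any error text appears in ERROR
SELECT dest_id, status, database_mode, recovery_mode, error
FROM v$archive_dest_status
WHERE dest_id = 2;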

1.2 Using the LGWR process in SYNC mode

1) As the primary database generates redo, LGWR writes it both to the log file and to the network. That is, the LGWR process writes the redo to the local online redo log and also hands it to the local LNSn process (LGWR Network Server process), which then sends it over the network to the remote destination. Each remote destination has its own LNS process, and multiple LNS processes work in parallel.

2) LGWR must wait for both the local log file write and the network transfer through the LNSn process to succeed before the primary database transaction can commit; this is what SYNC means.

3) The RFS process on the standby database writes the received redo into the standby redo logs.

4) A log switch on the primary database also triggers a log switch on the standby database, that is, the standby database archives its standby redo logs, which in turn triggers the standby database's MRP or LSP process to recover the archived logs.

Because the primary database's redo is shipped in real time, two recovery methods are available on the standby side:

Real-time apply: as soon as RFS writes redo into a standby redo log, recovery starts immediately.

Archived-log recovery: recovery is triggered only after the standby redo log has been archived.

The primary database uses the ARCH process by default; if the LGWR process is to be used, it must be specified explicitly. With LGWR SYNC you can also set the NET_TIMEOUT attribute (in seconds), which makes the LGWR process raise an error if the network send gets no response within that many seconds. For example:

ALTER SYSTEM SET log_archive_dest_2='SERVICE=st LGWR SYNC NET_TIMEOUT=30' SCOPE=BOTH;
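
LGWR SYNC is also what the higher data protection levels build on. A hedged sketch, reusing the same destination name st as above (the exact restart requirements for changing the protection mode depend on the release):

-- AFFIRM makes LGWR wait until the standby confirms the redo has been written to disk
ALTER SYSTEM SET log_archive_dest_2='SERVICE=st LGWR SYNC AFFIRM NET_TIMEOUT=30' SCOPE=BOTH;
-- Raise the protection mode (requires a standby destination shipping with LGWR SYNC)
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;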

1.3 Using the LGWR process in ASYNC mode

The potential problem with LGWR SYNC is that if sending the redo to the standby database fails, the LGWR process raises an error. In other words, the primary database's LGWR process becomes dependent on network conditions, which can be too demanding a requirement; LGWR ASYNC is available for this case. Its working mechanism is as follows:

1) As soon as the primary database generates redo, LGWR submits it both to the log file and to the local LNSn process, but LGWR only has to complete the log file write; it does not wait for the LNSn process's network transfer to succeed.

2) The LNSn process sends the redo to the standby database asynchronously. Multiple LNSn processes can send concurrently.

3) When the primary database's online redo log switches, archiving is triggered; this also triggers the standby database to archive its standby redo logs, which in turn triggers the MRP or LSP process to recover the archived logs.

Because the LGWR process does not wait for a response from the LNSn process, the NET_TIMEOUT attribute is not needed in LGWR ASYNC mode. For example:

ALTER SYSTEM SET log_archive_dest_2='SERVICE=st LGWR ASYNC' SCOPE=BOTH;
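
With ASYNC shipping a small transport lag is possible; it can be monitored on the standby with V$DATAGUARD_STATS (a sketch, assuming Oracle 10gR2 or later where this view exists):

-- Run on the standby; shows how far redo transport and redo apply are behind the primary
SELECT name, value, unit
FROM v$dataguard_stats
WHERE name IN ('transport lag', 'apply lag');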

2. Log receive (Redo receive)

When the RFS (Remote File Server) process on the standby database receives redo, it writes it either into a standby redo log or into an archived log file; which one depends on the primary database's log shipping mode and the standby database's configuration. If it writes into standby redo logs, a log switch on the primary database also triggers a switch of the standby redo logs on the standby database, after which the standby redo log is archived. If it writes directly into archived logs, that write can itself be regarded as an archiving operation.
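
Standby redo logs have to be created before RFS can write into them. A minimal sketch (the group number, path, and size are assumptions; the usual recommendation is the same size as the online redo logs, with one more group per thread):

-- Run on the standby database (the groups can also be pre-created on the primary)
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/u01/oradata/stby/srl04.log') SIZE 50M;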

When receiving redo, it is worth knowing where the archived logs will be placed:

1) If the standby_archive_dest parameter is configured, the directory specified by that parameter is used.

2) If a log_archive_dest_n parameter explicitly defines the VALID_FOR=(STANDBY_LOGFILE,*) option, the directory specified by that parameter is used (see the example after this list).

3) If the database's compatible parameter is 10.0 or higher, any log_archive_dest_n value may be selected.

4) If neither standby_archive_dest nor log_archive_dest_n is configured, the default value of the standby_archive_dest parameter is used, which is $ORACLE_HOME/dbs/arc.
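
For items 1) and 2) above, a hedged configuration example on the standby side (the directory paths are assumptions):

-- Explicitly direct redo received as standby log files into /u01/arch_stby
ALTER SYSTEM SET log_archive_dest_1='LOCATION=/u01/arch_stby VALID_FOR=(STANDBY_LOGFILE,ALL_ROLES)' SCOPE=BOTH;
-- Or use the older standby_archive_dest parameter
ALTER SYSTEM SET standby_archive_dest='/u01/arch_stby' SCOPE=BOTH;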

3. Log application (Redo apply)

The log apply service replays the primary database's redo on the standby database so that the two databases stay synchronized. Depending on how the standby database replays the redo, it is either a physical standby or a logical standby.

A physical standby uses media recovery technology to apply redo at the block level. It has no data type restrictions and guarantees that the two databases are identical. A physical standby can apply redo only while in MOUNT state; it can also be opened, but only read-only, and no recovery can be performed while it is open.

A logical standby uses LogMiner technology to reconstruct SQL statements from the redo and then executes those statements with the SQL engine. LogMiner does not support every data type; the unsupported data types can be found in the view DBA_LOGSTDBY_UNSUPPORTED, and if such types are used, the two databases are not guaranteed to be exactly the same. A logical standby database can be open for read and write while recovery is in progress.
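
The objects that SQL apply cannot maintain can be listed directly from that view (a sketch; the view is assumed to be present as in 10g and later):

-- Tables on the primary containing data types that a logical standby cannot support
SELECT DISTINCT owner, table_name
FROM dba_logstdby_unsupported
ORDER BY owner, table_name;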

A process on the standby database reads the received redo (either from standby-side archived logs or from standby redo logs) and applies it to the standby database. How is the saved redo then applied? In one of two ways: a physical standby applies it through redo apply, and a logical standby through SQL apply.

Depending on when redo apply takes place, it can be divided into two modes:

One is real-time apply, which requires standby redo logs: recovery is triggered as soon as redo is written into a standby redo log. The benefit of this approach is that it shortens database switchover time (switchover or failover), because most of the switchover time is spent recovering the remaining logs.

The other is archived-log apply: when the primary database switches logs, the standby database is triggered to archive its standby redo logs, and recovery is triggered once that archiving completes. This is the default recovery mode.
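
For completeness, managed recovery in this default (archived-log) mode on a physical standby is typically started and stopped as below; a hedged sketch of the standard commands, not a full procedure:

-- Start managed recovery in the background (archived-log apply, the default)
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
-- Stop managed recovery
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;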

For a physical standby, real-time apply can be enabled with the following command:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;

For a logical standby, real-time apply can be enabled with the following command:

ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

To check whether real-time apply is in use:

SELECT recovery_mode FROM v$archive_dest_status;

SQL> set wrap off
SQL> select process, status, thread#, sequence#, client_pid from v$managed_standby;

PROCESS   STATUS        THREAD#  SEQUENCE#  CLIENT_PID
--------- ------------ -------- ---------- -----------
ARCH      CONNECTED           0          0  240
ARCH      CONNECTED           0          0  196
ARCH      CONNECTED           0          0  1944
ARCH      CONNECTED           0          0  3956
MRP0      WAIT_FOR_LOG        1      30843  N/A
RFS       RECEIVING           1      30838  2620
RFS       RECEIVING           1      30837  2612
RFS       RECEIVING           1      30833  2652
RFS       ATTACHED            1      30841  2628
RFS       ATTACHED            1      30835  2604
RFS       ATTACHED

