RAC addresses instance-level single points of failure (SPOF) and provides load balancing. However, a RAC cluster holds only one copy of the data: the data itself is not redundant, so the storage remains a single point of failure.
Data Guard provides data protection through redundant data, keeping the redundant copy synchronized with the primary database via redo log shipping.
Such synchronization can be real-time or delayed, synchronous or asynchronous.
(I) Log shipping
Redo logs of the primary database are shipped to one or more archive destinations by the LGWR or ARCn process.
Different methods can be used for different destinations, but only one method can be used for any given destination.
Archive destinations are specified by the log_archive_dest_n parameters.
A destination can be a local directory or a remote service name.
The most common way to enable archiving is to set a local archive path.
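For example, a local archive path might be set as follows (the directory /u01/arch is only an illustration):
log_archive_dest_1 = 'location=/u01/arch/'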
① ARCn process shipping (delayed)
By default, the primary database uses the ARCn process to ship logs.
(1) On the primary database, LGWR writes redo to the online redo log.
(2) When a group of online redo logs fills up, a log switch occurs, triggering local archiving.
(3) The ARCn process sends the archived log to the standby database's RFS process over the network.
(4) The RFS process writes the received redo to an archived log.
(5) The standby database's MRP or LSP process applies the log to synchronize the data.
The biggest problem with ARCn shipping is that the primary sends logs to the standby only when an archive occurs,
so redo entries still in the current online redo log may be lost if the primary fails.
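A remote destination that uses ARCn shipping might look like this (the service name stdby is an assumed tnsnames entry):
log_archive_dest_2 = 'service=stdby arch'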
② LGWR SYNC method (recommended by Oracle)
What is SYNC?
A:
LGWR must wait until both the online redo log write and the network transfer via the LNSn process succeed
before transactions on the primary database can commit. This is what SYNC means.
(1) LGWR writes redo entries to the online redo log and hands them to the local LNSn process,
which then ships the redo to the remote archive destination over the network.
Each remote destination corresponds to one LNS process, and multiple LNS processes can work in parallel.
(2) The RFS process of the standby database writes the received logs to the standby redo log.
(3) A log switch on the primary also triggers the standby to archive its standby redo logs,
which in turn triggers the standby's MRP or LSP process to apply the logs.
Because the primary's redo is transmitted in real time, the standby database can recover in two ways:
* Real-time apply: redo is read directly from the standby redo logs
* Archive apply: redo is read from the archived logs
When using the LGWR SYNC method, we recommend including the net_timeout attribute:
log_archive_dest_2 = 'service=stdby lgwr sync net_timeout=30'
The biggest problem with LGWR SYNC is its dependence on network conditions.
③ LGWR ASYNC method
(1) LGWR only needs to write the online redo log; it does not wait for the LNSn network transfer to succeed.
(2) LNSn ships redo to the standby database asynchronously; multiple LNSn processes can send concurrently.
(3) A log switch on the primary also triggers the standby to archive its standby redo logs,
which in turn triggers the MRP or LSP process to apply them.
ASYNC supports only archive recovery, so there is no need to specify net_timeout.
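An ASYNC destination might be configured like this (again assuming the service name stdby):
log_archive_dest_2 = 'service=stdby lgwr async'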
(II) Log receiving
Redo received by the RFS process is written to either the standby redo log or an archived log,
depending on the primary's log transmission mode and the standby's configuration.
When writing to standby redo logs, a log switch on the primary also archives the standby redo log.
Standby database configuration notes:
① If standby_archive_dest is configured, use the directory specified by that parameter
② If a log_archive_dest_n explicitly defines valid_for=(standby_logfile,*), use that directory
③ If the database's compatible parameter is >= 10.0, select a value from log_archive_dest_n
④ If neither standby_archive_dest nor log_archive_dest_n is specified, use the
default value of standby_archive_dest: $ORACLE_HOME/dbs/arch
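As a sketch of rule ②, a standby-side destination for standby redo logs could look like this (the directory is illustrative):
log_archive_dest_1 = 'location=/u01/stdby_arch/ valid_for=(standby_logfile,all_roles)'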
(III) Log apply
① A physical standby uses media recovery technology,
i.e. redo apply
(1) Real-time apply
This mode requires standby redo logs.
Each write to a standby redo log triggers recovery.
The advantage is that it reduces the time needed to change database roles,
because most of the transition time is spent recovering the remaining log content.
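Standby redo logs can be added on the standby, for example (group number, path, and size are illustrative; the size should match the online redo logs):
alter database add standby logfile group 4 ('/u01/oradata/stdby_redo04.log') size 50m;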
(2) Archive apply
In this mode, a log switch on the primary triggers an archive on the standby,
and recovery is triggered after archiving. This is the default recovery mode.
② A logical standby uses LogMiner technology,
i.e. SQL apply
To enable real-time apply on a physical standby:
alter database recover managed standby database using current logfile;
To enable real-time apply on a logical standby:
alter database start logical standby apply immediate;
To check whether real-time apply is in use:
select recovery_mode from v$archive_dest_status;
(IV) Automatic gap detection
When some primary logs fail to reach the standby database, an archive gap occurs;
the missing logs are called gap logs.
DG can automatically detect and resolve gaps without DBA intervention.
This requires configuring fal_client and fal_server (FAL: fetch archive log).
fal_server specifies which server to request the logs from; it is not limited to the primary and may be another standby.
In DG, therefore, primary and standby are only role concepts, not fixed to a particular database.
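A minimal FAL configuration might look like this, assuming stdby and prmy are tnsnames entries for the local standby and the log source:
fal_client = 'stdby'
fal_server = 'prmy'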
Of course, the DBA can also intervene manually:
① Confirm which logs the standby database is missing
② Copy the missing logs from the primary to the standby database
③ Manually register the logs on the standby database:
alter database register logfile 'logfilename';
(V) Three data protection modes
① Maximum protection:
Every transaction's redo must be written to at least one standby database before it can commit.
If a fault makes the standby database unavailable, the primary database shuts down.
② Maximum performance:
Transactions can commit at any time. The primary's redo still needs to be shipped to at least one standby database,
but the writes can be asynchronous.
This is the default mode.
③ Maximum availability:
Like maximum protection, redo is written synchronously to at least one standby database.
However, if a fault prevents writing the standby redo log synchronously, the primary database does not shut down;
instead, it automatically falls back to maximum performance mode.
It is a compromise: the mode normally behaves like maximum protection, and once a fault occurs it automatically switches to maximum performance.
This is what many people choose. If the bandwidth is sufficient (and today's bandwidth usually is), we strongly recommend this mode.
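The protection mode is changed on the primary database, for example (maximize protection additionally requires the database to be mounted but not open):
alter database set standby database to maximize availability;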
(VI) Role transitions
A DG environment has only two roles: primary and standby.
And there are only two types of role transition:
Switchover: a planned role exchange with no data loss
It has two phases:
① The primary database is converted to a standby
② A standby database is converted to the primary
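The two phases can be sketched as follows (run on the primary and the standby respectively):
On the primary: alter database commit to switchover to physical standby;
On the standby: alter database commit to switchover to primary;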
Failover: data may be lost, and after the transition the original primary database is no longer part of the DG environment.
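A physical standby failover can be sketched as follows (run on the standby; the force option is one common variant):
alter database recover managed standby database finish force;
alter database commit to switchover to primary;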