[Oracle] Data Guard Series (2): Log Transmission
From a macro perspective, Oracle Data Guard provides two services: 1) log transport: the primary database ships the redo it generates to the standby database; 2) log apply: the standby database applies the redo received from the primary.

1. Log transport modes

There are two log transport modes: ARC and LGWR.

The first uses the ARC process to ship logs. The overall flow is roughly as follows: 1) on the primary: redo is first written to the online redo logs; when an online redo log group fills up (triggering a log switch), the ARC0 process archives it to the primary's local archive directory, and once archiving completes, ARC1 immediately ships the archived log to the standby; 2) on the standby: the RFS process receives the log. If the standby has standby redo logs configured, the received redo is written into a standby redo log, the standby redo log is then archived to the standby's local archive directory, and the archived log is applied. If standby redo logs are not configured, RFS writes the received log directly into the standby's archive directory and the log is applied from there.

The other mode uses the LGWR process to ship logs, and it differs from the first in important ways.
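The processes named above (ARCn, RFS, and the apply process MRP0) can be observed on the standby through the V$MANAGED_STANDBY view; a minimal sketch (column values will of course vary by environment):

```sql
-- Run on the standby: lists the Data Guard-related processes
-- (ARCH, RFS, MRP0, ...) and the log sequence each is working on.
SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#
  FROM V$MANAGED_STANDBY;
```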
The most obvious difference is that it does not wait for the primary to complete a log switch before shipping redo. The flow is roughly as follows: 1) on the primary: as soon as new redo is generated, the LGWR process triggers the LNSn process to ship it to the standby (note: LGWR does not ship the redo itself; the whole instance has only one LGWR, and to keep primary workload performance unaffected the shipping is delegated to LNSn); 2) on the standby: after the RFS process receives the redo, it is written into a standby redo log. If real-time apply is enabled on the standby, the redo is applied immediately; otherwise it is applied only after the standby redo log has been archived.

2. Log transport configuration on the primary

Log shipping on the primary is configured with the LOG_ARCHIVE_DEST_n parameter (note: there is a corresponding switch parameter, LOG_ARCHIVE_DEST_STATE_n, which specifies whether the destination is active). The main attributes of the parameter are as follows:

SERVICE (required): the network connection name of the standby database;
SYNC/ASYNC (default ASYNC): the transport mode (synchronous/asynchronous);
NET_TIMEOUT: the number of seconds before a network operation times out in SYNC mode (default 30 s). When using SYNC mode, it is strongly recommended to set this parameter explicitly;
AFFIRM/NOAFFIRM: AFFIRM means a transfer counts as successful only once the redo has been written into the standby redo log; NOAFFIRM has no such requirement;
DB_UNIQUE_NAME: the DB_UNIQUE_NAME of the standby database;
VALID_FOR: of the form (redo_log_type, database_role); logs are sent to the destination only when both conditions are met.
redo_log_type can be ONLINE_LOGFILE, STANDBY_LOGFILE, or ALL_LOGFILES; database_role can be PRIMARY_ROLE, STANDBY_ROLE, or ALL_ROLES;
REOPEN: the number of seconds to wait before retrying after a connection error;
COMPRESSION: whether to compress logs, which improves network transfer performance.

The following is an example configuration (BOSTON is the primary here):

DB_UNIQUE_NAME=BOSTON
LOG_ARCHIVE_CONFIG='DG_CONFIG=(BOSTON,CHICAGO,HARTFORD)'
LOG_ARCHIVE_DEST_2='SERVICE=CHICAGO ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) REOPEN=60 COMPRESSION=ENABLE DB_UNIQUE_NAME=CHICAGO'
LOG_ARCHIVE_DEST_STATE_2='ENABLE'
LOG_ARCHIVE_DEST_3='SERVICE=HARTFORD SYNC AFFIRM NET_TIMEOUT=30 VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) REOPEN=60 COMPRESSION=ENABLE DB_UNIQUE_NAME=HARTFORD'
LOG_ARCHIVE_DEST_STATE_3='ENABLE'

3. Log reception on the standby

Log reception is configured on the standby. When the RFS process receives redo, it writes it into a standby redo log file. The standby's standby redo logs can be seen as a mirror of the primary's online redo logs: when the primary performs a log switch, the standby's current standby redo log is switched accordingly, and the switched-out standby redo log is archived by the standby's ARCH process.

Oracle requires that a standby redo log file be no smaller than the primary's online redo log files. For ease of management, it is usually best to make all online logs and standby logs the same size. Oracle also stipulates that the standby must have at least one more standby redo log group than the primary has online redo log groups.
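Whether each destination's VALID_FOR clause currently evaluates to true can be checked from the V$ARCHIVE_DEST view; a small sketch, assuming a configuration like the example above:

```sql
-- VALID_NOW shows whether the destination is usable given the database's
-- current role and log type; VALID_TYPE/VALID_ROLE echo the VALID_FOR setting.
SELECT DEST_ID, DESTINATION, VALID_TYPE, VALID_ROLE, VALID_NOW
  FROM V$ARCHIVE_DEST
 WHERE DESTINATION IS NOT NULL;
```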
The size and number of the primary's online logs can be queried with:

SQL> SELECT GROUP#, BYTES FROM V$LOG;

The size and number of the standby's standby logs can be queried with:

SQL> SELECT GROUP#, BYTES FROM V$STANDBY_LOG;

Assume the primary has two groups of online logs, each 500 MB. The standby then needs at least three groups of standby logs, each no smaller than 500 MB:

SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/oracle/dbs/slog1.rdo') SIZE 500M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/oracle/dbs/slog2.rdo') SIZE 500M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/oracle/dbs/slog3.rdo') SIZE 500M;

As mentioned above, the standby's standby redo logs are also archived; the archive path is set by the LOCATION attribute of the standby's LOG_ARCHIVE_DEST_n parameter:

LOG_ARCHIVE_DEST_1='LOCATION=/disk2/archive VALID_FOR=(STANDBY_LOGFILE,STANDBY_ROLE)'
LOG_ARCHIVE_DEST_STATE_1=ENABLE

4. Monitoring log transport status

Step 1: run the following on the primary to get the latest archive sequence number:

SQL> SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;

Step 2: run the following on the primary to check whether the latest archived logs have been shipped to the standby:

SQL> SELECT DESTINATION, STATUS, ARCHIVED_THREAD#, ARCHIVED_SEQ#
  2  FROM V$ARCHIVE_DEST_STATUS
  3  WHERE STATUS <> 'DEFERRED' AND STATUS <> 'INACTIVE';

DESTINATION         STATUS ARCHIVED_THREAD# ARCHIVED_SEQ#
------------------- ------ ---------------- -------------
/private1/prmy/glad VALID                 1           947
standby1            VALID                 1           947

If STATUS is not VALID, log transport has failed.

5. Manually resolving gaps in log transport

Oracle DG automatically detects gaps in log transport and normally resolves them on its own.
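As a complement to the two primary-side checks above, progress on the standby side can be verified through V$ARCHIVED_LOG, whose APPLIED column shows which received logs have already been applied; a minimal sketch:

```sql
-- Run on the standby: each received archived log and whether it has been applied.
SELECT THREAD#, SEQUENCE#, APPLIED
  FROM V$ARCHIVED_LOG
 ORDER BY THREAD#, SEQUENCE#;
```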
However, sometimes a gap cannot be resolved automatically and DBA intervention is required. The manual procedure is as follows.

First, run the following on the standby to check whether a gap exists:

SQL> SELECT * FROM V$ARCHIVE_GAP;

   THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#
---------- ------------- --------------
         1             7             10

HIGH_SEQUENCE# minus LOW_SEQUENCE# gives the size of the current gap. The example above shows that the standby is missing the archived logs from sequence 7 up to 10 (excluding 10).

Next, query the primary for the locations of these archived logs:

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1 AND SEQUENCE# BETWEEN 7 AND 10;

NAME
----------------------------------------
/primary/threadincludest/arcr_00007.arc
/primary/threadincludest/arcr_00008.arc
/primary/threadincludest/arcr_00009.arc

Copy these archived logs from the primary to the standby, then register them on the standby:

SQL> ALTER DATABASE REGISTER LOGFILE '/physical_standby1/threadincludest/arcr_00007.arc';
SQL> ALTER DATABASE REGISTER LOGFILE '/physical_standby1/threadincludest/arcr_00008.arc';
SQL> ALTER DATABASE REGISTER LOGFILE '/physical_standby1/threadincludest/arcr_00009.arc';
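Once the missing logs are registered, redo apply can be restarted (if it is not already running) and the gap query re-run to confirm the gap is gone; a sketch:

```sql
-- Restart managed recovery on the standby so the registered logs are applied.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

-- Re-check: no rows returned means the gap has been resolved.
SELECT * FROM V$ARCHIVE_GAP;
```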