Oracle Dataguard Log Transfer


1. Log Transfer Modes

Data Guard supports two log transfer mechanisms (ARCH and LGWR). The first uses the ARCn archiver processes to transfer the logs, as shown below:

Note: figure from "Big Talk Oracle RAC"

The approximate process is as follows:

1) Primary: redo is written to the online redo log. When the online redo log fills up (or a manual log switch is performed), the ARC0 process archives it to the primary's local archive directory; once archiving completes, ARC1 immediately ships the archived log to the standby.

2) Standby: the RFS process receives the log. If standby redo logs are configured on the standby, RFS writes the log into a standby redo log, the standby redo log is then archived to the standby's local archive directory, and finally the archived log is applied. If no standby redo logs are configured, RFS writes the received log straight into the standby's archive directory, and the log is applied from there.
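As a quick sanity check of this flow (a generic query, not specific to this article's environment), the processes involved on the standby side can be observed through the V$MANAGED_STANDBY view:

```sql
-- Run on the standby: shows RFS (receiving), ARCH (archiving),
-- and MRP0 (applying) activity with the redo sequence each is on.
SELECT process, status, thread#, sequence#
  FROM v$managed_standby
 WHERE process IN ('RFS', 'ARCH', 'MRP0');
```

When log transfer is healthy, the RFS row's sequence# tracks the primary's current log sequence.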

The other mechanism uses the LGWR process to transfer the log. It comes in two flavors, ASYNC and SYNC. The following shows ASYNC:

Here is SYNC:

Note: figure from "Big Talk Oracle RAC"

The LGWR-based transfer works roughly as follows:

1) Primary: as soon as new redo is generated, the LGWR process triggers the LNSn process to ship the newly generated redo to the standby. (Note: LGWR cannot transmit the redo itself; each database instance has only one LGWR, so to keep primary workload performance unaffected, the network transfer is delegated to LNSn.)

2) Standby: the RFS process receives the redo and writes it into the standby redo log. If real-time apply is enabled on the standby, the redo is applied immediately; if not, the standby waits until the standby redo log is archived and then applies the archive.

3) The difference between SYNC and ASYNC: with SYNC, LNSn begins transferring while the redo is still in memory (the log buffer); with ASYNC, LNSn starts transferring only after the redo has been written to the online redo log.
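For reference (standard Data Guard commands, not taken from this article's environment), real-time apply mentioned in step 2 is controlled on the standby like this:

```sql
-- Start managed recovery with real-time apply: redo is applied
-- directly from the standby redo logs as it arrives.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;

-- Without USING CURRENT LOGFILE, apply waits for each standby
-- redo log to be archived before applying it.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  DISCONNECT FROM SESSION;
```

(In 12c and later, real-time apply is the default and USING CURRENT LOGFILE is no longer required.)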

From the above, in terms of how close the standby stays to real time: LGWR (SYNC) > LGWR (ASYNC) > ARCH.

2. Log Send Configuration

On the primary, log sending is configured through the LOG_ARCHIVE_DEST_n parameter (note: each has a companion switch parameter, LOG_ARCHIVE_DEST_STATE_n, that specifies whether the destination is active). The main attributes of the parameter are briefly described below.

SERVICE (required): specifies the network connection name (TNS alias) of the standby;

SYNC/ASYNC (default ASYNC): specifies the transport mode of the log (synchronous/asynchronous);

NET_TIMEOUT: in SYNC mode, the number of seconds after which a network operation times out (default 30s); explicitly setting this parameter is strongly recommended when using SYNC;

AFFIRM/NOAFFIRM: AFFIRM means the log transfer is considered successful only after the redo has been written to the standby redo log; NOAFFIRM does not impose this requirement;

DB_UNIQUE_NAME: specifies the DB_UNIQUE_NAME of the standby;

VALID_FOR: has the form (redo_log_type, database_role); the log is sent to this destination only when both conditions are met.

redo_log_type can be: ONLINE_LOGFILE, STANDBY_LOGFILE, ALL_LOGFILES

database_role can be: PRIMARY_ROLE, STANDBY_ROLE, ALL_ROLES

REOPEN: specifies how many seconds to wait before retrying after a connection error;

COMPRESSION: specifies whether to compress the log, which can improve network transfer performance.
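As a side note (generic Oracle syntax, not from this article's environment), the companion state parameter lets a destination be toggled at runtime, for example during standby maintenance:

```sql
-- Temporarily stop shipping logs to destination 2:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER;

-- Resume shipping; the primary then ships any backlog of archives:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;
```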

Here is an example of log_archive_dest_n:

DB_UNIQUE_NAME=boston
LOG_ARCHIVE_CONFIG='DG_CONFIG=(boston,chicago,hartford)'
LOG_ARCHIVE_DEST_2='SERVICE=chicago ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) REOPEN=60 COMPRESSION=ENABLE DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_STATE_2='ENABLE'
LOG_ARCHIVE_DEST_3='SERVICE=hartford SYNC AFFIRM NET_TIMEOUT=30 VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) REOPEN=60 COMPRESSION=ENABLE DB_UNIQUE_NAME=hartford'
LOG_ARCHIVE_DEST_STATE_3='ENABLE'

3. Log Receive Configuration

On the standby, when the RFS process receives redo it writes it into a standby log file. The standby's standby log files can be regarded as mirrors of the primary's online log files: when the primary performs a log switch, the corresponding standby log on the standby also switches, and the switched-out standby log is archived by the standby's ARCn process.

An Oracle standby log file cannot be smaller than the largest online log file of the primary; in general, for ease of management, it is best to make all online logs and standby logs the same size.

Oracle also requires the standby to have at least one more group of standby log files than the primary has groups of online log files.

The size and group count of the primary's online logs can be queried with:

SQL> SELECT GROUP#, BYTES FROM V$LOG;

The size and group count of the standby's standby logs can be queried with:

SQL> SELECT GROUP#, BYTES FROM V$STANDBY_LOG;

Assuming the primary has two groups of online logs with 500M log files each, the standby needs at least three groups of standby logs, each with log files no smaller than 500M:

SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/oracle/dbs/slog1.rdo') SIZE 500M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/oracle/dbs/slog2.rdo') SIZE 500M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/oracle/dbs/slog3.rdo') SIZE 500M;

As mentioned above, the standby's standby logs are also archived; the archive path is set through the LOCATION attribute of the standby's LOG_ARCHIVE_DEST_n parameter, for example:

LOG_ARCHIVE_DEST_1='LOCATION=/disk2/archive VALID_FOR=(STANDBY_LOGFILE,STANDBY_ROLE)'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
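To confirm the destinations took effect (a generic check, assuming parameters along the lines of the example above), the configured archive destinations and any errors can be inspected:

```sql
-- Each active destination, its state, and any error text:
SELECT dest_id, status, destination, error
  FROM v$archive_dest
 WHERE status <> 'INACTIVE';
```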

4. Monitoring Log Transfer Status

Step one: on the primary, get the latest archived sequence number by executing:

SQL> SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;

Step two: on the primary, execute the following to confirm that the most recently archived logs have been transferred to the standby:

SQL> SELECT DESTINATION, STATUS, ARCHIVED_THREAD#, ARCHIVED_SEQ#
     FROM V$ARCHIVE_DEST_STATUS
     WHERE STATUS <> 'DEFERRED' AND STATUS <> 'INACTIVE';

DESTINATION        STATUS ARCHIVED_THREAD# ARCHIVED_SEQ#
------------------ ------ ---------------- -------------
/private1/prmy/lad VALID  1                947
standby1           VALID  1                947

If the status is not VALID, the log transfer has failed.
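A complementary check (generic syntax, run on the standby) is to compare the highest sequence received against the highest sequence actually applied:

```sql
-- Highest sequence received per thread on the standby:
SELECT thread#, MAX(sequence#) AS received
  FROM v$archived_log
 GROUP BY thread#;

-- Highest sequence already applied per thread:
SELECT thread#, MAX(sequence#) AS applied
  FROM v$archived_log
 WHERE applied = 'YES'
 GROUP BY thread#;
```

A growing difference between the two indicates apply lag rather than a transfer problem.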

5. Manually Resolving a Log Transfer Gap

Oracle Data Guard automatically detects and resolves log transfer gaps, but sometimes a gap cannot be resolved automatically and a DBA must intervene manually. The steps for manual resolution are as follows:

First, execute the following on the standby to check whether a gap exists:

SQL> SELECT * FROM V$ARCHIVE_GAP;

THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#
------- ------------- --------------
      1             7             10

HIGH_SEQUENCE# minus LOW_SEQUENCE# gives the size of the current gap; the example above shows the standby is missing the archived logs from sequence 7 up to 10 (10 not included).

Next, on the primary, query the locations of these archived logs:

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1 AND SEQUENCE# BETWEEN 7 AND 10;

NAME
--------------------------------------------------------------------------------
/primary/thread1_dest/arcr_1_7.arc
/primary/thread1_dest/arcr_1_8.arc
/primary/thread1_dest/arcr_1_9.arc

Copy these archived logs from the primary to the standby, then register them on the standby:

SQL> ALTER DATABASE REGISTER LOGFILE '/physical_standby1/thread1_dest/arcr_1_7.arc';
SQL> ALTER DATABASE REGISTER LOGFILE '/physical_standby1/thread1_dest/arcr_1_8.arc';
SQL> ALTER DATABASE REGISTER LOGFILE '/physical_standby1/thread1_dest/arcr_1_9.arc';
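After registering the logs (a standard follow-up, assuming managed recovery was stopped for the copy), redo apply can be restarted and the gap re-checked:

```sql
-- Resume redo apply on the standby:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  DISCONNECT FROM SESSION;

-- Re-run the gap query; no rows returned means the gap is resolved:
SELECT * FROM V$ARCHIVE_GAP;
```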

