Data Guard log transport services

Source: Internet
Author: User

1. Log transport can use either LGWR or ARCH; the default is ARCH. LGWR transport can run in either SYNC or ASYNC mode.
SYNC: synchronous transport. A transaction cannot commit until the redo it generated has been successfully transmitted to the standby destination. SYNC places no limit on the distance between the primary and the standby, but the round trip to the destination adds to transaction commit time.
SYNC is typically used with maximum protection and maximum availability modes.
ASYNC: asynchronous transport. A committing transaction does not wait for its redo to be successfully shipped to the standby. ASYNC is typically used with maximum performance mode.
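As a sketch, the transport mode is chosen per destination in log_archive_dest_n (the service name daveprimary matches the example used later in this article):
log_archive_dest_2='service=daveprimary lgwr sync'   -- commit waits for the standby
log_archive_dest_2='service=daveprimary lgwr async'  -- commit does not wait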




2. Redo transport security
Redo transport uses Oracle Net sessions, which can be authenticated either with SSL or with a remote login password file.




3. Parameter file
log_archive_dest_n: initialization parameter that specifies a redo transport destination (such as the standby database).
log_archive_dest_state_n: enables or disables the corresponding destination. It takes three values:
enable: the default; redo can be transmitted to the destination specified by log_archive_dest_n.
defer: redo is not transmitted to the destination specified by log_archive_dest_n.
alternate: the destination is used only after transmission to another destination fails.
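A destination can be switched between these states dynamically, for example (destination number 2 here is illustrative):
SQL> alter system set log_archive_dest_state_2=defer;   -- stop shipping to this destination
SQL> alter system set log_archive_dest_state_2=enable;  -- resume shipping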


1. reopen: retry archiving after a specified interval
The reopen=seconds attribute (default 300 seconds) makes the database retry a failed archive destination after the specified interval. If set to 0, a destination that fails is not retried until the next redo log is archived.
For example, to set reopen to 100 seconds:
log_archive_dest_2='service=daveprimary lgwr async reopen=100'
2. alternate: specify a fallback archive destination
The alternate attribute defines a fallback archive destination: if the main archive destination becomes unavailable for any reason, redo is temporarily written to the destination named by alternate.
For example:
log_archive_dest_1='location=/disk1 alternate=log_archive_dest_2'
log_archive_dest_state_1=enable
log_archive_dest_2='location=/disk2'
log_archive_dest_state_2=alternate
These parameters set the archive location to /disk1; when archiving to /disk1 fails, the database automatically tries to archive to /disk2 instead.
Functionally, reopen and alternate overlap, but note that reopen takes priority over alternate: if reopen is greater than 0, the LGWR (or ARCn) process first retries the main archive destination until the maximum number of retries is reached, and only if it still fails is redo written to the destination specified by alternate.
 
3. max_failure: limit the number of retry attempts
While reopen specifies the interval between retries after a failure, max_failure limits how many retries are attempted.
For example, to have log_archive_dest_1 archive locally and, on error, retry every 100 seconds up to 3 times in total:
log_archive_dest_1='location=E:/ora10g/oradata/jsspdg/ reopen=100 max_failure=3'


sync: transmit redo synchronously.
async: transmit redo asynchronously.
net_timeout: the number of seconds the LGWR process waits for a response from the standby before timing out.
affirm: redo transmission is considered successful only after the redo has been written to disk (the standby redo log) at the standby.
noaffirm: redo transmission is considered successful as soon as the standby receives it, without waiting for the disk write.
db_unique_name: the unique name of the database at the destination.
valid_for: specifies when the destination is used, as a (redo log type, database role) pair.
compression: compresses redo before transmitting it to the destination.
reopen: the minimum number of seconds before reconnecting to a destination after a previous error.
log_archive_dest_n: specifies the redo transport destination.
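As a sketch, several of these attributes are typically combined in a single destination setting (the service name standby1 is an assumption):
log_archive_dest_2='service=standby1 lgwr sync affirm reopen=60 valid_for=(online_logfiles,primary_role) db_unique_name=standby1'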


*. standby_file_management='AUTO': determines whether datafiles added to or dropped from the primary database are automatically added to or dropped from the standby.
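For example, to enable automatic standby file management:
SQL> alter system set standby_file_management='AUTO' scope=both;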






4. Standby redo log
1) Both synchronous and asynchronous transport modes require standby redo logs at the destination. Standby redo logs receive redo transmitted from another database; their structure and management are the same as online redo logs.
Redo transmitted from another database is written to the standby redo log by the RFS background process.


2) Each standby redo log file must be at least as large as the largest online redo log file of the primary database; it is recommended to make them the same size.
3) The standby database must have at least one more standby redo log group (per thread) than the primary has online redo log groups.
Primary:
SQL> select group#, bytes from v$log;
Standby:
SQL> select group#, bytes from v$standby_log;
4) Create standby redo logs
For a single-instance database, create at least one more standby redo log group on the standby than the primary has online redo log groups, with the same log size as the primary:
SQL> alter database add standby logfile ('/oracle/dbs/slog1.rdo') size 500m;
For a RAC cluster:
SQL> alter database add standby logfile thread 1 size 500m;
SQL> alter database add standby logfile thread 2 size 500m;

5) Configure standby redo log archiving
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
To archive to the fast recovery area:
log_archive_dest_2='location=use_db_recovery_file_dest valid_for=(standby_logfiles,standby_role)'
log_archive_dest_state_2=enable
Or to archive to a local directory:
log_archive_dest_2='location=/disk2/archive valid_for=(standby_logfiles,standby_role)'
log_archive_dest_state_2=enable




5. Configuring a cascaded destination
1) Choose the standby database that will cascade redo.
2) On the cascading standby, configure network services in tnsnames.ora for both the primary database and the cascaded standby, then configure fal_server.
3) Configure log_archive_dest_n with service set to the network service name of the cascaded standby database.






Primary Database


db_unique_name=boston
fal_server=boston2
log_archive_config='dg_config=(boston,boston2,denver)'
log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=boston'
log_archive_dest_2='service=boston2 sync valid_for=(online_logfiles,primary_role) db_unique_name=boston2'

Cascading physical standby database


db_unique_name=boston2
fal_server=boston
log_archive_config='dg_config=(boston,boston2,denver)'
log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=boston2'
log_archive_dest_2='service=denver valid_for=(standby_logfiles,standby_role) db_unique_name=denver'
 
Cascaded physical standby database
db_unique_name=denver
fal_server=boston2
log_archive_config='dg_config=(boston,boston2,denver)'
log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=denver'




6. Monitoring redo transport:
1. View the most recently archived log files:
SQL> select max(sequence#), thread# from v$archived_log group by thread#;
2. Check the most recently archived redo at each destination on the primary:
SQL> select destination, status, archived_thread#, archived_seq# from v$archive_dest_status where status <> 'DEFERRED' and status <> 'INACTIVE';
3. Find logs archived on the primary but not yet archived to the standby:
SQL> select local.thread#, local.sequence# from
       (select thread#, sequence# from v$archived_log where dest_id=1) local
     where local.sequence# not in
       (select sequence# from v$archived_log where dest_id=2 and thread# = local.thread#);
 
4. Monitor redo transport response times on the primary:
SQL> select frequency, duration from v$redo_dest_resp_histogram where dest_id=2 and frequency>1;
frequency is the number of times a given response time was observed; duration is that response time.

Maximum observed response time:
SQL> select max(duration) from v$redo_dest_resp_histogram where dest_id=2 and frequency>1;

Minimum observed response time:
SQL> select min(duration) from v$redo_dest_resp_histogram where dest_id=2 and frequency>1;


The highest observed response time for a destination cannot exceed the net_timeout value specified for that destination, because a synchronous redo transport session is terminated if the destination does not respond to a redo transport message within net_timeout seconds. In other words, if the response time exceeds net_timeout, log synchronization to that destination is interrupted.
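For example, to terminate a synchronous session when the standby does not respond within 30 seconds (the service name standby1 is an assumption):
SQL> alter system set log_archive_dest_2='service=standby1 lgwr sync net_timeout=30';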








7. Resolving gaps:


Check for a gap on the standby database:
SQL> select * from v$archive_gap;

THREAD#  LOW_SEQUENCE#  HIGH_SEQUENCE#
-------  -------------  --------------
      1              7              10



For a physical standby database:
Run this on the primary to identify the log files the standby is missing:
SQL> select name from v$archived_log where thread#=1 and dest_id=1 and sequence# between 7 and 10;

NAME
--------------------------------------------------------------------------------
/primary/thread1_dest/arcr_1_7.arc
/primary/thread1_dest/arcr_1_8.arc
/primary/thread1_dest/arcr_1_9.arc




Copy the logs from the primary to the standby, then register them:
SQL> alter database register logfile '/physical_standby1/thread1_dest/arcr_1_7.arc';
SQL> alter database register logfile '/physical_standby1/thread1_dest/arcr_1_8.arc';
SQL> alter database register logfile '/physical_standby1/thread1_dest/arcr_1_9.arc';


For a logical standby database, query dba_logstdby_log:
SQL> column file_name format a55
SQL> select thread#, sequence#, file_name from dba_logstdby_log l
     where next_change# not in
       (select first_change# from dba_logstdby_log where l.thread# = thread#)
     order by thread#, sequence#;

THREAD#  SEQUENCE#  FILE_NAME
-------  ---------  ----------------------------------------
      1          6  /disk1/oracle/dbs/log-1292880008_6.arc
      1         10  /disk1/oracle/dbs/log-1292880008_10.arc
Copy the missing logs from the primary to the standby, then register them:
SQL> alter database register logical logfile '/disk1/oracle/dbs/log-1292880008_7.arc';
SQL> alter database register logical logfile '/disk1/oracle/dbs/log-1292880008_8.arc';
SQL> alter database register logical logfile '/disk1/oracle/dbs/log-1292880008_9.arc';




8. Wait events caused by redo transport
These wait events appear in v$system_event:


Wait event: description
LNS wait on ATTACH: total time spent waiting for redo transport sessions to be established to all ASYNC and SYNC redo transport destinations
LNS wait on SENDREQ: total time spent waiting for redo data to be written to all ASYNC and SYNC redo transport destinations
LNS wait on DETACH: total time spent waiting for redo transport connections to be terminated to all ASYNC and SYNC redo transport destinations
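These waits can be read from v$system_event, for example:
SQL> select event, total_waits, time_waited from v$system_event where event like 'LNS wait%';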





View archived log status:
SQL> select name, dest_id, sequence#, resetlogs_id, registrar, applied, deleted, status, fal, standby_dest, archived from v$archived_log;

NAME                          DEST_ID  SEQUENCE#  RESETLOGS_ID  REGISTR  APPLIED  DEL  S  FAL  STA  ARC
----------------------------  -------  ---------  ------------  -------  -------  ---  -  ---  ---  ---
                                    1        403     815224926  SRMN     YES      YES  D  NO        YES
                                    1        404     815224926  SRMN     YES      YES  D  NO        YES
/archive/1_405_815224926.arc        2        405     815224926  RFS      YES      NO   A  YES  NO   YES
/archive/1_406_815224926.arc        2        406     815224926  RFS      YES      NO   A  NO   NO   YES
/archive/1_407_815224926.arc        2        407     815224926  RFS      YES      NO   A  NO   NO   YES
/archive/1_408_815224926.arc        1        408     815224926  RFS      YES      NO   A  NO   NO   YES
/archive/1_409_815224926.arc        1        409     815224926  RFS      YES      NO   A  NO   NO   YES
/archive/1_410_815224926.arc        1        410     815224926  RFS      YES      NO   A  NO   NO   YES
/archive/1_411_815224926.arc        1        411     815224926  RFS      YES      NO   A  NO   NO   YES
/archive/1_412_815224926.arc        1        412     815224926  RFS      YES      NO   A  NO   NO   YES
/archive/1_413_815224926.arc        1        413     815224926  RFS      YES      NO   A  NO   NO   YES
/archive/1_414_815224926.arc        1        414     815224926  RFS      YES      NO   A  NO   NO   YES
/archive/1_415_815224926.arc        1        415     815224926  RFS      YES      NO   A  NO   NO   YES
/archive/1_416_815224926.arc        0        416     815224926  RFS      YES      NO   A  YES  NO   YES
/archive/1_417_815224926.arc        0        417     815224926  RFS      YES      NO   A  YES  NO   YES
/archive/1_418_815224926.arc        1        418     815224926  RFS      YES      NO   A  NO   NO   YES
/archive/1_419_815224926.arc        1        419     815224926  RFS      YES      NO   A  NO   NO   YES

