Oracle 11g Data Guard Parameters in Detail


Note: This article is translated from Oracle Data Guard 11g Handbook, pages 78–88.

For Data Guard (hereafter DG), we are only concerned with three types of parameters:

1. Parameters that are independent of the database role
2. Parameters used when the database role is primary
3. Parameters used when the database role is standby

Although DG has a lot of configuration parameters, in practice we use only a few of them, and because many DG features have been folded into the database code, many of the parameters have been deprecated in recent releases. Note that to make role transitions easy, the TNS names, the listener, and the standby redo log (SRL) files need to be configured in all of the databases. Now let's take a look at these parameters.

First, role-independent parameters

db_unique_name: This parameter defines the unique name of a database. Because the db_name parameter must be the same as the primary's for a physical standby but different for a logical standby, this parameter was introduced in 10g to distinguish each database in a DG configuration by role. It needs to be set in every database and requires a restart to take effect. If you do not set it, it defaults to db_name, which means you do not have to shut down the production database to set up the standby; you can configure it later.

For example:

db_unique_name='Matrix'
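
Since the parameter is static, one way to stage the change ahead of time (a minimal sketch, assuming an spfile-managed instance and the example name above) is:

ALTER SYSTEM SET db_unique_name='Matrix' SCOPE=SPFILE;
-- the new value is picked up at the next restart
SHUTDOWN IMMEDIATE
STARTUP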

log_archive_config: This parameter defines the list of db_unique_name values that are valid in the DG configuration. Used together with the db_unique_name attribute of the destination parameters (discussed later), DG uses it to perform security checks on the connections between the two databases. The parameter is dynamic as long as you do not specify the SEND and RECEIVE attributes; those are carried over from the old remote_archive_enable parameter and are no longer needed, so do not use them.

In practice, all you need to do is add the unique names of the other databases to the configuration; the unique name of the current database is added automatically behind the scenes. It is still worth adding the current database's unique name explicitly, to avoid confusion and to keep the parameter identical in all of the databases. The order of the names does not matter. The parameter is mandatory in a RAC environment and should always be used.

For example:

log_archive_config='DG_CONFIG=(Matrix,Matrix_DR0)'
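
Because the parameter is dynamic (as noted above), it can also be changed on a running instance; a minimal sketch using the same example names:

-- dynamic: no restart required
ALTER SYSTEM SET log_archive_config='DG_CONFIG=(Matrix,Matrix_DR0)' SCOPE=BOTH;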

control_files: Everyone knows what this parameter is for (the location of the database's control files); just remember that on a standby database it points to the standby control file. That control file is created either automatically or manually, depending on how you create the standby. (Note: automatic creation usually happens when the standby is built with RMAN; if you build the standby manually, the standby control file has to be copied over from the primary.)

For example:

control_files='/oracle/oradata/matrix/control01.ctl'

log_archive_max_processes: We mention this parameter because its default value is still 2, which is too low. On the primary, the archive processes archive the online redo log (ORL) files as they fill and also ship archive logs to the standby as a redo stream to resolve gaps; on the standby, the archive processes archive the standby redo log (SRL) files and forward them to any cascaded standby databases. (Note: a cascaded standby is a next-level standby, that is, a standby of the standby. So whatever the database role, the archive processes do the same kind of work: 1. archive log files; 2. forward log files to a standby.)

On the primary, one archive process is restricted to servicing only the ORL files and is not allowed to talk to a standby; this special ARCH process is referred to as the "dedicated ARCH process," which leaves the remaining archive processes to do the rest of the work. While an archive process is sending an archive log to a standby, it cannot help archive the ORL files. Although the archiver's prime directive is "archive the online log files first, then service the standby," in the worst case you could still end up with only one process doing the archiving. If you do not have enough archive processes, then with a slow network and a large gap between the primary and the standby you may be left with only a single process archiving the ORL files. And here is the nasty part: if all of your ORL files fill up, production stalls until one of them is archived. The multi-threaded gap resolution feature (MAX_CONNECTIONS), introduced in 10g, lets DG use more than one archive process to send a single log file to a standby, which consumes even more of the archive processes. So this parameter should be set to at least 4; its maximum value is 30.

For example:

log_archive_max_processes=4

The dedicated ARCH process on the standby

Note that a standby database also has a "dedicated ARCH process," which simply means there is one less process available to archive the SRL files, because on a physical standby the dedicated ARCH process does not archive SRL files.

When using multiple archive processes, bear in mind that while more archivers reduce the chance of stalling production, a large number of them also lengthens a switchover, because they all have to be woken up and then shut back down. You can avoid this by lowering the parameter before performing the switchover. In addition, 11g introduces a new streaming capability: if the gap between the primary and a standby is very large, too many archive processes transmitting at once can saturate the entire network bandwidth.
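
For instance, since the parameter is dynamic, one way to avoid the switchover penalty just described (a sketch; the values are only illustrative) is to lower it right before the switchover and restore it afterwards:

ALTER SYSTEM SET log_archive_max_processes=2;
-- ... perform the switchover ...
ALTER SYSTEM SET log_archive_max_processes=4;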

db_create_file_dest: Although this is not a DG-specific parameter, it is worth mentioning because if you use ASM on the standby, this parameter needs to be defined.

For example:

db_create_file_dest=+DATA

Second, primary role parameters

log_archive_dest_n: This is the main redo transport parameter in DG. It is normally a primary role parameter, with some exceptions, such as cascaded standby destinations. It can also be used to specify where the archive log files generated from the online redo log (ORL) or standby redo log (SRL) files are written. However, since the flash recovery area was introduced in 10gR1, locally archived log files go to the flash recovery area by default, so you no longer need to define a local archiving destination; we will discuss local archiving and the LOCATION attribute later. Since you should be using the flash recovery area anyway, you should not need to set a local log_archive_dest_n parameter at all.

This parameter has 17 attributes, all of which control redo transport from the primary to the standby; in fact, you only need to set 7 of them to get redo transport working properly. We will walk through those 7 and use a few examples to illustrate how they are used. Then we will look at the remaining 10 attributes, when you might use them and why, and note that we recommend not setting 6 of them at all.

The following are the required properties:

SERVICE: Specifies the TNS name (descriptor) of the standby you have created; this is where your early network tuning comes into play. (Note: this is the network-related attribute you will meet first when setting up DG.)

SYNC: Specifies that redo is sent synchronously, meaning the client transaction's commit does not complete until the LGWR process receives confirmation from the LNS process that the redo has been received by the standby. Maximum availability and maximum protection modes require SYNC to at least one standby.

ASYNC: The default. If no transport type is specified, redo is sent asynchronously. This is the transport method for maximum performance mode.

NET_TIMEOUT: Specifies how long the LGWR process waits for a response from the LNS process; if no response is received within that time, the standby destination is marked as failed. The default is 30 seconds, but 10 to 15 seconds may be more appropriate depending on how reliable your network is. Do not set it too low (below about 10 seconds), or you may not be able to re-establish the connection after the standby comes back, because reconnecting takes a few seconds to do the following:

1. Stop the old LNS process
2. Start a new LNS process
3. Establish a connection with the standby library
4. Detect and stop the old RFS process
5. Start a new RFS process
6. Select and open a new SRL file
7. Initialize the header of the SRL file
8. Respond to the LNS process that everything is ready

Only after all of these steps complete does the LNS process tell LGWR that the standby connection is up. If the whole thing takes longer than the net_timeout value, LGWR gives up on the standby again, and the cycle repeats at every log switch.

REOPEN: This attribute controls how long the primary waits before trying to reconnect to a failed standby; the default is 300 seconds (5 minutes). This is usually the reason a standby appears not to reconnect during testing. A typical test goes quickly: you shutdown abort the standby, watch the primary's alert log to confirm it noticed the disconnect, restart the standby, and then switch logs on the primary to see whether it reconnects. All of that happens within 5 minutes, so if you are quick, DG will not reconnect at the first (or even the next few) log switches. This attribute exists to stop the primary from hammering a failed standby at every log switch, but you might consider setting it to 30 or even 15 seconds so that DG reconnects as soon as possible.
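
If you want to see how the primary currently views a destination after an outage, V$ARCHIVE_DEST shows its state; a minimal sketch (destination 2 is simply the example destination used later in this chapter):

-- run on the primary; dest_id 2 is just the example destination from this chapter
SELECT dest_id, status, error
  FROM v$archive_dest
 WHERE dest_id = 2;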

DB_UNIQUE_NAME: Using this attribute in a log_archive_dest_n parameter requires that you also set the log_archive_config parameter; otherwise DG will refuse to connect to the destination. The value you give here is the unique name of the database at the other end of the connection, that is, the standby the SERVICE attribute points to.

You must also add that unique name to the log_archive_config parameter in the databases at both ends. When the primary initiates a connection to the standby, it sends its own unique name and asks the standby for the standby's unique name in return. The standby checks its log_archive_config parameter to make sure the primary's unique name is actually there; if it is not, the connection request is rejected. If it is, the standby sends its own unique name back to the primary's LNS process, and if that returned value does not match the value of this attribute on the primary, the connection is terminated.

As with the Log_archive_config parameter, this property must be configured in a RAC environment.

VALID_FOR: This is the last of the attributes you must configure. Even though DG will work without it, you are encouraged to use it. Its main function is to define when a log_archive_dest_n destination parameter should be used and on which type of log file it operates.

The following are the legal values for the log file type:

1. online_logfile: valid only when archiving ORL files
2. standby_logfile: valid only when archiving SRL files
3. all_logfiles: valid for both types of redo log file

The following are the legal values for the role:

1. primary_role: valid only when the database is a primary
2. standby_role: valid only when the database is a standby
3. all_roles: valid in either role

If the answers to both parts of VALID_FOR are TRUE, the destination parameter is used. (Note: that is, the destination takes effect only when both of its VALID_FOR values evaluate to true.) For example, with VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE), the attribute settings in that log_archive_dest_n take effect only if the current database is a primary and ORL files are being archived. With this attribute we can predefine all of the destination parameters for all of the databases in the DG configuration; each takes effect only when its VALID_FOR conditions are true, so there is no need to enable and disable destinations when roles change.

So what does log_archive_dest_n actually look like? You can define up to nine destinations, which means you can have as many as nine standby databases. In reality there are ten, but one is reserved for the default local archiving destination, which we will discuss later. Here we use destination number 2 to add a maximum availability standby located in Manchester (wrapped here for readability):

log_archive_dest_2='SERVICE=Matrix_DR0
  SYNC REOPEN=15 NET_TIMEOUT=15
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  DB_UNIQUE_NAME=Matrix_DR0'

Now add a standby in Newark as destination number 3; its network latency is too high for SYNC, so redo is shipped asynchronously:

log_archive_dest_3='SERVICE=Matrix_DR1
  ASYNC REOPEN=15
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  DB_UNIQUE_NAME=Matrix_DR1'


Of course, since we are using the DB_UNIQUE_NAME attribute, we also need to update the log_archive_config parameter accordingly:

log_archive_config='DG_CONFIG=(Matrix,Matrix_DR0,Matrix_DR1)'
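
Both log_archive_config and log_archive_dest_n are dynamic, so the same settings could be put in place on a running primary; a minimal sketch using the example names from this chapter:

-- sketch: dynamic changes, no restart required
ALTER SYSTEM SET log_archive_config='DG_CONFIG=(Matrix,Matrix_DR0,Matrix_DR1)' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_2='SERVICE=Matrix_DR0 SYNC REOPEN=15 NET_TIMEOUT=15
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=Matrix_DR0' SCOPE=BOTH;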

The following are optional attributes:

AFFIRM: This is the default for destinations that use SYNC. It requires the LNS process to wait until the RFS process has completed its direct I/O to the SRL file before returning a success message. It is also required for maximum availability and maximum protection modes, but since it is already the default for such destinations you do not need to specify it. In 10g you could specify AFFIRM for an ASYNC destination, but there was never a reason to; it only slows down the LNS process. In 11g the AFFIRM attribute is ignored for ASYNC destinations.

NOAFFIRM: If not explicitly specified, this is the default for ASYNC destinations and is intended for maximum performance mode. Again, since it is the default for ASYNC there is no need to specify it. If you set NOAFFIRM on a SYNC destination, your protection mode is violated and will be marked as being resynchronized. If that was your only SYNC standby and you are running in maximum availability mode, you can no longer perform a zero data loss failover; and if it was your only SYNC destination and you are running in maximum protection mode, setting NOAFFIRM will crash your primary database.
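
You can check from the primary whether the configuration currently satisfies its protection mode (a minimal sketch):

-- PROTECTION_LEVEL shows what is actually being enforced right now
SELECT protection_mode, protection_level FROM v$database;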

COMPRESSION: This attribute enables advanced compression of the redo sent to a standby destination. By default this means that any archive process sending a gap resolution log to that destination compresses the file as it is sent. If you also set the relevant hidden parameter, the current redo stream is compressed as well. For example, assuming that hidden parameter is set, we add the COMPRESSION attribute to our two destinations:

log_archive_dest_2='SERVICE=Matrix_DR0
  LGWR SYNC REOPEN=15 NET_TIMEOUT=15
  COMPRESSION=ENABLE
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  DB_UNIQUE_NAME=Matrix_DR0'

log_archive_dest_3='SERVICE=Matrix_DR1
  LGWR ASYNC REOPEN=15
  COMPRESSION=ENABLE
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  DB_UNIQUE_NAME=Matrix_DR1'

With this setup, the Matrix_DR0 destination uses compression only when an ARCH process ships an archive log for gap resolution (not for the redo shipped by SYNC), while the Matrix_DR1 destination compresses the redo stream from start to finish. Note that the logs are not kept compressed on disk; the compression happens only in transit, so the data arriving at the standby is decompressed before it is written to the SRL file.

MAX_CONNECTIONS: This attribute was introduced in 10gR2; it lets you specify how many archive processes may be used to resolve a gap at a standby. It has been deprecated in 11g. If you are on 10g you can specify a value from 1 to 5 (the default is 1). With a value greater than 1, whenever the standby needs gap resolution the primary allocates that many archive processes to send the archive log file; the file is split among them, transmitted over the network as parallel streams, and reassembled at the standby.

For example:

log_archive_dest_2='SERVICE=Matrix_DR0
  LGWR SYNC REOPEN=15 NET_TIMEOUT=15
  MAX_CONNECTIONS=5
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  DB_UNIQUE_NAME=Matrix_DR0'

Now whenever the Matrix_DR0 standby has been disconnected from the primary, the primary's gap resolution will use multiple redo streams for each archive log file it needs to send.

Attention:

Do not use the max_connections property in a 11g database, which can degrade the performance of log transmissions.

DELAY: This attribute does not delay the transport of the redo data, as most people imagine; it only tells the standby's apply process to wait the specified number of minutes before applying the redo. With Flashback Database this attribute is essentially obsolete, especially since we recommend running with Flashback Database enabled all the time. You might still want a delay if you expect the kind of mistakes that Flashback Database cannot undo. Chapter 8 discusses Flashback Database and Data Guard.

ALTERNATE: The original purpose of the ALTERNATE attribute was to keep the database running when the disk that the ORL files were being archived to filled up. With an alternate destination you could redirect the archive log files to a different disk. With the flash recovery area (which manages its own space), that problem has largely gone away.

You can also use this attribute for remote standby destinations if you have multiple network paths to the standby. Obviously you would have multiple network paths to a standby in a RAC environment, but that is not what ALTERNATE was designed for. For a single instance, or a RAC database with multiple network interfaces, it is easier to use connect-time failover in the standby's TNS descriptor. (Note: see connect-time failover.)

It is recommended that you do not use the following properties:

LOCATION: Before 10gR2, this attribute had to be used to specify a disk location where the archive processes store archive log files, both on the primary (for the ORL files) and on the standby (for the SRL files). With the flash recovery area and default local archiving, this attribute is no longer required; destination number 10 is automatically set to point to the flash recovery area.

For example:

SQL> SELECT destination FROM v$archive_dest WHERE dest_id=10;
USE_DB_RECOVERY_FILE_DEST

SQL> ARCHIVE LOG LIST
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     19
Next log sequence to archive   21
Current log sequence           2

If you are using the flash recovery area and still want to define the local archiving destination explicitly, use the following syntax:

log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  DB_UNIQUE_NAME=Matrix'

If you are not using the flash recovery area, you can do the same thing with an old-style disk path:

log_archive_dest_1='LOCATION=/u03/oradata/matrix/arch/
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  DB_UNIQUE_NAME=Matrix'

Note that in both cases the DB_UNIQUE_NAME attribute refers to the database on which you are defining the destination (that is, where the archive logs will reside), not a remote standby. In the example above the archiving location is on the Matrix database, so if you use the DB_UNIQUE_NAME attribute here it must specify Matrix.

Attention:

If you use the flash recovery area, do not use the LOCATION attribute to specify the local archiving location.

MANDATORY: This is one of the more dangerous attributes for a standby destination. Basically, it stipulates that the redo in an ORL file must be sent to that standby; if it cannot be sent, the ORL file containing that redo cannot be reused on the primary until it has been shipped successfully. Just imagine: if the standby is unreachable and every available ORL file on the primary has been used, production stalls. Of course, one local destination is mandatory so that the files are kept on disk, but there is no need to make any other destination mandatory. By default, the local archiving destination is set to MANDATORY.

Attention:

Do not set the mandatory property.
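
If you want to confirm that only the local destination is mandatory, the BINDING column of V$ARCHIVE_DEST shows it; a minimal sketch:

-- BINDING is either MANDATORY or OPTIONAL for each configured destination
SELECT dest_id, binding, status
  FROM v$archive_dest
 WHERE status <> 'INACTIVE';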

MAX_FAILURE: This attribute is one of the most misunderstood of them all. People tend to think it is the number of times LGWR will try to reconnect to a failed standby before giving up on it and continuing to generate redo. That is not what it does: if you set it, it actually defines the number of log switches during which LGWR will keep trying to reconnect to the failed standby. For example, with MAX_FAILURE set to 5, LGWR attempts to reconnect at each of the next 5 log switches; if it still cannot connect after those 5 switches, it gives up on that standby for good. After that you must either re-enable the destination manually or restart the primary before the attribute takes effect again.

Attention:

Do not set the Max_failure property.

NOREGISTER: This is the last log_archive_dest_n attribute we will discuss. By default, DG requires that any redo sent to a standby be registered at that standby when it is archived to disk there. For a physical standby this means registering the file in the standby's control file; for a logical standby it means SQL Apply registering the log file in its metadata. DG has no need for this attribute; it can be used for a Streams destination that uses downstream capture.

Attention:

Do not set the Noregister property.

log_archive_dest_state_n: This is the companion parameter to log_archive_dest_n. In the past there were two reasons to set it. The first was so that a log_archive_dest_n intended for the primary role could be predefined on a standby without the archive processes trying to use it until it was needed; the other was to configure an ALTERNATE destination as described earlier. The first reason no longer applies (VALID_FOR handles that now), and unless you use the ALTERNATE attribute neither does the second: the parameter defaults to ENABLE, so you no longer need to set it for your destinations.

For example:

log_archive_dest_state_1=ENABLE
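
The parameter is still useful as a manual switch when you need to stop shipping redo to a destination temporarily, for example during maintenance on a standby; a minimal sketch for destination 2:

ALTER SYSTEM SET log_archive_dest_state_2='DEFER';
-- ... maintenance window on the standby ...
ALTER SYSTEM SET log_archive_dest_state_2='ENABLE';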

Third, standby role parameters

db_file_name_convert: On a standby, this parameter lets you logically move the data files from their location on the primary to their location on the standby. If you are using regular file system storage and the paths are not the same on both systems, you need to configure it. The conversion is only made permanent when the standby becomes the primary: once a switchover or failover is performed, the converted names are written to the control file and the data file headers. The conversion itself is a simple string substitution.

For example:

db_file_name_convert='/matrix/','/matrix_dr0/'

The above setting would convert a data file name such as:

/u03/oradata/matrix/sysaux.dbf

to:

/u03/oradata/matrix_dr0/sysaux.dbf

Similarly, the following configuration points the data files at the +RECOVERY disk group instead of +DATA:

db_file_name_convert='+DATA','+RECOVERY'

The rest of the path remains unchanged. And if ASM was used to create the standby in this way, you do not need to define this parameter at all.
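
db_file_name_convert is a static parameter, so on an existing standby it has to be staged in the spfile and picked up at the next restart; a minimal sketch, assuming an spfile and the example paths above:

-- static parameter: the change takes effect at the next restart
ALTER SYSTEM SET db_file_name_convert='/matrix/','/matrix_dr0/' SCOPE=SPFILE;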

log_file_name_convert: This works the same way as the db_file_name_convert parameter, except that it converts the log files, both the ORL files and any SRL files.

For example:

log_file_name_convert='/matrix/','/matrix_dr0/'

fal_server: The FAL (Fetch Archive Log) feature has come a long way since its beginnings in 9iR1. It is used only by physical standbys, and it is configured so that a physical standby, when it discovers a problem, can fetch missing archive log files from another database in the DG configuration (the primary or another standby); this is sometimes called reactive gap resolution. FAL has been improved so much over the last three releases that it is almost unnecessary to define the FAL parameters at all: with the proactive gap resolution introduced in 9iR2, almost any gap request from a physical or logical standby can be handled by the ping process on the primary.

During normal operation on the primary, an archive process (designated as the ping process) polls all of the standbys in turn looking for redo gaps, and also services any outstanding gap requests from the apply processes. FAL comes into play when a physical standby needs to fetch a gap file from a database other than the primary. For example, if the standby needs gap resolution but the primary is unreachable, it has to ask another standby to fill the gap. To allow for this, you define the fal_server parameter as a list of TNS names pointing at the primary or at any of the standbys. For example, on the Matrix_DR0 standby we add the primary (Matrix) and the other standby (Matrix_DR1):

fal_server='Matrix, Matrix_DR1'

fal_client: The FAL client is the TNS name of the database making the gap request; the receiver of the request (the FAL server) needs this name so that the database at the FAL server end can connect back to the requester. On the standby Matrix_DR0, we send Matrix_DR0 as the client name so that the Matrix and Matrix_DR1 databases can connect back to Matrix_DR0 and send it the missing archive log files.

For example:

fal_client='Matrix_DR0'

'Matrix_DR0' must be defined in the TNS files of the FAL servers so that DG can make the connection back to the standby. Since we will be setting the redo transport parameters in all of these databases anyway, the TNS names will already be defined if you use the same names in the FAL parameters. If you choose different names, you will have to add them to the TNS files on all of the systems. Like fal_server, the fal_client parameter is used only by physical standbys.
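
For illustration, here is a hypothetical tnsnames.ora entry for Matrix_DR0 as it might appear on the FAL servers; the host, port, and service name below are placeholders, not values taken from this chapter:

# hypothetical entry; replace host, port, and service name with your own
Matrix_DR0 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = Matrix_DR0)
    )
  )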

standby_file_management: This is the last parameter in this chapter. This simple parameter is used only with physical standbys. When it is set to AUTO, any data file added to or dropped from the primary is automatically added to or dropped from the standby as well. As long as the top-level directories exist on the standby, or can be found through the db_file_name_convert parameter, DG will execute the DDL to create the data file on the standby; it will even create missing subdirectories if it can. The default value is MANUAL, which means the apply process will not create new data files on the standby; you have to create them yourself.

For example:

standby_file_management='AUTO'

The only time you need this parameter set to MANUAL is when you have to perform some operation on the ORL files of the physical standby. SRL files can be added without changing the parameter. If you really do need to add or drop online log files on the physical standby (because of a change on the primary, for example), you can set the parameter to MANUAL dynamically, perform the DDL, and then set it back to AUTO, all without restarting the standby.
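
A sketch of that sequence on the physical standby; it assumes redo apply has already been stopped, and the group number, file path, and size are only illustrative:

ALTER SYSTEM SET standby_file_management='MANUAL';
-- example DDL only: add (or drop) the online log file that changed on the primary
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u03/oradata/matrix_dr0/redo04.log') SIZE 512M;
ALTER SYSTEM SET standby_file_management='AUTO';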

Summary of parameters and attributes

Having worked through all of the parameters and attributes above, you should now have a solid understanding of what they do and how to configure them correctly.

I hope none of this gave you a headache, because you may be surprised to learn that if you use the Data Guard Broker (even without Grid Control) you do not have to configure these parameters yourself; the Broker does it all for you.

End
