Important configuration parameters of Oracle Data Guard


Oracle Data Guard maintains one or more standby databases (copies of the production, or primary, database) so that data is not lost when the primary database becomes unavailable or an exception occurs, and so that a standby can continue to provide service. Oracle DG can be configured with Grid Control, with the Data Guard Broker, or with SQL*Plus. The first two methods work through a GUI and are easy to operate; the SQL*Plus command-line method requires a good deal of manual configuration, especially of the initialization parameters. This document describes the important parameters for configuring Oracle Data Guard (DG for short).

1. Parameter types in a DG configuration

The parameters fall into three classes: role-independent parameters, primary-role parameters, and standby-role parameters. Each class must be set correctly; otherwise DG cannot run normally or role transitions fail. Note: the primary/standby classification does not mean that a parameter may be set only on the primary or only on the standby. Parameters involved in switchover should be set on both the primary and the standby.

2. Role-independent parameters

a. COMPATIBLE = release_number sets the version-compatibility level. The same value must be specified on all databases in the configuration; otherwise switchover fails and the redo transport service reports errors.

b. DB_NAME = database_name is a common parameter; every database must have a name. Note: within one DG configuration, all roles share the same DB_NAME. For example: *.db_name='oradb'

c. DB_UNIQUE_NAME = unique_name sets a unique name for the database, corresponding to DB_NAME. This is a logical name for the database instance; by default it equals DB_NAME. If LOG_ARCHIVE_CONFIG is configured, it is recommended to set it to a unique value. This parameter does not change when the primary and standby roles change.
Primary: *.db_unique_name='oradb'
Standby: *.db_unique_name='oradbdg'

d. LOG_ARCHIVE_CONFIG = 'DG_CONFIG=(db_unique_name,db_unique_name,...)' defines the comma-separated list of all valid DB_UNIQUE_NAME values in the DG configuration and provides a security check. It is recommended to always configure this parameter, with the same value on the primary and the standby.
Primary: *.log_archive_config='DG_CONFIG=(oradb,oradbdg)'
Standby: *.log_archive_config='DG_CONFIG=(oradb,oradbdg)'

e. CONTROL_FILES = 'control_file_name','control_file_name',... defines the locations of the control files; adjust it to the paths on the primary and the standby. See the Oracle control file documentation.

f. LOG_ARCHIVE_MAX_PROCESSES = integer defines the number of archiver processes. On the primary, the archiver processes archive the online redo log files and resolve redo gaps on the standby; on the standby, they archive the standby redo logs and forward archived logs to cascaded standby databases. A minimum value of 4 is recommended; the maximum is 30.

g. LOG_ARCHIVE_FORMAT = 'log%d_%t_%s_%r.arc' defines the file-name format of the archived logs.

h. REMOTE_LOGIN_PASSWORDFILE = {EXCLUSIVE | SHARED} sets the authentication method. It is recommended to set it to EXCLUSIVE or SHARED, and to make sure that all databases in the same Data Guard configuration share the same SYS password.
Primary: *.remote_login_passwordfile='EXCLUSIVE'
Standby: *.remote_login_passwordfile='EXCLUSIVE'
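The role-independent parameters above can be collected into a minimal parameter-file sketch (the COMPATIBLE release is an assumption here; oradb/oradbdg are the names used in this article):

```sql
-- Role-independent parameters for the primary (initoradb.ora) -- a sketch.
-- The COMPATIBLE value 11.2.0 is an assumption; use your actual release.
*.compatible='11.2.0'                             -- identical on all databases
*.db_name='oradb'                                 -- same DB_NAME for every role
*.db_unique_name='oradb'                          -- 'oradbdg' on the standby
*.log_archive_config='DG_CONFIG=(oradb,oradbdg)'  -- same list everywhere
*.log_archive_max_processes=4                     -- recommended minimum
*.log_archive_format='log%d_%t_%s_%r.arc'
*.remote_login_passwordfile='EXCLUSIVE'           -- same SYS password everywhere
```

The standby parameter file is identical except for *.db_unique_name='oradbdg' and its own CONTROL_FILES paths.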
3. Primary-role parameters

a. LOG_ARCHIVE_DEST_n = {LOCATION=path_name | SERVICE=service_name, attribute, attribute, ...} sets the archive-log destinations and redo transport on the primary. This is the most complex parameter in a DG environment. If the flash recovery area is enabled, the local archive destination does not need to be set explicitly. Up to 10 destinations can be defined, one of which is the local archive destination, so there can be up to nine standby databases. The main attributes are listed below.

SERVICE: specifies the TNSNAMES descriptor of a standby database; Oracle sends the redo to the standby it identifies.
SYNC: specifies synchronous transport to the standby; the LGWR process waits for the acknowledgment from the LNS process before telling the client that the transaction has committed. In maximum availability and maximum protection mode, at least one standby destination should be SYNC.
ASYNC: the opposite of SYNC; specifies asynchronous transport, which is the default.
NET_TIMEOUT: the maximum time, in seconds (default 30), that the LGWR process waits for the LNS process. If it is exceeded, the primary abandons that standby and continues executing transactions.
REOPEN: the time, in seconds (default 300), that the primary waits before trying to reconnect to a failed standby.
DB_UNIQUE_NAME: when the primary connects to the standby, it sends its own unique name and requires the standby to return its unique name, which is validated against LOG_ARCHIVE_CONFIG.
VALID_FOR: defines when the (role-dependent) LOG_ARCHIVE_DEST_n parameter is used and for which type of redo log file. Log file types: ONLINE_LOGFILE, STANDBY_LOGFILE, ALL_LOGFILES. Role types: PRIMARY_ROLE, STANDBY_ROLE, ALL_ROLES.

Local archive destination:
log_archive_dest_1='LOCATION=/u01/database/sybo3/arch db_unique_name=oradb valid_for=(ALL_LOGFILES,ALL_ROLES)'
Remote destinations (set on the primary, one per standby):
log_archive_dest_2='SERVICE=oradbdg ASYNC db_unique_name=oradbdg valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE)'
log_archive_dest_3='SERVICE=oradbdg2 ASYNC db_unique_name=oradbdg2 valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE)'
With these settings, LOG_ARCHIVE_CONFIG should be log_archive_config='DG_CONFIG=(oradb,oradbdg,oradbdg2)'. If the flash recovery area is used, the local destination can be defined as log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST db_unique_name=oradb valid_for=(ALL_LOGFILES,ALL_ROLES)'

b. LOG_ARCHIVE_DEST_STATE_n = {ENABLE | DEFER | ALTERNATE} with the value ENABLE allows the redo transport service to transmit redo data to the corresponding destination. The default is ENABLE.

4. Standby-role parameters

a. DB_FILE_NAME_CONVERT = 'location_of_primary_database_datafile','location_of_standby_database_datafile' should be set when the primary and standby databases use different data-file paths, whether they are on the same host or on different hosts. On switchover or failover the conversion is applied and its values are written into the control file. This parameter can also specify the data-file conversion when RMAN duplicates the database.
Primary (/u01/database/oradb/oradata): *.db_file_name_convert=('oradbdg','oradb')
Standby (/u01/database/oradbdg/oradata): *.
db_file_name_convert=('oradb','oradbdg')
If ASM is used, it can be set as *.db_file_name_convert=('+DATA','+RECOVERY')

b. LOG_FILE_NAME_CONVERT = 'location_of_primary_database_redo_logs','location_of_standby_database_redo_logs' gives the online redo log conversion between the primary and the standby; it works like DB_FILE_NAME_CONVERT.
Primary (/u01/database/oradb/redo): *.log_file_name_convert=('oradbdg','oradb')
Standby (/u01/database/oradbdg/redo): *.log_file_name_convert=('oradb','oradbdg')

c. FAL_SERVER (Fetch Archive Log) = Oracle_Net_service_name defines a list of TNS names reachable from the standby (pointing to the primary and to any other standbys). This parameter is valid only for a physical standby. It is used to resolve gaps in the redo stream and to serve gap requests issued by the apply process. When a physical standby detects a redo gap and cannot connect to the primary, it can fetch the logs from another standby.
Primary: *.fal_server=oradbdg (set on the primary as well so that the roles can be switched)
Standby: *.fal_server=oradb

d. FAL_CLIENT = Oracle_Net_service_name is the TNSNAMES name of the client requesting the gap, so that the archiver process on the FAL_SERVER can connect back to the requester. This parameter is valid only for a physical standby, and its value must be defined in the TNSNAMES file of the primary database.
Primary: *.fal_client=oradb (set on the primary as well so that the roles can be switched)
Standby: *.fal_client=oradbdg

e. STANDBY_FILE_MANAGEMENT = {AUTO | MANUAL} applies only to a physical standby. Setting it to AUTO is recommended: when a data file is added to or dropped from the primary, the corresponding change is made automatically on the standby.
Primary: *.standby_file_management='AUTO' (set on the primary as well so that the roles can be switched)
Standby: *.standby_file_management='AUTO'
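As a worked example, the primary-role transport parameters of section 3 can be set dynamically with ALTER SYSTEM. This is a sketch assuming the oradb/oradbdg service names used above; the NET_TIMEOUT and REOPEN values are illustrative, not prescriptive:

```sql
-- On the primary: local archiving plus one ASYNC standby destination.
ALTER SYSTEM SET log_archive_config='DG_CONFIG=(oradb,oradbdg)';
ALTER SYSTEM SET log_archive_dest_1=
  'LOCATION=USE_DB_RECOVERY_FILE_DEST db_unique_name=oradb valid_for=(ALL_LOGFILES,ALL_ROLES)';
ALTER SYSTEM SET log_archive_dest_2=
  'SERVICE=oradbdg ASYNC NET_TIMEOUT=30 REOPEN=300 db_unique_name=oradbdg valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE)';
ALTER SYSTEM SET log_archive_dest_state_2='ENABLE';
-- To stop shipping temporarily (e.g. during standby maintenance):
-- ALTER SYSTEM SET log_archive_dest_state_2='DEFER';
```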
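The standby-role parameters of section 4 can be sketched the same way, together with a quick check of the destinations on the primary (paths and names follow the examples above; note that the *_NAME_CONVERT parameters are static and require SCOPE=SPFILE plus a restart):

```sql
-- On the standby (oradbdg): gap fetching and automatic file management.
ALTER SYSTEM SET fal_server='oradb';
ALTER SYSTEM SET fal_client='oradbdg';
ALTER SYSTEM SET standby_file_management='AUTO';
-- DB_FILE_NAME_CONVERT / LOG_FILE_NAME_CONVERT are not dynamic:
ALTER SYSTEM SET db_file_name_convert='oradb','oradbdg' SCOPE=SPFILE;
ALTER SYSTEM SET log_file_name_convert='oradb','oradbdg' SCOPE=SPFILE;

-- On the primary: verify that each archive destination is valid.
SELECT dest_id, status, error
FROM   v$archive_dest_status
WHERE  status <> 'INACTIVE';
```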
