This article summarizes related material from the official Oracle 11g Data Guard Concepts and Administration guide and other documents to help you understand the Data Guard architecture.
Some Data Guard concepts were also discussed in RAC 11.2 architecture (III), http://blog.csdn.net/wildwave/article/details/6882567; the overlapping parts are skipped here.
New Features in Oracle Data Guard 11.2
New features common to Redo Apply and SQL Apply
- A Data Guard configuration can consist of one primary database and up to 30 standby databases.
- The FAL_CLIENT initialization parameter is no longer required.
- The default archive destination used by Oracle ASM and the fast recovery area has changed from LOG_ARCHIVE_DEST_10 to LOG_ARCHIVE_DEST_1.
- Redo transport compression is no longer limited to gap resolution. When compression is enabled for a destination, all redo data sent to that destination is compressed.
- A new ALTER SYSTEM FLUSH REDO statement has been added. During a failover, this statement flushes unsent redo from a mounted primary database to a standby database, so zero data loss is possible even when the primary database was not running in a zero-data-loss protection mode.
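As a sketch of the last point, assuming a failover scenario in which the failed primary can still be mounted and `boston` is the standby's DB_UNIQUE_NAME, the new statement might be used like this:

```sql
-- On the mounted (not open) old primary: push any unsent redo
-- to the target standby before failing over.
ALTER SYSTEM FLUSH REDO TO 'boston';

-- Then complete the failover on the standby (classic 11g steps):
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE;
```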
New Redo Apply features
- In a real-time query environment, the STANDBY_MAX_DATA_DELAY session parameter can be used to set the maximum permitted latency.
- The ALTER SESSION SYNC WITH PRIMARY statement can be used to ensure that the physical standby database is synchronized with the primary database.
- Several columns of the V$DATAGUARD_STATS view, including apply lag and transport lag, are more accurate.
- The new V$STANDBY_EVENT_HISTOGRAM view can be queried to see a histogram of apply lag values on the physical standby.
- In real-time query mode, corrupt blocks on the primary database are automatically repaired with copies of the blocks from the physical standby database, and vice versa.
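A minimal sketch of the real-time query controls above, run in a session on a physical standby that is open read-only (the 30-second limit is an arbitrary illustration):

```sql
-- Fail queries in this session if the standby's apply lag
-- exceeds 30 seconds:
ALTER SESSION SET STANDBY_MAX_DATA_DELAY = 30;

-- Or block until the standby has caught up with the primary:
ALTER SESSION SYNC WITH PRIMARY;

-- Inspect the apply-lag histogram mentioned above:
SELECT time, unit, count
  FROM v$standby_event_histogram
 WHERE name = 'apply lag';
```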
New SQL Apply features
- Logical standby and the LogMiner utility support tables using basic table compression, OLTP table compression, and hybrid columnar compression (hybrid columnar compression is a feature of Oracle Exadata).
- Logical standby and the LogMiner utility support tables with SecureFiles LOB columns. Compression and encryption of SecureFiles LOB columns are also supported (deduplication and fragment-based operations are not).
- Changes made in the context of XA (eXtended Architecture) global transactions on an Oracle RAC primary database are replicated on a logical standby database.
- Online redefinition performed with the DBMS_REDEFINITION package on the primary database is replicated on a logical standby database.
- Logical standby supports the use of editions in the primary database, including edition-based redefinition for upgrading applications with minimal downtime (for details on edition-based redefinition, see the Oracle Database Advanced Application Developer's Guide).
- A logical standby database supports Streams capture. This lets you offload processing from the primary database and use the logical standby as a hub that propagates information to multiple databases; Streams capture can also propagate local changes made on the logical standby database.
New Features in Oracle Data Guard 11.1
New features common to Redo Apply and SQL Apply
- Redo transport compression. When resolving a redo gap, the redo data can be compressed before being sent over the network to improve transmission performance.
- Redo transport response-time histogram. The V$REDO_DEST_RESP_HISTOGRAM view contains a histogram of response times for each SYNC redo transport destination. The data in this view can help you choose an appropriate value for the NET_TIMEOUT attribute of LOG_ARCHIVE_DEST_n.
- Faster role transitions.
- Strong authentication of redo transport network sessions (via SSL).
- Simplified Data Guard management interface: obsolete SQL statements and initialization parameters have been removed.
- Enhanced DB_UNIQUE_NAME. A PRIMARY_DB_UNIQUE_NAME column has been added to V$DATABASE for querying the primary database's DB_UNIQUE_NAME. In 11g, databases with the same DB_UNIQUE_NAME cannot communicate with each other.
- Rolling upgrades using a physical standby database. By adding the KEEP IDENTITY clause to ALTER DATABASE RECOVER TO LOGICAL STANDBY, a physical standby can be temporarily converted to a logical standby to perform a rolling upgrade.
- Heterogeneous Data Guard configurations. This feature allows Linux and Windows primary and standby databases to be mixed in the same Data Guard configuration.
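For the response-time histogram above, a query such as the following, run on the primary, might help in picking a NET_TIMEOUT value (a sketch; destination 2 matches the examples later in this article):

```sql
-- Distribution of SYNC redo transport response times for
-- destination 2; DURATION is the response-time bucket in seconds,
-- FREQUENCY how often responses fell into that bucket.
SELECT duration, frequency
  FROM v$redo_dest_resp_histogram
 WHERE dest_id = 2
   AND frequency > 1
 ORDER BY duration;
```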
New Redo Apply features
- Real-Time query of physical standby
- Snapshot standby
- Lost-write detection using a physical standby database
- Enhanced integration with RMAN
New SQL Apply features
- Support for additional object data types and PL/SQL packages (XMLType stored as CLOB; DBMS_RLS; DBMS_FGA).
- Support for Transparent Data Encryption (TDE).
- Dynamic setting of SQL Apply parameters with the DBMS_LOGSTDBY.APPLY_SET procedure.
- Enhanced switchover support for RAC when using a logical standby database: no instances need to be shut down.
- Improved DDL handling in SQL Apply: parallel DDL is executed in parallel.
- Support for creating scheduler jobs on the standby database with the DBMS_SCHEDULER package.
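As a sketch of the dynamic SQL Apply tuning point above (the parameter and value shown are illustrative):

```sql
-- On the logical standby: raise the number of applier processes.
-- As of 11.1 many APPLY_SET parameters no longer require
-- stopping SQL Apply first.
EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SERVERS', 16);

-- Current SQL Apply parameter settings can be checked with:
SELECT name, value FROM dba_logstdby_parameters ORDER BY name;
```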
Data Guard Concepts and Administration

Introduction to Data Guard

A Data Guard configuration consists of one production database and one or more standby databases. The databases in a configuration can be located in different geographic locations and are connected through Oracle Net. You can manage the primary and standby databases with the SQL command line or with the Data Guard broker interfaces, which include the DGMGRL command-line interface and a graphical interface integrated into Oracle Enterprise Manager.

The production database in a DG configuration, also called the primary database, plays the primary role and is the database most applications access. The primary database can be a single-instance database or a RAC database. A standby database is created from a backup copy of the primary database and added to the DG configuration; Data Guard then maintains it automatically by transmitting redo from the primary database and applying it to the standby. Like the primary, a standby database can be a single-instance or RAC database. Standby types include physical standby, logical standby, and snapshot standby; for details, refer to RAC 11.2 architecture (III).

Data Guard provides three main services: redo transport, apply, and role transitions. The redo transport service performs the following tasks: it transfers redo data from the primary to the standby systems; it manages resolution of archive log gaps caused by network problems; and when it detects that an archived log file on a standby is missing or corrupt, it automatically retrieves a replacement copy from the primary database or from another standby database. After redo arrives at the standby database, the apply service automatically applies the redo data to keep the standby synchronized with the primary. The accompanying diagram (omitted here) shows this flow of redo transport and apply from the primary database to a physical standby database.
A logical standby database differs in that it uses SQL Apply instead of Redo Apply and can be opened in read-write mode.

Role transitions include switchover and failover. Switchover is mainly used for planned maintenance downtime of the primary database and involves no data loss. When the primary database becomes unavailable, failover is used instead; how much data may be lost depends on the data protection mode in effect.

The Data Guard broker is a distributed management framework that automates the creation, maintenance, and monitoring of DG configurations. You can use the web interface provided by EM Grid Control or the DGMGRL command-line interface; with the latter, maintenance and monitoring can be scripted.

A high-availability architecture requires fast failover capability on both the database and the database clients. For client failover best practices, refer to http://www.oracle.com/goto/maa. Oracle provides several technologies that complement Data Guard to achieve higher levels of availability and data protection: RAC, which together with DG provides system-level, site-level, and data-level protection; Flashback Database, which provides fast recovery from logical data corruption and user errors; and RMAN, whose DUPLICATE command can create a standby database from a backup of the primary database. You can also take backups from a physical standby and have archived log files deleted automatically once they have been backed up.

Data Guard provides the following benefits: disaster recovery, data protection, and high availability; complete data protection; efficient use of system resources; flexibility in balancing data protection against availability and performance; automatic gap detection and resolution; centralized and simplified management; integration with Oracle Database, with no separate installation; and automatic role transitions, which require fast-start failover to be enabled in the broker.
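To make the transport and apply services concrete, the lag they maintain can be observed on a physical standby with queries such as the following (a sketch; the values returned depend on the configuration):

```sql
-- On the standby: how far redo transport and redo apply
-- are behind the primary.
SELECT name, value, time_computed
  FROM v$dataguard_stats
 WHERE name IN ('transport lag', 'apply lag');

-- Which transport and managed-recovery processes are active:
SELECT process, status, thread#, sequence#
  FROM v$managed_standby;
```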
Software and Hardware requirements
Oracle 11g increases the flexibility of Data Guard configurations. The primary and standby nodes can have different CPU architectures, operating systems (such as Windows and Linux), operating system word sizes (32-bit/64-bit), and Oracle Database binary word sizes (32-bit/64-bit). For the support and restrictions of such mixed-platform configurations, refer to My Oracle Support notes 413484.1 (physical standby) and 1085687.1 (logical standby). To simplify operations and reduce exposure to bugs, prefer the same platform on both sides. Although the CPU and operating system can differ, all databases must run the same Oracle release, unless you are performing a rolling upgrade using a logical standby (during which the releases differ briefly while one side has been upgraded and the other has not).

On the software side, the requirements are: you must use Oracle Enterprise Edition (unless you are willing to simulate a standby on Standard Edition with operating-system-level copying); the minimum Oracle Database release before a rolling upgrade is 10.1.0.3; the COMPATIBLE initialization parameter must be the same in all databases of the DG configuration, except that a logical standby may have a higher COMPATIBLE setting than the primary; the primary database must run in ARCHIVELOG mode; the primary and standby databases must each have their own control file; if the primary and standby are on the same system, they must use different directory structures, otherwise the standby would overwrite the primary's database files; to prevent unlogged direct writes from being absent from the redo shipped to the standby, you should enable FORCE LOGGING on the primary before taking the backup used to create the standby; and the users who manage the primary and standby instances must have the SYSDBA privilege. For ease of operation, Oracle recommends configuring ASM and OMF symmetrically on primary and standby, that is, if one database in the DG configuration uses ASM or OMF, the other databases should too, unless you deliberately use a mixed configuration for migration or maintenance purposes.
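The COMPATIBLE, archiving, and force-logging requirements above can be checked with queries such as these (a sketch):

```sql
-- Run on each database: COMPATIBLE must match across the DG
-- configuration (a logical standby may be set higher).
SELECT value FROM v$parameter WHERE name = 'compatible';

-- Run on the primary before taking the standby backup:
-- LOG_MODE should be ARCHIVELOG and FORCE_LOGGING should be YES.
SELECT log_mode, force_logging FROM v$database;
```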
Creating a Physical Standby Database

This section walks through the configuration of a physical standby by way of an example, using maximum performance mode, which is also the default. Before creating the standby database, make sure the primary database is correctly configured. The preparation steps are: enable FORCE LOGGING, configure redo transport authentication, configure the primary database to receive redo data, set the primary database initialization parameters, and enable archiving.

Enable FORCE LOGGING. Place the database in force logging mode with the following SQL statement: SQL> alter database force logging; This operation can take some time to complete, because it waits for all unlogged direct-write I/O to finish.

Configure redo transport authentication. Data Guard uses Oracle Net sessions to transmit redo data and control information. These redo transport sessions are authenticated either with the Secure Sockets Layer (SSL) protocol or with a remote login password file. SSL authentication requires all of the following: the primary and standby databases belong to the same Oracle Internet Directory (OID) enterprise domain, and that domain allows current-user database links; the LOG_ARCHIVE_DEST_n and FAL_SERVER initialization parameters of each database use Oracle Net connect descriptors configured for SSL; and each database has an Oracle wallet or a supported hardware security module containing a user certificate whose distinguished name (DN) matches the DN of the database's OID entry. If the prerequisites for SSL authentication are not met, every member of the DG configuration must be configured with a remote login password file, and every physical standby database must have an up-to-date copy of the primary database's password file.
(If you grant or revoke the SYSDBA or SYSOPER privilege, or change the password of a user who holds these privileges, you must refresh the password file on every physical standby and snapshot standby with the latest copy from the primary database.)

Configure the primary database to receive redo data. This step is optional, but Oracle recommends performing it on the primary as well, so that after a role transition the former primary can quickly begin serving as a standby and receiving redo data.

Set the primary database initialization parameters. Several initialization parameters need to be configured on the primary. The following is an example of primary initialization parameters in a DG configuration (chicago is the primary database, boston the physical standby):
DB_NAME=chicago
DB_UNIQUE_NAME=chicago
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
CONTROL_FILES='/arch1/chicago/control1.ctl', '/arch2/chicago/control2.ctl'
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_2='SERVICE=boston ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
Note the ASYNC attribute in LOG_ARCHIVE_DEST_2 above. This is the setting Oracle recommends, and it requires standby redo logs.
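Standby redo logs are typically created with statements like the following (file paths and sizes here are illustrative; the usual guideline is one more standby redo log group than online log groups, each the same size as the online logs):

```sql
-- On the primary (so it can receive redo after a role
-- transition) and on the standby: add standby redo logs.
ALTER DATABASE ADD STANDBY LOGFILE ('/arch1/chicago/srl1.log') SIZE 500M;
ALTER DATABASE ADD STANDBY LOGFILE ('/arch1/chicago/srl2.log') SIZE 500M;
ALTER DATABASE ADD STANDBY LOGFILE ('/arch1/chicago/srl3.log') SIZE 500M;
ALTER DATABASE ADD STANDBY LOGFILE ('/arch1/chicago/srl4.log') SIZE 500M;

-- Verify:
SELECT group#, bytes, status FROM v$standby_log;
```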
Several other initialization parameters only take effect after the primary database transitions to the standby role; setting them now means no parameters need to be changed after a role transition:
Below is a simple parameter list
- DB_NAME — the name given when the database was created on the primary; the physical standby uses the same DB_NAME as the primary.
- DB_UNIQUE_NAME — a unique name for each database; it remains fixed even when the primary and standby change roles.
- LOG_ARCHIVE_CONFIG — the DG_CONFIG attribute of this parameter must be set explicitly on each database to enable full Data Guard functionality; it must list the DB_UNIQUE_NAME of every database in the DG configuration, separated by commas.
- CONTROL_FILES — the paths of the control files on the primary database.
- LOG_ARCHIVE_DEST_n — where redo data is archived on the primary and standby. In the example above, LOG_ARCHIVE_DEST_1 archives the local online logs, while LOG_ARCHIVE_DEST_2 is valid only for the primary role and ships redo to the remote physical standby destination boston. Tip: if a fast recovery area has been configured via the DB_RECOVERY_FILE_DEST initialization parameter, you need not specify a local archive path with the LOCATION attribute; Data Guard automatically uses LOG_ARCHIVE_DEST_1 (if unset) as the default local archiving destination.
- LOG_ARCHIVE_DEST_STATE_n — set to ENABLE to let the redo transport service send redo data to the corresponding destination.
- REMOTE_LOGIN_PASSWORDFILE — must be set to EXCLUSIVE or SHARED if a remote login password file is used to authenticate administrative users or redo transport sessions.
- LOG_ARCHIVE_FORMAT — the format of archived log file names: %t is the thread, %s the sequence number, and %r the resetlogs ID.
- LOG_ARCHIVE_MAX_PROCESSES — the maximum number (1-30) of archiver processes started by the Oracle software; the default is 4.
- FAL_SERVER — the Oracle Net service name of the FAL server (normally the database currently in the primary role). When the chicago database runs in the standby role, the boston database acts as its FAL server; if boston fails to send missing log files automatically, the standby can request them from the FAL server.
- DB_FILE_NAME_CONVERT — pairs of path names, primary location followed by standby location, used to convert primary data file path names to standby path names. This parameter is needed if the primary and standby databases are on the same system or store their data files in different paths. Note that it is valid only for a physical standby; multiple path pairs can be specified.
- LOG_FILE_NAME_CONVERT — the same conversion, for online log file paths.
- STANDBY_FILE_MANAGEMENT — set to AUTO so that when data files are added to or dropped from the primary database, the corresponding changes are made automatically on the standby.
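Putting the role-transition parameters above together for the chicago/boston example, they might be set on the primary like this (a hypothetical fragment; the values are ignored while chicago is primary and take effect once it becomes a standby):

```sql
-- FAL_SERVER and STANDBY_FILE_MANAGEMENT are dynamic;
-- the *_CONVERT parameters are static and need SCOPE=SPFILE.
ALTER SYSTEM SET fal_server=boston;
ALTER SYSTEM SET standby_file_management=AUTO;
ALTER SYSTEM SET db_file_name_convert='boston','chicago' SCOPE=SPFILE;
ALTER SYSTEM SET log_file_name_convert=
  '/arch1/boston/','/arch1/chicago/' SCOPE=SPFILE;
```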
Enable archiving. If archiving is not yet enabled, use the following statements to put the primary database into ARCHIVELOG mode with automatic archiving: SQL> shutdown immediate;
SQL> startup Mount;
SQL> alter database archivelog;
SQL> alter database open;

Creating the physical standby database step by step

Step 1: Create a backup copy of the primary database's data files. Any backup copy of the primary database can be used to create the physical standby, as long as you have enough archived logs to recover it. Oracle recommends using RMAN. (See the backup recommendations in Oracle Database High Availability Architecture and Best Practices, and perform the backup as described in the Oracle Database Backup and Recovery User's Guide.)

Step 2: Create a control file for the standby database. If the primary database was shut down during the backup, start it with the following SQL*Plus command: SQL> startup mount; Then create the standby control file and open the primary database:

SQL> alter database create standby controlfile as '/tmp/boston.ctl';
SQL> alter database open;

Note that the primary and standby databases cannot use the same control file.

Step 3: Create a parameter file for the standby database. Generate a text parameter file (pfile) from the server parameter file (spfile) used by the primary database: SQL> create pfile='/tmp/initboston.ora' from spfile;

Step 4: Modify the parameter file generated in the previous step. The following is an example.
..
DB_NAME=chicago
DB_UNIQUE_NAME=boston
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
CONTROL_FILES='/arch1/boston/control1.ctl', '/arch2/boston/control2.ctl'
DB_FILE_NAME_CONVERT='chicago','boston'
LOG_FILE_NAME_CONVERT='/arch1/chicago/','/arch1/boston/','/arch2/chicago/','/arch2/boston/'
LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/boston/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_2='SERVICE=chicago ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
STANDBY_FILE_MANAGEMENT=AUTO
FAL_SERVER=chicago
..
The parameters changed from the primary's file include DB_UNIQUE_NAME, CONTROL_FILES, DB_FILE_NAME_CONVERT, LOG_FILE_NAME_CONVERT, LOG_ARCHIVE_DEST_1, LOG_ARCHIVE_DEST_2, and FAL_SERVER. Confirm that the COMPATIBLE initialization parameter is the same on the primary and standby; otherwise the redo transport service cannot transmit redo from the primary to the standby. It is good practice to run the SHOW PARAMETERS command to confirm that no other parameters need to be changed.

The files to copy from the primary system to the standby system are: the data file backup you created, the standby control file, and the initialization parameter file.

Set up the environment to support the standby database. Step 1: if the standby database will run on Windows, use oradim to create a Windows service: WINNT> oradim -NEW -SID boston -STARTMODE manual Step 2: copy the remote login password file from the primary system to the standby system. If the primary database has a remote login password file, copy it to the corresponding directory on the physical standby system. (This step is optional if administrators use OS authentication and redo transport uses SSL authentication.) Step 3: configure listeners for the primary and standby databases. Use Oracle Net Manager to configure the listeners, and restart them so the configuration takes effect; for details, see the Oracle Database Net Services Administrator's Guide. Step 4: create Oracle Net service names. On both the primary and standby systems, use Oracle Net Manager to create network service names for the primary and standby databases, to be used for redo transport. Step 5: create an spfile for the standby database, for example: SQL> create spfile from pfile='initboston.ora'; Step 6: copy the primary database's encryption wallet to the standby system. If the primary database uses a database encryption wallet, copy it to the standby system and configure the standby database to use it.
See the Oracle Database Advanced Security Administrator's Guide for more on transparent data encryption.

Start the physical standby database. Execute the following steps to start the physical standby database and Redo Apply. Step 1: start the physical standby database: SQL> startup mount; Step 2: prepare the standby database to receive redo data, as described in section 6.2.3. Step 3: create online redo logs on the standby database. Although this step is optional, Oracle best practice recommends doing it when creating the standby, so that the standby is ready to transition quickly to the primary role. Step 4: start Redo Apply with the following statement: SQL> alter database recover managed standby database using current logfile disconnect from session; For more information, see section 7.3, "Applying Redo Data to Physical Standby Databases".

Verify that the physical standby database is working. Step 1: identify the existing archived log files. On the standby database, query the V$ARCHIVED_LOG view, for example: SQL> select sequence#, first_time, next_time from v$archived_log order by sequence#; Step 2: force a log switch to archive the current online log file. On the primary database, execute: SQL> alter system switch logfile; Step 3: confirm that the new redo data was archived on the standby database: SQL> select sequence#, first_time, next_time from v$archived_log order by sequence#; Step 4: confirm that the received redo has been applied: SQL> select sequence#, applied from v$archived_log order by sequence#; for applied logs, the APPLIED column normally shows IN-MEMORY or YES.

At this point the physical standby database is running and provides data protection at the maximum performance level. Some additional preparations you can make on the physical standby database follow.
Upgrade the data protection mode: by default a DG configuration runs in maximum performance mode. Enable Flashback Database: with Flashback Database enabled, the old primary database does not need to be re-created after a failover; you can enable the feature on both the primary and standby databases. Sections 13.2 and 13.3 describe how to use Flashback Database in a DG environment.
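The two preparations above could be sketched as follows, reusing the chicago/boston example (assumptions: standby redo logs exist, and a fast recovery area is configured for Flashback Database):

```sql
-- Enable Flashback Database (run on both primary and standby):
ALTER DATABASE FLASHBACK ON;

-- Upgrade the protection mode from the default maximum
-- performance. On the primary, first make the remote
-- destination synchronous:
ALTER SYSTEM SET log_archive_dest_2=
  'SERVICE=boston SYNC AFFIRM
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=boston';
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
```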