Oracle Data Guard: Theoretical Overview

Source: Internet

Oracle Data Guard, RAC, and Streams are three tools in the Oracle high-availability stack. Each can be used on its own or combined with the others, and they address different problems and scenarios.

RAC addresses single points of failure at the instance level and provides load balancing, so it is common in core systems. However, a RAC cluster holds only one copy of the data; although storage failures can be mitigated with RAID and similar mechanisms, the data itself is not redundant and remains a single point of failure.

Data Guard protects the data by keeping a redundant copy. It relies on a log-shipping mechanism to keep the redundant copy synchronized with the primary copy, and the synchronization can be real-time or delayed, synchronous or asynchronous. Data Guard is typically used for remote disaster recovery and as a high-availability solution for smaller systems. Although read-only queries can be run on a standby to offload some work from the primary database, Data Guard is not a performance solution.

Streams synchronizes data on top of Oracle Advanced Queuing. It offers flexible configuration at several levels and a rich development API, which makes it better suited to data sharing at the application layer.

A Data Guard configuration contains at least two databases. One is open and serves users; this is the primary database. The other is in recovery mode and is called the standby database. While the configuration is running, changes made on the primary database are recorded in its online redo logs and archived logs. These logs are shipped over the network to the standby database and replayed there, keeping the standby synchronized with the primary. Data Guard automates the log transport and recovery process and provides a set of parameters and commands that simplify the DBA's work.

If the primary database must be taken down for a foreseeable reason, such as a hardware or software upgrade, the standby can be switched over to the primary role to continue serving users, which shortens the outage and avoids data loss. If the primary becomes unavailable unexpectedly, the standby can be forcibly failed over to the primary role; how much data is lost in that case depends on the configured data protection level. Primary and standby are therefore only roles and are not permanently bound to a particular database.
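Because the role of a database can change over its lifetime, it is often useful to confirm what a given instance currently is. A minimal sketch, using standard V$DATABASE columns (run from SQL*Plus on either database):

SQL> SELECT name, database_role, protection_mode, open_mode FROM v$database;

On a primary this returns DATABASE_ROLE = PRIMARY; on a physical standby, PHYSICAL STANDBY; on a logical standby, LOGICAL STANDBY.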
I. Data Guard architecture

Functionally, the Data Guard architecture can be divided into three parts:
1) Redo send
2) Redo receive
3) Redo apply

1. Redo send

Redo is generated continuously while the primary database is running, and it has to be shipped to the standby database. The shipping can be done either by the LGWR process or by the ARCH process of the primary database. Different archive destinations may use different methods, but each destination can use only one of them. Which process is chosen has a large impact on both the data protection capability and the availability of the system.

1.1 Using the ARCH process

1) The primary database keeps generating redo, which the LGWR process writes to the online redo logs.
2) When a group of online redo logs fills up, a log switch occurs and local archiving is triggered. The local archive location is defined in the form LOG_ARCHIVE_DEST_1='LOCATION=/path', for example:
alter system set log_archive_dest_1='location=/u01/arch' scope=both;
3) Once archived, an online redo log group can be overwritten and reused.
4) The ARCH process sends the archived logs to the RFS (Remote File Server) process of the standby database over Oracle Net.
5) The RFS process on the standby side writes the received redo into archived log files.
6) The MRP process (Managed Recovery Process, for Redo Apply) or the LSP process (for SQL Apply) on the standby side applies these logs to the standby database to synchronize the data.

When the ARCH process is used for transport and standby redo logs are not written, the redo is kept in archive files on the standby side.

Note: a logical standby converts the received redo into SQL statements and executes them against the standby database; this is called SQL Apply. A physical standby applies the redo generated by the primary through media recovery; this is called Redo Apply. Also note that to create a logical standby you must first create a physical standby and then convert it into a logical standby.

The biggest problem with the ARCH process is that the primary only sends logs to the standby when an archive operation occurs. If the primary goes down unexpectedly, the redo still sitting in the online logs is lost, so the ARCH process cannot avoid data loss. To avoid data loss you must use LGWR, which can work in one of two modes: SYNC (synchronous) or ASYNC (asynchronous).

By default the primary database uses the ARCH process; the corresponding parameter looks like this:
alter system set log_archive_dest_2='service=ST' scope=both;

1.2 Using the LGWR process in SYNC mode

1) Redo generated by the primary database is written to the log file and to the network at the same time. In other words, while the LGWR process writes redo to the local online redo log, it also hands it to the local LNSn process (LGWR Network Server process), which sends it to the remote destination over the network. Each remote destination has its own LNS process, and multiple LNS processes can work in parallel.
2) LGWR must wait until both the local log-file write and the network transfer through the LNSn process have succeeded before a transaction on the primary database can commit. This is what SYNC means.
3) The RFS process on the standby database writes the received redo to the standby redo logs.
4) A log switch on the primary also triggers a log switch on the standby, that is, the standby redo log is archived on the standby database, and this in turn triggers the MRP or LSP process to apply the archived log.

Because the primary's redo is shipped in real time, the standby database can apply it in two ways: real-time apply, where recovery starts as soon as the RFS process has written the redo to the standby redo log; and archive-based recovery, where recovery is only triggered after the standby redo log has been archived.

The primary database uses the ARCH process by default; if the LGWR process is to be used, it must be specified explicitly.
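LGWR SYNC transport and real-time apply both rely on standby redo logs on the receiving side. A minimal sketch of adding them; the group numbers, file paths, and the 50M size are assumptions, the size should match the online redo logs, and the usual recommendation is one more standby group per thread than there are online groups:

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 4 ('/u01/oradata/st/std_redo04.log') SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 5 ('/u01/oradata/st/std_redo05.log') SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 6 ('/u01/oradata/st/std_redo06.log') SIZE 50M;

The same statements can also be run on the primary database so that it is already prepared for a future role switch.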
When LGWR SYNC is used, you can also specify the NET_TIMEOUT attribute (in seconds), which tells the LGWR process how long to wait for a response over the network before reporting an error. Example:
alter system set log_archive_dest_2='service=ST lgwr sync net_timeout=30' scope=both;

1.3 Using the LGWR process in ASYNC mode

With LGWR SYNC, if the redo cannot be delivered to the standby database, the LGWR process reports an error; in other words, the primary's LGWR process depends on the state of the network. This requirement is sometimes too strict, in which case LGWR ASYNC can be used instead. It works as follows:

1) When the primary database generates redo, LGWR hands it to the log file and to the local LNSn process at the same time, but LGWR only has to write the log file successfully; it does not wait for the LNSn network transfer to complete.
2) The LNSn process sends the redo to the standby database asynchronously, and multiple LNSn processes can send in parallel.
3) When an online redo log of the primary fills up, a log switch occurs and triggers archiving; on the standby, the standby redo log is archived as well, which in turn triggers the MRP or LSP process to apply the archived logs.

Because the LGWR process does not wait for a response from the LNSn process, the NET_TIMEOUT attribute is not needed when configuring LGWR ASYNC. Example:
alter system set log_archive_dest_2='service=ST lgwr async' scope=both;

2. Redo receive

The RFS (Remote File Server) process of the standby database receives the redo and writes it either to a standby redo log or to an archived log file. Which file it is written to depends on the transport mode of the primary and on the configuration of the standby. If the redo is written to standby redo logs, a log switch on the primary also switches the standby redo log on the standby, and the standby redo log is then archived. If the redo is written directly to an archived log, that write can itself be regarded as the archive operation.

When receiving redo, note where the archived logs are stored:
1) If the STANDBY_ARCHIVE_DEST parameter is configured, the directory specified by that parameter is used.
2) If a LOG_ARCHIVE_DEST_n parameter explicitly defines the VALID_FOR=(STANDBY_LOGFILE,*) option, the directory specified by that parameter is used.
3) If the COMPATIBLE parameter of the database is greater than or equal to 10.0, any LOG_ARCHIVE_DEST_n may be selected.
4) If neither STANDBY_ARCHIVE_DEST nor LOG_ARCHIVE_DEST_n is configured, the default value of STANDBY_ARCHIVE_DEST is used, which is $ORACLE_HOME/dbs/arc.
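To confirm that redo is actually arriving on the standby side, a couple of simple checks can be run there. A sketch only; the views and columns are standard, but the output depends on your configuration:

SQL> SELECT group#, thread#, sequence#, status FROM v$standby_log;
SQL> SELECT thread#, sequence#, applied FROM v$archived_log ORDER BY thread#, sequence#;

The first query shows whether the standby redo logs (if any are configured) are being written to; the second shows which received archives have already been applied.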
3. Redo apply

Redo apply means replaying the primary database's redo on the standby database so that the two databases stay synchronized. Depending on how the standby replays the redo, standbys are divided into physical standbys and logical standbys.

A physical standby uses media recovery technology and applies the redo at the block level. This places no restrictions on data types and guarantees that the two databases are exactly identical. A physical standby applies redo in the MOUNT state; it can also be opened, but only in read-only mode, and while it is open it does not apply redo.

A logical standby uses LogMiner technology to reconstruct SQL statements from the redo, and the SQL engine then executes those statements. A logical standby does not support every data type; the unsupported data types can be listed from the DBA_LOGSTDBY_UNSUPPORTED view. If such data types are used, the standby cannot be kept completely consistent with the primary. A logical standby can be open for reads and writes while it is applying redo.

The apply-related processes on the standby read the received redo, either from archive files on the standby or from the standby redo logs, and apply it to the standby database: a physical standby through Redo Apply, a logical standby through SQL Apply. Based on when the apply happens, there are two modes:

Real-time apply: this mode requires standby redo logs. Every time redo is written to a standby redo log, recovery is triggered immediately. The advantage is that it shortens a switchover or failover, because the switch time is mostly spent applying whatever redo is still outstanding.

Archived-log apply: a log switch on the primary triggers archiving on the standby, and recovery is triggered only after the archiving completes. This is the default apply mode.

For a physical standby, real-time apply is enabled with:
Alter database recover managed standby database using current logfile;
For a logical standby, real-time apply is enabled with:
alter database start logical standby apply immediate;
To check whether real-time apply is in use:
Select recovery_mode from v$archive_dest_status;

The standby-side processes can also be examined directly, for example:
SQL> set wrap off
SQL> select process, status, thread#, sequence#, client_pid from v$managed_standby;
In the sample output (11 rows), four ARCH processes show status CONNECTED, the MRP0 process is working on thread 1 at sequence 30843, and several RFS processes (some with status ATTACHED) are handling thread 1, sequences 30833 through 30842, each associated with a client process id.
II. Data protection modes

Data Guard lets you choose among three data protection modes: Maximum Protection, Maximum Availability, and Maximum Performance.

1. Maximum Protection

This mode guarantees that no data is lost, and there is a price for that guarantee. Before a transaction can commit, its redo must be written not only to the local online redo logs but also to the standby redo logs of at least one standby database; only after the redo is confirmed to be available on at least one standby (if there are several) can the commit on the primary complete. If a fault makes the standby unavailable (for example a network outage), the primary database shuts itself down to prevent data loss. This mode requires standby redo logs on the standby database, and the primary must ship its redo to the standby synchronously with LGWR SYNC.

2. Maximum Availability

This mode provides the highest level of data protection that is possible without compromising the availability of the primary database. The implementation is similar to Maximum Protection: redo must be written to the standby redo logs of at least one standby before a transaction can commit. Unlike Maximum Protection, however, if the standby becomes unreachable because of a fault, the primary is not shut down; it automatically falls back to Maximum Performance behavior, and once the standby is reachable again the primary automatically returns to Maximum Availability. This mode avoids data loss as far as possible, but it cannot guarantee that no data will ever be lost. It requires standby redo logs on the standby database, and the primary must ship redo with LGWR SYNC AFFIRM.

3. Maximum Performance

This is the default protection mode when a standby is created, and it provides the highest level of data protection that can be achieved without affecting the performance of the primary database. Transactions can commit at any time; the redo still has to be written to at least one standby database, but the write is asynchronous. If the network is in good shape, this mode offers protection close to Maximum Availability with only a slight impact on the performance of the primary. It can be implemented with LGWR ASYNC or with the ARCH process, and the standby database does not require standby redo logs.

4. Steps to change the data protection mode

1) Shut down the database and restart it to the MOUNT state. For RAC, shut down all instances and start only one instance to MOUNT.
2) Change the mode. Syntax:
alter database set standby database to maximize {PROTECTION | AVAILABILITY | PERFORMANCE};
For example:
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PROTECTION;
3) Open the database:
alter database open;
4) Confirm the new data protection mode:
SQL> select protection_mode, protection_level from v$database;

III. Automatic gap detection and resolution

When some of the primary database's logs are not successfully shipped to the standby database, an archive gap occurs; the missing logs are the gap. Data Guard can detect and resolve archive gaps automatically, without DBA intervention, as long as the FAL_CLIENT and FAL_SERVER parameters are configured (FAL stands for Fetch Archive Log).

As the name suggests, the process is initiated by the standby database: the standby acts as the FAL client and fetches the missing logs from the FAL server. In 10g the FAL_SERVER can be the primary database or another standby, for example FAL_SERVER='PR1,ST1,ST2'. Both FAL_CLIENT and FAL_SERVER are Oracle Net names. The FAL client sends a request to the FAL server over the network, and the FAL server ships the missing logs back to the FAL client. These are not necessarily the same connection, so when the FAL client sends its request it also passes along its own FAL_CLIENT value to tell the FAL server where to send the missing logs; that value is again an Oracle Net name, defined on the FAL server and pointing back to the FAL client.

Besides the automatic mechanism, the DBA can resolve a gap manually:
1) Check whether there is a gap:
SQL> SELECT UNIQUE THREAD#, MAX(SEQUENCE#) OVER (PARTITION BY THREAD#) LAST FROM V$ARCHIVED_LOG;
SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;
2) If there is one, copy the missing archived logs to the standby.
3) Register those logs manually:
SQL> ALTER DATABASE REGISTER LOGFILE 'path';
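A minimal sketch of the FAL configuration on the standby side; the net service names PR1 and ST1 are assumptions and must resolve through tnsnames.ora or another naming method:

SQL> ALTER SYSTEM SET FAL_SERVER='PR1' SCOPE=BOTH;
SQL> ALTER SYSTEM SET FAL_CLIENT='ST1' SCOPE=BOTH;

Setting the same pair (with the names reversed) on the primary as well means that nothing extra has to be configured after a switchover.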
IV. Specifying the destination of redo transport

1. The VALID_FOR attribute

The VALID_FOR attribute of the LOG_ARCHIVE_DEST_n parameter specifies what a destination is used for. Literally, VALID_FOR means "valid for whom", and it takes two values: a redo log type and a database role. In essence it says: this destination is used for the specified type of log file while the database runs in the specified role. Its main purpose is to make sure the database keeps working correctly once a role switch occurs.

The redo log type can be ONLINE_LOGFILE, STANDBY_LOGFILE, or ALL_LOGFILES. The database role can be PRIMARY_ROLE, STANDBY_ROLE, or ALL_ROLES.

Note: the default is VALID_FOR=(ALL_LOGFILES, ALL_ROLES). It is recommended to set this attribute explicitly instead of relying on the default, because the default is not always appropriate. For example, a logical standby is open read/write by default and produces several kinds of log files of its own (online redo logs, archived redo logs, and standby redo logs). With the default setting, the archives generated by the logical standby itself and the archives it receives end up in the same path, which is both hard to manage and potentially risky. The recommendation is therefore to set VALID_FOR on every LOG_ARCHIVE_DEST_n and to keep locally generated and received archive files in different paths.

2. The DB_UNIQUE_NAME attribute

DB_UNIQUE_NAME is a keyword introduced in the 10g release; it did not exist in earlier versions. It assigns each database a unique name, and it is this unique name that allows redo to be routed to exactly the database the DBA intends. To guarantee that the redo reaches the intended server, besides specifying the correct DB_UNIQUE_NAME attribute in LOG_ARCHIVE_DEST_n, the initialization parameter LOG_ARCHIVE_CONFIG must also be configured correctly. This parameter not only lists, via its DG_CONFIG attribute, the unique names of all databases in the Data Guard configuration, it also carries attributes that control the sending and receiving of redo: SEND allows the database to send redo to remote destinations, RECEIVE allows a standby to receive redo from other databases, and NOSEND and NORECEIVE forbid them. For example, to prevent the primary database from receiving any archived redo, you could set:
LOG_ARCHIVE_CONFIG='NORECEIVE, DG_CONFIG=(PRI,ST)'
Note that if a role switch later turns this server into a standby, it will then be unable to receive redo.
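A sketch of what the recommendation above could look like on a logical standby whose DB_UNIQUE_NAME is ST, with the primary named PRI; the paths and destination numbers are assumptions. Archives of the standby's own online redo logs go to one directory, received redo coming in through the standby redo logs goes to another, and one destination is reserved for shipping redo should this database ever become the primary:

LOG_ARCHIVE_DEST_1='LOCATION=/u01/arch_local VALID_FOR=(ONLINE_LOGFILE,ALL_ROLES) DB_UNIQUE_NAME=ST'
LOG_ARCHIVE_DEST_2='SERVICE=PRI VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME=PRI'
LOG_ARCHIVE_DEST_3='LOCATION=/u01/arch_recv VALID_FOR=(STANDBY_LOGFILE,STANDBY_ROLE) DB_UNIQUE_NAME=ST'
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRI,ST)'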
V. Initialization parameters in a Data Guard environment

Parameters related to the primary role:

DB_NAME must be the same for all databases in the same Data Guard configuration, for example: DB_NAME=Dave

DB_UNIQUE_NAME assigns each database its own unique name. Once set, it does not change unless the DBA changes it deliberately, for example: DB_UNIQUE_NAME=DavePre

LOG_ARCHIVE_CONFIG controls the sending and receiving of redo between the databases. Its DG_CONFIG attribute lists the DB_UNIQUE_NAME of every database in the configuration (primary and standbys), separated by commas; the SEND/NOSEND attributes control whether redo may be sent and the RECEIVE/NORECEIVE attributes control whether it may be received. For example:
LOG_ARCHIVE_CONFIG='DG_CONFIG=(DavePre,DaveDG)'

LOG_ARCHIVE_DEST_n is a very important parameter with many attributes and sub-parameters (the Data Guard documentation devotes a whole chapter to them). Example:
LOG_ARCHIVE_DEST_1='LOCATION=L:/oracle/oradata/Dave VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DavePre'

LOG_ARCHIVE_DEST_STATE_n controls whether the redo transport service is allowed to send redo to the corresponding destination. It has four possible values, each with a different effect.

REMOTE_LOGIN_PASSWORDFILE should be set to EXCLUSIVE or SHARED, and all database servers in the same Data Guard configuration must use the same SYS password.

Parameters related to the standby role (it is recommended to set them in the primary database's initialization parameters as well, so that after a role switch the new standby still runs normally):

FAL_SERVER specifies a Net service name. The database it points to is normally the primary. When the local database runs as a standby and an archive gap appears, this parameter tells it which server to fetch the missing archives from, for example: FAL_SERVER=DavePre. (FAL stands for Fetch Archived Log.) FAL_SERVER accepts multiple values separated by commas.

FAL_CLIENT also specifies a Net service name; the database it points to should be a standby. When the local database runs as the primary, the missing archive files are sent to the site named here, for example: FAL_CLIENT=DaveDG. FAL_CLIENT likewise accepts multiple comma-separated values.

DB_FILE_NAME_CONVERT is used when the datafile paths of the standby differ from those of the primary; it converts the paths automatically. Its values come in pairs: the first element is the pattern before conversion and the second the pattern after conversion, for example:
DB_FILE_NAME_CONVERT='F:/oradata/DavePre','L:/oradata/DaveDG'

LOG_FILE_NAME_CONVERT works the same way but applies to log file paths, for example:
LOG_FILE_NAME_CONVERT='F:/oradata/DavePre','L:/oradata/DaveDG'

STANDBY_FILE_MANAGEMENT determines how the standby reacts when datafiles are changed on the primary (for example when a datafile is added or renamed). AUTO means the change is applied automatically on the standby; MANUAL means it has to be handled by hand. For example: STANDBY_FILE_MANAGEMENT=AUTO
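Putting the pieces together, a sketch of how the primary-side parameter file might look for the example names used above (DavePre as the primary's DB_UNIQUE_NAME, DaveDG as the standby's). The paths, service names, and destination numbers are assumptions, and the convert parameters are written from the primary's point of view, so they map the standby's paths back to the primary's:

DB_NAME=Dave
DB_UNIQUE_NAME=DavePre
LOG_ARCHIVE_CONFIG='DG_CONFIG=(DavePre,DaveDG)'
LOG_ARCHIVE_DEST_1='LOCATION=F:/oradata/DavePre/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DavePre'
LOG_ARCHIVE_DEST_2='SERVICE=DaveDG LGWR ASYNC VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME=DaveDG'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
FAL_SERVER=DaveDG
FAL_CLIENT=DavePre
DB_FILE_NAME_CONVERT='L:/oradata/DaveDG','F:/oradata/DavePre'
LOG_FILE_NAME_CONVERT='L:/oradata/DaveDG','F:/oradata/DavePre'
STANDBY_FILE_MANAGEMENT=AUTO

The standby-role parameters (FAL_*, the *_NAME_CONVERT pair, STANDBY_FILE_MANAGEMENT) only take effect when this database runs as a standby, but setting them in advance means nothing has to be changed after a switchover.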
Handling archiving failures

Several attributes of the LOG_ARCHIVE_DEST_n parameter control what happens when the archiving process runs into a fault.

1. REOPEN: retry after a specified interval

With REOPEN=seconds (default 300), the archive operation is retried against that destination after the given interval. If the value is set to 0, a failed shipment is not retried; the destination is only tried again when the next redo is archived. For example, to set REOPEN to 100 seconds:
LOG_ARCHIVE_DEST_2='SERVICE=DavePrimary LGWR ASYNC REOPEN=100'

2. ALTERNATE: specify a fallback destination

The ALTERNATE attribute names a fallback archive destination. If the main destination cannot be used for whatever reason, the redo is temporarily written to the path given by the ALTERNATE attribute. For example:
LOG_ARCHIVE_DEST_1='LOCATION=/disk1 ALTERNATE=LOG_ARCHIVE_DEST_2'
LOG_ARCHIVE_DEST_2='LOCATION=/disk2'
LOG_ARCHIVE_DEST_STATE_2=ALTERNATE
With these settings, if a file cannot be archived under /disk1, the system automatically tries to archive it under /disk2.

Functionally, REOPEN and ALTERNATE overlap, but note that REOPEN has a higher priority than ALTERNATE: if REOPEN is set to a value greater than 0, the LGWR (or ARCn) process first keeps retrying the primary archive destination until the maximum number of retries is reached; only if it still fails does it write to the path given by ALTERNATE.

3. MAX_FAILURE: limit the number of retries

REOPEN specifies the interval between retries; MAX_FAILURE caps how many times a failed destination is retried. For example, to have the local archive destination in LOG_ARCHIVE_DEST_1 retried every 100 seconds, but at most 3 times:
LOG_ARCHIVE_DEST_1='LOCATION=E:/ora10g/oradata/jsspdg/ REOPEN=100 MAX_FAILURE=3'
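These attributes can be combined on a single destination. A sketch, with assumed paths and destination numbers: retry the main location every 60 seconds, give up after 5 attempts, and then fall back to the alternate location.

LOG_ARCHIVE_DEST_1='LOCATION=/disk1 REOPEN=60 MAX_FAILURE=5 ALTERNATE=LOG_ARCHIVE_DEST_3'
LOG_ARCHIVE_DEST_3='LOCATION=/disk2'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_3=ALTERNATE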
VI. Physical standby versus logical standby

Standby databases come in two kinds: physical standbys and logical standbys.

1. Physical standby

A physical standby is an exact copy of the primary database, and Data Guard maintains it through Redo Apply. While a physical standby is not applying redo, it can be opened in read-only mode. If a flash recovery area is configured, it can even be opened read/write temporarily; afterwards the Flashback Database feature is used to return it to the state it was in before, so that it can go back to receiving and applying redo from the primary.

Redo Apply. A physical standby stays consistent with the primary through Redo Apply, which uses Oracle's recovery mechanism to apply the redo found in the archive files (or in the standby redo logs). The recovery is a block-for-block application, and while Redo Apply is running the database cannot be open.

Read-only mode. When the standby is opened read only, you can run queries or backups against it (which indirectly relieves the primary). While it is open read only, the standby continues to receive redo from the primary, but the redo is not applied until Redo Apply is resumed. In other words, Redo Apply does not run while the database is open read only, and while Redo Apply runs the database is not open. If needed you can alternate between the two states: apply redo for a while, then open read only for reporting, then return to Redo Apply to catch up with the primary, and so the cycle repeats (a command sketch follows at the end of this subsection).

Note: the apply capability of a physical standby is enhanced in Oracle 11g, where a physical standby can keep applying redo while it is open read only. This greatly broadens what a physical standby can be used for.

Read/write mode. If the standby is opened read/write, it stops receiving redo from the primary and temporarily loses its disaster-protection role. Opening read/write is still useful in some cases: for example, you may need to test something against real data but do not want to do it in the production database. You can then temporarily activate the standby in read/write mode and, when the work is done, flash the database back to the state it was in beforehand. (After the flashback, Data Guard resynchronizes automatically and the physical standby does not have to be rebuilt; if Flashback Database is not enabled, however, there is no way back to the state before the read/write open.)

Physical standby features:
(1) Disaster recovery and high availability. A physical standby provides a robust and efficient disaster-recovery and high-availability solution, with easy-to-manage switchover/failover role transitions and shorter planned or unplanned downtime.
(2) Data protection. With a physical standby, Data Guard can ensure that no data is lost even in the face of unforeseen disasters. Because a physical standby replicates block for block, it is independent of object types and statements: whatever exists on the primary also exists on the physical standby.
(3) Offloading the primary database. By moving backups and read-only queries to the physical standby, you save CPU and I/O resources on the primary.
(4) Performance. The Redo Apply technology used by a physical standby relies on the low-level recovery mechanism, bypassing the SQL code layer, and is therefore the most efficient apply mechanism.
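A sketch of the cycle described above for a 10g physical standby (in 11g with the read-only apply enhancement the standby can stay open while applying). Stopping the instance and remounting before resuming Redo Apply is the conservative route and is assumed here:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN READ ONLY;
-- run reports or backups here
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;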
2. Logical standby

A logical standby is also created from the primary database (or from a backup of it, or from a copy such as a physical standby), so at creation time it starts out just like a physical standby. But because a logical standby applies redo through SQL Apply, its physical file structure, and even the logical structure of the data, may differ from the primary. Unlike a physical standby, a logical standby normally runs open in read/write mode; you can access it at any time, that is, the logical standby runs SQL Apply while the database is OPEN.

This has both advantages and disadvantages. Because of how SQL Apply works, a logical standby has restrictions on certain data types and on certain DDL/DML statements. The unsupported data types can be listed from the DBA_LOGSTDBY_UNSUPPORTED view (see the query sketch at the end of this subsection); if such data types are used, the standby cannot be kept completely consistent with the primary. On the other hand, being open for reads and writes allows the logical standby to be used as a reporting system, which relieves the primary.

In addition to the disaster recovery, high availability, and data protection already described for a physical standby, a logical standby has the following features:

(1) It makes effective use of the standby machine's hardware. Besides disaster recovery, the logical standby can serve other business needs: you can create additional indexes and materialized views on the standby to improve query performance or to meet specific requirements, and you can create new schemas (which do not exist on the primary) and run DDL or DML in those schemas that would not be appropriate to run on the primary.

(2) It offloads the primary database. The logical standby can stay open while it is synchronizing with the primary, so the same database provides data protection and handles reporting, freeing the primary from report and query workloads and saving valuable CPU and I/O resources.

(3) Smooth upgrades. A logical standby can be used for rolling upgrades across database versions and for applying patches. There is a lot of room for this kind of use and the risk is small, provided you have the necessary technical skill. A physical standby can also support some upgrade operations, but probably not across platforms, which is why this item is not listed as a physical standby feature. If your application allows a logical standby to be built, this is a practical way of performing an online, rolling upgrade.
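Before deciding on a logical standby, it is worth checking on the primary which tables would not be maintained by SQL Apply. A minimal sketch using the standard data dictionary view mentioned above:

SQL> SELECT DISTINCT owner, table_name FROM dba_logstdby_unsupported ORDER BY owner, table_name;

Adding the column_name and data_type columns to the select list shows exactly which columns cause a table to be skipped.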
VII. Log apply services

Data Guard keeps the primary and standby databases consistent by applying redo, and the machinery behind this is the log apply service. It comes in two forms: Redo Apply, used only by physical standby databases, which synchronizes with the primary through media recovery; and SQL Apply, used only by logical standby databases, whose core is to mine the redo with LogMiner on the standby side and execute the resulting SQL statements. A physical standby therefore applies redo in the MOUNT state, whereas a logical standby is open read/write while applying; the objects maintained by SQL Apply, however, are read-only by default and cannot be modified directly on the logical standby.

7.1 Configuration options of the log apply service

By default, the log apply service waits until a complete archived log file has been received before it starts applying it. If the standby database has standby redo logs configured, real-time apply can be enabled instead: Data Guard then no longer waits for an archive, and as soon as the RFS process writes redo into the standby redo logs, the MRP or LSP process applies it to the standby database.

7.1.1 Starting real-time apply

The advantage of real-time apply is that redo does not have to be archived first; it is applied as soon as it is received. Role transitions therefore complete faster, because the redo has essentially been applied as it arrived. Enabling real-time apply is simple, provided the standby has standby redo logs.

For a physical standby, add the USING CURRENT LOGFILE clause to the statement that starts Redo Apply:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;
For a logical standby, add the IMMEDIATE clause to the statement that starts SQL Apply:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

7.1.2 Delaying the application of redo

In some situations you may not want the standby to follow the primary too closely. You can then specify the DELAY attribute (in minutes) on the LOG_ARCHIVE_DEST_n parameter that the primary uses to ship redo to that standby. If DELAY is specified without a value, the default is 30 minutes. Note that this attribute does not delay the shipping of redo to the standby; it only specifies how long after the redo has been archived on the standby the apply may start. For example, to set a 15-minute delay for a destination:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_3='SERVICE=DavePrimary ARCH VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=Dave DELAY=15';

However, if the DBA starts the apply in real-time mode, the standby ignores the DELAY attribute even if it is set in LOG_ARCHIVE_DEST_n. The standby side can also cancel a configured delay by adding the NODELAY clause when starting the apply. For a physical standby:
SQL> alter database recover managed standby database nodelay;
For a logical standby:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY NODELAY;

A delayed apply is usually configured for fault tolerance: if data on the primary is changed or deleted by mistake, the data can be recovered quickly from the standby as long as the standby has not yet applied the change. Since Oracle introduced Flashback in 9i, however, flashback is usually the more convenient way to deal with such mistakes, so the DELAY approach is now rarely used.

7.2 Applying redo to the standby database

7.2.1 Applying redo on a physical standby

To start Redo Apply, the database must be in the MOUNT state or open read only. The commands are familiar:

Foreground apply:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE;
After this statement runs, control does not return to the command-line window until the apply is stopped manually; if you still need to work with the database, you have to open a new session. When Oracle 8i introduced the standby feature (it was not yet called Data Guard), this was the only method available.

Background apply:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
This is the usual method today. Control returns to the command line immediately and the apply runs in background processes.

To start real-time apply, add the USING CURRENT LOGFILE clause:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;

To stop Redo Apply:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
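Once Redo Apply is running, progress can be monitored from the standby. A sketch; V$DATAGUARD_STATS is available from 10.2 onward, and the second query simply checks the highest applied archive sequence:

SQL> SELECT name, value FROM v$dataguard_stats WHERE name IN ('transport lag', 'apply lag');
SQL> SELECT MAX(sequence#) FROM v$archived_log WHERE applied = 'YES';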
7.2.2 Applying redo on a logical standby

SQL Apply works by converting the received redo into SQL statements and executing them on the logical standby database, so the logical standby must be started to the OPEN state.

(1) Starting SQL Apply. There is no foreground/background distinction when starting SQL Apply; after the statement is executed, control returns to the current command-line window automatically. To start SQL Apply:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
To start it with real-time apply, add the IMMEDIATE clause:
SQL> alter database start logical standby apply immediate;

(2) Stopping SQL Apply:
SQL> alter database stop logical standby apply;
Because the redo is applied by executing SQL statements, this statement stops the apply only after the transaction triggered by the currently executing SQL statement has finished. To stop SQL Apply immediately, without waiting for current transactions, use:
SQL> ALTER DATABASE ABORT LOGICAL STANDBY APPLY;
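To see how far SQL Apply has progressed on a logical standby, the standard DBA_LOGSTDBY_PROGRESS view can be queried (a sketch; the SCN values shown depend on the system):

SQL> SELECT applied_scn, newest_scn FROM dba_logstdby_progress;

When APPLIED_SCN catches up with NEWEST_SCN, the standby has applied all of the redo it has received.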
