Configuring Oracle 11g Data Guard (logical standby) on the same host


An Oracle Data Guard logical standby database is created from a backup of the primary database: first build a physical standby database, then convert it into a logical standby database. From that point on, the primary ships its redo to the standby, where LogMiner extracts the SQL statements that were executed on the primary and re-executes them on the standby, keeping the standby logically consistent with the primary. This is the key contrast with a physical standby: a physical standby uses Redo Apply, while a logical standby uses SQL Apply. Because the logical standby only guarantees logical consistency with the primary, it can remain in the open state and accept its own DML operations while apply is running. This article describes the precautions for creating a logical standby database and the creation procedure itself.
1. Restrictions on the logical standby database

A logical standby database has many restrictions. Some special data types, such as object types and other user-defined types, nested tables, ROWID, and BFILE, are not supported; segment compression is not supported; and certain DDL statements are not replicated. For details, see Oracle Data Guard Concepts and Administration. Even so, a logical standby offers many features that a physical standby does not. The important points are listed below.

a. Determine the unsupported (skipped) schemas. The internal schemas shipped with the Oracle database are skipped by SQL Apply, so do not create or test objects under these schemas. They can be listed with the following query:

SQL> SELECT owner FROM dba_logstdby_skip WHERE statement_opt = 'INTERNAL SCHEMA';

b. Determine unsupported data objects and data types. Use the following query to find unsupported tables:

SQL> SELECT DISTINCT owner, table_name FROM dba_logstdby_unsupported
  2  ORDER BY owner, table_name;

The preceding result can be refined to find out why a given table is unsupported. The query below shows that this table cannot be maintained on the logical standby because it uses nested table, BFILE, and object columns:

SQL> SELECT column_name, data_type FROM dba_logstdby_unsupported
  2  WHERE owner = 'PM' AND table_name = 'PRINT_MEDIA';

COLUMN_NAME          DATA_TYPE
-------------------- --------------------
AD_TEXTDOCS_NTAB     NESTED TABLE
AD_GRAPHIC           BFILE
AD_HEADER            OBJECT

c. Determine objects with uniqueness problems. Although the logical standby starts out as a copy of the primary, its rows are maintained by re-executed SQL, so a rowid on the logical standby is not equivalent to the rowid of the same row on the primary.
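Before drilling into individual tables, a quick per-schema overview can help scope the problem. The sketch below aggregates the same DBA_LOGSTDBY_UNSUPPORTED view used above:

```sql
-- How many unsupported tables each schema contains
SELECT owner, COUNT(DISTINCT table_name) AS unsupported_tables
  FROM dba_logstdby_unsupported
 GROUP BY owner
 ORDER BY owner;
```

Schemas with a large count here are candidates for redesign (or for being skipped entirely) before the logical standby is built.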
For more information about rowid, see Oracle ROWID. For UPDATE and DELETE operations on the primary, Oracle relies on primary keys or unique indexes together with supplemental logging to guarantee that the primary and standby operate on the same row of the same object. With a primary key or unique index and supplemental logging enabled, how does an UPDATE statement identify the updated row? When the redo is written, the following scenarios apply:

- The table has a primary key: the primary key value is written into the redo together with the updated columns and becomes part of the reconstructed UPDATE statement.
- The table has no primary key but a non-null unique index/constraint exists: the shortest non-null unique index/constraint is used together with the updated columns as part of the UPDATE statement.
- The table has neither a primary key nor a unique index/constraint: all columns of bounded length (excluding LONG, LOB, LONG RAW, object type, and collection columns) are used together with the updated columns as part of the UPDATE statement.

Tables with a unique function-based index can be maintained by SQL Apply as long as the modified rows can be uniquely identified, but the function-based index itself cannot be used to identify the updated rows.
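The row-identification data described above reaches the redo stream only when supplemental logging is enabled on the primary. DBMS_LOGSTDBY.BUILD (run later in this article) enables it implicitly; as a sketch, it can also be switched on and verified explicitly:

```sql
-- Enable primary key / unique index supplemental logging on the primary
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;

-- Both columns should now show YES
SELECT supplemental_log_data_pk, supplemental_log_data_ui FROM v$database;
```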
You can add a RELY constraint when the application guarantees that rows in the table are unique and you do not want to create a real primary key, thereby avoiding the extra overhead of primary key maintenance:

SQL> ALTER TABLE tb_name ADD PRIMARY KEY (id, name) RELY DISABLE;

The data dictionary view DBA_LOGSTDBY_NOT_UNIQUE records tables that have neither a primary key nor a unique index, i.e. tables without enough information to guarantee that the primary and the logical standby address the same row:

SQL> SELECT owner, table_name FROM dba_logstdby_not_unique
  2  WHERE (owner, table_name) NOT IN
  3        (SELECT DISTINCT owner, table_name FROM dba_logstdby_unsupported)
  4  AND bad_column = 'Y';

Check whether supplemental logging is enabled on the primary:

SQL> SELECT supplemental_log_data_pk, supplemental_log_data_ui FROM v$database;

2. Important processes of the logical standby database

The logical standby needs a set of processes to capture and apply logs. They form two components: a mining engine, which extracts the SQL statement set from the archived logs or standby redo logs, and an apply engine, which applies that SQL statement set to the logical standby database. The processes of both engines can be queried in the V$LOGSTDBY_PROCESS view.

Mining engine processes:
- READER: reads redo records from the archived logs or standby redo log files shipped from the primary.
- PREPARER: converts the redo records parsed by the READER into logical change records (LCRs); there can be multiple PREPARER processes. The parsed LCRs are staged in an area of the shared pool called the LCR cache.
- BUILDER: groups LCRs into transactions and merges multiple LCRs into a single LCR where needed.
The BUILDER process is also responsible for managing the LCR cache, for example paging out memory and advancing the log mining checkpoint.

Apply engine processes:
- ANALYZER: examines the transaction chunks made up of groups of LCRs, filters out transactions that do not need to be applied, and identifies dependencies between transactions.
- COORDINATOR: assigns transactions to APPLIER processes, monitors transaction dependencies, and coordinates the commit order.
- APPLIER: applies LCRs to the standby database; there can be multiple APPLIER processes.

3. Creating a logical standby database

a. Create a physical standby database first. There are many ways to do this: with Oracle 11g you can create a physical standby directly from the active database, or use the 10g-style RMAN duplicate method. The creation of the physical standby is not demonstrated here.

b. Verify the primary and the physical standby. Primary: CNBO; standby: HKBO.

-- On the primary
CNBO> select name, database_role, switchover_status from v$database;

-- On the standby
HKBO> select name, open_mode, database_role, protection_mode from v$database;

NAME    OPEN_MODE  DATABASE_ROLE      PROTECTION_MODE
------- ---------- ------------------ --------------------
HKBO    MOUNTED    PHYSICAL STANDBY   MAXIMUM PERFORMANCE

-- Confirm that redo is being applied
HKBO> select sequence#, first_time, next_time, applied
  2   from v$archived_log where rownum < 3 order by first_time desc;

c. Before switching the physical standby to a logical standby, a LogMiner dictionary must be built in the primary and supplemental logging enabled, so stop the MRP process of the standby to avoid additional redo apply. If the existing physical standby is managed by the Broker, disable the target database in the Broker first.

HKBO> alter database recover managed standby database cancel;

d. Modify the LOG_ARCHIVE_DEST_n parameters of the primary. Since the logical standby may later be converted into the primary, and the current primary into a logical standby, it is recommended to adjust LOG_ARCHIVE_DEST_n now. First, change the VALID_FOR attribute of LOG_ARCHIVE_DEST_1 on the primary so that it is valid only for online redo logs, excluding standby redo logs; second, add a new archive destination for the standby redo logs. In other words, separate the archiving of online logs from the archiving of standby logs. An example follows (in this demonstration LOG_ARCHIVE_DEST_1 was not actually modified; the local path below is a placeholder):

-- Primary: cnbo; standby: hkbo
LOG_ARCHIVE_DEST_1 =
 'LOCATION=<local_arch_path>                 -- stores archives while cnbo is the primary, and
  VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)      --   its own archives after cnbo becomes a standby
  DB_UNIQUE_NAME=cnbo';

LOG_ARCHIVE_DEST_3 =                         -- valid only after the primary cnbo becomes a standby
 'LOCATION=USE_DB_RECOVERY_FILE_DEST         -- stores the STANDBY_LOGFILES received from the
  VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)  --   new primary while cnbo is in the standby role
  DB_UNIQUE_NAME=cnbo';

LOG_ARCHIVE_DEST_STATE_3 = ENABLE

Then build the LogMiner dictionary on the primary:

CNBO> exec dbms_logstdby.build;
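As a quick sanity check of the destination setup (a sketch; the destination numbers match this article's configuration), the role and log-type validity of each destination can be read back from V$ARCHIVE_DEST:

```sql
-- Confirm which destinations are active and for which roles/log types they are valid
SELECT dest_id, status, valid_type, valid_role
  FROM v$archive_dest
 WHERE status <> 'INACTIVE';
```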
e. Convert the physical standby into a logical standby.

HKBO> show parameter db_name

NAME     TYPE    VALUE
-------- ------- -------
db_name  string  cnbo

HKBO> alter database recover to logical standby hkbo;
-- If a pfile is in use, an ORA-16254 is raised, prompting that db_name must be changed.

HKBO> shutdown abort;
HKBO> startup mount;
-- The standby must be restarted because the logical standby has been reinitialized,
-- including a new db_name, DBID, and incarnation.

HKBO> select name, database_role from v$database;

NAME    DATABASE_ROLE
------- ----------------
HKBO    LOGICAL STANDBY

f. In Oracle 11g it is no longer necessary to recreate the standby database password file. Oracle 10g requires it, and the password must match that of the primary when it is recreated.

g. Configure LOG_ARCHIVE_DEST_n on the standby. This differs from a physical standby: once opened, the logical standby generates its own redo, so LOG_ARCHIVE_DEST_n must account for the three kinds of log files present on a logical standby: online redo logs, archived redo logs, and standby redo logs.
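Standby redo logs are normally created during the physical-standby phase; if they are missing, a minimal sketch for adding them follows (the group numbers and size are illustrative, not taken from this article's environment):

```sql
-- Add standby redo log groups; Oracle recommends one more group than the number of
-- online redo log groups, each the same size as the online redo logs
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
```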
HKBO> select name, value from v$parameter
  2   where name in ('log_archive_dest_1','log_archive_dest_2');

NAME                 VALUE
-------------------- ------------------------------------------------------------
log_archive_dest_1   LOCATION=USE_DB_RECOVERY_FILE_DEST
                     VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=hkbo
log_archive_dest_2   SERVICE=cnbo ASYNC
                     VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=cnbo

HKBO> ho ls /u02/database/hkbo/fr_area/HKBO    --> current contents of the recovery area
archivelog  onlinelog

HKBO> alter system set log_archive_dest_1=
  2  'LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=hkbo';
HKBO> alter system set log_archive_dest_3=
  2  'LOCATION=<standby_arch_path> VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=hkbo';

HKBO> select name, value from v$parameter
  2   where name in ('log_archive_dest_1','log_archive_dest_2','log_archive_dest_3');

NAME                 VALUE
-------------------- ------------------------------------------------------------
log_archive_dest_1   LOCATION=USE_DB_RECOVERY_FILE_DEST   -- archives generated by hkbo itself,
                     VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)--   in both standby and primary roles
                     DB_UNIQUE_NAME=hkbo
log_archive_dest_2   SERVICE=cnbo ASYNC                   -- ignored while hkbo is a standby; ships
                     VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) -- redo to cnbo once hkbo is primary
                     DB_UNIQUE_NAME=cnbo
log_archive_dest_3   LOCATION=<standby_arch_path>         -- archives the standby logs received from
                     VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) -- the primary; ignored once hkbo
                     DB_UNIQUE_NAME=hkbo                  --   becomes the primary

-- It is recommended to increase the number of archive processes on both primary and standby
HKBO> alter system set log_archive_max_processes = 9;
CNBO> alter system set log_archive_max_processes = 9;

h. Open the logical standby database.

-- Open the database with resetlogs
HKBO> alter database open resetlogs;

-- For a logical standby on the same host as the primary, before starting SQL Apply for the
-- first time, make SQL Apply skip all ALTER TABLESPACE DDL, so that datafile DDL replayed
-- from the primary does not collide with the standby's own files. Whether this step is
-- required is discussed in Oracle Data Guard Concepts and Administration.
HKBO> exec dbms_logstdby.skip('ALTER TABLESPACE');

i. Start SQL Apply.

HKBO> alter database start logical standby apply immediate;

HKBO> ho ls /u02/database/hkbo/fr_area/HKBO
archivelog  foreign_archivelog  onlinelog
-- A foreign_archivelog directory has appeared; it stores the logs received from the primary.

j. Verify the result.

CNBO> create user robin identified by xxx
  2   default tablespace users;
CNBO> grant dba to robin;
CNBO> conn robin/xxx
CNBO> create table t (what varchar2(20), dt varchar2(20));
CNBO> insert into t select 'LogicalStdby', to_char(sysdate,'yyyymmdd hh24:mi:ss') from dual;
CNBO> commit;
CNBO> alter system switch logfile;

HKBO> select * from robin.t;

WHAT                 DT
-------------------- --------------------
LogicalStdby         20130820 17:33:19
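Once SQL Apply is running, progress can be watched through the views mentioned in section 2. A sketch:

```sql
-- Mining and apply engine processes (READER, PREPARER, BUILDER, ANALYZER,
-- COORDINATOR, APPLIER) and the SCN each has reached
SELECT sid, type, high_scn, status
  FROM v$logstdby_process;

-- Which logs received from the primary have been applied
SELECT sequence#, first_time, applied
  FROM dba_logstdby_log
 ORDER BY sequence#;
```

If APPLIED lags far behind on recent sequences, the V$LOGSTDBY_STATS view can help identify whether the bottleneck is mining or apply.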
