Quick Configuration Guide for Oracle11gDataGuard physical standby database (I)


Origin

I recently ran physical standby configuration experiments on both 10g and 11g, and found that Data Guard is actually quite easy to set up, but good documentation is scarce. I started by following the official documentation and found it unclear in places.

Searching Google turned up two PDF documents that I think are much better than the official documentation. Translating them may be useful to some readers, and it helps me become more familiar with the material; I haven't translated English in a long time, so this is good practice as well.

Original documents:

Configure Data Guard 11gR2 Physical Standby Part 1
Configure Data Guard 11gR2 Physical Standby Part 2
Part 1
Introduction
Data Guard is an Oracle database feature that provides database redundancy. Redundancy is achieved by creating a standby database, a physical copy of the primary, ideally located in a different geographical location or on different storage. The standby stays synchronized by applying changes from the primary database, either by applying redo (a physical standby) or by applying SQL (a logical standby).

This article aims to show that configuring Data Guard is not complex and requires no special skills or training. It walks the reader through building a physical standby database quickly. My goal is to help you build a working physical standby even if this is your first encounter with Data Guard or you are worried about how difficult it might be to configure.

Why use Data Guard?
Each type of Oracle High Availability tool has its own purpose. Data Guard is used for the following reasons:

Database redundancy
Rapid failure recovery
Automatic client reconnection after a failure
The ability to run backups on the standby database
Good mean time to recovery
Low complexity
System Environment
After writing this article, I used DBCA to create a new database, JED, and re-ran the configuration steps in this article to confirm that they apply to a basic Oracle 11g database. The primary database is called JED and runs on a server called dev-db1. The standby database is called JED2 and runs on a server called dev-db2.

Basic Prerequisites
There are some basic settings that any production database should have. One of them is archivelog mode. For a production database, this should be an obvious requirement. If your production database does not run in archivelog mode, you either need to start reading up immediately, or you had better have a very good reason. I'm not sure what reason would qualify, but every rule has exceptions.

How to change your database to archive mode:

SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;

Master database preparation
First, for the standby database to remain identical to the primary, it must receive all redo generated by the primary. In Oracle, a user can perform operations that generate no redo (for example, with the NOLOGGING clause). This is a problem for the standby: you must ensure that users cannot instruct the database to skip redo generation. This requires enabling the database's force logging feature, which you can do as follows:

SQL> alter database force logging;
SQL> select name, force_logging from v$database;

The FORCE_LOGGING column should show YES.

Second, you need to ensure that when data files are added to or dropped from the primary database, the same files are added or dropped on the standby. To enable this, run:

SQL> alter system set standby_file_management='AUTO';

Next, we need to make sure the database has standby log files (Standby Redo Log Files). The standby database uses standby logs to store the redo it receives from the primary. There are two reasons to create standby logs on the primary as well: first, the primary may later be converted into a standby; second, if standby logs exist on the primary, they are created automatically on the standby. Standby logs should be the same size as the online logs, and there should be at least as many standby log groups as online log groups, preferably more. I like to give the standby logs group numbers outside the online log range: for example, if the online log groups are 1 through 6, the standby logs are 11 through 16. Standby logs are created as follows:

SQL> alter database add standby logfile group 11 ('/oradata/JED/g11m01.sdo','/oradata/JED/g11m02.sdo') size 50M;
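To confirm that the standby log groups were created with the expected size and count, you can compare them against the online logs. This is just a quick sanity-check sketch using the standard V$STANDBY_LOG and V$LOG views:

```sql
-- List standby redo log groups and their sizes
SQL> select group#, thread#, bytes/1024/1024 as size_mb, status from v$standby_log;
-- Compare against the online redo log groups
SQL> select group#, thread#, bytes/1024/1024 as size_mb from v$log;
```

The standby log size should match the online log size, and there should be at least as many standby groups as online groups.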

If you are not using SSL to authenticate redo transport (you generally are not), you need password file authentication. You must have a password file and set the REMOTE_LOGIN_PASSWORDFILE parameter to EXCLUSIVE or SHARED. By default a database has a password file and this parameter is set to EXCLUSIVE. Check both first; if they are not at the defaults, set them as follows:

SQL> alter system set remote_login_passwordfile=exclusive scope=spfile;
OS> orapwd password=

Finally, check whether the db_unique_name parameter is set. If not, set it with alter system:

SQL> show parameter db_unique_name;
SQL> alter system set db_unique_name=some_name scope=spfile;

Flashback Database
I strongly recommend enabling the database flashback feature. Flashback lets you rewind the database to a previous point in time, which is extremely useful after a failover: it lets you flash the old primary back to a point before the failure and convert it into a standby. Without flashback, you must rebuild the standby from scratch, which means copying the data files again. Beyond that, flashback can also save you from having to restore from backup in some other situations.

To enable flashback, you must first configure the Flash/Fast Recovery Area, as follows:

SQL> alter system set db_recovery_file_dest='<fast recovery area directory or ASM disk group>';
SQL> alter system set db_recovery_file_dest_size=400G;

Once the fast recovery area is configured, you can enable flashback logging:

SQL> alter database flashback on;
SQL> select flashback_on from v$database;

The value of FLASHBACK_ON should be YES. If you hit an ORA-01153 error, you are running this on the standby: you need to cancel redo apply, enable flashback logging, and then restart redo apply.

Enabling flashback logging on the primary is not propagated to the standby; you must enable it manually on both the primary and the standby. If flashback is not enabled, you will need to rebuild the standby completely after a failover.
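On the standby, the cancel/enable/restart sequence described above looks like the following sketch (run it on the standby only):

```sql
-- On the standby: stop redo apply, enable flashback, then restart apply
SQL> alter database recover managed standby database cancel;
SQL> alter database flashback on;
SQL> alter database recover managed standby database using current logfile disconnect from session;
```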

SQL*Net Configuration
Before creating the standby database, make sure the databases on the two servers can communicate with each other, since we want to use the RMAN duplicate from active database command to create the standby. We need to configure the listeners and TNS names. You can configure them manually or with the network configuration assistant (netca). I prefer manual configuration, because I am a veteran and these configuration files are not complex.

First, configure listeners for the primary and standby. Although a database instance registers itself with the listener automatically, if you want to create the standby with the RMAN duplicate command, the standby must first be in the NOMOUNT state, and in NOMOUNT state the instance does not register with the listener. You must therefore configure a static listener entry. Also note that connections to a database in NOMOUNT state must use a dedicated server.

The TNS names files on both servers must be configured so that the primary and standby can reach each other using the service names given in the LOG_ARCHIVE_DEST_N and FAL_SERVER parameters (these parameters are described later). The configuration should look similar to the following examples.

Listener configuration for the primary database (dev-db1):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = JED)
      (ORACLE_HOME = /oracle/product/11.2.0)
      (SID_NAME = JED)
    )
  )

Listener configuration for the standby database (dev-db2):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = JED2)
      (ORACLE_HOME = /oracle/product/11.2.0)
      (SID_NAME = JED2)
    )
  )

TNS names file entry on the primary database server:

JED2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dev-db2)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = JED2)
    )
  )

TNS names file entry on the standby database server:

JED =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dev-db1)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = JED)
    )
  )

Redo log transmission configuration
At this point the primary and standby servers can communicate with each other. The next step is configuring the archive destinations and redo transport. We configure the primary first, and adjust the standby's configuration after the standby is created.

Configure the archive destination:

SQL> alter system set log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=JED';

This command designates the recovery area as the archive destination, used to archive all log files in all database roles. The official documentation uses valid_for=(online_logfiles,all_roles), but that causes the standby to fail to archive its standby log files, since they are not online logs. With the all_logfiles option, both the primary and the standby can archive online and standby logs. If you want to take backups on the standby and have it archive logs at the same time, you must use all_logfiles.
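You can verify what each destination is actually valid for by querying V$ARCHIVE_DEST; a quick sketch:

```sql
-- Show what log types and roles each archive destination accepts
SQL> select dest_id, valid_type, valid_role, valid_now from v$archive_dest where dest_id <= 2;
```

With the all_logfiles setting above, VALID_TYPE should show ALL_LOGFILES for destination 1.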

Then configure redo transport to the standby:

SQL> alter system set log_archive_dest_2='service=JED2 async valid_for=(online_logfiles,primary_role) db_unique_name=JED2';

This statement says that when this database is in the primary role, online redo is shipped asynchronously via the service name JED2, and the destination's unique database name is JED2.

Note that the STANDBY_ARCHIVE_DEST parameter is not needed; it has been deprecated. While I was debugging, many people helpfully suggested setting it, but after setting it and restarting the database, all you get is an ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance warning.

Another parameter to set is FAL_SERVER. It specifies where to fetch missing archived logs when redo transport has a problem, and it is used when there is a gap in the redo received by the standby. Gaps occur when log transport is interrupted, for example during standby maintenance: any redo generated while the standby is down is not shipped, and a gap results. With this parameter set, the standby actively detects missing logs and asks the primary to ship them.

SQL> alter system set fal_server='JED2';

Note that the FAL_CLIENT parameter has been deprecated in 11g.

Then let the primary know the name of the other database in the Data Guard configuration:

SQL> alter system set log_archive_config='dg_config=(JED,JED2)';

With this done, we can prepare the standby environment and start creating the standby database.

Prepare the standby database environment
There are many ways to perform these steps; I describe the method that suits me best. You should experiment with several approaches to see which works best for you.

First, we need a password file and a server parameter file (spfile) for the standby. The password file can simply be copied; only the name needs to change. For example, the password file on the primary is $ORACLE_HOME/dbs/orapwJED. Copy it to the same location on the standby server and rename it with the standby's SID: orapwJED2.

To create the standby's spfile, first create a pfile:

SQL> create pfile from spfile;

There is a nice-looking new feature that uses RMAN to create the standby SPFILE. The reasons I do not use it are:

I still have to copy the password file to the standby server, so it saves no file-copying time.
To use it, I still have to do a lot of substitution with the PARAMETER_VALUE_CONVERT clause, plus the SPFILE clause and multiple SET clauses, to make sure everything is correct.
I find it easier to copy the pfile (you can even copy and paste it directly), rename it, and edit the parameters inside. This is easy, and you learn a lot from editing and debugging by hand. For me, manual editing is faster than RMAN's SPFILE-creation feature.

After creating the primary's pfile, copy it to the same location on the standby server and rename it with the standby's SID. Then modify the pfile as follows:

Depending on your standby's configuration and file locations, you may need to modify the AUDIT_FILE_DEST, CONTROL_FILES, and DISPATCHERS parameters (and possibly others).
Change db_unique_name in the LOG_ARCHIVE_DEST_1 parameter to the standby's unique name (here, JED2).
Change LOG_ARCHIVE_DEST_2 to the service name and unique database name of the primary (JED).
Change FAL_SERVER to the service name pointing to the primary.
Add the following parameters:
db_unique_name=JED2
db_file_name_convert and log_file_name_convert, if the data files and log files of the primary and standby are in different locations.
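Putting those edits together, the relevant section of the standby's initJED2.ora might look roughly like this. This is only a sketch; the paths are illustrative assumptions for this environment, and db_name stays JED on both sides while only db_unique_name differs:

```sql
*.db_name='JED'
*.db_unique_name='JED2'
*.audit_file_dest='/oracle/admin/JED2/adump'
*.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=JED2'
*.log_archive_dest_2='service=JED async valid_for=(online_logfiles,primary_role) db_unique_name=JED'
*.fal_server='JED'
*.log_archive_config='dg_config=(JED,JED2)'
```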
Then create the required directory structure and modify the related files on the standby server. At a minimum, create these directories and update these files:

$ORACLE_BASE/admin/$ORACLE_SID
$ORACLE_BASE/admin/$ORACLE_SID/adump (the directory configured by audit_file_dest)
The data file directory
The control file directory
The log file directory
The fast recovery area directory
Add the standby database's entry to the /etc/oratab file
Now you can start the standby instance in preparation for creating the database, creating an spfile along the way:

SQL> startup nomount pfile=initJED2.ora
SQL> create spfile from pfile;
SQL> shutdown
SQL> startup nomount
SQL> show parameter spfile
SQL> exit

The show parameter spfile command displays the location of the spfile; at this point the standby instance is running in NOMOUNT state.

Standby database creation
As with the previous steps, there are several ways to create the standby database. In 11g I use RMAN's duplicate feature, because it is easy. In the previous step we copied the password file and parameter file to the standby server, edited the parameter file, and created the spfile, which makes RMAN duplication simpler. You could also skip copying the password and parameter files manually and let RMAN do it automatically with the SPFILE, PARAMETER_VALUE_CONVERT, and SET clauses.

The RMAN command to create the standby is very simple: it instructs RMAN to copy the active database (the primary) directly to the auxiliary database (the standby). This way you do not need to copy a backup of the primary to the standby server and restore it there. With today's storage technology there are faster, simpler ways to copy a database, but this 11g feature is so simple to demonstrate that I like to use it whenever I can.

RMAN> connect target sys@JED
RMAN> connect catalog @
RMAN> connect auxiliary sys@JED2
RMAN> duplicate target database for standby from active database;

From version 11.2.0.2.0 onward, you can connect to the target with a bare connect target; however, if you do not specify the username and password, the duplicate to the standby fails with an invalid username/password error.

While the duplicate is running, I like to tail the standby's alert log to watch the copy progress and check for errors. Note that ORA-27037: unable to obtain file status errors for online and standby log files are normal at this stage.

You can also run the duplicate in parallel to improve performance. Allocate multiple channels for the primary and the standby, then run the duplicate command:

run {
  allocate channel chan1 type disk;
  allocate channel chan2 type disk;
  allocate channel chan3 type disk;
  allocate channel chan4 type disk;
  allocate auxiliary channel aux1 type disk;
  allocate auxiliary channel aux2 type disk;
  allocate auxiliary channel aux3 type disk;
  allocate auxiliary channel aux4 type disk;
  duplicate target database for standby from active database;
}

If everything goes well, RMAN reports something like:

Finished Duplicate Db at 07-MAY-10

After the standby has been duplicated, I like to enable flashback logging on it:

SQL> alter database flashback on;

Start redo apply
Starting and stopping redo apply is very easy. To start redo apply:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;

This command tells the standby to start recovering using the standby log files (real-time apply), and to return control to the command line once apply has started. To stop recovery:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

Verify that redo apply is working
Make sure redo is being applied on the standby. First, confirm that the archive destinations on both the primary and the standby are valid:

SQL> select dest_id, status, destination, error from v$archive_dest where dest_id <= 2;

The status of each destination should show VALID.

Then confirm that redo is actually being applied by running the following on the primary:

SQL> select sequence#, first_time, next_time, applied, archived from v$archived_log where name = 'JED2' order by first_time;

If archiving and apply are both working, the APPLIED and ARCHIVED columns should both show YES. Many tutorials sort this query by SEQUENCE#, which I do not recommend: after a failover, sequence numbers start again from 1, so the newest records no longer appear at the end of the results. I once spent time wondering why I could not find a new record, simply because it was not at the end and I missed it. That is why I sort all of these queries by FIRST_TIME.

If you find that logs are not being applied, there may be a gap in the redo; the standby cannot apply logs past a gap. With FAL_SERVER set correctly, though, this should resolve itself. You can check for a redo gap from the primary:

SQL> select status, gap_status from v$archive_dest_status where dest_id = 2;

If everything is normal, this returns VALID and no gap. If you want to test how FAL_SERVER works, shut down the standby, switch logs a few times on the primary, wait a moment, start the standby, and switch logs again. A gap appears quickly; if FAL_SERVER is set properly, the missing redo is fetched and applied automatically.
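You can also check for a gap directly on the standby. V$ARCHIVE_GAP lists the range of missing sequences and returns no rows when there is no gap; a quick sketch:

```sql
-- Run on the standby: shows the lowest and highest missing sequence, if any
SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;
```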

The V$DATAGUARD_STATUS view is useful for spotting errors and understanding what has happened. You can run the following query on either the primary or the standby to see its status:

SQL> select * from v$dataguard_status order by timestamp;

Sometimes you want to confirm by hand that data is really being synchronized. The most convincing way is to query the standby directly and look for the new data. You can open the standby read-only: cancel redo apply, then run:

SQL> ALTER DATABASE OPEN READ ONLY;

You can then check whether changed data has reached the standby. Note that 11g supports Active Data Guard, which allows the database to be open read-only while redo apply continues.
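A sketch of opening the standby read-only with apply still running (the 11g Active Data Guard real-time query feature, which is separately licensed) might look like:

```sql
-- On the standby: stop apply, open read-only, then restart real-time apply
SQL> alter database recover managed standby database cancel;
SQL> alter database open read only;
SQL> alter database recover managed standby database using current logfile disconnect from session;
```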

Summary
You now have a configured Data Guard setup and a redundant database. I do not want to skip switchover, failover, and database reinstatement, but those topics are left for the second part of this article.

I hope this article will help you create your Data Guard environment more easily and quickly.
