Explanation of "Live cloning" for Oracle 11g physical standby database (1)


Compared with Oracle 8i and Oracle 9i, Oracle 11g greatly improves standby database creation, complementing Real Application Clusters (RAC) as part of Oracle's Maximum Availability Architecture. Oracle 11g makes it far easier to create a standby database because Recovery Manager (RMAN) can clone a standby directly from the primary database over the network, using the DUPLICATE DATABASE command set, as long as the target database is active. This means you no longer need to manually transfer an RMAN backup set of the primary database to the standby site and then restore and recover it there. Instead, RMAN automatically generates a conversion script in memory on the primary site and uses that script on the standby site to manage the clone operation; in practice, almost no DBA intervention is required.

The following walks through the "live cloning" feature for a standby database. My basic hardware: a dual-core AMD Athlon 64-bit CPU (Winchester 420) with 4 GB of memory. The host runs Windows XP with VMware Server 1.0.8 hosting the virtual database server environment; each virtual machine uses one CPU and its own memory allocation. I chose Oracle Enterprise Linux (OEL) 4.5.1 (Linux kernel version 2.6.9-55.0.0.0.2.ELsmp) as the guest operating system for the virtual machines.

After each VMware VM was configured, I added appropriate entries to the /etc/hosts file of each VM to establish network connectivity between the primary site and the standby site (11gStdby), and then installed the Oracle 11g database software on each node. Finally, I created a standard 11g R1 seed database on the primary site, including the standard sample schemas. With the ORACLE_SID of this database set to orcl, the live cloning operation can begin.

Preparations before cloning: adjusting the primary database

Before cloning the primary database to the corresponding standby environment, I need to make a few adjustments to the primary. The steps below are in no particular order; as long as they are all completed before issuing the DUPLICATE DATABASE command, the cloning operation should proceed without surprises.

Force logging of all transactions

The main reason most organizations implement a Data Guard configuration is to guarantee that no transactions are lost. Unfortunately, by default an Oracle database runs in NOFORCE LOGGING mode, which means changes to objects whose storage attributes are set to NOLOGGING may be lost. To ensure that all changes are recorded, I execute the ALTER DATABASE FORCE LOGGING command; this needs to happen before the ALTER DATABASE ARCHIVELOG command puts the database into ARCHIVELOG mode, as shown in Listing 1.

Listing 1: Switching the primary database to ARCHIVELOG mode

-- Set an appropriate format for archived log files
ALTER SYSTEM SET log_archive_format = 'log_%s_%t_%r.arc' SCOPE = SPFILE;
-- Set the new DB_UNIQUE_NAME parameter, which cannot be modified dynamically
ALTER SYSTEM SET db_unique_name = 'orcl' SCOPE = SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE FORCE LOGGING;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

Create standby redo log groups

Oracle has supported standby redo log (SRL) groups since Oracle 9i R2, and Oracle recommends configuring them. SRLs are required for Real-Time Apply, and also when a DBA wants to cascade redo to additional destinations; otherwise they remain an optional part of a standby configuration. Another advantage in Oracle 11g: if SRLs are already configured on the primary database, the DUPLICATE DATABASE command automatically creates them on the standby database. Listing 2 shows the commands that create the SRLs on the primary database. Note that I also multiplex the SRL files to protect each SRL group against data loss, just as with online redo log groups.

Listing 2: Creating standby redo log files on the primary database

ALTER DATABASE
ADD STANDBY LOGFILE
'/u01/app/oracle/oradata/orcl/srl01.log'
SIZE 50M
REUSE;
ALTER DATABASE
ADD STANDBY LOGFILE
'/u01/app/oracle/oradata/orcl/srl02.log'
SIZE 50M
REUSE;
ALTER DATABASE
ADD STANDBY LOGFILE
'/u01/app/oracle/oradata/orcl/srl03.log'
SIZE 50M
REUSE;

File Name Conversion

Generally, a standby database is created on a different host from the primary database; otherwise, a single disaster could compromise both the primary and the standby. Best practice is to keep the directory and file names on the standby identical to the primary's, but if the mount points differ, the directory names must be translated. In that case, use the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT initialization parameters to perform the conversion.
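If the mount points did differ, the conversion settings would look something like the sketch below. The /u02 standby paths are hypothetical, chosen only for illustration; in this walkthrough the paths are kept identical on both sites, so these parameters are not needed.

```sql
-- Hypothetical example: primary datafiles under /u01, standby under /u02.
-- Each pair maps a primary path prefix to the standby's equivalent prefix.
ALTER SYSTEM SET db_file_name_convert =
  '/u01/app/oracle/oradata/orcl/', '/u02/app/oracle/oradata/stdby/'
  SCOPE = SPFILE;
ALTER SYSTEM SET log_file_name_convert =
  '/u01/app/oracle/oradata/orcl/', '/u02/app/oracle/oradata/stdby/'
  SCOPE = SPFILE;
```

Both parameters are static, hence SCOPE = SPFILE; they take effect at the next instance startup.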

Modify the primary site initialization parameters

Set the following initialization parameters on the primary database to ensure that the DUPLICATE DATABASE command can properly configure the standby database. Listing 3 shows these settings in detail:

1) DB_UNIQUE_NAME

I use this parameter to define a unique name for the primary database instance. It makes it easier to distinguish the "original" primary database from the standby. Because it is a static parameter, I set it with SCOPE = SPFILE in Listing 1, so it takes effect when the primary database instance restarts.

2) LOG_ARCHIVE_CONFIG

This parameter controls whether the primary or standby database may receive and/or send archived redo logs from remote sources. It lets us enumerate every primary and standby database in the configuration, because it lists the DB_UNIQUE_NAME values of all of them; I set it to my current Data Guard pair, orcl and stdby.

3) STANDBY_FILE_MANAGEMENT

I set this parameter to AUTO so that file operations on the primary database are mirrored on the standby: when a file is created on the primary, the same file is created on the standby, and when an existing file is deleted on the primary, the corresponding file is deleted on the standby. This covers operations such as adding a datafile or dropping a tablespace.

4) LOG_ARCHIVE_DEST_n

This is the key parameter controlling the transfer of archived redo logs from the primary database to the physical standby database. I set two archiving destinations:

1. LOG_ARCHIVE_DEST_1 specifies the physical location where the primary database archives its redo logs. Note that I use the flash recovery area as the destination.

2. LOG_ARCHIVE_DEST_2 specifies the network service name of the standby database instance, stdby. This destination ensures that archived redo logs are automatically shipped to the standby site.

I can also specify two other attributes on this archived-redo-log transport parameter:

◆ The VALID_FOR attribute greatly simplifies controlling which type of redo is transmitted while the database is running in a particular role. It is the critical attribute when the primary and standby roles differ or switch; Table 1 lists the values this attribute allows.

Table 1 VALID_FOR attribute values

ALL_LOGFILES: (default) The destination applies to both online and standby redo log files.

ONLINE_LOGFILE: The destination applies only to online redo log files.

STANDBY_LOGFILE: The destination applies only to standby redo log files.

ALL_ROLES: (default) The destination is valid whether the database runs in the primary or the standby role.

PRIMARY_ROLE: The destination is valid only when the database runs in the primary role.

STANDBY_ROLE: The destination is valid only when the database runs in the standby role.

◆ You can also set an appropriate redo transport mode attribute to control how archived redo is transmitted from the primary database to the standby database. Table 2 lists the values this attribute allows.


Table 2 Redo transport mode values

ASYNC: (default) The transaction may commit before all destinations have received the transmitted redo.

SYNC: All destinations must receive the transmitted redo before the transaction commits.

AFFIRM: The destination acknowledges the redo only after it has been written to a standby redo log; implies SYNC.

NOAFFIRM: The destination may acknowledge the redo before it has been written to a standby redo log; implies ASYNC.

Network Configuration Modification

Finally, I need to ensure that the primary and standby databases can communicate over the network. Only two changes are required: add an entry for the standby database instance to the local TNSNAMES.ORA naming configuration file on the primary site, and add a static registration for the standby database instance to the LISTENER.ORA configuration file on the standby site, as shown in Listing 4.

Listing 3: Setting the appropriate initialization parameter values on the primary database

ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/u01/app/oracle/flash_recovery_area/ORCL/ DB_UNIQUE_NAME=orcl VALID_FOR=(ALL_LOGFILES,ALL_ROLES)';
ALTER SYSTEM SET log_archive_dest_state_1 = 'ENABLE';
ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=stdby ASYNC DB_UNIQUE_NAME=stdby VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)';
ALTER SYSTEM SET log_archive_dest_state_2 = 'ENABLE';
ALTER SYSTEM SET standby_file_management = 'AUTO';
ALTER SYSTEM SET log_archive_config = 'DG_CONFIG=(orcl,stdby)';

Listing 4: Network configuration file changes

# tnsnames.ora on the primary site: add an entry for the standby database
STDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 11gStdby)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = stdby)
    )
  )

# listener.ora on the standby site: static registration of the standby instance
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = stdby)
      (ORACLE_HOME = /u01/app/oracle/product/11.1.0/db_1)
      (SID_NAME = stdby)
    )
  )
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 11gStdby)(PORT = 1521))
  )

Preparing to clone: preparing the standby site

Now that the primary site is ready for cloning, a few adjustments are needed on the corresponding standby site:

Create the necessary directories

You need to create directories for the database control files, datafiles, online redo log files, and standby redo log files. I also created a suitable directory for the database audit trail.
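A quick sketch of that directory setup on the standby host. The audit directory path follows the usual $ORACLE_BASE/admin/&lt;SID&gt;/adump convention, which is an assumption here since the article does not spell it out:

```shell
#!/bin/sh
# Create the standby-side directories for control files, datafiles, online
# and standby redo logs, plus an audit (adump) destination.
# NOTE: /tmp/u01 is used here so the sketch is harmless to run; on the real
# standby host this would be /u01/app/oracle, matching the primary's layout.
ORACLE_BASE=/tmp/u01/app/oracle

mkdir -p "$ORACLE_BASE/oradata/stdby"              # control files, datafiles, redo logs
mkdir -p "$ORACLE_BASE/flash_recovery_area/STDBY"  # flash recovery area (archived logs)
mkdir -p "$ORACLE_BASE/admin/stdby/adump"          # audit trail directory
```

With identical paths on both sites, no file name conversion parameters are needed later.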

Set Password File

Because communication between the primary and standby databases requires remote authentication, I use the orapwd tool to create a new password file, making sure the SYS password matches the primary database's. (Alternatively, I could simply copy the password file from the primary database to the standby site.)
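A sketch of that orapwd call on the standby host; the file name follows Oracle's orapw&lt;SID&gt; convention under $ORACLE_HOME/dbs, and "secret" is a placeholder, not a real password:

```shell
# Create the standby password file; the SYS password must match the primary's.
orapwd file=$ORACLE_HOME/dbs/orapwstdby password=secret
```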

Create a temporary initialization parameter file

Finally, I need to create a text initialization parameter file (PFILE) just sufficient to start the standby database instance; it needs only one parameter, DB_NAME. When the DUPLICATE DATABASE command script executes, it will create a server parameter file (SPFILE) containing the appropriate initialization parameter settings.

Listing 5 shows this temporary standby initialization parameter file. To enable the DUPLICATE DATABASE cloning operation, I start the listener on the standby site and then use the previously created PFILE to start the standby database instance in NOMOUNT state:

$> export ORACLE_SID=stdby
$> sqlplus / as sysdba
SQL> startup nomount pfile='/home/oracle/init_stdby.ora';
Listing 5: Temporary standby database initialization parameter file
#####
# File:    init_stdby.ora
# Purpose: "Dummy" PFILE that allows the standby database instance
#          to be started during DUPLICATE DATABASE over the network
#####
DB_NAME=stdby
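With the standby instance started in NOMOUNT state, the cloning step itself (covered in the next part of this series) amounts to an RMAN session along the following lines. This is a sketch, not the article's exact script; the connect strings assume the network configuration from Listing 4, and "secret" is a placeholder password:

```sql
-- Connect RMAN to the primary (target) and the standby (auxiliary):
--   rman target sys/secret@orcl auxiliary sys/secret@stdby
-- Then clone directly over the network from the active primary database.
DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  DORECOVER
  NOFILENAMECHECK;
```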

