Building an Oracle 11g R2 Data Guard Environment with RMAN



Environment:

Role              Machine name  Operating system     IP             Remarks
Primary database  db1           CentOS 5.11 x86_64   192.168.2.241  Oracle installed and database created
Standby database  db2           CentOS 5.11 x86_64   192.168.2.242  Oracle software only

Preparations:
Add to /etc/hosts on db1:
127.0.0.1 db1
192.168.2.242 db2
Add to /etc/hosts on db2:
127.0.0.1 db2
192.168.2.241 db1

Contents

  1. Enable archiving and force logging on the primary database
  2. Create standby redo log groups
  3. Modify the parameter file
  4. Configure tnsnames.ora
  5. Back up the primary database with RMAN
  6. Copy the files to the standby database
  7. Restore the parameter file (db2)
  8. Modify the standby database parameter file (db2)
  9. Prepare for RMAN recovery (db2)
  10. Generate the standby database server parameter file (db2)
  11. Restore the database (db2)
  12. Start the standby database (db2)

1. Enable archiving and force logging on the primary database (db1)

Check whether archiving is enabled in Oracle:
SQL> archive log list;
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     6
Current log sequence           8
# "Automatic archival Disabled" shows that archiving is not yet enabled.
Enable archiving (the database must be shut down and restarted in MOUNT state before the archiving mode can be changed):
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog; # enable archive log mode

SQL> alter database force logging; # enable force logging (this statement can also be run with the database open)

SQL> alter database open; # open the database

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     6
Next log sequence to archive   8
Current log sequence           8
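
Beyond `archive log list`, a quick way to double-check both settings at once is to query v$database (a standard view; the LOG_MODE and FORCE_LOGGING columns are documented in the 11g reference):

```sql
-- Check archiving and force logging status on the primary.
-- LOG_MODE should be ARCHIVELOG and FORCE_LOGGING should be YES.
SELECT log_mode, force_logging FROM v$database;
```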


2. Create standby redo log groups. At least one group is required, and standby redo logs are necessary for Real Time Apply; a common recommendation is one more standby group than the number of online redo log groups.
SQL> select group#, member from v$logfile;
GROUP# MEMBER
--------------------------------------------------------------------------------
     3 /opt/oracle/oradata/tpy100/redo03.log
     2 /opt/oracle/oradata/tpy100/redo02.log
     1 /opt/oracle/oradata/tpy100/redo01.log
SQL> alter database add standby logfile ('/opt/oracle/oradata/tpy100/standby04.log') size 50m;
SQL> alter database add standby logfile ('/opt/oracle/oradata/tpy100/standby05.log') size 50m;
SQL> alter database add standby logfile ('/opt/oracle/oradata/tpy100/standby06.log') size 50m;
SQL> alter database add standby logfile ('/opt/oracle/oradata/tpy100/standby07.log') size 50m;
SQL> select group#, member from v$logfile;
GROUP# MEMBER
--------------------------------------------------------------------------------
     3 /opt/oracle/oradata/tpy100/redo03.log
     2 /opt/oracle/oradata/tpy100/redo02.log
     1 /opt/oracle/oradata/tpy100/redo01.log
     4 /opt/oracle/oradata/tpy100/standby04.log
     5 /opt/oracle/oradata/tpy100/standby05.log
     6 /opt/oracle/oradata/tpy100/standby06.log
     7 /opt/oracle/oradata/tpy100/standby07.log
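
The new groups can also be verified through v$standby_log, which lists only the standby redo log groups (a sketch; the sizes shown would reflect the 50 MB groups created above):

```sql
-- List standby redo log groups and their sizes.
-- Standby groups should be at least as large as the online redo logs.
SELECT group#, bytes/1024/1024 AS size_mb, status
FROM v$standby_log;
```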

3. Modify the parameter file
Back up the parameter file before modifying it:
SQL> create pfile='/tmp/tpy100.pfile' from spfile;
Review the backed-up parameter file first, and adjust the following statements to match your environment.
SQL> alter system set db_unique_name=db1 scope=spfile;
SQL> alter system set log_archive_config='dg_config=(db1,db2)' scope=both;
SQL> alter system set log_archive_dest_1='location=/opt/oracle/flash_recovery_area valid_for=(all_logfiles,all_roles) db_unique_name=db1' scope=both;
You may see the following error, because db_unique_name was changed with scope=spfile and is not yet in effect:
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-16053: DB_UNIQUE_NAME db1 is not in the Data Guard Configuration

In that case, restart the database and run the statement again:
SQL> shutdown immediate;
SQL> startup
SQL> alter system set log_archive_dest_1='location=/opt/oracle/flash_recovery_area valid_for=(all_logfiles,all_roles) db_unique_name=db1' scope=both;
SQL> alter system set log_archive_dest_2='service=db2 async valid_for=(online_logfiles,primary_role) db_unique_name=db2' scope=both;
SQL> alter system set log_archive_dest_state_1=enable scope=both;
SQL> alter system set log_archive_dest_state_2=enable scope=both;
SQL> alter system set standby_file_management=auto scope=both;
SQL> alter system set fal_server=db2 scope=both;
SQL> alter system set fal_client=db1 scope=both;
SQL> alter system set db_file_name_convert='/opt/oracle/flash_recovery_area','/opt/oracle/flash_recovery_area' scope=spfile;
SQL> alter system set log_file_name_convert='/opt/oracle/flash_recovery_area','/opt/oracle/flash_recovery_area' scope=spfile;
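
Once the statements above have run, the settings can be reviewed in one pass with the SQL*Plus SHOW PARAMETER command (a quick sanity check, not a formal verification procedure):

```sql
-- Display the Data Guard-related parameters just set.
SHOW PARAMETER log_archive_config
SHOW PARAMETER log_archive_dest_1
SHOW PARAMETER log_archive_dest_2
SHOW PARAMETER fal
SHOW PARAMETER standby_file_management
```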

4. Configure tnsnames.ora
[oracle@db1 ~]$ vim /opt/oracle/product/11.2.0/network/admin/tnsnames.ora
Add
db1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = db1)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = db1)
)
)

db2 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = db2)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = db2)
)
)

5. Back up the primary database with RMAN
Create the backup directory:

[oracle@db1 ~]$ mkdir -p /opt/oracle/dbackup
Run the backup:
RMAN> run {
allocate channel c1 type disk;
backup format '/opt/oracle/dbackup/tpy100_%T_%s_%P' database;
sql 'alter system archive log current';
backup format '/opt/oracle/dbackup/archive_log_%T_%s_%P' archivelog all;
backup spfile format '/opt/oracle/dbackup/spfile_%u_%T.bak';
release channel c1;
}
RMAN> copy current controlfile for standby to '/opt/oracle/dbackup/standby.ctl';


6. Copy the files to the standby database
[oracle@db1 ~]$ scp -r /opt/oracle/dbackup db2:/opt/oracle
[oracle@db1 ~]$ cd $ORACLE_HOME/dbs
[oracle@db1 dbs]$ scp -r orapwtpy100 db2:$ORACLE_HOME/dbs
[oracle@db1 dbs]$ cd $ORACLE_HOME/network/admin
[oracle@db1 admin]$ scp -r listener.ora tnsnames.ora db2:$ORACLE_HOME/network/admin

7. Restore the parameter file (db2)
RMAN> set dbid 2926260986
RMAN> startup nomount;
# Errors are reported here because no parameter file exists yet; they can be ignored (RMAN starts a dummy instance).
RMAN> restore spfile to pfile '/tmp/tpy100.pfile' from '/opt/oracle/dbackup/spfile_04quaekm_20160219.bak';
# This restores the parameter file to /tmp/tpy100.pfile. Because it was taken from the primary, it needs a few changes for the standby.
8. Modify the standby database parameter file (db2)
[oracle@db2 ~]$ vim /tmp/tpy100.pfile
# Swap the db1/db2 values: wherever the primary's file says db1, the standby's should say db2, and vice versa.
tpy100.__db_cache_size=322961408
tpy100.__java_pool_size=4194304
tpy100.__large_pool_size=4194304
tpy100.__oracle_base='/opt/oracle' # ORACLE_BASE set from environment
tpy100.__pga_aggregate_target=339738624
tpy100.__sga_target=503316480
tpy100.__shared_io_pool_size=0
tpy100.__shared_pool_size=159383552
tpy100.__streams_pool_size=0
*.audit_file_dest='/opt/oracle/admin/tpy100/adump'
*.audit_trail='db'
*.compatible='11.2.0.0.0'
*.control_files='/opt/oracle/oradata/tpy100/control01.ctl','/opt/oracle/flash_recovery_area/tpy100/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_file_name_convert='/opt/oracle/flash_recovery_area','/opt/oracle/flash_recovery_area'
*.db_name='tpy100'
*.db_recovery_file_dest='/opt/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=4070572032
*.db_unique_name='db2'
*.diagnostic_dest='/opt/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=tpy100XDB)'
*.fal_client='db2'
*.fal_server='db1'
*.log_archive_config='dg_config=(db2,db1)'
*.log_archive_dest_1='location=/opt/oracle/flash_recovery_area valid_for=(all_logfiles,all_roles) db_unique_name=db2'
*.log_archive_dest_2='service=db1 async valid_for=(online_logfiles,primary_role) db_unique_name=db1'
*.log_archive_dest_state_1='enable'
*.log_archive_dest_state_2='enable'
*.log_file_name_convert='/opt/oracle/flash_recovery_area','/opt/oracle/flash_recovery_area'
*.memory_target=843055104
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.standby_file_management='auto'
*.undo_tablespace='UNDOTBS1'

9. Prepare for RMAN recovery (db2)
Create the directories and copy the standby control file to the corresponding locations:
[oracle@db2 ~]$ mkdir -p /opt/oracle/admin/tpy100/adump
[oracle@db2 ~]$ mkdir -p /opt/oracle/oradata/tpy100
[oracle@db2 ~]$ mkdir -p /opt/oracle/flash_recovery_area/tpy100
[oracle@db2 ~]$ cp /opt/oracle/dbackup/standby.ctl /opt/oracle/oradata/tpy100/control01.ctl
[oracle@db2 ~]$ cp /opt/oracle/dbackup/standby.ctl /opt/oracle/flash_recovery_area/tpy100/control02.ctl
[oracle@db2 ~]$ lsnrctl start
# start the listener

10. Generate the standby database server parameter file (db2)

SQL> shutdown immediate;
SQL> startup nomount pfile='/tmp/tpy100.pfile'
SQL> create spfile from pfile='/tmp/tpy100.pfile';
SQL> alter database mount;

11. Restore the database (db2)

RMAN> restore database;

12. Start the standby database (db2)

SQL> alter database open read only;
# If the following errors are reported:
ERROR at line 1:
ORA-10458: standby database requires recovery
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '/opt/oracle/oradata/tpy100/system01.dbf'
# shut the database down with shutdown immediate and then restart it.

SQL> alter database recover managed standby database using current logfile disconnect from session;

To restart the standby database later, run:
SQL> startup;

SQL> alter database recover managed standby database using current logfile disconnect from session;
Note: after the restart you may find that changes from the primary have not yet arrived; redo transport and apply take a while to catch up. In a production environment you would want to close such gaps quickly to reduce exposure to downtime.
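
One way to watch the standby catching up is the v$dataguard_stats view on the standby (available in 11g; actual values depend on your environment):

```sql
-- On the standby: show how far redo transport and apply are behind.
SELECT name, value
FROM v$dataguard_stats
WHERE name IN ('transport lag', 'apply lag');
```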
After the standby database has been shut down, check the archive destinations on the primary:
SQL> select dest_name, status, error from v$archive_dest;
DEST_NAME            STATUS  ERROR
-------------------- ------- -----------------------------------------------
LOG_ARCHIVE_DEST_1   VALID
LOG_ARCHIVE_DEST_2   ERROR   ORA-03113: end-of-file on communication channel
LOG_ARCHIVE_DEST_2 shows an error because the primary cannot reach the standby's archive destination. By default, Data Guard retries the connection to the standby every 300 seconds.
Solution: on the primary database, run
SQL> alter system set log_archive_dest_state_2=enable;
Query again; if the problem persists, check the network and the standby's listener.

The following statements are useful for verification:
SQL> select sequence#, applied from v$archived_log;
SQL> select process, status from v$managed_standby;
SQL> select sequence# from v$log_history;
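
It can also be useful to confirm each database's role and open mode from v$database (standard 11g columns):

```sql
-- Run on either side: db1 should report PRIMARY,
-- db2 should report PHYSICAL STANDBY.
SELECT database_role, open_mode FROM v$database;
```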

