Oracle 11G R2 Data Guard setup on CentOS 5.11

Source: Internet
Author: User
Tags: database


Data Guard is the disaster-recovery solution shipped with Oracle Enterprise Edition. It is widely used in enterprises, and I am recording the setup process here for future reference.
Host name   Database version   Instance name   IP
db1         Oracle 11G R2      member          172.16.1.250
db2         Oracle 11G R2      member          172.16.1.20
Oracle has already been installed on both machines, but only db1 has a database and a listener configured; db2 has only the Oracle software, with no database created and no listener.

Directory:

Enable forced logging and archiving

Add standby redo log groups

Modify primary/standby startup parameters

Password file handling

Modify the listener

Copy the listener file, parameter file, and password file to the standby database

Create a standby control file on the primary

Copy the data files and log files of the primary to the standby

Initialize the standby database

Data Guard test

Primary/standby switchover test

Basic work:
A. Install CentOS 5.11 x86_64 and disable selinux and iptables.
B. Install Oracle 11G R2. On db1, install the software, configure the listener, and create a database; on db2, install the software only, without creating a database.
For more information, see

Add to /etc/hosts on db1:
127.0.0.1 db1
172.16.1.20 db2
Add to /etc/hosts on db2:
127.0.0.1 db2
172.16.1.250 db1

1. Enable forced logging and archiving (db1)

(db1) SQL> shutdown immediate;
(db1) SQL> startup mount;
(db1) SQL> alter database force logging;
(db1) SQL> alter database archivelog;
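To confirm both settings took effect, `v$database` exposes `log_mode` and `force_logging` in 11g; a quick sanity check might look like:

```sql
-- Run on db1 (works in MOUNT state or after opening the database).
-- Expect LOG_MODE = ARCHIVELOG and FORCE_LOGGING = YES.
select log_mode, force_logging from v$database;
```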
2. Add standby redo log groups (there should be at least one more standby group than the number of online redo log groups, and standby redo logs are a prerequisite for Real-Time Apply)
(db1) SQL> select group#, member from v$logfile;

GROUP# MEMBER
------ ------------------------------------------------------------------------
     3 /opt/oracle/oradata/member/redo03.log
     2 /opt/oracle/oradata/member/redo02.log
     1 /opt/oracle/oradata/member/redo01.log
As shown above, there are already three online redo log groups, so we add four standby groups.
(db1) SQL> alter database add standby logfile ('/opt/oracle/oradata/member/standby04.log') size 50m;
(db1) SQL> alter database add standby logfile ('/opt/oracle/oradata/member/standby05.log') size 50m;
(db1) SQL> alter database add standby logfile ('/opt/oracle/oradata/member/standby06.log') size 50m;
(db1) SQL> alter database add standby logfile ('/opt/oracle/oradata/member/standby07.log') size 50m;
Check whether the log group is successfully created.
(db1) SQL> select group#, member from v$logfile;

GROUP# MEMBER
------ ------------------------------------------------------------------------
     3 /opt/oracle/oradata/member/redo03.log
     2 /opt/oracle/oradata/member/redo02.log
     1 /opt/oracle/oradata/member/redo01.log
     4 /opt/oracle/oradata/member/standby04.log
     5 /opt/oracle/oradata/member/standby05.log
     6 /opt/oracle/oradata/member/standby06.log
     7 /opt/oracle/oradata/member/standby07.log
The four standby log groups we just created (groups 4-7) are now listed.
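The standard `v$standby_log` view distinguishes standby logs from online redo logs and shows their status; a quick check might look like:

```sql
-- Run on db1. Freshly added standby logs typically show
-- STATUS = UNASSIGNED until redo transport starts using them.
select group#, bytes/1024/1024 as size_mb, status from v$standby_log;
```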

3. Modify the startup parameters of the primary and standby databases

Generate a parameter file:
(db1) SQL> create pfile='/tmp/member.pfile' from spfile;
Exit sqlplus and open /tmp/member.pfile in an editor:
[oracle@db1 ~]$ vi /tmp/member.pfile
member.__db_cache_size=331350016
member.__java_pool_size=4194304
member.__large_pool_size=4194304
member.__oracle_base='/opt/oracle' # ORACLE_BASE set from environment
member.__pga_aggregate_target=339738624
member.__sga_target=503316480
member.__shared_io_pool_size=0
member.__shared_pool_size=150994944
member.__streams_pool_size=0
*.audit_file_dest='/opt/oracle/admin/member/adump'
*.audit_trail='db'
*.compatible='11.2.0.0.0'
*.control_files='/opt/oracle/oradata/member/control01.ctl','/opt/oracle/flash_recovery_area/member/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='member'
*.db_recovery_file_dest='/opt/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=4070572032
*.diagnostic_dest='/opt/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=memberXDB)'
*.log_archive_format='%t_%s_%r.dbf'
*.memory_target=839909376
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_tablespace='undotbs1'
# Added section
*.db_unique_name='db1'
*.archive_lag_target=1800
*.fal_client='db1'
*.fal_server='db2'
*.log_archive_config='DG_CONFIG=(db1,db2)'
*.log_archive_dest_1='location=/opt/oracle/flash_recovery_area/ valid_for=(all_logfiles,all_roles) db_unique_name=db1'
*.log_archive_dest_2='service=db2 lgwr async valid_for=(online_logfiles,primary_role) db_unique_name=db2'
*.log_archive_dest_state_1='enable'
*.log_archive_dest_state_2='enable'
*.log_archive_format='%t_%s_%r.dbf'
*.standby_file_management='auto'
*.db_file_name_convert='/opt/oracle/flash_recovery_area','/opt/oracle/flash_recovery_area'
*.log_file_name_convert='/opt/oracle/flash_recovery_area','/opt/oracle/flash_recovery_area'


Copy the parameter file as a starting point for the standby, db2:
[oracle@db1 ~]$ cp /tmp/member.pfile /tmp/db2.pfile
[oracle@db1 ~]$ vim /tmp/db2.pfile
Modify the added section:
*.db_unique_name='db2'
*.archive_lag_target=1800
*.fal_client='db2'
*.fal_server='db1'
*.log_archive_config='DG_CONFIG=(db1,db2)'
*.log_archive_dest_1='location=/opt/oracle/flash_recovery_area/ valid_for=(all_logfiles,all_roles) db_unique_name=db2'
*.log_archive_dest_2='service=db1 lgwr async valid_for=(online_logfiles,primary_role) db_unique_name=db1'
*.log_archive_dest_state_1='enable'
*.log_archive_dest_state_2='enable'
*.log_archive_format='%t_%s_%r.dbf'
*.standby_file_management='auto'
*.db_file_name_convert='/opt/oracle/flash_recovery_area','/opt/oracle/flash_recovery_area'
*.log_file_name_convert='/opt/oracle/flash_recovery_area','/opt/oracle/flash_recovery_area'


Restart db1 with the modified /tmp/member.pfile and write it back to the spfile:
(db1) SQL> shutdown immediate;
(db1) SQL> startup pfile='/tmp/member.pfile' nomount;
(db1) SQL> create spfile from pfile='/tmp/member.pfile';
(db1) SQL> shutdown immediate;
(db1) SQL> startup;
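As a sanity check after the restart, the Data Guard parameters can be read back from the running instance (`show parameter` is a SQL*Plus command that does a substring match on parameter names):

```sql
-- Run on db1; both values should match what was set in the pfile.
show parameter db_unique_name
show parameter log_archive_dest_2
```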


4. Primary database password file:
[1] The password file already exists:
[oracle@db1 dbs]$ ls $ORACLE_HOME/dbs
hc_DBUA0.dat hc_member.dat init.ora lkDB1 lkMEMBER orapwmember spfilemember.ora
The password file orapwmember shown above is created by default.
[2] No password file exists:
If it is missing, create one manually:
[oracle@db1 dbs]$ cd $ORACLE_HOME/dbs
[oracle@db1 dbs]$ orapwd file=orapwmember password=123456 entries=3
# Note: the file name must be based on the SID: file=orapw<SID>
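Whether the file already existed or was just created, the standard `v$pwfile_users` view confirms the instance can actually read it:

```sql
-- Run on db1. SYS should be listed with SYSDBA = TRUE;
-- an empty result means the instance is not picking up the password file.
select username, sysdba, sysoper from v$pwfile_users;
```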


5. Modify the listener
[oracle@db1 dbs]$ cd $ORACLE_HOME/network/admin
[oracle@db1 admin]$ mv listener.ora listener.ora.default
[oracle@db1 admin]$ vim listener.ora
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = db1)(PORT = 1521))
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = member)
      (ORACLE_HOME = /opt/oracle/product/11.2.0/db_1)
      (SID_NAME = member)
    )
  )
# Note: HOST above must be the hostname
[oracle@db1 admin]$ vim tnsnames.ora
MEMBER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = member)
    )
  )
db1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = db1)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = db1)
    )
  )
db2 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = db2)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = db2)
    )
  )

Restart the listener:
[oracle@db1 admin]$ lsnrctl stop
[oracle@db1 admin]$ lsnrctl start

6. Copy the listener file, parameter file, and password file to the standby database.
# Note my $ORACLE_HOME:
[oracle@db1 ~]$ echo $ORACLE_HOME
/opt/oracle/product/11.2.0/db_1
# Transfer the startup parameter file
[oracle@db1 ~]$ scp /tmp/db2.pfile db2:~
# Transfer the password file
[oracle@db1 ~]$ scp /opt/oracle/product/11.2.0/db_1/dbs/orapwmember db2:/opt/oracle/product/11.2.0/db_1/dbs/orapwmember
# Transfer the listener files
[oracle@db1 ~]$ scp /opt/oracle/product/11.2.0/db_1/network/admin/{listener.ora,tnsnames.ora} db2:/opt/oracle/product/11.2.0/db_1/network/admin/


On db2, edit /opt/oracle/product/11.2.0/db_1/network/admin/listener.ora and change db1 to db2:
[oracle@db2 ~]$ vim /opt/oracle/product/11.2.0/db_1/network/admin/listener.ora
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = db2)(PORT = 1521))
  )
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = member)
      (ORACLE_HOME = /opt/oracle/product/11.2.0/db_1)
      (SID_NAME = member)
    )
  )
# Only listener.ora needs modification; tnsnames.ora can stay as-is.

7. Create a standby control file on the primary. We use scp to transfer all the files.
Check the control file paths:
(db1) SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
/opt/oracle/oradata/member/control01.ctl
/opt/oracle/flash_recovery_area/member/control02.ctl

(db1) SQL> shutdown immediate;
(db1) SQL> startup mount;
(db1) SQL> alter database create standby controlfile as '/opt/oracle/oradata/member/standby.ctl';
# This creates a standby control file standby.ctl in /opt/oracle/oradata/member/


8. Copy the data files and log files of the primary to the standby.
[oracle@db1 ~]$ scp -r /opt/oracle/flash_recovery_area /opt/oracle/admin /opt/oracle/diag /opt/oracle/oradata db2:/opt/oracle


9. Initialize the standby database
On the standby, overwrite the original control files with the standby control file (the control file paths were listed in step 7):
[oracle@db2 ~]$ cp /opt/oracle/oradata/member/standby.ctl /opt/oracle/oradata/member/control01.ctl
[oracle@db2 ~]$ cp /opt/oracle/oradata/member/standby.ctl /opt/oracle/flash_recovery_area/member/control02.ctl

Start db2 with the db2.pfile modified earlier:

(db2) SQL> startup pfile='/home/oracle/db2.pfile' nomount;
(db2) SQL> create spfile from pfile='/home/oracle/db2.pfile';
(db2) SQL> shutdown immediate;
(db2) SQL> startup nomount;
(db2) SQL> alter database mount standby database;
(db2) SQL> alter database open read only;
There are three ways to control redo apply (choose A or B):
A. Enable real-time apply, so changes made on the primary become visible on the standby almost immediately:
(db2) SQL> alter database recover managed standby database using current logfile disconnect from session;
B. Enable archived-log apply; changes become queryable only after the corresponding log is archived and applied:
(db2) SQL> alter database recover managed standby database disconnect from session;
C. Stop redo apply: the standby keeps receiving redo but no longer applies it:
(db2) SQL> alter database recover managed standby database cancel;
At this point Data Guard is up. You can create a table and insert data on db1, then query it on db2. (At first the data never showed up on the standby; the cause turned out to be using the wrong apply mode above.)
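To confirm which apply mode is active, the standard `v$archive_dest_status` view on the primary reports the standby destination's recovery mode (`dest_id = 2` here matches the `log_archive_dest_2` configured earlier):

```sql
-- Run on db1. Expect 'MANAGED REAL TIME APPLY' after option A,
-- or 'MANAGED' after option B.
select dest_id, recovery_mode
from v$archive_dest_status
where dest_id = 2;
```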

 

10. Data Guard test:
View the standby-related processes:
(db1) SQL> select process from v$managed_standby;

PROCESS
---------
ARCH
ARCH
ARCH
ARCH
LNS

(db2) SQL> select process from v$managed_standby;

PROCESS
---------
ARCH
ARCH
ARCH
ARCH
RFS
RFS
RFS
RFS
MRP0

As shown above, the primary needs an LNS process to ship redo, while the standby needs RFS processes to receive the redo and an MRP0 process to apply it to the database.

Switch a log on db1 and check that it arrives on db2:
(db1) SQL> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)
--------------
            11

(db2) SQL> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)
--------------
            11

(db1) SQL> alter system switch logfile;
(db1) SQL> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)
--------------
            12

(db2) SQL> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)
--------------
            12

The sequence numbers advance together, so log shipping works and Data Guard is running normally.
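Receiving a log is not the same as applying it; the `applied` column of `v$archived_log` on the standby shows both, which makes for a slightly stronger check:

```sql
-- Run on db2. Applied logs show APPLIED = YES; a growing tail of
-- NO values suggests redo apply has fallen behind or is stopped.
select sequence#, applied
from v$archived_log
order by sequence#;
```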

11. Primary/standby switchover test:
db1 (primary) / db2 (standby)  =>  db2 (primary) / db1 (standby)

[oracle@db1 ~]$ lsnrctl stop
(db1) SQL> alter database commit to switchover to physical standby with session shutdown;
(db1) SQL> shutdown immediate;
(db1) SQL> startup mount;
(db1) SQL> alter database recover managed standby database using current logfile disconnect from session;
(db1) SQL> alter database open;
[oracle@db1 ~]$ lsnrctl start

(db2) SQL> alter database commit to switchover to primary;
(db2) SQL> shutdown immediate;
(db2) SQL> startup;
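After the switchover, the roles can be confirmed from `v$database` on each node (`database_role` and `open_mode` are standard columns in 11g):

```sql
-- Run on both nodes. db2 should now report PRIMARY,
-- and db1 should report PHYSICAL STANDBY.
select name, database_role, open_mode from v$database;
```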

That completes the primary/standby switchover.
