Configuring and Using Oracle 10g Data Guard on Linux


Data Guard dual-node setup: node1 and node2
1. Configure the network on node2, including the NIC MAC address, IP address, and DNS name, and test the network.
2. Run the terminal command on node1 -- env | grep PATH
3. Create a database on node1 and select the appropriate database type.
4. Run the terminal command on node1 -- env | grep ORA
The global database name must match the ORACLE_SID set in the configuration file.
5. If no default archive destination is set on node1: alter system set db_recovery_file_dest='' scope=spfile;
You must also enable the archiver (ARCn) process and manually create a directory for the archived logs.
(mkdir -p /u01/app/arch, then add the archive path /u01/app/arch in DBCA)

Note: A change to the default archive path does not take effect immediately; you should regenerate the spfile:
create spfile from pfile;
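For example, a minimal SQL*Plus sketch (as SYSDBA on node1) that points archiving at the directory created above and bounces the instance so the spfile settings take effect; setting log_archive_dest_1 here instead of through DBCA is an assumption, and the commented create spfile line applies only if you edited the pfile by hand:
alter system set db_recovery_file_dest='' scope=spfile;
alter system set log_archive_dest_1='location=/u01/app/arch' scope=spfile;   -- assumed local archive destination
shutdown immediate;
-- create spfile from pfile;   -- only if the pfile was edited directly while the instance was down
startup;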

------------------------------------------
Database creation --- start to end
After completion:
1. Run the terminal command ps -ef on node1 to view the processes; the Oracle service processes should appear.
Note: Check /etc/hosts; the IP addresses of both node1 and node2 must be configured in /etc/hosts.
Set host-name shortcuts in /etc/hosts ^_^
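A sketch of the /etc/hosts entries on both nodes (the addresses are placeholders; use your real ones):
# /etc/hosts on node1 and node2 -- example addresses only
192.168.1.101   node1
192.168.1.102   node2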

2. netca (dbca) ----- or netmgr &
Create the listener for the primary database.
To view the listener --- cd into $ORACLE_HOME/network/admin;
there is a listener.ora file there.
Edit it with the command: vi listener.ora
3. Use netca to configure tnsnames.ora; both node1 and node2 need to be configured (this connects clients to the background service). Sample listener.ora and tnsnames.ora entries are sketched after this section.
Note: On the standby side you must create an identical database, copied from the original on the primary.
For details, see the database copy command below: RMAN duplicate.

ps -ef | more: check whether the configuration succeeded; you can also verify it in the netca interface.
ps -ef | grep oracle: view the Oracle processes.
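The sketch below shows what the Oracle Net files might look like, assuming the node1/node2 host names from /etc/hosts, port 1521, and the lsnode1/lsnode2 aliases used by the log_archive_dest_2 and fal_* parameters later in these notes; the ORACLE_HOME path and service name are illustrative:
# $ORACLE_HOME/network/admin/listener.ora on node1 (sketch)
LISTENER =
  (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521)))
SID_LIST_LISTENER =
  (SID_LIST = (SID_DESC = (SID_NAME = test1)(ORACLE_HOME = /u01/app/oracle)))

# $ORACLE_HOME/network/admin/tnsnames.ora on both nodes (sketch)
lsnode1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
           (CONNECT_DATA = (SERVICE_NAME = test1)))
lsnode2 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = node2)(PORT = 1521))
           (CONNECT_DATA = (SERVICE_NAME = test1)))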
-----------------------------------
1. Configure the listener and tnsnames.ora on node2.
2. After configuration, test network communication between the two nodes: --------- sqlplus scott/tiger@test2
sqlplus scott/tiger@test1
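If the connection fails, a quick tnsping of the aliases (assuming test1 and test2 are the tnsnames.ora entries used above) checks name resolution before blaming the listener:
tnsping test1
tnsping test2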

---------------------------------------
Start configuring Data Guard
3.1 Configure the primary database:
1. Enable forced logging --- alter database force logging;
2. Create a password file --- created automatically by Oracle when the database is built with DBCA.
--- The password file is used when the database is not in the open state; the user$ table is used when the database is open.
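If a password file has to be created by hand on either node, the orapwd utility does it; a sketch, assuming ORACLE_SID=test1 and a SYS password of your choice:
orapwd file=$ORACLE_HOME/dbs/orapwtest1 password=<sys_password> entries=5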

3. Set the primary database initialization parameters --- the main work
db_unique_name=uqn_node1 --- custom name
log_archive_config='dg_config=(uqn_node1,uqn_node2)'
log_archive_dest_2='service=lsnode2 valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=uqn_node2' --- lsnode2 is the tnsnames alias
log_archive_dest_state_1=ENABLE
log_archive_dest_state_2=ENABLE
fal_server=lsnode2
fal_client=lsnode1
standby_file_management=AUTO
*.db_file_name_convert='/u02/oradata/test1','/oradata/test1'
*.log_file_name_convert='/u02/oradata/test1','/oradata/test1'
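A sketch of applying these from SQL*Plus on the primary (as SYSDBA, assuming the instance uses an spfile; everything is written with scope=spfile so the restart in the next step picks it all up):
alter system set db_unique_name='uqn_node1' scope=spfile;
alter system set log_archive_config='dg_config=(uqn_node1,uqn_node2)' scope=spfile;
alter system set log_archive_dest_2='service=lsnode2 valid_for=(online_logfiles,primary_role) db_unique_name=uqn_node2' scope=spfile;
alter system set log_archive_dest_state_1=enable scope=spfile;
alter system set log_archive_dest_state_2=enable scope=spfile;
alter system set fal_server='lsnode2' scope=spfile;
alter system set fal_client='lsnode1' scope=spfile;
alter system set standby_file_management='AUTO' scope=spfile;
alter system set db_file_name_convert='/u02/oradata/test1','/oradata/test1' scope=spfile;
alter system set log_file_name_convert='/u02/oradata/test1','/oradata/test1' scope=spfile;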
4. Enable archiving:
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
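A quick verification that archiving and forced logging are now active:
archive log list
select log_mode, force_logging from v$database;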

3.2 Create the primary-standby pair (Primary-Standby)
1. Back up the data files of the primary database ---- back up the database with RMAN (a sketch follows step 2 below).
2. Create a control file for the standby database:
startup mount;
alter database create standby controlfile as '/u01/oradata/test1/standby.ctl';
alter database open;
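A minimal RMAN sketch for the backup in step 1 (run from the shell on node1; default backup locations apply unless you configure a channel format, which is left out here):
rman target /
RMAN> backup database plus archivelog;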
3. Configure initialization parameters for the standby database.
4. Copy the files from the primary database to the standby database,
including the datafiles, the standby control file, and the initialization parameter file.
PS: create pfile from spfile;
This produces the Oracle initialization parameter file initdgdemo.ora;
the previous spfiledgdemo.ora file should then be deleted.
Delete command: rm -f spfiledgdemo.ora

cp initdgdemo.ora /u02/oradata
cp orapwdgdemo /u02/oradata
Check the files in the /u01/oradata directory,
including arch, dgdemo, initdgdemo.ora, orapwdgdemo, boston.ctl.
There are two ways to transfer them to node2: 1. package them and upload via ftp;
2. from the /u01 directory: scp -r admin oradata root@IP:/u01
5. Configure the environment for the standby database.
Before modifying the configuration file, perform the following operations:
Replace the control files under /oradata/ with boston.ctl;
under the dgdemo directory:
rm -f control0*
mv ../boston.ctl ./control01.ctl
cp control01.ctl control02.ctl
cp control01.ctl control03.ctl
Copy the initialization parameter file and password file into $ORACLE_HOME/dbs:
mv inittest1.ora $ORACLE_HOME/dbs/
mv orapwtest1 $ORACLE_HOME/dbs/

Set the standby database initialization parameters:
*.db_unique_name=uqn_node2 --- custom name
*.log_archive_config='dg_config=(uqn_node1,uqn_node2)'
*.log_archive_dest_1='location=/u01/app/arch'
*.log_archive_dest_2='service=lsnode1 valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=uqn_node1' --- lsnode1 is the tnsnames alias
*.log_archive_dest_state_1=ENABLE
*.log_archive_dest_state_2=ENABLE
*.fal_server=lsnode1
*.fal_client=lsnode2
*.standby_file_management=AUTO
*.db_file_name_convert='/u01/app/oradata','/u01/app/oradata'
*.log_file_name_convert='/u01/app/oradata','/u01/app/oradata'
---------------- Sample instance parameter file (pfile)
test1.__db_cache_size=427819008
test1.__java_pool_size=4194304
test1.__large_pool_size=4194304
test1.__shared_pool_size=167772160
test1.__streams_pool_size=0
*.audit_file_dest='/u01/app/admin/test1/adump'
*.background_dump_dest='/u01/app/admin/test1/bdump'
*.compatible='10.2.0.1.0'
*.control_files='/u01/app/oradata/control01.ctl','/u01/app/oradata/control02.ctl','/u01/app/oradata/control03.ctl'
*.core_dump_dest='/u01/app/admin/test1/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='test1'
*.db_recovery_file_dest_size=2147483648
*.db_recovery_file_dest=''
*.dispatchers='(PROTOCOL=TCP) (SERVICE=test1XDB)'
*.job_queue_processes=10
*.log_archive_start=TRUE
*.open_cursors=300
*.pga_aggregate_target=201326592
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=605028352
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/app/admin/test1/udump'
*.db_unique_name=test1
*.log_archive_config='dg_config=(test1,dubdg)'
*.log_archive_dest_1='location=/u01/app/oradata'
*.log_archive_dest_2='service=dubdg2 valid_for=(online_logfiles,primary_role) db_unique_name=dubdg'
*.log_archive_dest_state_1=enable
*.log_archive_dest_state_2=enable
*.fal_server=dubdg2
*.fal_client=dubdg1
*.standby_file_management=auto
*.db_file_name_convert='/u01/app/oradata','/u01/app/oradata'
*.log_file_name_convert='/u01/app/oradata','/u01/app/oradata'
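Once the standby parameter file and control files are in place on node2, a typical sequence to bring the standby up and start redo apply would look roughly like this (a sketch, not spelled out in the original notes):
-- on node2, as SYSDBA, with ORACLE_SID set to the standby SID
create spfile from pfile;
startup mount;
alter database recover managed standby database disconnect from session;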
For details, see the following command for copying a database: RMAN duplicate (sketched below).
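A rough duplicate-for-standby sketch, run from node2 with the standby instance started nomount, the primary reachable through the lsnode1 alias, and the primary's RMAN backups visible at the same paths; the SYS credentials are placeholders:
rman target sys/<password>@lsnode1 auxiliary /
RMAN> duplicate target database for standby nofilenamecheck;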

