Oracle Stream Disaster Tolerance Configuration

Configuration of Oracle stream Replication

Master database: 100.100.100.20, Linux, instance name db1, version 10.2.0.1

Slave database: 100.100.100.254, Windows, instance name dbwin2, version 10.2.0.1

The flow is as follows: changes to all tables under the u1 user of the master database are captured, propagated to the slave database, and applied there in real time.

1.1 Master database configuration

1.1.1 Master database Oracle parameter configuration

alter system set aq_tm_processes = 2 scope = both;
alter system set global_names = true scope = both;
alter system set job_queue_processes = 10 scope = both;
alter system set parallel_max_servers = 20 scope = both;
alter system set undo_retention = 3600 scope = both;
alter system set nls_date_format = 'yyyy-mm-dd hh24:mi:ss' scope = spfile;

-- automatic allocation
alter system set streams_pool_size = 25M scope = spfile;
alter system set utl_file_dir = '*' scope = spfile;
alter system set open_links = 4 scope = spfile;

 

Enable archive log mode.
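The original only says to enable archiving without showing the commands. A minimal sketch for 10g, run as SYSDBA (destination and retention settings are site-specific and omitted here):

```sql
-- requires a clean restart into MOUNT state
shutdown immediate
startup mount
alter database archivelog;
alter database open;
-- verify the current mode
archive log list
```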

1.1.2 create a stream management user for the master database

Step 1: Create the Stream management user tablespace stream_tbs

SQL> create tablespace stream_tbs datafile '/home/oracle/oradata/db3/stream01.dbf'
  2  size 200m
  3  autoextend on
  4  extent management local;

 

Step 2: Create the Stream management user (named test here to keep the example simple) and grant it DBA privileges to simplify the process.

SQL> create user test identified by test default tablespace stream_tbs;
SQL> grant dba to test;

 

Step 3: Move the LogMiner data dictionary from the SYSTEM tablespace to the new tablespace stream_tbs (executed as the SYSTEM user):

SQL> execute dbms_logmnr_d.set_tablespace('stream_tbs');

 

Step 4: Grant the test user Stream administrator privileges (executed as the SYSTEM user):

begin
  dbms_streams_auth.grant_admin_privilege(
    grantee          => 'test',
    grant_privileges => true);
end;
/

1.1.3 create a dblink for the master database and configure the TNS network for the master database

Step 1: In the master database (db1), create a dblink to the standby database (dbwin2), as the Stream management user test:

connect test/test
create database link dbwin2 connect to test identified by test using 'dbwin2';
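Before moving on, the link can be sanity-checked with a trivial remote query (this check is an addition, not part of the original steps):

```sql
-- should return a single row with the value 'X' from the remote dbwin2 instance
select * from dual@dbwin2;
```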

 

Step 2: Configure the TNS network of the master database and add the TNS name of the slave database (dbwin2):

db1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 100.100.100.20)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = db1)
    )
  )

dbwin2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 100.100.100.254)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dbwin2)
    )
  )

1.1.4 create a master stream queue for the master database

As the Stream management user test, create the stream queue db1_queue; its queue table is db1_queue_table.

SQL> connect test/test

begin
  dbms_streams_adm.set_up_queue(
    queue_table => 'db1_queue_table',
    queue_name  => 'db1_queue');
end;
/

1.1.5 Create the capture process on the master database

As the Stream management user test, create the capture process capture_db1, which captures changes to the u1 schema of the master database (specified by schema_name => 'u1') and passes the change records to the master queue test.db1_queue created in the previous step.

connect test/test

begin
  dbms_streams_adm.add_schema_rules(
    schema_name        => 'u1',
    streams_type       => 'capture',
    streams_name       => 'capture_db1',
    queue_name         => 'test.db1_queue',
    include_dml        => true,
    include_ddl        => true,
    include_tagged_lcr => false,
    source_database    => null,
    inclusion_rule     => true);
end;
/

1.1.6 create a propagation process for the master database

Step 1: As the Stream administrator user test, create the propagation process db1_to_dbwin2, which propagates changes from the master queue (test.db1_queue) to the backup queue (test.dbwin2_queue; its creation is covered in the next chapter).

Key parameters: destination_queue_name => 'test.dbwin2_queue@dbwin2' is the backup queue of the slave database dbwin2, and source_queue_name => 'test.db1_queue' is the master queue that records table changes in the master database.

 

connect test/test

begin
  dbms_streams_adm.add_schema_propagation_rules(
    schema_name            => 'u1',
    streams_name           => 'db1_to_dbwin2',
    source_queue_name      => 'test.db1_queue',
    destination_queue_name => 'test.dbwin2_queue@dbwin2',
    include_dml            => true,
    include_ddl            => true,
    include_tagged_lcr     => false,
    source_database        => 'db1',
    inclusion_rule         => true);
end;
/

Step 2: Set the propagation latency to 0, which means LCRs are propagated in real time.

begin
  dbms_aqadm.alter_propagation_schedule(
    queue_name  => 'db1_queue',
    destination => 'dbwin2',
    latency     => 0);
end;
/
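To confirm the propagation was created and is running, the dba_propagation view can be queried (a verification step added here for convenience; column set as in 10.2):

```sql
-- status should be ENABLED for DB1_TO_DBWIN2
select propagation_name, source_queue_name,
       destination_queue_name, destination_dblink, status
  from dba_propagation;
```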

1.2 Slave database configuration

1.2.1 Slave database Oracle parameter configuration

alter system set aq_tm_processes = 2 scope = both;
alter system set global_names = true scope = both;
alter system set job_queue_processes = 10 scope = both;
alter system set parallel_max_servers = 20 scope = both;
alter system set undo_retention = 3600 scope = both;
alter system set nls_date_format = 'yyyy-mm-dd hh24:mi:ss' scope = spfile;

-- automatic allocation
alter system set streams_pool_size = 25M scope = spfile;
alter system set utl_file_dir = '*' scope = spfile;
alter system set open_links = 4 scope = spfile;

 

Enable archive log mode, as on the master database.

1.2.2 create a stream management user for the standby Database

Step 1: Create the Stream management user tablespace stream_tbs

SQL> create tablespace stream_tbs datafile '/home/oracle/oradata/db3/stream01.dbf'
  2  size 200m
  3  autoextend on
  4  extent management local;

 

Step 2: Create the Stream management user (named test here to keep the example simple) and grant it DBA privileges to simplify the process.

SQL> create user test identified by test default tablespace stream_tbs;
SQL> grant dba to test;

 

Step 3: Move the LogMiner data dictionary from the SYSTEM tablespace to the new tablespace stream_tbs (executed as the SYSTEM user):

SQL> execute dbms_logmnr_d.set_tablespace('stream_tbs');

 

Step 4: Grant the test user Stream administrator privileges (executed as the SYSTEM user):

begin
  dbms_streams_auth.grant_admin_privilege(
    grantee          => 'test',
    grant_privileges => true);
end;
/

1.2.3 create a dblink for the slave database and configure the TNS network for the slave Database

Step 1: In the standby database (dbwin2), create a dblink to the master database (db1), as the Stream management user test:

connect test/test
create database link db1 connect to test identified by test using 'db1';

 

Step 2: Configure the TNS network of the slave database (dbwin2) and add the TNS name of the master database (db1):

db1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 100.100.100.20)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = db1)
    )
  )

dbwin2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 100.100.100.254)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dbwin2)
    )
  )

1.2.4 Create the backup stream queue on the standby database

As the Stream management user test, create the stream queue dbwin2_queue; its queue table is dbwin2_queue_table.

 

SQL> connect test/test

begin
  dbms_streams_adm.set_up_queue(
    queue_table => 'dbwin2_queue_table',
    queue_name  => 'dbwin2_queue');
end;
/

Note:

In the propagation process db1_to_dbwin2 of the master database, changes recorded in the master queue db1_queue are transmitted to the backup queue dbwin2_queue of the slave database. The relevant configuration excerpt:

source_queue_name      => 'test.db1_queue',
destination_queue_name => 'test.dbwin2_queue@dbwin2',

 

1.2.5 configure the apply process for the slave Database

As the Stream management user test, create the apply process apply_dbwin2 on the standby database; it applies the captured changes to the tables of the u1 user there.

 

connect test/test

begin
  dbms_streams_adm.add_schema_rules(
    schema_name        => 'u1',
    streams_type       => 'apply',
    streams_name       => 'apply_dbwin2',
    queue_name         => 'test.dbwin2_queue',
    include_dml        => true,
    include_ddl        => true,
    include_tagged_lcr => false,
    source_database    => 'db1',
    inclusion_rule     => true);
end;
/

1.3 Starting and stopping Oracle Stream

1.3.1 Oracle Stream startup and monitoring

Step 1: On the slave database, start the apply process as the Stream management user test.

connect test/test

begin
  dbms_apply_adm.start_apply(
    apply_name => 'apply_dbwin2');
end;
/
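Optionally, the apply process's disable_on_error parameter can be relaxed so that a single failed LCR does not abort the whole process; failed transactions then accumulate in the error queue instead. This tuning step is an addition to the original procedure:

```sql
begin
  dbms_apply_adm.set_parameter(
    apply_name => 'apply_dbwin2',
    parameter  => 'disable_on_error',
    value      => 'n');  -- 'y' (the default) stops apply on the first error
end;
/
```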

Step 2: On the master database, start the capture process as the Stream management user test.

connect test/test

begin
  dbms_capture_adm.start_capture(
    capture_name => 'capture_db1');
end;
/

Step 3: On the slave database, verify that the apply process is ENABLED:

SQL> select apply_name, queue_name, status from dba_apply;

 

APPLY_NAME      QUEUE_NAME      STATUS
--------------- --------------- --------
APPLY_DBWIN2    DBWIN2_QUEUE    ENABLED

Step 4: On the master database, verify that the capture process is ENABLED:

SQL> SELECT capture_name, capture_type, status, status_change_time, queue_name, captured_scn, applied_scn, error_message FROM DBA_CAPTURE;

 

CAPTURE_NAME  CAPTURE_TYPE  STATUS   STATUS_CHANGE_TIME  QUEUE_NAME  CAPTURED_SCN  APPLIED_SCN  ERROR_MESSAGE
------------  ------------  -------  ------------------  ----------  ------------  -----------  -------------
CAPTURE_DB1   LOCAL         ENABLED  13:04:57            DB1_QUEUE   735788        735788

1.3.2 Stopping Oracle Stream

Step 1: On the master database, stop the capture process as the Stream management user test.

connect test/test

begin
  dbms_capture_adm.stop_capture(
    capture_name => 'capture_db1');
end;
/

Step 2: On the slave database, stop the apply process as the Stream management user test.

begin
  dbms_apply_adm.stop_apply(
    apply_name => 'apply_dbwin2');
end;
/

1.4 Testing Oracle Stream

Create a table in the master database (as the u1 user):

SQL> create table u3 (id3 number(10));

 

Table created

 

SQL> commit;

Commit complete

Check on the slave database:

SQL> select * from u3;

 

ID3

-----------

 

Insert a record in the master database:

SQL> insert into u3 values (3 );

 

1 row inserted

 

SQL> commit;

 

Commit complete

 

Check on the slave database again:

 

SQL> select * from u3;

 

ID3

-----------

3
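If the inserted row does not show up on the slave, the apply error queue is the first place to look (a troubleshooting query added here; not part of the original test):

```sql
-- on the slave database: transactions that failed to apply
select apply_name, source_database, local_transaction_id, error_message
  from dba_apply_error;
```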

 


Oracle's Data Guard is good for disaster tolerance backup, but it is harder to configure and requires professional DBAs for maintenance.
Data Guard is no longer a new technology; it is mature and widely used, and its maintenance cost is low.
Since the data needs disaster tolerance, the data is clearly important, and an Oracle database that manages important data should not be without a DBA.
You can also use Oracle Grid Control for visual disaster recovery backup; it makes configuring and managing Data Guard very convenient.

What is disaster tolerance in Oracle?

1. Disaster Tolerance: in short, to maintain business continuity, the database must be recoverable as quickly as possible after a disaster, or a standby copy must be ready to take over immediately.
2. Oracle provides a complete set of disaster recovery and High Availability (HA) options, such as:
a) Cold backup and the imp/exp utilities
b) Data Guard
c) RAC (Real Application Clusters)
d) Advanced Replication
e) Oracle Streams
f) Partitioning (Oracle Partitioning)
