Oracle Streams Configuration


1. Introduction

Oracle Streams is designed to improve database availability. In Oracle 9i and earlier, similar functionality was provided under the name Advanced Replication. Streams uses Advanced Queuing technology to mine archived redo logs, parsing them into DDL and DML statements so that data can be synchronized between databases. It can replicate individual objects or an entire database to another database. Because Streams mines the archived logs, it keeps database objects, or even the whole database, synchronized without putting noticeable pressure on the source system. Log-mining replication of this kind is widely used: Quest's SharePlex and DSG's RealSync are commercial products in this space, and some companies use them for application-level disaster recovery. SharePlex and RealSync are expensive, however, whereas Oracle Streams comes with the database at no extra cost. Streams has very little impact on the production database, and the slave database can run on a different operating system platform from the master. You can also use Streams to maintain several slave databases for different purposes, such as queries, reporting, or disaster recovery. This article does not go into technical detail; it simply walks you through building a Streams environment step by step. See the online documentation for the details.

2. Overview

Primary database:

Operating System: Linux 4.7

IP Address: 10.148.55.21

Database: Oracle 10.2.0.2

Oracle_sid: Prod

Global_name: Prod

Slave database:

Operating System: Linux 4.7

IP Address: 10.148.55.22

Database: Oracle 10.2.0.3

Oracle_sid: h10g

Global_name: h10g

3. Prepare the Environment

3.1 Set Initialization Parameters

If the instance uses a pfile, edit the init<SID>.ora file directly; if it uses an spfile, change the parameters with alter system. Execute the following statements on both the master and slave databases:

sqlplus / as sysdba

alter system set aq_tm_processes=2 scope=both;

alter system set global_names=true scope=both;

alter system set job_queue_processes=10 scope=both;

alter system set parallel_max_servers=20 scope=both;

alter system set undo_retention=3600 scope=both;

alter system set nls_date_format='YYYY-MM-DD HH24:MI:SS' scope=spfile;

alter system set streams_pool_size=25M scope=spfile;

alter system set utl_file_dir='*' scope=spfile;

alter system set open_links=4 scope=spfile;

Restart the database after execution.
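After the restart, it is worth confirming that the settings actually took effect. A quick check from SQL*Plus (not part of the original walkthrough):

```sql
-- Verify a few of the Streams-related parameters after the restart
show parameter streams_pool_size
show parameter job_queue_processes
show parameter global_names
```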

3.2 Set the Database to Archive Mode

Set log_archive_dest_1 to the desired location; set log_archive_start to true to enable automatic archiving; set log_archive_format to specify the naming format for archived logs.

Example:

sqlplus / as sysdba

alter system set log_archive_dest_1='location=/yang/arch' scope=spfile;

alter system set log_archive_start=true scope=spfile;

alter system set log_archive_format='arch%t_%s_%r.arc' scope=spfile;

shutdown immediate;

startup mount;

alter database archivelog;

alter database open;

After the database is set to archive mode, you can check it as follows:

SQL> archive log list

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /yang/arch
Oldest online log sequence     534
Next log sequence to archive   536
Current log sequence           536

3.3 Create a Streams Administrator User

3.3.1 Create the Streams Administrator in the Master Environment

# Log on as sysdba
connect / as sysdba

# Create a dedicated Streams tablespace in the master environment
create tablespace tbs_stream datafile '/yang/oradata/Prod/tbs_stream01.dbf'
size 100m autoextend on maxsize unlimited segment space management auto;

# Move the LogMiner data dictionary from the SYSTEM tablespace into the new tablespace, so SYSTEM does not fill up
execute dbms_logmnr_d.set_tablespace('tbs_stream');

# Create the Streams administrator user
create user strmadmin identified by strmadmin
default tablespace tbs_stream temporary tablespace temp;

# Grant privileges to the Streams administrator
grant connect, resource, dba, aq_administrator_role to strmadmin;

begin
  dbms_streams_auth.grant_admin_privilege(
    grantee => 'strmadmin',
    grant_privileges => true);
end;
/

3.3.2 Create the Streams Administrator in the Slave Environment

# Log on as sysdba
connect / as sysdba

# Create a dedicated Streams tablespace; the slave database here uses ASM, otherwise this is the same as 3.3.1
create tablespace tbs_stream datafile '+vgdata/h10g/datafile/tbs_stream01.dbf'
size 100m autoextend on maxsize unlimited segment space management auto;

# As before, move the LogMiner data dictionary from the SYSTEM tablespace into the new tablespace
execute dbms_logmnr_d.set_tablespace('tbs_stream');

# Create the Streams administrator user
create user strmadmin identified by strmadmin
default tablespace tbs_stream temporary tablespace temp;

# Grant privileges to the Streams administrator
grant connect, resource, dba, aq_administrator_role to strmadmin;

begin
  dbms_streams_auth.grant_admin_privilege(
    grantee => 'strmadmin',
    grant_privileges => true);
end;
/

3.4 Configure Network Connectivity

3.4.1 Configure tnsnames.ora in the Master Environment

Add an entry for the slave database to the master database's tnsnames.ora:

h10g =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.148.55.22)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = h10g)
      (SERVER = DEDICATED)
    )
  )

3.4.2 Configure tnsnames.ora in the Slave Environment

Add an entry for the primary database to the slave database's tnsnames.ora:

prod =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.148.55.21)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = prod)
      (SERVER = DEDICATED)
    )
  )

3.5 Enable Supplemental Logging

Supplemental logging can be enabled at the database level or for individual tables. In a Streams environment that replicates at schema granularity, if you are certain that every table in the schema has a proper primary key, you do not need to enable supplemental logging.

# Enable database-level supplemental logging
alter database add supplemental log data;

# Enable table-level supplemental logging (substitute your own table, log group, and column names)
alter table table_name add supplemental log group log_group_name (column_name) always;
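Not in the original text, but you can confirm the database-level setting with a quick query against v$database:

```sql
-- Returns YES (or IMPLICIT) when database-level supplemental logging is enabled
select supplemental_log_data_min from v$database;
```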

3.6 Create Database Links

According to the Oracle 10gR2 Streams documentation, the database link created on the master database must have the same name as the global_name of the slave database (and vice versa).

To change a global name, run alter database rename global_name to xxx;
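As a quick sanity check before creating the links (not part of the original text), display each database's global name from SQL*Plus:

```sql
-- The dblink name you create must match the remote database's global name
select * from global_name;
```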

3.6.1 Create the Master Database Link

# Log on to the master database as strmadmin
connect strmadmin/strmadmin

# Create the database link
create database link h10g connect to strmadmin identified by strmadmin using 'h10g';

3.6.2 Create the Slave Database Link

# Log on to the slave database as strmadmin
connect strmadmin/strmadmin

# Create the database link
create database link prod connect to strmadmin identified by strmadmin using 'prod';

3.7 Create the Streams Queues

3.7.1 Create the Master Streams Queue

# Log on to the master database as strmadmin
connect strmadmin/strmadmin

begin
  dbms_streams_adm.set_up_queue(
    queue_table => 'prod_queue_table',
    queue_name => 'prod_queue');
end;
/

3.7.2 Create the Slave Streams Queue

# Log on to the slave database as strmadmin
connect strmadmin/strmadmin

begin
  dbms_streams_adm.set_up_queue(
    queue_table => 'h10g_queue_table',
    queue_name => 'h10g_queue');
end;
/

3.8 Create the Capture Process

# Log on to the master database as strmadmin; this article uses the HR schema as the example
connect strmadmin/strmadmin

begin
  dbms_streams_adm.add_schema_rules(
    schema_name => 'hr',
    streams_type => 'capture',
    streams_name => 'capture_prod',
    queue_name => 'strmadmin.prod_queue',
    include_dml => true,
    include_ddl => true,
    include_tagged_lcr => false,
    source_database => null,
    inclusion_rule => true);
end;
/

3.9 Instantiate the Replicated Schema

Execute the following shell commands in the primary database environment. If the HR user does not exist in the slave database, create an empty HR user there first.

exp userid=hr/hr@prod file='/tmp/hr.dmp' object_consistent=y rows=y

imp userid=system/manager@h10g file='/tmp/hr.dmp' ignore=y commit=y log='/tmp/hr.log' streams_instantiation=y fromuser=hr touser=hr
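After the import, you can verify (a check not in the original text) that instantiation SCNs were recorded on the slave; these are what allow the apply process to skip changes that predate the export:

```sql
-- Run on the slave as strmadmin: one row per instantiated object
select source_object_owner, source_object_name, instantiation_scn
from dba_apply_instantiated_objects;
```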

3.10 Create the Propagation Process

# Log on to the master database as strmadmin
connect strmadmin/strmadmin

begin
  dbms_streams_adm.add_schema_propagation_rules(
    schema_name => 'hr',
    streams_name => 'prod_to_h10g',
    source_queue_name => 'strmadmin.prod_queue',
    destination_queue_name => 'strmadmin.h10g_queue@h10g',
    include_dml => true,
    include_ddl => true,
    include_tagged_lcr => false,
    source_database => 'prod',
    inclusion_rule => true);
end;
/

# Set the propagation latency to 0, so that LCRs are propagated in near real time
begin
  dbms_aqadm.alter_propagation_schedule(
    queue_name => 'prod_queue',
    destination => 'h10g',
    latency => 0);
end;
/
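To confirm the propagation exists and is running (a check not in the original text; the status column is available in 10.2):

```sql
-- Run on the master as strmadmin
select propagation_name, status, source_queue_name, destination_queue_name
from dba_propagation;
```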

3.11 Create the Apply Process

# Log on to the slave database as strmadmin
connect strmadmin/strmadmin

begin
  dbms_streams_adm.add_schema_rules(
    schema_name => 'hr',
    streams_type => 'apply',
    streams_name => 'apply_h10g',
    queue_name => 'strmadmin.h10g_queue',
    include_dml => true,
    include_ddl => true,
    include_tagged_lcr => false,
    source_database => 'prod',
    inclusion_rule => true);
end;
/

3.12 Start Streams

# Log on to the slave database as strmadmin
connect strmadmin/strmadmin

# Start the apply process
begin
  dbms_apply_adm.start_apply(
    apply_name => 'apply_h10g');
end;
/

# Log on to the master database as strmadmin
connect strmadmin/strmadmin

# Start the capture process
begin
  dbms_capture_adm.start_capture(
    capture_name => 'capture_prod');
end;
/

3.13 Stop Streams

# Log on to the master database as strmadmin
connect strmadmin/strmadmin

# Stop the capture process
begin
  dbms_capture_adm.stop_capture(
    capture_name => 'capture_prod');
end;
/

# Log on to the slave database as strmadmin
connect strmadmin/strmadmin

# Stop the apply process
begin
  dbms_apply_adm.stop_apply(
    apply_name => 'apply_h10g');
end;
/

3.14 Clear All Configuration Information

To remove the Streams configuration, first stop the Streams processes as described in 3.13.

# Log on to the master database as strmadmin
connect strmadmin/strmadmin

exec dbms_streams_adm.remove_streams_configuration();

# Log on to the slave database as strmadmin
connect strmadmin/strmadmin

exec dbms_streams_adm.remove_streams_configuration();

4. Test Scenarios

This article set up Streams replication for the HR schema. Unless otherwise stated, the following tests are executed as the HR user.

4.1 Create a Test Table

Primary Database

SQL> create table ttt (id number primary key,
  2  name varchar2(50)
  3  )
  4  /

Table created.

Slave Database

SQL> desc ttt
 Name                    Null?     Type
 ----------------------- --------- ----------------
 ID                      NOT NULL  NUMBER
 NAME                              VARCHAR2(50)

4.2 Insert a Row into the Table

Primary Database

SQL> insert into ttt values (1, 'sdfsdfsdfsdf');

1 row created.

SQL> commit;

Commit complete.

SQL>

Slave Database

SQL> select * from ttt;

        ID NAME
---------- ------------------------------
         1 sdfsdfsdfsdf

4.3 Change the Table Structure: Add a Column

Primary Database

SQL> alter table ttt add (age number(2));

Table altered

Slave Database

SQL> desc ttt
 Name                    Null?     Type
 ----------------------- --------- ----------------
 ID                      NOT NULL  NUMBER
 NAME                              VARCHAR2(50)
 AGE                               NUMBER(2)

4.4 Move the Table to Another Tablespace

Primary Database

SQL> select table_name, tablespace_name from user_tables
  2  where table_name = 'TTT';

TABLE_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
TTT                            USERS

SQL> alter table ttt move tablespace tbs_stream;

Table altered.

SQL> select table_name, tablespace_name from user_tables
  2  where table_name = 'TTT';

TABLE_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
TTT                            TBS_STREAM

Slave Database

SQL> select table_name, tablespace_name from user_tables
  2  where table_name = 'TTT';

TABLE_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
TTT                            TBS_STREAM

4.5 Create an Index on the name Column

Primary Database

SQL> create index ttt_name_idx on ttt (name);

Index created

Slave Database

SQL> select table_name, index_name from user_indexes where table_name = 'TTT';

TABLE_NAME                     INDEX_NAME
------------------------------ ------------------------------
TTT                            TTT_NAME_IDX
TTT                            SYS_C005721

4.6 Rebuild Index Test

Primary Database

SQL> alter index ttt_name_idx rebuild;

Index altered

Slave Database

SQL> select table_name, index_name from user_indexes where table_name = 'TTT';

TABLE_NAME                     INDEX_NAME
------------------------------ ------------------------------
TTT                            TTT_NAME_IDX
TTT                            SYS_C005721

4.7 Rebuild the Index into Another Tablespace

Primary Database

SQL> alter index ttt_name_idx rebuild tablespace tbs_stream;

Index altered

Slave Database

SQL> select table_name, index_name, tablespace_name from user_indexes
  2  where table_name = 'TTT';

TABLE_NAME           INDEX_NAME           TABLESPACE_NAME
-------------------- -------------------- --------------------
TTT                  TTT_NAME_IDX         TBS_STREAM
TTT                  SYS_C005721          USERS

4.8 Drop Index Test

Primary Database

SQL> drop index ttt_name_idx;

Index dropped

Slave Database

SQL> select table_name, index_name, tablespace_name from user_indexes
  2  where table_name = 'TTT';

TABLE_NAME           INDEX_NAME           TABLESPACE_NAME
-------------------- -------------------- --------------------
TTT                  SYS_C005721          USERS

4.9 Drop Table Test

Primary Database

SQL> drop table TTT;

Table dropped

Slave Database

SQL> DESC TTT;

Object TTT does not exist.

4.10 Create a Table with a LOB Column

Primary Database

SQL> create table tttclob (id number primary key, memo clob);

Table created

Slave Database

SQL> desc tttclob
 Name                    Null?     Type
 ----------------------- --------- ----------------
 ID                      NOT NULL  NUMBER
 MEMO                              CLOB

4.11 Insert a Row into the LOB Table

Primary Database

SQL> insert into tttclob values (1, 'clob_test');

1 row inserted

SQL> commit;

Commit complete

Slave Database

SQL> select * from tttclob;

        ID MEMO
---------- --------------------------------------------------
         1 clob_test

4.12 Create Type Test

Primary Database

SQL> create or replace type ttttype;
  2  /

Type created.

Slave Database

SQL> select type_name, type_oid, typecode, attributes, methods, incomplete
  2  from user_types where type_name = 'TTTTYPE';

TYPE_NAME   TYPE_OID                          TYPECODE  ATTRIBUTES  METHODS  INCOMPLETE
----------- --------------------------------- --------- ----------- -------- ----------
TTTTYPE     1B36AAF10DA8301DE040A8C0289A77B4  OBJECT    0           0        YES

4.13 Drop Type Test

Primary Database

SQL> drop type ttttype;

Type dropped

Slave Database

SQL> select type_name from user_types where type_name = 'TTTTYPE';

no rows selected

5. Problem Diagnosis

5.1 How do I know whether the capture process is running normally?

Log on to the master database as strmadmin and execute the following statement:

SQL> select capture_name,
  2         queue_name,
  3         rule_set_name,
  4         negative_rule_set_name,
  5         status
  6  from dba_capture;

The result is as follows:

CAPTURE_NAME     QUEUE_NAME     RULE_SET_NAME  NEGATIVE_RULE_SET_NAME  STATUS
---------------- -------------- -------------- ----------------------- --------
CAPTURE_PROD     PROD_QUEUE     RULESET$_14                            ENABLED

If the status is ENABLED, the capture process is running normally.

If the status is DISABLED, the capture process is stopped; simply restart it.

If the status is ABORTED, the capture process stopped abnormally. Query the corresponding error_number and error_message columns for details; Oracle also records this information in the trace files.
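The error columns mentioned above can be queried directly (a small sketch, not from the original text):

```sql
-- Run on the master as strmadmin: shows why an aborted capture process stopped
select capture_name, status, error_number, error_message
from dba_capture;
```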

5.2 How do I know whether captured LCRs have a propagation gap?

Log on to the master database as strmadmin and execute the following statement:

SQL> select capture_name, queue_name, status, captured_scn, applied_scn

2 from dba_capture;

The result is as follows:

CAPTURE_NAME     QUEUE_NAME     STATUS    CAPTURED_SCN  APPLIED_SCN
---------------- -------------- --------- ------------- ------------
CAPTURE_PROD     PROD_QUEUE     ENABLED        17023672     17023672

If applied_scn is smaller than captured_scn, then some LCRs on the master side have not yet been dequeued, or the propagation process has not yet delivered them to the slave.

5.3 How do I know whether the apply process is running normally?

Log on to the slave database as strmadmin and run the following statement:

SQL> select apply_name, apply_captured, status from dba_apply;

The result is as follows:

APPLY_NAME       APPLY_CAPTURED  STATUS
---------------- --------------- --------
APPLY_H10G       YES             ENABLED

If the status is ENABLED, the apply process is running normally.

If the status is DISABLED, the apply process is stopped; simply restart it.

If the status is ABORTED, the apply process stopped abnormally. Query the corresponding error_number and error_message columns for details, and query the dba_apply_error view to learn more about individual apply errors.
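A minimal query over dba_apply_error (not from the original text) to inspect queued apply errors:

```sql
-- Run on the slave as strmadmin: one row per failed transaction
select apply_name, source_transaction_id, error_number, error_message
from dba_apply_error;
```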

6. Conclusion

The tests above show that Streams is quite powerful. By configuring Oracle Streams you can significantly improve database availability and security, and such a useful feature at no extra cost is well worth exploring.
