Construction of OracleStreams Technology

Stream is an extended application of Oracle Message Queue (also known as Oracle Advanced Queue) technology. Oracle Message Queue manages events through publish/subscribe. Stream replication is built on this data-sharing technology and can also serve as a flexible, customizable high-availability solution. It can synchronize data between two databases at the database, schema, or table level, and the synchronization can be bidirectional. Like Data Guard, Oracle Stream also improves availability through data redundancy.

Oracle Advanced Replication and Stream replication are two technologies with similar names and functions. However, the former is based on triggers, while the latter is based on Logminer technology.

I. How Stream works

Stream is an extended application of Oracle Advanced Queue technology. The basic principle of this technology is to capture events, store them in a queue, and then publish these events to different subscribers. From the DBA's point of view, Stream captures the redo logs generated by the Oracle database and propagates them to multiple databases over the network; those databases then apply the logs to replicate the changes.

In a Stream environment, the database where replication starts is called the Source Database, and the database where replication ends is called the Target Database. A queue is created on each database: the queue on the Source Database is the sending queue, and the queue on the Target Database is the receiving queue.

All database operations are recorded in the redo logs. After the Stream environment is configured, a Capture Process runs on the Source Database. This process uses Logminer technology to extract DDL and DML statements from the logs and expresses them in a special format called a Logical Change Record (LCR). One LCR corresponds to one atomic row change, so a single DML statement on the source database may correspond to several LCR records. These LCRs are saved to the local sending queue of the Source Database, and the Propagation Process then sends them to the receiving queue of the Target Database over the network. An Apply Process on the Target Database extracts the LCRs from the local receiving queue and applies them locally to implement data synchronization.
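As a rough sketch of how this capture/propagation chain is wired together on the source side, the DBMS_STREAMS_ADM package is typically used. The queue names, the scott.emp table, and the vion database link below are illustrative assumptions, not values from this article:

```sql
-- Sketch only: queue names, table, and DB link are assumed examples.
-- Run as the Streams administrator on the Source Database.
Begin
  -- Create the sending queue and its queue table.
  dbms_streams_adm.set_up_queue(
    queue_table => 'strmadmin.streams_queue_table',
    queue_name  => 'strmadmin.streams_queue');

  -- Create a capture process that turns redo for scott.emp into LCRs.
  dbms_streams_adm.add_table_rules(
    table_name   => 'scott.emp',
    streams_type => 'capture',
    streams_name => 'capture_stream',
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => true,
    include_ddl  => true);

  -- Propagate LCRs from the local sending queue to the Target's receiving queue.
  dbms_streams_adm.add_table_propagation_rules(
    table_name             => 'scott.emp',
    streams_name           => 'source_to_target',
    source_queue_name      => 'strmadmin.streams_queue',
    destination_queue_name => 'strmadmin.streams_queue@vion',  -- DB link (example)
    include_dml            => true,
    include_ddl            => true);
End;
/
```

A matching call to dbms_streams_adm.add_table_rules with streams_type => 'apply' would then be run on the Target Database.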

II. Differences between Data Guard and Stream

Data Guard has two types: physical standby and logical standby. Standby has three functional modules: log transport, log receiving, and log apply. The two standby types are the same in the first two modules: both send logs through the LGWR or ARCn process and receive logs through the RFS process. The difference lies in the third module:

Physical Standby uses Media Recovery technology to recover data blocks directly. Therefore, Physical Standby can fully synchronize two databases with no restrictions on data types.

Logical Standby actually uses Logminer technology to restore the records in the logs to SQL statements and executes these statements through the Apply Engine to synchronize data. Therefore, Logical Standby cannot guarantee full data consistency; for example, some data types are not supported. When choosing Logical Standby, you must take this into account: the data types unsupported by Logical Standby can be viewed in DBA_LOGSTDBY_UNSUPPORTED.
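For example, the objects and columns that a logical standby cannot maintain can be listed directly from that view:

```sql
-- List objects (and the offending columns) that Logical Standby cannot maintain.
SELECT owner, table_name, column_name, data_type
  FROM dba_logstdby_unsupported
 ORDER BY owner, table_name;
```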


Stream uses the same approach as Logical Standby's third module: on the Source Database, the Capture process uses Logminer technology to restore the log content to LCRs and sends them to the Target Database; on the Target Database, the Apply Engine executes these LCRs. Therefore, Stream has similar restrictions. You can view the data types not supported by Stream replication in the views ALL/DBA_STREAMS_NEWLY_SUPPORTED and ALL/DBA_STREAMS_UNSUPPORTED.
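A quick way to audit this before setting up replication (the view is available from 10gR2 onward) is:

```sql
-- List objects that Stream capture cannot replicate, with the reason.
SELECT owner, table_name, reason
  FROM dba_streams_unsupported
 ORDER BY owner, table_name;
```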





The following table lists their differences:

Stream                                                            | Data Guard
------------------------------------------------------------------|--------------------------------------------------------------------
The main purpose is to share data.                                | The main objective is disaster recovery and high availability.
Supports multi-directional synchronization.                       | Only one-way, from Primary --> Standby.
Supports database-, schema-, and table-level replication.         | Only database-level.
Supports heterogeneous platforms.                                 | Requires homogeneous platforms.
Each database involved in replication can be read and written.    | Only the Primary can be read and written; the Standby is read-only.
Supports synchronization between Oracle and non-Oracle databases. | Only between Oracle databases.

III. Preparations

1) Determine the replication set: database-level or table-level.

2) Decide the replication sites.

3) Decide whether LCRs are obtained by local capture or downstream capture.

Local capture: performed on the source database, obtaining LCRs from the online logs and archived logs.

Downstream capture: performed on the target database, obtaining LCRs from the archived logs.

Local capture can protect more data, but occupies resources on the source database.

4) Decide the replication topology: at this point, decide the purpose of the replica database: whether it is used only for disaster protection and kept idle at ordinary times, or is also made available to users.

IV. Simple setup process

orcl is the source database, and vion is the target database.

1. Initialize the settings (execute on both the source and target databases), and restart the databases after execution:

Alter system set aq_tm_processes = 2 scope = both;
Alter system set global_names = true scope = both;
Alter system set job_queue_processes = 10 scope = both;
Alter system set parallel_max_servers = 20 scope = both;
Alter system set undo_retention = 3600 scope = both;
Alter system set nls_date_format = 'YYYY-MM-DD HH24:MI:SS' scope = spfile;
Alter system set streams_pool_size = 250M scope = spfile;
Alter system set utl_file_dir = '*' scope = spfile;
Alter system set open_links = 4 scope = spfile;
Alter system set "_job_queue_interval" = 1 scope = spfile;
Alter system set logmnr_max_persistent_sessions = 1 scope = spfile; -- 10g default is 1: the number of persistent log mining sessions

Note that streams_pool_size must be large enough. If SGA_TARGET is enabled, Oracle may allocate only a small amount of memory to the streams pool, causing a large number of messages to spill to disk; in that case, queries of DBA_APPLY, DBA_CAPTURE, and DBA_PROPAGATION all show status ENABLED, but no data is synchronized.

In addition, _job_queue_interval is set to shorten the queue check interval and prevent apply latency problems.
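One way to check for the spill symptom described above is to compare buffered versus spilled messages per queue:

```sql
-- If SPILL_MSGS grows large relative to NUM_MSGS, streams_pool_size is too small.
SELECT queue_schema, queue_name, num_msgs, spill_msgs
  FROM v$buffered_queues;
```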

2. Confirm that the database is in archivelog mode; otherwise, change it to archivelog mode:

Startup mount

Alter database archivelog;

Alter database open;

Archive log list; --- verify the archive mode

3. Create the same administrator account on both the source and target databases.

# Create a tablespace

Create tablespace tbs_stream datafile '/home/oracle/data/tbs_stream01.dbf' size 100M autoextend on maxsize unlimited segment space management auto;

# Move the Logminer data dictionary from the SYSTEM tablespace to the new tablespace, to prevent the SYSTEM tablespace from filling up

Execute dbms_logmnr_d.set_tablespace('tbs_stream');

# Create the user

Create user strmadmin identified by strmadmin default tablespace tbs_stream temporary tablespace temp;

# Authorization

Grant connect, resource, dba, aq_administrator_role to strmadmin;


Begin
  dbms_streams_auth.grant_admin_privilege(
    grantee          => 'strmadmin',
    grant_privileges => TRUE);
End;
/
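To confirm that the grant took effect, the registered Streams administrators can be checked with a quick sanity query:

```sql
-- Shows users registered as Streams administrators.
SELECT username, local_privileges, access_from_remote
  FROM dba_streams_administrator;
```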



4. Configure the network

Configure the network service name
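A minimal tnsnames.ora sketch for the two databases used here (the hostnames and ports are assumed placeholders, not values from the original text):

```
# tnsnames.ora on both servers -- hosts and ports are illustrative
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = orcl-host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orcl))
  )

VION =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vion-host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = vion))
  )
```

Once the service names resolve, the strmadmin user would typically create database links between the two databases (for example, create database link vion connect to strmadmin identified by strmadmin using 'VION';), which the propagation process uses to reach the remote queue.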
