This article introduces the configuration of peer-to-peer Q replication among three servers. Peer-to-peer Q replication is one form of DB2 Q replication: data changes on any server are transmitted to the other participating servers through WebSphere MQ and applied there.
This keeps data synchronized across multiple database servers. This article uses an example to describe how to set up the basic configuration environment for peer-to-peer Q replication among three servers.
Introduction
Peer-to-Peer Q replication is mainly used to synchronize data between two or more databases. It has the following main features:
You can copy tables on two or more database servers.
Changes on any database server in the peering configuration can be copied to all other related database servers.
All servers are in a peering relationship and do not have the concept of a "master" server. If a conflict occurs, the data updated with the latest timestamp is valid data.
More and more users are adopting DB2 Q replication as a high-availability and high-scalability solution for DB2, using it to build "active-active" database systems.
Figure 1. Peer-to-Peer Q replication architecture between the three servers
This article uses an example to illustrate how to build a peer-to-peer DB2 Q replication environment among three databases.
This document consists of three parts:
The first part is the basic configuration of the operating system, database, and MQ;
The second part is to establish a peer-to-peer Q replication environment through the Replication Center;
The third part is the configuration verification and replication test for peer Q replication.
Basic configurations of the operating system, database, and MQ
Preparations before configuration
Before setting up the Q replication environment, make the following preparations:
1. Install the DB2 database software.
2. Create the DB2 instance users, the mqm user, and the corresponding groups in the operating system, as shown in Table 1.
Table 1. User and group settings

Description      Peer A         Peer B         Peer C
Instance ID      db2inst1       db2inst2       db2inst3
Instance Group   db2grp1,mqm    db2grp2,mqm    db2grp3,mqm
Fence ID         db2fenc1       db2fenc2       db2fenc3
Fence Group      db2fgrp1,mqm   db2fgrp2,mqm   db2fgrp3,mqm
MQ ID            mqm            mqm            mqm
MQ Group         mqm            mqm            mqm
REP ID           qrepladm       qrepladm       qrepladm
3. Install the MQ software.
4. Create a DB2 instance and database.
Note: The software versions used in this article are DB2 v9.1.0.6 and WebSphere MQ 6.0.2.3. The test environment creates three DB2 instances and databases on the same Linux server to simulate replication among three servers.
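For reference, creating an instance and its database can be sketched as follows. This is a sketch only: the DB2 install path, codepage defaults, and the use of db2fenc1 as the fenced user follow Table 1, but the exact paths are illustrative assumptions, not taken from the article.

```shell
# As root: create the db2inst1 instance with db2fenc1 as the fenced user.
# db2icrt lives under the DB2 install path; /opt/ibm/db2/V9.1 is an assumed location.
/opt/ibm/db2/V9.1/instance/db2icrt -u db2fenc1 db2inst1

# As db2inst1: enable TCP/IP, set the port from Table 2, and create the TP1 database.
db2set DB2COMM=TCPIP
db2 update dbm cfg using SVCENAME 50000
db2start
db2 create database TP1
```

The same steps are repeated for db2inst2/TP2 (port 50001) and db2inst3/TP3 (port 50002).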
Database settings
After the above preparations are complete, the instance and database information is as shown in Table 2.
Table 2. Database information

Description       Peer A       Peer B       Peer C
Instance          db2inst1     db2inst2     db2inst3
Port              50000        50001        50002
IP                127.0.0.1    127.0.0.1    127.0.0.1
Local Database    TP1          TP2          TP3
Remote Databases  TP2, TP3     TP1, TP3     TP1, TP2
Note: before using the replication function, the log mode of every database must be set to archive logging.
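As a sketch of how archive logging might be enabled (the LOGARCHMETH1 parameter is standard in DB2 9, but the archive path below is an illustrative assumption, not from the article):

```shell
# Switch TP1 from circular to archive logging; /db2arch/tp1 is an assumed path.
db2 update db cfg for TP1 using LOGARCHMETH1 "DISK:/db2arch/tp1"

# After switching the log mode, DB2 places the database in backup-pending state;
# a backup is required before new connections are allowed. Discarding the backup
# image to /dev/null is acceptable in a test environment.
db2 backup database TP1 to /dev/null
```

The same change is applied to TP2 and TP3.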
After creating a DB2 instance and database, you must catalog the remote database locally before accessing the database.
For example, in db2inst1, enter the command shown in Listing 1 to catalog the remote TP2 and TP3 databases:
Listing 1. Cataloging the remote databases

db2 catalog tcpip node db2inst2 remote 127.0.0.1 server 50001
db2 catalog database tp2 at node db2inst2
db2 catalog tcpip node db2inst3 remote 127.0.0.1 server 50002
db2 catalog database tp3 at node db2inst3
db2 terminate
Use the method shown in Listing 2 to test whether the db2inst1 instance can normally connect to the TP2 and TP3 databases on the db2inst2 and db2inst3 instances.
Listing 2. Connecting to a remote database

db2 connect to tp2 user db2inst2 using ***
db2 connect to tp3 user db2inst3 using ***
db2 terminate
In the same way, catalog the corresponding node and database information on db2inst2 and db2inst3, so that each instance can access the databases on the other two instances.
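For example, on db2inst2 the mirror-image commands would look like the following, a sketch derived directly from the ports and database names in Table 2:

```shell
# On db2inst2: catalog the TP1 and TP3 databases hosted by the other two instances.
db2 catalog tcpip node db2inst1 remote 127.0.0.1 server 50000
db2 catalog database tp1 at node db2inst1
db2 catalog tcpip node db2inst3 remote 127.0.0.1 server 50002
db2 catalog database tp3 at node db2inst3
db2 terminate
```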
To simplify the replication setup, the same schema and replicated table are usually created on each database.
Use the method shown in Listing 3 to grant authority to the qrepladm user on TP1, TP2, and TP3 and to create the QREPLADM.S_TAB table.
Listing 3. Authorization and table creation

db2 grant DBADM on database to user qrepladm
db2 "create table QREPLADM.S_TAB(id integer not null PRIMARY KEY, content varchar(20))"
MQ object settings
The attachment to this article provides scripts for creating the related MQ objects; users can modify them or use them directly. The QM1.mqs, QM2.mqs, and QM3.mqs files define the queues, channels, and other messaging objects in the queue managers QM1, QM2, and QM3, respectively.
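As an illustration of what such a script typically contains, a fragment of QM1.mqs might look like the following. The object names and the port in CONNAME are conventional Q replication naming assumptions, not necessarily those used in the attached scripts:

```
* Administration and restart queues used by the Q Capture / Q Apply programs
DEFINE QLOCAL('ASN.ADMINQ') PUT(ENABLED) GET(ENABLED)
DEFINE QLOCAL('ASN.RESTARTQ') PUT(ENABLED) GET(ENABLED)
* Local receive queue for data messages arriving from QM2
DEFINE QLOCAL('ASN.QM2_TO_QM1.DATAQ') PUT(ENABLED) GET(ENABLED)
* Transmission queue and sender channel toward QM2
DEFINE QLOCAL('QM2') USAGE(XMITQ)
DEFINE CHANNEL('QM1_TO_QM2') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('127.0.0.1(1415)') XMITQ('QM2')
* Receiver channel for messages coming from QM2
DEFINE CHANNEL('QM2_TO_QM1') CHLTYPE(RCVR) TRPTYPE(TCP)
```

Corresponding objects for the QM3 direction, and mirror-image definitions in QM2.mqs and QM3.mqs, complete the mesh among the three queue managers.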
On Peer A, define the queue manager named QM1. If QM1 already exists, run the following commands to delete the old QM1:
Listing 4. Stopping and deleting the queue manager

endmqm QM1
dltmqm QM1
Create QM1 using the method shown in Listing 5.