OGG cascade replication


Cascade replication: node1 -> node2 -> node3
[oracle@dominic dump_dir]$ cat /etc/hosts
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.199   dominic.mysql1    --> node1
192.168.0.195   dominic.mysql2    --> node2
192.168.0.171   dominic.node1     --> node3

Overview: node1 -> node2 was already in sync before this exercise; all that needs to be added is node2 -> node3 replication on top of it. Besides the replicat that receives data from node1, node2 now also acts as the intermediate database, so it needs an extract process plus a data pump for onward extraction. Note that the two hops use separate trail directories.
node1 -> node2 (node1 extract trail: /dba/ogg/dirdat/st; node2 replicat trail: /dba/ogg/dirdat/tt)
node2 -> node3 (node2 extract trail: /dba/ogg/dirdat/at; node3 replicat trail: /dba/oggs/dirdat/bt)

For the node2 -> node3 hop, the initial data on node3 can be taken from node2. Here I initialize it through expdp (small data set, for testing); also pay attention to the manager port. Crucially, the extract process group on the intermediate database node2 must include IGNOREAPPLOPS, GETREPLICATES -- the key parameters for cascading: they make the extract capture the operations applied by node2's inbound replicat instead of local application DML.
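The cascade-specific capture behavior can be summarized in a short parameter fragment (a sketch of standard OGG extract parameters, not taken verbatim from this setup):

```
-- Extract defaults (first hop, e.g. node1): capture local application DML,
-- ignore anything applied by a local replicat.
GETAPPLOPS
IGNOREREPLICATES

-- Intermediate node in a cascade (node2's extra_2): capture ONLY the
-- operations applied by the inbound replicat, so they flow on to node3.
IGNOREAPPLOPS
GETREPLICATES
```

Without GETREPLICATES on node2, the changes replicated in from node1 would never be written to node2's outbound trail, and node3 would only ever see DML executed directly on node2.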
Configuration:
1: Configure the CHECKPOINTTABLE on node3 (node2 already has one) by adding a ./GLOBALS parameter file (if you are redoing this, the table can be dropped under the ogg user):
GGSCI (target)> edit params ./GLOBALS
checkpointtable ogg.checktable
GGSCI (target)> dblogin userid ogg, password ogg
GGSCI (target)> add checkpointtable ogg.checktable
2: On node2, configure the parameter files for the extract group extra_2 and the pump group pump_2, then start both processes:
-- Extract
GGSCI (dominic.mysql2) 27> view params extra_2
extract extra_2
dynamicresolution
userid ogg, password ogg
--rmthost dominic.mysql2, mgrport 7809, compress
--rmthost 192.168.0.171, mgrport 7809, compress
reportcount every 1 minutes, rate
exttrail /dba/ogg/dirdat/at
ddl include all
ddloptions addtrandata
ignoreapplops, getreplicates -- key parameters
table scott.*;

-- Pump
GGSCI (dominic.mysql2) 28> view params pump_2
extract pump_2
rmthost 192.168.0.171, mgrport 7809, compress
passthru
rmttrail /dba/oggs/dirdat/bt
dynamicresolution
table scott.*;
Add the extract process:
GGSCI (source)> add extract extra_2, tranlog, begin now   (use alter the second time around...)
Add the local trail file; the extract group writes it and the pump process reads it:
GGSCI (source)> add exttrail /dba/ogg/dirdat/at, extract extra_2
Add the pump process:
GGSCI (source)> view params pump_2
GGSCI (source)> add extract pump_2, exttrailsource /dba/ogg/dirdat/at
GGSCI (source)> add rmttrail /dba/oggs/dirdat/bt, extract pump_2   -- delivered to the target directory

-- After startup:
GGSCI (dominic.mysql2) 29> info all

Program     Status      Group       Lag at Chkpt    Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     EXTRA_2     00:00:00
EXTRACT     RUNNING     PUMP_2      00:00:07
REPLICAT    RUNNING     REP_1       00:00:00        00:00:09
3: Configure the node3 mgr (manager) process parameters and start it:
-- Mgr
GGSCI (dominic.node1 as ogg@node1) 31> view params mgr

port 7809
dynamicportlist 7810-7850
autostart er *
autorestart extract *, waitminutes 2, retries 5
lagreporthours 1
laginfominutes 3
lagcriticalminutes 5
purgeoldextracts /dba/oggs/dirdat/bt*, usecheckpoints, minkeepdays 3

4: Back up node2 with expdp, passing the database's current_scn via the expdp parameter flashback_scn=xxxxx.
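A sketch of what this export step might look like; the directory object name dump_dir is an assumption taken from the shell prompt shown earlier, and the credentials are placeholders not confirmed by the article:

```
-- On node2: grab a consistent SCN first
SQL> select current_scn from v$database;

-- Export the schema as of that SCN (substitute the value above for <scn>)
$ expdp ogg/ogg directory=dump_dir dumpfile=scott.dmp logfile=scott_exp.log \
      schemas=scott flashback_scn=<scn>
```

Exporting as of a known SCN matters because the replicat on node3 must start applying changes from exactly that point; otherwise the initial load and the trail data will overlap or leave a gap.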
5: Restore the dump on node3 and disable constraints for the data copy.
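The restore side might look like the following (again a sketch; the impdp options and the constraint-disabling query are illustrative, not from the original notes):

```
-- On node3: import the dump taken from node2
$ impdp ogg/ogg directory=dump_dir dumpfile=scott.dmp logfile=scott_imp.log \
      schemas=scott

-- Generate DISABLE statements for SCOTT's foreign keys before replication starts
SQL> select 'alter table '||owner||'.'||table_name||
            ' disable constraint '||constraint_name||';'
     from dba_constraints
     where owner = 'SCOTT' and constraint_type = 'R';
```

Disabling referential constraints avoids spurious errors when the replicat applies child-table rows before their parents.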
6: Configure the replicat process rep_2 on node3 and start it.
-- Replicat
GGSCI (dominic.node1 as ogg@node1) 33> view params rep_2

replicat rep_2
userid ogg, password ogg
assumetargetdefs
reperror default, discard
discardfile /dba/oggs/dirrpt/rep_2.dsc, append, megabytes 50
dynamicresolution
applynoopupdates
ddl include mapped
ddloptions report
ddlerror default ignore retryop -- added because earlier experiments left behind some DDL that can safely be ignored (see the query notes)
map scott.*, target scott.*;
Add the replicat rep_2:
GGSCI (target)> add replicat rep_2, exttrail /dba/oggs/dirdat/bt
GGSCI (target)> start rep_2
GGSCI (target)> info all
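Once rep_2 is running, a few standard GGSCI commands are handy for verifying the second hop (illustrative additions, not part of the original walkthrough):

```
GGSCI (target)> info replicat rep_2, detail    -- checkpoint position and trail being read
GGSCI (target)> stats replicat rep_2, latest   -- per-table insert/update/delete counts
GGSCI (target)> lag replicat rep_2             -- current apply lag
```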
7: Test:
-- Node1:
152 rows selected.
SQL> create table emp as select * from T_ORDERS;
Table created.
SQL> commit;
SQL> select count(*) from tab;

  COUNT(*)
----------
       153

-- Node2:
SQL> select count(*) from tab;

  COUNT(*)
----------
       153

1 row selected.

-- Node3:
SQL> select count(*) from tab;

  COUNT(*)
----------
       153
