Introduction to Oracle GoldenGate: Concepts and Architecture
Oracle GoldenGate (OGG) provides real-time change data capture, transformation, and delivery of transactions across diverse and complex IT environments. Data is processed and exchanged at the transaction level, and heterogeneous platforms are supported, for example DB2 and Microsoft SQL Server.
GoldenGate supports two major types of solutions for different business needs:
● High availability and disaster tolerance solutions
● Real-time data integration solutions
The high availability and disaster tolerance solutions are mainly used to eliminate unplanned and planned downtime. They include the following three sub-solutions:
1. Disaster tolerance and emergency backup
2. Elimination of planned downtime
3. Dual service centers (also called "dual-active")
The real-time data integration solution supplies real-time data to decision-support (DSS) or OLTP databases for data integration. It includes the following two sub-solutions:
1. Real-time data warehouse feeding
2. Flexible-topology real-time reporting for users
A typical GoldenGate configuration consists of the following logical components:
① Manager
As the name suggests, the Manager process is GoldenGate's control process. It is used to manage the Extract, Data Pump, Replicat, and other processes.
The Manager process must be started on both the source and the target before the Extract, Data Pump, and Replicat processes can start.
It must remain running for as long as GoldenGate is operating. Its main responsibilities are:
● Monitoring and starting other GoldenGate processes
● Managing trail files and reporting
On Windows, the Manager process runs as a service; on Unix, it runs as an operating-system process.
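As an illustration only, a minimal Manager parameter file (dirprm/mgr.prm) might look like the sketch below; the port numbers, trail path, and retention settings are placeholder values, not recommendations:

    PORT 7809                                    -- TCP port on which Manager listens
    DYNAMICPORTLIST 7810-7820                    -- ports Manager may assign to dynamic Collectors
    AUTOSTART ER *                               -- start all Extract/Replicat groups with Manager
    AUTORESTART ER *, RETRIES 3, WAITMINUTES 5   -- restart groups that abend
    PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3   -- trail file housekeeping

Manager is then started from GGSCI with START MANAGER and checked with INFO MGR.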
② Extract
The Extract process runs on the source database system; it is GoldenGate's capture mechanism. You can configure the Extract process to do the following:
⒈ Initial data load: for an initial data load, the Extract process extracts data directly from the source objects.
⒉ Change synchronization: keeps the source data synchronized with other data sets. After the initial synchronization is complete, the Extract process captures changes to the source data, such as DML and DDL changes (a configuration sketch follows this list).
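As a rough sketch, a change-capture Extract group could be created and configured as follows; the group name ext1, the schema hr, the credentials, and the trail prefix ./dirdat/lt are placeholders chosen for this example:

    -- in GGSCI on the source system
    ADD EXTRACT ext1, TRANLOG, BEGIN NOW      -- capture changes from the transaction log
    ADD EXTTRAIL ./dirdat/lt, EXTRACT ext1    -- local trail that ext1 writes to

    -- dirprm/ext1.prm
    EXTRACT ext1
    USERID ogg, PASSWORD ogg                  -- placeholder database credentials
    EXTTRAIL ./dirdat/lt
    TABLE hr.*;                               -- tables whose changes are captured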
③ Replicat
The Replicat process runs on the target system. It reads the data extracted by the Extract process (changed transactions or DDL changes) and applies it to the target database.
Like the Extract process, you can configure the Replicat process to complete the following tasks:
⒈ Initial data load: for an initial data load, the Replicat process applies the data to the target objects or routes it to a high-speed bulk-load utility.
⒉ Change synchronization: applies the committed transactions captured by the Extract process to the target database (a minimal example follows this list).
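A minimal change-synchronization Replicat, again using placeholder names (group rep1, trail ./dirdat/rt, schema hr, checkpoint table ogg.ggsckpt), might be set up like this:

    -- in GGSCI on the target system
    ADD REPLICAT rep1, EXTTRAIL ./dirdat/rt, CHECKPOINTTABLE ogg.ggsckpt

    -- dirprm/rep1.prm
    REPLICAT rep1
    USERID ogg, PASSWORD ogg      -- placeholder credentials on the target database
    ASSUMETARGETDEFS              -- source and target table structures assumed identical
    MAP hr.*, TARGET hr.*;        -- apply the captured changes to the same schema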
④ Collector
Collector is a background process running on the target.
It receives the database changes transmitted over the TCP/IP network and writes them to the trail file.
Dynamic Collector: a Collector started automatically by the Manager process is called a dynamic Collector; users cannot interact with a dynamic Collector.
Static Collector: you can also run the Collector manually; such a Collector is called a static Collector.
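For reference, and as an assumption based on typical installations, a static Collector is started by running the Collector executable (named server in the GoldenGate installation directory) by hand; the port below is a placeholder, and the sending process would then address it with RMTHOST ..., PORT ... rather than MGRPORT:

    ./server -p 7819 &    -- listen on port 7819 for incoming trail data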
⑤ Trails
To continuously extract and copy database changes, GoldenGate temporarily stores the captured data changes in a series of files on the disk. These files are called Trail files.
These files can reside on the source system, on the target system, or on an intermediate system, depending on the configuration chosen.
A trail on the source system is called a local trail (or extract trail); a trail on the target system is called a remote trail.
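In GGSCI, trails are associated with the process that writes them; for example (paths, sizes, and group names are placeholders):

    ADD EXTTRAIL ./dirdat/lt, EXTRACT ext1, MEGABYTES 100    -- local trail on the source
    ADD RMTTRAIL ./dirdat/rt, EXTRACT pump1, MEGABYTES 100   -- remote trail on the target, written by the sending process (for example a Data Pump, described below)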
⑥ Data Pumps
Data Pump is an auxiliary Extract mechanism that can be configured on the source.
Data Pump is an optional component. If no Data Pump is configured, the primary Extract process sends the captured data directly to the remote trail on the target.
If a Data Pump is configured, the primary Extract process writes to a local trail, and the Data Pump reads that local trail and sends the data over the network to the remote trail on the target.
The advantages of using Data Pump are:
⒈ If the target system or the network fails, the Extract process on the source will not terminate unexpectedly.
⒉ Data filtering or transformation can be carried out at different stages.
⒊ Multiple source databases can be copied to a central data center.
⒋ Data can be divided (sharded) for delivery to multiple target databases (a minimal pump configuration is sketched after this list).
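Continuing the placeholder names from the earlier sketches (local trail ./dirdat/lt, remote trail ./dirdat/rt, target host tgt01), a minimal Data Pump might be configured like this:

    -- in GGSCI on the source system
    ADD EXTRACT pump1, EXTTRAILSOURCE ./dirdat/lt   -- read the local trail written by ext1
    ADD RMTTRAIL ./dirdat/rt, EXTRACT pump1         -- remote trail on the target system

    -- dirprm/pump1.prm
    EXTRACT pump1
    PASSTHRU                        -- no filtering or transformation in this pump
    RMTHOST tgt01, MGRPORT 7809     -- target host and the port of its Manager
    RMTTRAIL ./dirdat/rt
    TABLE hr.*;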
⑦ Data source
When processing transactional change data, the Extract process can obtain the changes directly from the database transaction logs (Oracle, DB2, SQL Server, MySQL, and so on),
or from a GoldenGate Vendor Access Module (VAM). With a VAM, the database vendor provides the components that the Extract process needs to extract the change data.
⑧ Groups
To distinguish among multiple Extract and Replicat processes on a system, you can define process groups.
For example, to replicate different data sets in parallel, you can create two Replicat groups.
A process group consists of an Extract or Replicat process, its corresponding parameter file, its checkpoint file, and other related files.
If the process in the group is a Replicat process, the group also includes a checkpoint table.
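For instance, two Replicat groups that apply different portions of the data in parallel could be created as follows; the group names, checkpoint table, and trail path are placeholders:

    -- in GGSCI on the target system
    DBLOGIN USERID ogg, PASSWORD ogg    -- placeholder credentials
    ADD CHECKPOINTTABLE ogg.ggsckpt     -- checkpoint table shared by the Replicat groups
    ADD REPLICAT repa, EXTTRAIL ./dirdat/rt, CHECKPOINTTABLE ogg.ggsckpt
    ADD REPLICAT repb, EXTTRAIL ./dirdat/rt, CHECKPOINTTABLE ogg.ggsckpt
    INFO ALL                            -- list all groups and their status

Each group then has its own parameter file (repa.prm, repb.prm) whose MAP statements select the tables that the group applies.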