Oracle GoldenGate (OGG) supports real-time change data capture, transformation, and delivery of transactions across diverse and complex IT architectures. Data is processed and exchanged at the transaction level, and heterogeneous platforms, for example DB2 and SQL Server, are supported.
GoldenGate supports two major types of solutions for different business needs:
● High availability and disaster tolerance solutions
● Real-time data integration solutions
The high availability and disaster tolerance solutions are mainly used to eliminate unplanned and planned downtime. They include the following three sub-solutions:
1. Disaster tolerance and emergency backup
2. Elimination of planned downtime
3. Dual business center (also called dual-active)
The real-time data integration solution provides real-time data to DSS or OLTP databases for data integration. It includes the following two sub-solutions:
1. Real-time data warehouse feeding
2. Real-time reporting
GoldenGate's flexible topology makes it possible to build flexible solutions for users. The logical structure of a typical GoldenGate configuration consists of the following components:
① Manager
As the name suggests, the Manager process is the control process of GoldenGate. It is used to manage the Extract, Data Pump, Replicat, and other processes.
The Manager process must be started on both the source and the target before the Extract, Data Pump, and Replicat processes can start.
It must remain running during the entire GoldenGate operation.
The Manager monitors and starts the other GoldenGate processes.
It also manages trail files and reporting.
On Windows, the Manager process is started as a service; on UNIX, it runs as an operating system process.
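A minimal sketch of a Manager parameter file (mgr.prm) and start sequence is shown below; the port numbers and trail-retention settings are illustrative assumptions, not recommendations:

    -- mgr.prm (illustrative values only)
    PORT 7809                                                    -- TCP port Manager listens on
    DYNAMICPORTLIST 7810-7820                                    -- ports Manager may hand to dynamic Collectors
    AUTOSTART EXTRACT *                                          -- start Extract groups when Manager starts
    AUTORESTART EXTRACT *, RETRIES 3, WAITMINUTES 5              -- restart failed Extract groups
    PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3   -- trail file housekeeping

    GGSCI> START MANAGER
    GGSCI> INFO MANAGER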
② Extract
The Extract process runs on the source database server. It is GoldenGate's capture mechanism. You can configure the Extract process to do the following (a configuration sketch follows the list):
⒈ Initial data load: for initial data loading, the Extract process extracts data directly from the source objects.
⒉ Change capture: to keep the source data synchronized with other datasets, after the initial data synchronization is complete, the Extract process captures changes to the source data, such as DML changes and DDL changes.
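A minimal change-capture sketch is shown below; the group name ext1, the trail prefix ./dirdat/lt, the credentials, and the table names are illustrative assumptions:

    GGSCI> ADD EXTRACT ext1, TRANLOG, BEGIN NOW      -- capture from the transaction log, starting now
    GGSCI> ADD EXTTRAIL ./dirdat/lt, EXTRACT ext1    -- local trail that ext1 writes to

    -- ext1.prm
    EXTRACT ext1
    USERID ggadmin, PASSWORD ggadmin                 -- illustrative credentials
    EXTTRAIL ./dirdat/lt
    TABLE hr.employees;                              -- tables whose changes are captured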
③ Replicat
The Replicat process runs on the target system. It reads the data extracted by the Extract process (changed transactions or DDL changes) from the trail and applies it to the target database.
Like the Extract process, the Replicat process can be configured to complete the following tasks (a configuration sketch follows the list):
⒈ Initial data load: for initial data loading, the Replicat process applies the data to the target objects or routes it to a high-speed bulk-load utility.
⒉ Data synchronization: applies the committed transactions captured by the Extract process to the target database.
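A minimal delivery-side sketch is shown below; the remote trail ./dirdat/rt, the checkpoint table ggadmin.ckpt, and the schema names are illustrative assumptions:

    GGSCI> DBLOGIN USERID ggadmin, PASSWORD ggadmin
    GGSCI> ADD CHECKPOINTTABLE ggadmin.ckpt          -- checkpoint table on the target database
    GGSCI> ADD REPLICAT rep1, EXTTRAIL ./dirdat/rt, CHECKPOINTTABLE ggadmin.ckpt

    -- rep1.prm
    REPLICAT rep1
    USERID ggadmin, PASSWORD ggadmin
    ASSUMETARGETDEFS                                 -- source and target table structures are identical
    MAP hr.*, TARGET hr.*;                           -- map source tables to target tables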
④ Collector
Collector is a background process running on the target system.
It receives the database changes transmitted over the TCP/IP network and writes them to the trail files.
Dynamic Collector: a Collector started automatically by the Manager process is called a dynamic Collector; users cannot interact with it.
Static Collector: you can also configure a Collector to be run manually; such a Collector is called a static Collector.
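A dynamic Collector needs no configuration of its own. As a rough sketch of the static alternative (the port 7840 and the host name tgthost are illustrative assumptions), the Collector is started manually with the server program on the target, and the sending process addresses that fixed port instead of the Manager port:

    $ server -p 7840 &                       -- start a static Collector listening on port 7840 (on the target)

    -- in the source-side Extract or Data Pump parameter file:
    RMTHOST tgthost, PORT 7840               -- PORT (instead of MGRPORT) points at the static Collector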
⑤ Trails
To continuously extract and replicate database changes, GoldenGate temporarily stores the captured data changes in a series of files on disk. These files are called trail files.
These files can reside on the source database server, on the target, or on an intermediate system, depending on the configuration chosen.
A trail on the source database server is called a local trail or extract trail; a trail on the target is called a remote trail.
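Trails are declared in GGSCI and associated with the process that writes to them; a short sketch, reusing the illustrative names from above (pmp1 is the Data Pump described in the next section):

    GGSCI> ADD EXTTRAIL ./dirdat/lt, EXTRACT ext1    -- local trail on the source, written by ext1
    GGSCI> ADD RMTTRAIL ./dirdat/rt, EXTRACT pmp1    -- remote trail on the target, written by the pump pmp1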
⑥ Data Pumps
Data Pump is an auxiliary Extract mechanism configured on the source.
Data Pump is an optional component. If no Data Pump is configured, the primary Extract process sends the data to the remote trail file on the target.
If a Data Pump is configured, the primary Extract process writes to a local trail file, and the Data Pump reads that trail and sends the data over the network to the remote trail file on the target (a configuration sketch follows the list of advantages below).
The advantages of using a Data Pump are:
⒈ If the target or the network fails, the Extract process on the source will not terminate unexpectedly.
⒉ Data filtering or conversion can be implemented at different stages.
⒊ Multiple source databases can be consolidated into a central data center.
⒋ Data can be distributed to multiple target databases.
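A minimal Data Pump sketch is shown below; it reads the local trail written by ext1 and sends the data to the remote trail on the target (the group name pmp1, the host name tgthost, and the paths are illustrative assumptions):

    GGSCI> ADD EXTRACT pmp1, EXTTRAILSOURCE ./dirdat/lt    -- the pump reads the local trail
    GGSCI> ADD RMTTRAIL ./dirdat/rt, EXTRACT pmp1          -- remote trail the pump writes to

    -- pmp1.prm
    EXTRACT pmp1
    RMTHOST tgthost, MGRPORT 7809            -- target host and its Manager port
    RMTTRAIL ./dirdat/rt
    PASSTHRU                                 -- no filtering or transformation in this pump
    TABLE hr.*;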
⑦ Data Source
When processing transactional change data, the Extract process can obtain the changes directly from the database transaction logs (Oracle, DB2, SQL Server, MySQL, and so on), or through the GoldenGate VAM (Vendor Access Module). With a VAM, the database vendor provides the components that the Extract process needs to extract the data changes.
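As a hedged sketch of how the capture source shows up when a group is added (the group names are illustrative, and whether a VAM is used depends on the database platform):

    GGSCI> ADD EXTRACT ext1, TRANLOG, BEGIN NOW    -- capture directly from the database transaction log
    GGSCI> ADD EXTRACT extv, VAM                   -- capture through a vendor-supplied VAM module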
⑧ Groups
To distinguish among multiple Extract and Replicat processes on a system, you can define process groups.
For example, to replicate different datasets in parallel, you can create two Replicat groups.
A process group consists of an Extract or Replicat process, a corresponding parameter file, a checkpoint file, and other related files.
If the process in the group is a Replicat process, the group also contains a checkpoint table.
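For example, two Replicat groups delivering different schemas in parallel might be added as follows (the group names, schema names, and checkpoint table are illustrative assumptions); each group then has its own parameter file, such as rhr.prm and rfin.prm, with its own MAP statements:

    GGSCI> ADD REPLICAT rhr, EXTTRAIL ./dirdat/rt, CHECKPOINTTABLE ggadmin.ckpt     -- delivers the HR schema
    GGSCI> ADD REPLICAT rfin, EXTTRAIL ./dirdat/rt, CHECKPOINTTABLE ggadmin.ckpt    -- delivers the FINANCE schema
    GGSCI> INFO ALL                                                                 -- lists all groups and their status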