Preparing the Database for Oracle GoldenGate


Learn how to prepare a database for Oracle GoldenGate, including how to configure connectivity and logging, how to enable Oracle GoldenGate in a database, how to set up a flashback query, and how to manage server resources.

    • Configuring connections for integrated processes
    • Configuring logging properties
    • Enabling Oracle GoldenGate in the database
    • Setting up a flashback query
    • Managing server resources
2.1 Configuring Connections for Integrated Processes

If you use integrated capture and integrated Replicat, each requires a dedicated server connection in the tnsnames.ora file. When you configure these processes, you direct them to use these connections with the USERID or USERIDALIAS parameter in the Extract and Replicat parameter files.

The following is an example of the dedicated connection that is required for integrated capture (Extract) and integrated Replicat.

TEST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <host>)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = test)
    )
  )

The following are the security options for specifying the connection information in the Extract or Replicat parameter file.

Password encryption method:

USERID intext@test, PASSWORD mypassword

Credential store method:

USERIDALIAS ext

In the case of USERIDALIAS, the alias ext is stored in the Oracle GoldenGate credential store with the actual connection string, as shown in the following example:

GGSCI> INFO CREDENTIALSTORE DOMAIN support

Domain: support
  Alias: ext
  Userid: intext@test
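For reference, a credential store alias like ext could be created with GGSCI commands similar to the following. This is only a sketch: the user, password, alias, and domain names are carried over from the examples above rather than taken from an actual configuration.

GGSCI> ADD CREDENTIALSTORE
GGSCI> ALTER CREDENTIALSTORE ADD USER intext@test PASSWORD mypassword ALIAS ext DOMAIN support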

For more information about specifying database connection information in a parameter file, see Administering Oracle GoldenGate.

2.2 Configuring Logging Properties

Oracle GoldenGate relies on the redo logs to capture the data that it needs to replicate source transactions. The Oracle redo logs on the source system must be configured properly before you start Oracle GoldenGate processing.

The required log records increase the amount of redo that is generated. You can wait until you are ready to start Oracle GoldenGate processing before enabling this logging.

This section describes the following logging levels that apply to Oracle GoldenGate. Which levels you use depends on the Oracle GoldenGate features that you are using.

    • Enable minimal database-level supplemental logging
    • Enable schema-level supplemental logging
    • Enable table-level supplemental logging

The following table shows the Oracle GoldenGate use cases for the different logging properties.

Logging option: Forced logging mode
GGSCI command: None; enabled through the database.
What it does: Forces the logging of all transactions and loads.
Use case: Strongly recommended for all Oracle GoldenGate use cases.

Logging option: Minimal database-level supplemental logging
GGSCI command: None; enabled through the database.
What it does: Enables minimal supplemental logging, which adds row-chaining information to the redo log.
Use case: Required for all Oracle GoldenGate use cases.

Logging option: Schema-level supplemental logging, default setting (see Enabling schema-level supplemental logging)
GGSCI command: ADD SCHEMATRANDATA
What it does: Enables unconditional supplemental logging of the primary key and conditional supplemental logging of the unique keys and foreign keys of all tables in a schema. All of these keys together are known as the scheduling columns.
Use case: Enables logging for all current and future tables in the schema. If the primary key, unique key, and foreign key columns are not identical at source and target, use ALLCOLS. Required when using DDL support.

Logging option: Schema-level supplemental logging with unconditional logging of all supported columns (see Enabling schema-level supplemental logging for non-supported column types)
GGSCI command: ADD SCHEMATRANDATA with the ALLCOLS option
What it does: Enables unconditional supplemental logging of all of the columns of all tables in a schema.
Use case: Used for bidirectional and active-active configurations in which all column values, not just the changed columns, are checked when attempting an update or delete. This requires more resources but allows the highest level of real-time data validation, and therefore of conflict detection. It can also be used when the primary key, unique key, and foreign key columns are not the same at source and target, or are constantly changing between them.

Logging option: Schema-level supplemental logging, minimal setting
GGSCI command: ADD SCHEMATRANDATA with the NOSCHEDULINGCOLS option
What it does: Enables unconditional supplemental logging of the primary key and all valid unique indexes of all tables in a schema.
Use case: For nonintegrated Replicat only. This is the minimum required schema-level logging.

Logging option: Table-level supplemental logging with built-in support for integrated Replicat (see Enable table-level supplemental logging)
GGSCI command: ADD TRANDATA
What it does: Enables unconditional supplemental logging of the primary key and conditional supplemental logging of the unique keys and foreign keys of a table. All of these keys together are known as the scheduling columns.
Use case: Required for all Oracle GoldenGate use cases unless schema-level supplemental logging is used. If the primary key, unique key, and foreign key columns are not identical at source and target, use ALLCOLS.

Logging option: Table-level supplemental logging with unconditional logging of all supported columns (see Enable table-level supplemental logging for non-supported column types)
GGSCI command: ADD TRANDATA with the ALLCOLS option
What it does: Enables unconditional supplemental logging of all of the columns of a table.
Use case: Used for bidirectional and active-active configurations in which all column values, not just the changed columns, are checked when attempting an update or delete. This requires more resources but allows the highest level of real-time data validation, and therefore of conflict detection. It can also be used when the primary key, unique key, and foreign key columns are not the same at source and target, or are constantly changing between them.

Logging option: Table-level supplemental logging, minimal setting
GGSCI command: ADD TRANDATA with the NOSCHEDULINGCOLS option
What it does: Enables unconditional supplemental logging of the primary key and all valid unique indexes of a table.
Use case: For nonintegrated Replicat only. This is the minimum required table-level logging.
2.2.1 Enable minimal database-level supplemental logging

Oracle strongly recommends putting the Oracle source database into forced logging mode. Forced logging mode forces the logging of all transactions and loads, overriding any user or storage settings to the contrary. This ensures that no source data in the Extract configuration is missed.

In addition, the Oracle source database requires minimal supplemental logging (a database-level option) when using Oracle GoldenGate. This adds row-chaining information, if any exists, to the redo log for update operations.

It is strongly recommended that you do not use database-level primary key (PK) and unique index (UI) logging, because it creates additional overhead on tables that are outside of replication. Unless those logging options are required for business purposes, you only need to enable minimal supplemental logging at the database level together with the forced logging that Oracle GoldenGate requires.

Perform the following steps to verify and enable (if necessary) minimal supplemental logging and forced logging.

  1. Log in to SQL*Plus as a user with the ALTER SYSTEM privilege.
  2. Issue the following command to determine whether the database is in supplemental logging mode and in forced logging mode. If the result for both properties is YES, the database meets the Oracle GoldenGate requirements.
    SELECT supplemental_log_data_min, force_logging FROM v$database;
  3. If the result is NO for one or both properties, continue with these steps to enable them as needed:
    SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    SQL> ALTER DATABASE FORCE LOGGING;
  4. Issue the following command to verify that these properties are now enabled.
    SELECT supplemental_log_data_min, force_logging FROM v$database;
  5. Switch the log files.
    SQL> ALTER SYSTEM SWITCH LOGFILE;
2.2.2 Enabling schema-level supplemental logging

Oracle GoldenGate supports schema-level supplemental logging. The Oracle source database requires schema-level logging when using the Oracle GoldenGate DDL replication feature. In all other use cases it is optional, but then you must use table-level logging instead (see Enabling table-level supplemental logging).

By default, schema-level logging automatically enables unconditional supplemental logging of the primary key and conditional supplemental logging of the unique keys and foreign keys of all tables in a schema. Options are available to alter the logging as needed.

Oracle strongly recommends using schema-level logging rather than table-level logging, because it ensures that any new tables added to the schema are captured if they satisfy wildcard specifications.

Perform the following steps on the source system to enable schema-level supplemental logging.

  1. Apply Oracle Patch 13794550 to the source Oracle database if the database version is earlier than 11.2.0.2.
  2. Run GGSCI on the source system.
  3. Issue the DBLOGIN command with the alias of a user in the credential store who has privilege to enable schema-level supplemental logging.
    DBLOGIN USERIDALIAS alias

For more information about USERIDALIAS and other options, see USERIDALIAS in the Oracle GoldenGate reference documentation.

  4. Issue the ADD SCHEMATRANDATA command for each schema for which you want to capture data changes with Oracle GoldenGate.
     ADD SCHEMATRANDATA schema [ALLCOLS | NOSCHEDULINGCOLS]

    Where:
    1) Without options, ADD SCHEMATRANDATA schema enables unconditional supplemental logging on the source system of the primary key and conditional supplemental logging of all unique keys and foreign keys of all current and future tables in the specified schema. Unconditional logging forces the primary key values to be written to the log whether or not the key changed in the current operation. Conditional logging logs all of the column values of a foreign key or unique key if at least one of them changed in the current operation. The default is optional to support nonintegrated Replicat, but it is required to support integrated Replicat, because the primary key, unique keys, and foreign keys must all be available to the inbound server to compute dependencies. For more information about integrated Replicat, see Deciding Which Apply Method to Use.
    2) ALLCOLS can be used to enable unconditional supplemental logging of all of the columns of a table, and applies to all current and future tables in the specified schema. Use it to support integrated Replicat when the source and target tables have different scheduling columns. (Scheduling columns are the primary key, unique keys, and foreign keys.)
    3) NOSCHEDULINGCOLS logs only the values of the primary key and all valid unique indexes for existing tables in the schema and for tables added later. This is the minimal required level of schema-level logging and is valid only for Replicat in nonintegrated mode.
    In the following example, the command enables default supplemental logging for the finance schema.

     ADD SCHEMATRANDATA finance

In the following example, the command enables supplemental logging only for the primary key and the valid unique indexes of the hr schema.
   

ADD SCHEMATRANDATA hr NOSCHEDULINGCOLS

2.2.3 Enable table-level supplemental logging

Enable table-level supplemental logging on the source system in the following situations:

  • To enable the required level of logging when you do not use schema-level logging (see Enabling schema-level supplemental logging). Either schema-level or table-level logging must be used. By default, table-level logging automatically enables unconditional supplemental logging of the primary key and conditional supplemental logging of the unique keys and foreign keys of the table. Options are available to alter the logging as needed.
  • To prevent the logging of the primary key for any given table.
  • To log non-key column values at the table level to support specific Oracle GoldenGate features, such as filtering and conflict detection and resolution logic.
    Perform the following steps on the source system to enable table-level supplemental logging or to use the optional features of the command.
    1) Run GGSCI on the source system.
    2) Issue the DBLOGIN command with the alias of a user in the credential store who has privilege to enable table-level supplemental logging.
     DBLOGIN USERIDALIAS alias

      For more information about DBLOGIN and other options, see the Oracle GoldenGate reference documentation.
    3) Issue the ADD TRANDATA command.

     ADD TRANDATA [container.]schema.table [, COLS (columns)] [, NOKEY] [, ALLCOLS | NOSCHEDULINGCOLS]

     

Where:

    • container is the name of the root container or pluggable database if the table is in a multitenant container database.
    • schema is the source schema that contains the table.
    • table is the name of the table. For instructions on specifying object names, see Specifying Object Names in Oracle GoldenGate Input in Administering Oracle GoldenGate.
    • ADD TRANDATA without other options automatically enables unconditional supplemental logging of the primary key and conditional supplemental logging of the unique keys and foreign keys of the table. Unconditional logging forces the primary key values to be written to the log whether or not the key changed in the current operation. Conditional logging logs all of the column values of a foreign key or unique key if at least one of them changed in the current operation. The default is optional to support nonintegrated Replicat (see NOSCHEDULINGCOLS), but it is required to support integrated Replicat, because the primary key, unique keys, and foreign keys must all be available to the inbound server to compute dependencies. For more information about integrated Replicat, see Deciding Which Apply Method to Use.
    • ALLCOLS enables unconditional supplemental logging of all of the columns of the table. Use it to support integrated Replicat when the source and target tables have different scheduling columns. (Scheduling columns are the primary key, unique keys, and foreign keys.)
    • NOSCHEDULINGCOLS is valid for Replicat in nonintegrated mode only. It issues an ALTER TABLE command with an ADD SUPPLEMENTAL LOG DATA ALWAYS clause that is appropriate for the type of unique constraint defined on the table, or for all columns if no unique constraint exists. This command satisfies the basic table-level logging requirements of Oracle GoldenGate when schema-level logging is not used. For information about how Oracle GoldenGate selects a key or index, see Ensuring Row Uniqueness in Source and Target Tables.
    • COLS columns logs non-key columns that are required for a KEYCOLS clause or for filtering and manipulation. The parentheses are required. These columns are logged in addition to the primary key unless the NOKEY option is also present.
    • NOKEY prevents the logging of the primary key or unique key. It requires a KEYCOLS clause in the TABLE and MAP parameters and a COLS clause in the ADD TRANDATA command to log the substitute KEYCOLS columns.

4) If you use ADD TRANDATA with the COLS option, create a unique index for those columns on the target to optimize row retrieval. If you are logging those columns as a substitute key for a KEYCOLS clause, remember to add the KEYCOLS clause to the TABLE and MAP statements when you configure the Oracle GoldenGate processes.
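As a rough illustration of this step (the table hr.employees and its columns are hypothetical, not taken from the source material), the supplemental logging command and the matching parameter-file clauses might look like the following sketch:

    GGSCI> ADD TRANDATA hr.employees, COLS (first_name, last_name), NOKEY

    -- Extract parameter file:
    TABLE hr.employees, KEYCOLS (first_name, last_name);

    -- Replicat parameter file:
    MAP hr.employees, TARGET hr.employees, KEYCOLS (first_name, last_name);

On the target, a unique index on first_name and last_name would then support efficient row retrieval for this substitute key.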

2.3 Enabling Oracle GoldenGate in the database

The database services required to support Oracle GoldenGate capture and apply must be enabled explicitly for an Oracle 11.2.0.4 or later database. This is required for all Extract and Replicat modes.

To enable Oracle GoldenGate, set the following database initialization parameter. All instances in an Oracle RAC configuration must have the same setting.

ENABLE_GOLDENGATE_REPLICATION=true

For more information about this parameter, see Initialization Parameters.
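As a minimal sketch, the parameter can be set dynamically from SQL*Plus (the SCOPE choice here is an assumption; the parameter can also be set in the initialization parameter file):

    SQL> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE SCOPE=BOTH;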

2.4 Setting up a flashback query

To process certain update records, Extract fetches additional row data from the source database. Oracle GoldenGate fetches data for the following:

    • User-defined types
    • Nested tables
    • XMLType objects

By default, Oracle GoldenGate uses Flashback Query to fetch the values from the undo (rollback) tablespace. That way, Oracle GoldenGate can reconstruct a read-consistent row image as of a specific time or SCN to match the redo record.

For the best fetch results, configure the source database as follows:

  1. Set a sufficient amount of redo retention by setting the Oracle initialization parameters UNDO_MANAGEMENT and UNDO_RETENTION as follows (the retention value is in seconds).
    UNDO_MANAGEMENT=AUTO
    UNDO_RETENTION=86400
    UNDO_RETENTION can be adjusted upward in high-volume environments.
  2. Use the following formula to calculate the space that is required in the undo tablespace.
    UNDO_SPACE = UNDO_RETENTION * UPS + overhead

    Where:
    1) UNDO_SPACE is the number of undo blocks.
    2) UNDO_RETENTION is the value (in seconds) of the UNDO_RETENTION parameter.
    3) UPS is the number of undo blocks per second.
    4) overhead is the minimal overhead for metadata (transaction tables, and so on).

    Use the system view V$UNDOSTAT to estimate UPS and the overhead (a sample query is sketched after this list).

  3. For tables that contain LOBs, do one of the following:
    1) Set the LOB storage clause to RETENTION. This is the default for tables that are created when UNDO_MANAGEMENT is set to AUTO.
    2) If PCTVERSION is used instead of RETENTION, set PCTVERSION to an initial value of 25. You can adjust it based on the fetch statistics that are reported with the STATS EXTRACT command (see Table 2-1). If the values of the STAT_OPER_ROWFETCH CURRENTBYROWID or STAT_OPER_ROWFETCH_CURRENTBYKEY fields in these statistics are high, increase PCTVERSION in increments of 10 until the statistics show low values.
  4. Grant either of the following privileges to the Oracle GoldenGate Extract user:
    GRANT FLASHBACK ANY TABLE TO db_user
    GRANT FLASHBACK ON schema.table TO db_user
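The following query is only a sketch of how UPS could be estimated; it is not taken from the Oracle documentation, and it simply averages the undo block rate over the intervals currently retained in V$UNDOSTAT.

    -- Hypothetical helper query: average undo blocks per second (UPS)
    SELECT SUM(undoblks) / SUM((end_time - begin_time) * 86400) AS ups
    FROM   v$undostat;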

Oracle GoldenGate provides the following parameters and commands to manage fetching.

Table 2-1 Oracle GoldenGate Parameters and Commands to Manage fetching

STATS EXTRACT command with the REPORTFETCH option: Shows Extract fetch statistics on demand.

STATOPTIONS parameter with the REPORTFETCH option: Sets the STATS EXTRACT command so that it always shows fetch statistics.

MAXFETCHSTATEMENTS parameter: Controls the number of open cursors that Extract maintains in the source database for prepared queries, as well as for SQLEXEC operations.

FETCHOPTIONS parameter with the USESNAPSHOT or NOUSESNAPSHOT option: Controls the default fetch behavior of Extract: whether Extract performs a Flashback Query or fetches the current image from the table.

FETCHOPTIONS parameter with the USELATESTVERSION or NOUSELATESTVERSION option: Handles the failure of an Extract Flashback Query, for example when the undo retention has expired or the structure of a table has changed. Extract can either fetch the current image from the table or ignore the failure.

REPFETCHEDCOLOPTIONS parameter: Controls the response of Replicat when it processes trail records that contain fetched data or a column-missing condition.
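As a rough illustration (the Extract group name ext1 is hypothetical), fetch statistics can be requested on demand in GGSCI, or included in every STATS report by adding STATOPTIONS to the Extract parameter file:

    GGSCI> STATS EXTRACT ext1, REPORTFETCH

    -- In the Extract parameter file:
    STATOPTIONS REPORTFETCH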
2.5 Managing server resources

In integrated mode, Extract interacts with an underlying logmining server in the source database, and Replicat interacts with an inbound server in the target database. This section provides guidelines for managing the shared memory used by these servers.

The shared memory that is used by these servers comes from the Streams pool portion of the System Global Area (SGA) in the database. Therefore, you must set the database initialization parameter STREAMS_POOL_SIZE high enough to keep enough memory available for the number of Extract and Replicat processes that you want to run in integrated mode. Note that the Streams pool is also used by other components of the database, such as Oracle Streams, Advanced Queuing, and Data Pump export/import, so be sure to take them into account when determining the size of the Streams pool for Oracle GoldenGate.

By default, one Extract in integrated capture mode requests the logmining server to run with a MAX_SGA_SIZE of 1 GB. Therefore, if you run three Extracts in integrated capture mode in the same database instance, you need at least 3 GB of memory allocated to the Streams pool. As a best practice, keep 25 percent of the Streams pool available. For example, if you have three Extracts in integrated capture mode, set STREAMS_POOL_SIZE for the database to the following value:

3 GB * 1.25 = 3.75 GB
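As a sketch only (the exact value and scope are assumptions made to match the calculation above), the Streams pool could then be sized from SQL*Plus as follows; 3840M corresponds to 3.75 GB.

    SQL> ALTER SYSTEM SET STREAMS_POOL_SIZE=3840M SCOPE=SPFILE;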

Resources

https://docs.oracle.com/goldengate/c1230/gg-winux/GGODB/preparing-database-oracle-goldengate.htm#Ggodb-guid-e06838bd-0933-4027-8a6c-d4a17bdf4e41
