Oracle GoldenGate Series 12: GG Data Initialization Load (II), SCN-Based Initialization Notes and Examples


 

I. Initialization Notes

During a GoldenGate (GG) implementation, initialization is an important task, especially when there is a large amount of data to initialize and the system runs 7x24.

 

For static initialization, the business is stopped and the data in the database does not change. In that case we can use expdp/impdp or a database link to synchronize the base data, then start the relevant GG synchronization processes.

However, if zero downtime is required, database transactions continue during the load. The following two methods can ensure transaction integrity and data accuracy during initialization:

1. Keys + HANDLECOLLISIONS

2. Commit SCN/CSN

 

1.1 The Keys + HANDLECOLLISIONS Method

The first method was described and tested in Series 9:

Oracle GoldenGate Series 9: GG Data Initialization Loading Instructions and Examples

http://blog.csdn.net/tianlesoftware/article/details/6976551

 

The HANDLECOLLISIONS parameter relies on the table's key (primary key/unique key) to resolve duplicate and missing rows, ensuring data consistency during initialization. However, this method has considerable limitations in real-world deployments.

First, it performs poorly and is not suitable for large databases. More seriously, it has significant defects.

 

Here is a description of the HANDLECOLLISIONS method's defects from MOS:

1. When there is a primary key update (pkupdate), the HANDLECOLLISIONS method may lose data. The workaround for a primary key update is for Extract to fetch the whole row via a flashback query; however, this is not efficient and may not be possible in certain environments.

2. When a table does not have any type of key, HANDLECOLLISIONS may introduce duplicate rows into the table. One workaround is to add a primary or unique key to the target table.

3. The exact timestamp when the initial load started may not be known (and different tables may have different start times), so collision handling is almost inevitably applied to some records unnecessarily. This could mask a real data-integrity problem.
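For context, the sketch below shows how this method is typically driven: HANDLECOLLISIONS is enabled in the replicat parameter file during the catch-up phase and disabled once the replicat has moved past the initial-load window. The process and table names match the example later in this article; treat this as an illustrative sketch, not a complete configuration.

```
-- Replicat parameter file during catch-up: tolerate duplicate/missing rows
replicat rep1
userid ggate@gg2, password ggate
assumetargetdefs
handlecollisions
map dave.pdba, target dave.pdba;
```

Once the replicat has caught up, collision handling can be turned off on the fly with "send replicat rep1, nohandlecollisions" in GGSCI, and the HANDLECOLLISIONS line removed from the parameter file before the next restart.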

 

1.2 The Commit SCN/CSN Method

A consistent copy of the data, tied to a known SCN, can be obtained in any of the following ways:

(1) Consistent exp and imp

(2) Consistent expdp and impdp

(3) Backup-based transportable tablespace migration

(4) Data Guard

 

1.2.1 Transportable Tablespace

Transportable tablespace can be implemented with expdp/impdp or with RMAN. Note that a conventional tablespace transport requires the tablespace to be set to read only, which is generally not acceptable on a production database; therefore, the backup-based RMAN transportable tablespace can be used instead.

The biggest advantage of backup-based tablespace migration is zero downtime. It also supports heterogeneous platforms and cross-version migration (a CONVERT step is required when the source and target platforms have different endianness). However, it is only supported in 10g and later, and it remains subject to the general restrictions on transportable tablespaces.
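As an illustration, RMAN's TRANSPORT TABLESPACE command (available in 10.2 and later) builds the transportable set from existing backups, recovered to a chosen SCN in an auxiliary instance, so the production tablespace never has to go read only. The tablespace name and directories below are hypothetical:

```
RMAN> TRANSPORT TABLESPACE dave_ts
        TABLESPACE DESTINATION '/u01/tts'
        AUXILIARY DESTINATION '/u01/aux'
        UNTIL SCN 1376141;
```

The SCN chosen here can then serve as the aftercsn value when starting the replicat.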

 

1.2.2 Data Guard

Data Guard is suitable for initialization when the source and target share the same platform and version.

For articles in my Oracle DG series, see my blog:

http://blog.csdn.net/tianlesoftware/article/category/700326

 

1.2.3 exp/expdp

Using the consistency options of exp and expdp (flashback_scn for expdp), you can export a consistent snapshot of the data as of a specific SCN. The flashback_scn parameter specifies the SCN at which the table data is read, for example:

flashback_scn=scn_value

scn_value identifies the SCN. flashback_scn and flashback_time cannot be used at the same time.

 

One issue with this method is that constructing consistent data puts heavy pressure on undo, especially for large databases. A split Data Pump approach, exporting the data in groups pinned to the same SCN, can relieve the undo pressure; the groups are then merged on the target at an appropriate time.

This method also works across platforms and versions.
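A sketch of the split Data Pump approach, assuming hypothetical table groups and an SCN captured beforehand: every job specifies the same flashback_scn, so the exported pieces stay mutually consistent while each individual job holds its undo snapshot for a shorter time.

```
-- Capture one SCN, then run several smaller exports pinned to it
expdp dave/dave directory=backup dumpfile=group1.dmp tables=tab1,tab2 flashback_scn=1376141
expdp dave/dave directory=backup dumpfile=group2.dmp tables=tab3,tab4 flashback_scn=1376141

-- On the target, import each piece; the merged result is consistent as of SCN 1376141
impdp dave/dave directory=backup dumpfile=group1.dmp
impdp dave/dave directory=backup dumpfile=group2.dmp
```

The replicat is then started with aftercsn set to that same SCN.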

 

Oracle 10g Data Pump expdp/impdp

http://blog.csdn.net/tianlesoftware/article/details/4674224

Oracle expdp/impdp examples

http://blog.csdn.net/tianlesoftware/article/details/6260138

Oracle exp/imp notes

http://blog.csdn.net/tianlesoftware/article/details/4718366

 

After the SCN-based data initialization is complete, we can start the replicat from the specified SCN. For example:

start rep1, aftercsn 12345678

 

II. Example

Only expdp/impdp is demonstrated here, as it is the most flexible method.

The GG architecture used is: Extract + Data Pump + Replicat.

 

2.1 Clear the Previous GG Environment

-- Target system

SQL> conn dave/dave;
Connected.

SQL> drop table pdba;

Table dropped.

GGSCI (gg2) 46> stop rep1

Sending STOP request to REPLICAT REP1 ...
Request processed.

GGSCI (gg2) 47> info all

Program     Status      Group       Lag           Time Since Chkpt
MANAGER     RUNNING
REPLICAT    STOPPED     REP1        00:00:00

GGSCI (gg2) 48> delete rep1

ERROR: Could not delete DB checkpoint for REPLICAT REP1 (Database login required to delete database checkpoint).

GGSCI (gg2) 49> dblogin userid ggate, password ggate
Successfully logged into database.

GGSCI (gg2) 51> delete rep1
Deleted REPLICAT REP1.

GGSCI (gg2) 52> info all

Program     Status      Group       Lag           Time Since Chkpt
MANAGER     RUNNING

 

-- Source system

GGSCI (gg1) 55> info all

Program     Status      Group       Lag           Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     DPUMP       00:00:00      00:00:09
EXTRACT     RUNNING     EXT1        00:00:00      00:00:09

GGSCI (gg1) 56> dblogin userid ggate, password ggate
Successfully logged into database.

GGSCI (gg1) 57> stop ext1

Sending STOP request to EXTRACT EXT1 ...
Request processed.

GGSCI (gg1) 58> stop dpump

Sending STOP request to EXTRACT DPUMP ...
Request processed.

GGSCI (gg1) 59> delete ext1

2011-11-17 16:51:48  INFO    OGG-01750  Successfully unregistered EXTRACT EXT1 from database.
Deleted EXTRACT EXT1.

GGSCI (gg1) 60> delete dpump
Deleted EXTRACT DPUMP.

GGSCI (gg1) 61> info all

Program     Status      Group       Lag           Time Since Chkpt
MANAGER     RUNNING

 

 

2.2 Reconfigure the GG Environment

2.2.1 Create the Extract and Data Pump on the Source System

-- Create the Extract: ext1

GGSCI (gg1) 62> add extract ext1, tranlog, begin now

2011-11-17 16:56:21  INFO    OGG-01749  Successfully registered EXTRACT EXT1 to start managing log retention at SCN 1374149.
EXTRACT added.

GGSCI (gg1) 63> add exttrail /u01/ggate/dirdat/lt, extract ext1
EXTTRAIL added.

GGSCI (gg1) 64> view params ext1

extract ext1
userid ggate@gg1, password ggate
--rmthost gg2, mgrport 7809
--rmttrail /u01/ggate/dirdat/lt
exttrail /u01/ggate/dirdat/lt
ddl include all objname dave.pdba;
table dave.pdba;

-- Create the Data Pump: dpump

GGSCI (gg1) 65> add extract dpump, exttrailsource /u01/ggate/dirdat/lt
EXTRACT added.

GGSCI (gg1) 66> add rmttrail /u01/ggate/dirdat/lt, extract dpump
RMTTRAIL added.

GGSCI (gg1) 67> view params dpump

extract dpump
userid ggate@gg1, password ggate
rmthost gg2, mgrport 7809
rmttrail /u01/ggate/dirdat/lt
passthru
table dave.pdba;

 

2.2.2 Create the Replicat on the Target System

-- Create the checkpoint table

GGSCI (gg2) 57> view params ./GLOBALS

ggschema ggate
checkpointtable ggate.checkpoint

GGSCI (gg2) 58> dblogin userid ggate@gg2, password ggate
Successfully logged into database.

GGSCI (gg2) 59> add checkpointtable ggate.checkpoint
Successfully created checkpoint table ggate.checkpoint.

-- Create the Replicat: rep1

GGSCI (gg2) 60> add replicat rep1, exttrail /u01/ggate/dirdat/lt, checkpointtable ggate.checkpoint
REPLICAT added.

GGSCI (gg2) 61> view params rep1

replicat rep1
assumetargetdefs
userid ggate@gg2, password ggate
discardfile /u01/ggate/dirdat/rep1_discard.txt, append, megabytes 10
--handlecollisions
ddl include all
ddlerror default ignore retryop
map dave.pdba, target dave.pdba;

 

Note that deleting a process does not delete its parameter file. If a process with the same name is created later, it picks up the old parameter file by default; the file's location is shown when you edit the process.

By default the parameter files are stored in $GGATE/dirprm:

gg2:/u01/ggate/dirprm> ls
mgr.prm  rep1.prm  rept2.prm

 

2.3 Start the Extract and Data Pump Processes: ext1, dpump

Do not start the replicat process yet. Wait until the expdp/impdp initialization is complete, then start the replicat from the recorded SCN.

 

GGSCI (gg1) 70> start ext1

Sending START request to MANAGER ...
EXTRACT EXT1 starting

GGSCI (gg1) 71> start dpump

Sending START request to MANAGER ...
EXTRACT DPUMP starting

GGSCI (gg1) 73> info all

Program     Status      Group       Lag           Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     DPUMP       00:00:00      00:18:27
EXTRACT     RUNNING     EXT1        00:20:10      00:00:02

 

2.4 Use expdp/impdp with flashback_scn to Complete the Initialization

2.4.1 Query the current SCN on the source DB

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    1376141

 

2.4.2 Export the pdba table

gg1:/u01/backup> expdp dave/dave directory=backup dumpfile=pdba.dmp logfile=table.log tables=pdba flashback_scn=1376141

 

Export: Release 11.2.0.3.0 - Production on Thu Nov 17 17:24:49 2011

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "DAVE"."SYS_EXPORT_TABLE_02":  dave/******** directory=backup dumpfile=pdba.dmp logfile=table.log tables=pdba flashback_scn=1376141
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 60 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "DAVE"."PDBA"                       48.51 MB 2678634 rows
Master table "DAVE"."SYS_EXPORT_TABLE_02" successfully loaded/unloaded
******************************************************************************
Dump file set for DAVE.SYS_EXPORT_TABLE_02 is:
  /u01/backup/pdba.dmp
Job "DAVE"."SYS_EXPORT_TABLE_02" successfully completed at 17:26:22

 

2.4.3 Delete some data from the pdba table

SQL> delete from pdba where rownum < 1000;

999 rows deleted.

SQL> commit;

Commit complete.

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    1377291

 

2.4.4 scp the dump file to the target

gg1:/u01/backup> scp pdba.dmp 192.168.3.200:/u01/backup
oracle@192.168.3.200's password:
pdba.dmp                                      100%   49MB   6.1MB/s

 

2.4.5 Import the dump file with impdp

gg2:/u01/backup> impdp dave/dave directory=backup dumpfile=pdba.dmp logfile=table.log tables=pdba table_exists_action=replace

 

Import: Release 11.2.0.3.0 - Production on Thu Nov 17 17:30:04 2011

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "DAVE"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DAVE"."SYS_IMPORT_TABLE_01":  dave/******** directory=backup dumpfile=pdba.dmp logfile=table.log tables=pdba table_exists_action=replace
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "DAVE"."PDBA"                       48.51 MB 2678634 rows
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "DAVE"."SYS_IMPORT_TABLE_01" successfully completed at 17:31:00

 

2.5 Start the Replicat from the SCN

GGSCI (gg2) 63> start rep1, aftercsn 1376141

Sending START request to MANAGER ...
REPLICAT REP1 starting

GGSCI (gg2) 64> info rep1

REPLICAT   REP1      Last Started 2011-11-17 17:32   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:04 ago)
Log Read Checkpoint  File /u01/ggate/dirdat/lt000000
                     First Record  RBA 978

 

 

2.6 Verification

If the pdba row counts on the source and target match, the synchronization is working.

-- Source DB:

SQL> conn dave/dave;
Connected.

SQL> select count(*) from pdba;

  COUNT(*)
----------
   2678634

-- Target DB:

SQL> conn dave/dave;
Connected.

SQL> select count(*) from pdba;

  COUNT(*)
----------
   2678634

 

Synchronization is normal. This concludes the SCN-based initialization example.

 

 

 

 

 

-------------------------------------------------------------------------------------------------------

All rights reserved. Reprinting is permitted, but the source address must be indicated with a link; otherwise legal responsibility will be pursued!

Blog: http://blog.csdn.net/tianlesoftware

Weibo: http://weibo.com/tianlesoftware

Email: tianlesoftware@gmail.com

Skype: tianlesoftware

 

