Summary of 5 schemes for efficient data migration in DB2


Replicating large amounts of data from one database server to another over a network involves the following challenges:
1. Massive data volumes
2. Time constraints, which demand fast data transfer and data reload speeds
3. The need to distribute data evenly across database partition nodes
Choosing the right data migration tools and methods is therefore the key to migrating data efficiently between systems in a large data warehouse environment. To help you complete such migration tasks, this article introduces several commonly used data migration approaches for reference.

Implementation methods

There are a number of approaches that can be used when refreshing large data sets from one DB2 environment to another. This article covers the following five methods:
1. Export the data on the local database server, transfer the data files, and then load the data locally on the target database server
2. Export the data on the local database server and load it remotely into the target database server
3. Export the data from a remote database server and load it locally on the target database server
4. Export the data to an operating system pipe, and then import or load it into the remote target database server through the pipe
5. Export the data in parallel on the local database server (each part written to its respective partition's file system), transfer the data files, and then load the parts locally in parallel
The following sections detail the implementation steps for each of these methods.


A. Export data on the local database server, transfer the data files, and then load the data locally on the target database server


The specific steps are as follows:
1. Connect locally to the source database on the source database server.
CONNECT TO sourcedb;
2. Perform a DB2 export of the table on the source database server.
EXPORT TO datamarts.scenario_calendar.del OF DEL MESSAGES datamarts.scenario_calendar.msg
SELECT * FROM datamarts.scenario_calendar;
3. Compress the exported file to shorten the time required for the file transfer between servers.
gzip datamarts.scenario_calendar.del
4. Transfer the compressed file from the sourcedb server to the targetdb server using SFTP or SCP.
cd <export directory>
sftp username@<target server>
put datamarts.scenario_calendar.del.gz
or
scp datamarts.scenario_calendar.del.gz username@<target server>:<target directory>
5. Unzip the transferred file on the target database server.
gunzip datamarts.scenario_calendar.del.gz
6. Connect locally to the target database on the target database server.
CONNECT TO targetdb;
7. Perform a load or import.
LOAD FROM datamarts.scenario_calendar.del OF DEL SAVECOUNT 10000 MESSAGES datamarts.scenario_calendar.load.msg INSERT INTO datamarts.scenario_calendar;
8. If you chose the LOAD command, execute the SET INTEGRITY command at the end of the operation.
SET INTEGRITY FOR datamarts.scenario_calendar IMMEDIATE CHECKED;
9. Perform RUNSTATS to keep statistics up to date.
RUNSTATS ON TABLE datamarts.scenario_calendar WITH DISTRIBUTION AND DETAILED INDEXES ALL;
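
Putting the above steps together, a minimal end-to-end sketch of method A might look like the following. The host name targetdbserver and the directory /db2data/exports are hypothetical placeholders used only for illustration.

# On the source database server
db2 "CONNECT TO sourcedb"
db2 "EXPORT TO datamarts.scenario_calendar.del OF DEL MESSAGES datamarts.scenario_calendar.msg SELECT * FROM datamarts.scenario_calendar"
gzip datamarts.scenario_calendar.del
scp datamarts.scenario_calendar.del.gz username@targetdbserver:/db2data/exports/

# On the target database server
cd /db2data/exports
gunzip datamarts.scenario_calendar.del.gz
db2 "CONNECT TO targetdb"
db2 "LOAD FROM datamarts.scenario_calendar.del OF DEL SAVECOUNT 10000 MESSAGES datamarts.scenario_calendar.load.msg INSERT INTO datamarts.scenario_calendar"
db2 "SET INTEGRITY FOR datamarts.scenario_calendar IMMEDIATE CHECKED"
db2 "RUNSTATS ON TABLE datamarts.scenario_calendar WITH DISTRIBUTION AND DETAILED INDEXES ALL"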


B. Export data on the local database server and load the data remotely into the target database server


The specific steps are as follows:
1. Register the target database on the source database server.
CATALOG TCPIP NODE targetnd REMOTE TargetDBServer.ibm.com SERVER 50001;
CATALOG DATABASE targetdb AT NODE targetnd;
2. Connect locally to the source database on the source database server.
CONNECT TO sourcedb;
3. Perform a DB2 export of the table on the source database server.
EXPORT TO datamarts.scenario_calendar.del OF DEL MESSAGES datamarts.scenario_calendar.msg
SELECT * FROM datamarts.scenario_calendar;
4. Connect remotely to the target database from the source database server.
CONNECT TO targetdb USER <username> USING <password>;
5. Remotely load or import the data from the source database server into the target database.
LOAD CLIENT FROM datamarts.scenario_calendar.del OF DEL SAVECOUNT 10000 MESSAGES
datamarts.scenario_calendar.load.msg INSERT INTO datamarts.scenario_calendar;
6. If you chose the LOAD command, execute the SET INTEGRITY command at the end of the operation.
SET INTEGRITY FOR datamarts.scenario_calendar IMMEDIATE CHECKED;
7. Perform RUNSTATS to keep statistics up to date.
RUNSTATS ON TABLE datamarts.scenario_calendar WITH DISTRIBUTION AND DETAILED INDEXES ALL;
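
Because the load uses the CLIENT option, all of method B can be driven from the source database server in a single script. A minimal sketch is shown below; <username> and <password> are placeholders for the credentials of the remote target database.

# On the source database server: catalog the remote target database (one-time setup)
db2 "CATALOG TCPIP NODE targetnd REMOTE TargetDBServer.ibm.com SERVER 50001"
db2 "CATALOG DATABASE targetdb AT NODE targetnd"

# Export locally from the source database
db2 "CONNECT TO sourcedb"
db2 "EXPORT TO datamarts.scenario_calendar.del OF DEL MESSAGES datamarts.scenario_calendar.msg SELECT * FROM datamarts.scenario_calendar"
db2 "CONNECT RESET"

# Connect remotely to the target database and push the local file with LOAD ... CLIENT
db2 "CONNECT TO targetdb USER <username> USING <password>"
db2 "LOAD CLIENT FROM datamarts.scenario_calendar.del OF DEL SAVECOUNT 10000 MESSAGES datamarts.scenario_calendar.load.msg INSERT INTO datamarts.scenario_calendar"
db2 "SET INTEGRITY FOR datamarts.scenario_calendar IMMEDIATE CHECKED"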


C. Export data from a remote database server and load the data locally on the target database server


The specific steps are as follows:
1. Register the source database on the target database server.
CATALOG TCPIP NODE sourcend REMOTE SourceDBServer.ibm.com SERVER 50001;
CATALOG DATABASE sourcedb AT NODE sourcend;
2. Connect remotely to the source database from the target database server.
CONNECT TO sourcedb USER <username> USING <password>;
3. Remotely perform a DB2 export of the table; the export file is written on the target database server, where the command is issued.
EXPORT TO datamarts.scenario_calendar.del OF DEL MESSAGES datamarts.scenario_calendar.msg
SELECT * FROM datamarts.scenario_calendar;
4. Connect locally to the target database on the target database server.
CONNECT TO targetdb USER <username> USING <password>;
5. Perform a local load or import.
LOAD FROM datamarts.scenario_calendar.del OF DEL SAVECOUNT 10000 MESSAGES
datamarts.scenario_calendar.load.msg INSERT INTO datamarts.scenario_calendar;
6. If you chose the LOAD command, execute the SET INTEGRITY command at the end of the operation.
SET INTEGRITY FOR datamarts.scenario_calendar IMMEDIATE CHECKED;
7. Perform RUNSTATS to keep statistics up to date.
RUNSTATS ON TABLE datamarts.scenario_calendar WITH DISTRIBUTION AND DETAILED INDEXES ALL;
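
Method C is the mirror image of method B: everything runs on the target database server, and the export file lands there because EXPORT writes its output on the machine where the command is issued. A minimal sketch, again with placeholder credentials:

# On the target database server: catalog the remote source database (one-time setup)
db2 "CATALOG TCPIP NODE sourcend REMOTE SourceDBServer.ibm.com SERVER 50001"
db2 "CATALOG DATABASE sourcedb AT NODE sourcend"

# Remote export: the file is written locally on the target database server
db2 "CONNECT TO sourcedb USER <username> USING <password>"
db2 "EXPORT TO datamarts.scenario_calendar.del OF DEL MESSAGES datamarts.scenario_calendar.msg SELECT * FROM datamarts.scenario_calendar"
db2 "CONNECT RESET"

# Local load into the target database
db2 "CONNECT TO targetdb"
db2 "LOAD FROM datamarts.scenario_calendar.del OF DEL SAVECOUNT 10000 MESSAGES datamarts.scenario_calendar.load.msg INSERT INTO datamarts.scenario_calendar"
db2 "SET INTEGRITY FOR datamarts.scenario_calendar IMMEDIATE CHECKED"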


D. Export data to an operating system pipe, and then import or load the data into the remote target database server through the pipe


The specific steps are as follows:
1. Register the source database on the target database server.
CATALOG TCPIP NODE sourcend REMOTE SourceDBServer.ibm.com SERVER 50001;
CATALOG DATABASE sourcedb AT NODE sourcend;
2. Create an operating system pipe on the target database server.
mkfifo datapipe
ls -ltr datapipe
prw-r--r-- 1 bculinux bcuigrp 0 2011-09-18 16:32 datapipe
3. Connect remotely to the source database from the target database server.
CONNECT TO sourcedb USER <username> USING <password>;
4. Export data from the source database and write it to one end of the operating system pipe. In this business scenario, the database management team refreshes UAT from PROD, and only the 12,904,084 records from 2011 are involved.
EXPORT TO datapipe OF DEL MODIFIED BY COLDEL, MESSAGES
fact_cust_fpi_validation.exp.msg SELECT * FROM datamarts.f_cust_fpi_validation
WHERE rec_load_dt > '2011-01-01-00.00.00.000000' WITH UR;
5. Connect locally to the target database on the target database server.
CONNECT TO targetdb USER <username> USING <password>;
6. Import or load the data from the other end of the operating system pipe into the hash-partitioned target table.
IMPORT FROM datapipe OF DEL MODIFIED BY COLDEL, MESSAGES
fact_cust_fpi_validation.imp.msg INSERT INTO
datamarts.fact_cust_fpi_validation;
LOAD FROM datapipe OF DEL MODIFIED BY COLDEL, MESSAGES
fact_cust_fpi_validation.ld.msg INSERT INTO
datamarts.fact_cust_fpi_validation;
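
Note that the export blocks until something reads from the other end of the pipe, so the export (step 4) and the import or load (step 6) must run at the same time in two separate sessions. A minimal sketch of how the two ends could be driven on the target server, using the same pipe and table names as above and placeholder credentials:

# Session 1: connect to the source database and write the export into the pipe
db2 "CONNECT TO sourcedb USER <username> USING <password>"
db2 "EXPORT TO datapipe OF DEL MODIFIED BY COLDEL, MESSAGES fact_cust_fpi_validation.exp.msg SELECT * FROM datamarts.f_cust_fpi_validation WHERE rec_load_dt > '2011-01-01-00.00.00.000000' WITH UR"

# Session 2 (a second shell, started while session 1 is still running):
# read from the pipe and load into the target table
db2 "CONNECT TO targetdb USER <username> USING <password>"
db2 "LOAD FROM datapipe OF DEL MODIFIED BY COLDEL, MESSAGES fact_cust_fpi_validation.ld.msg INSERT INTO datamarts.fact_cust_fpi_validation"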


E. Export data in parallel on the local database server (each part in its respective partition's file system), transfer the data files, and then load the parts locally in parallel


The specific steps are as follows:
1. In the export directory on the admin node, create soft links pointing to all data nodes. In this example, the export directory is $HOME/db2backup/exports.
ln -s /db2fs/bculinux/node0001 NODE1
ln -s /db2fs/bculinux/node0002 NODE2
ln -s /db2fs/bculinux/node0040 NODE40
2. The following listing shows the files obtained after the soft links were created.
ls -ltr
lrwxrwxrwx 1 bculinux bcuigrp 2011-04-13 19:25 NODE1 -> /db2fs/bculinux/node0001
lrwxrwxrwx 1 bculinux bcuigrp 2011-04-13 19:25 NODE2 -> /db2fs/bculinux/node0002
lrwxrwxrwx 1 bculinux bcuigrp 2011-04-13 19:28 NODE40 -> /db2fs/bculinux/node0040
3. On each physical data node server, create a directory structure similar to the following.
mkdir -p /db2fs/bculinux/node0001/exports/datamarts
mkdir -p /db2fs/bculinux/node0002/exports/datamarts
mkdir -p /db2fs/bculinux/node0040/exports/datamarts
4. For the table that needs to be exported, find the hash partitioning column(s) from SYSCAT.COLUMNS.
db2 "SELECT SUBSTR(colname,1,20) colname, partkeyseq FROM syscat.columns WHERE tabname = 'F_CUST_PROFTBLTY_TMP' AND tabschema = 'DATAMARTS'"
5. In this table we found two hash partitioning columns; we select one of them (buss_unit_key) to export the data residing on each partition.
Use the DB2 EXPORT command through db2_all to export the data in parallel on all partitions, as shown in the following listing.
db2_all "\"|| db2 \"EXPORT TO $HOME/db2backup/exports/NODE##/exports/datamarts/datamarts.f_cust_proftblty_tmp.del OF DEL SELECT * FROM datamarts.f_cust_proftblty_tmp WHERE DBPARTITIONNUM(buss_unit_key) = ##\""
This command exports each partition's data to a file on its respective partition node.
6. Use SCP to copy the files from each source database server node to the corresponding node on the target database server.
scp -p <file> <username>@<target node>:<target directory>
7. Load each partition's data in parallel into its respective partition node. See the verification sketch after this list.
db2_all "<<-0<<\" db2 -v \"LOAD FROM db2backup/exports/NODE##/exports/datamarts/datamarts.f_cust_proftblty_tmp.del OF DEL INSERT INTO datamarts.f_cust_proftblty_tmp NONRECOVERABLE\""
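
After the parallel load, it is worth checking that the rows really are spread evenly across the database partitions. A simple check, assuming the same table as above, is to count rows per partition with the DBPARTITIONNUM function:

db2 "CONNECT TO targetdb"
db2 "SELECT DBPARTITIONNUM(buss_unit_key) AS partition_num, COUNT(*) AS row_count FROM datamarts.f_cust_proftblty_tmp GROUP BY DBPARTITIONNUM(buss_unit_key) ORDER BY 1"

Because the load was run with the NONRECOVERABLE option, taking a backup of the affected table spaces afterwards is also a sensible precaution.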


Once you have mastered the above five approaches, migrating a DB2 database becomes straightforward.
