Demonstration of cross-platform migration of DB2 database and PHP application system


This article demonstrates a cross-platform migration of a DB2 database and its PHP application system. In practice, note that db2look in DB2 v6 cannot extract DDL statements for database objects such as UDFs, triggers, user table spaces, node groups, and buffer pools.

From DB2 v7 onward, db2look can extract the DDL statements for the objects above, but it still cannot extract the DDL of objects created inside stored procedures.

DB2 v8.2 further improves db2look and adds extraction of DDL created in stored procedures. Because the source database system in this article runs a version earlier than DB2 v8.2, the following approach is used to obtain the DDL information for all database objects:

1). From a DB2 v8.2 system, catalog the SRCDB1 database that resides on the DB2 v8.1 server:

  db2 catalog db SRCDB1 as SRCDB1;

2). From the DB2 v8.2 system, run db2look against SRCDB1:

  db2look -d SRCDB1 -e -o srcdb1.ddl -a -i user_srcdb1 -w pw_srcdb1;

In this way, you obtain the complete DDL information for all database objects.

3. Generate the data export script

Use a shell script to generate the export DML statements for all data, redirected to the file srcdb1_export.sql. Users familiar with DB2 will know that every table, view, and alias created in a database corresponds to a row in SYSCAT.TABLES, so a SELECT against that catalog view returns all the table information you need.

The following shell script selects from the system catalog SYSCAT.TABLES the names (tabname) of all tables whose tabschema is SRCDB1, ASN, SQLDBA, or DB2DBG, and generates a corresponding export statement for each name, so that the data can be exported in batches. The rtrim function removes trailing spaces from the tabname values.

Listing 6. Generate the export script

  # db2 "select 'export to ' || rtrim(tabname) || '.ixf of ixf select * from ' ||
  rtrim(tabname) || ';' from syscat.tables
  where tabschema in ('SRCDB1', 'ASN', 'SQLDBA', 'DB2DBG')" > srcdb1_export.sql

Edit the generated srcdb1_export.sql, delete the header and the trailing statistics, and keep only the export statements. By changing the tabschema list in the script above, you control the scope of the tables to be exported, i.e. all the table names required during the migration. Each generated export statement has the following format:

  export to tablename.ixf of ixf select * from tablename;
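Because the db2 CLI also writes header and record-count summary lines into the redirected file, the cleanup step can be scripted instead of done by hand. The sketch below is a minimal illustration with a hypothetical sample of the generated file; it simply keeps the lines that begin with "export to":

```shell
# Hypothetical example of the redirected db2 output: a blank header,
# the generated statements, and a trailing record count (sample data).
cat > srcdb1_export.sql <<'EOF'

export to CUSTOMER.ixf of ixf select * from CUSTOMER;
export to ORDERS.ixf of ixf select * from ORDERS;

  2 record(s) selected.

EOF

# Keep only the generated export statements, dropping header/statistics.
grep '^export to' srcdb1_export.sql > srcdb1_export.tmp
mv srcdb1_export.tmp srcdb1_export.sql
```

The same filter, with the pattern changed to "^load from", cleans the generated load script.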

4. Generate a data import load script

Use a shell script to generate the load script that imports the data on the target system: srcdb1_load.sql

Listing 7. Generate a load script

  # db2 "select 'load from ' || rtrim(tabname) || '.ixf of ixf insert into ' ||
  rtrim(tabname) || ';' from syscat.tables
  where tabschema in ('SRCDB1', 'ASN', 'SQLDBA', 'DB2DBG')" > srcdb1_load.sql

Edit the generated srcdb1_load.sql, delete the header and trailing statistics, and keep only the load statements. As with the export script, the shell script above selects the names of all tables in SRCDB1 from the system catalog and generates a corresponding import statement for each name, so that the data can be imported in batches. Each generated load statement has the following format:

  load from tablename.ixf of ixf insert into tablename;

5. Process auto-increment fields in database tables

For tables containing auto-increment (identity) columns, i.e. tables whose ixf data files include identity column values, the following load modifiers control how those values are handled:

1). modified by identityignore: the data file contains identity values, but they are ignored during load and new values are generated;

2). modified by identitymissing: the data file does not contain identity values; they are generated automatically during load;

3). modified by identityoverride: the data file contains identity values, and load uses those values as-is.

To keep the identity-column data in the target database consistent with the source database, this example uses the modified by identityoverride parameter, so that load takes the identity values from the data file. Choose the appropriate modifier for your own situation.

First, search the extracted DDL file srcdb1.ddl for the names of all tables that contain auto-increment columns (columns declared GENERATED ALWAYS AS IDENTITY); then, in srcdb1_load.sql, insert the modified by identityoverride clause into the load command lines for those tables.

Listing 8. Auto-increment field processing in the load script

  load from TEST.ixf of ixf modified by identityoverride insert into TEST;
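The search-and-edit step described above can also be scripted. The sketch below works on hypothetical excerpts of srcdb1.ddl and srcdb1_load.sql; the CREATE TABLE pattern matched by sed is an assumption about db2look's output layout, so adjust it to your actual DDL file:

```shell
# Hypothetical excerpt of the extracted DDL (real input: srcdb1.ddl).
cat > srcdb1.ddl <<'EOF'
CREATE TABLE "SRCDB1"."TEST" (
  "ID" INTEGER GENERATED ALWAYS AS IDENTITY,
  "NAME" VARCHAR(32) );
CREATE TABLE "SRCDB1"."OTHER" (
  "ID" INTEGER NOT NULL );
EOF

# Hypothetical excerpt of the generated load script.
cat > srcdb1_load.sql <<'EOF'
load from TEST.ixf of ixf insert into TEST;
load from OTHER.ixf of ixf insert into OTHER;
EOF

# Names of tables whose definition contains an identity column:
# grep -B1 also prints the preceding CREATE TABLE line, from which
# sed extracts the unqualified table name.
tables=$(grep -B1 'GENERATED ALWAYS AS IDENTITY' srcdb1.ddl \
         | sed -n 's/CREATE TABLE "[^"]*"\."\([^"]*\)".*/\1/p')

# Insert the identityoverride modifier into those tables' load commands.
for t in $tables; do
  sed -i "s/^load from $t\.ixf of ixf insert/load from $t.ixf of ixf modified by identityoverride insert/" srcdb1_load.sql
done
```

After the loop, only the load command for TEST carries the modified by identityoverride clause; the command for OTHER is untouched.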

6. Execute the export script

Run the export script to export data from all tables.

  # db2 -tvf srcdb1_export.sql

The exported table data is stored in the current directory in ixf format.

This concludes the walkthrough of the cross-platform migration of the DB2 database and PHP application system.
