Migrating from Oracle to IBM DB2 at Coca-Cola Bottling Co. Consolidated

Source: Internet
Author: User
Tags: ibm db2 sap netweaver

Background, start point, and target

Coca-Cola Bottling Co. Consolidated (CCBCC) produces and sells beverages, most of them products of The Coca-Cola Company. CCBCC is the second-largest Coca-Cola bottler in the United States, with operations concentrated in seven southeastern states. Founded in 1902 and headquartered in Charlotte, North Carolina, the company has a net operating income of more than $1.4 billion.

Making full use of synergies: SAP Unicode conversion and DB2 migration
Before the technical upgrade of its SAP environment, CCBCC decided to perform the Unicode conversion and migrate from its existing Oracle database platform to IBM DB2 with Deep Compression. Because this approach requires no new Oracle licenses, it lowers the total cost of ownership (TCO).

During the migration, DB2 Deep Compression reduced the database size by more than 40%, which also shortens backup times and the runtime of the subsequent SAP software upgrade.

At the same time, even before the SAP upgrade, CCBCC benefits from DB2's highly automated database administration, which reduces operating costs. DB2 9 features include self-tuning memory management (STMM), automatic reorganization, automatic statistics collection (real-time statistics), and backups through the integrated FlashCopy function.

All database administration and monitoring tasks can be handled through the SAP DBA Cockpit for DB2, an easy-to-use administration environment that is integrated into the SAP application environment.

Deploying Unicode to safeguard future operations
CCBCC deployed Unicode because, starting with SAP NetWeaver 7.0, all new SAP products require it. CCBCC also wanted to be ready for new SAP applications, such as SAP NetWeaver Process Integration (SAP NW PI), which are part of its future implementation plans.

From a technical point of view, a Unicode conversion is very similar to a database migration: in both cases, the SAP program R3load is used to export and import the database.

The Unicode conversion is performed during the export phase of the migration, so the converted data can be imported into the new target system without additional downtime. Combining the migration to IBM DB2 with an SAP software upgrade and/or Unicode conversion not only avoids duplicated work such as backups and testing, but also keeps migration costs under control.
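To make the idea concrete, here is a minimal Python sketch of re-encoding records to Unicode during an export. It is illustrative only: the legacy codepage, file name, and record layout are assumptions, and in the real project R3load performs this conversion using SAP's own codepage handling.

    # Illustrative only: re-encode each record to Unicode while unloading.
    # Assumes a Latin-1 legacy codepage; R3load handles real SAP codepages.
    SOURCE_CODEPAGE = "latin-1"   # assumed non-Unicode source codepage
    TARGET_CODEPAGE = "utf-8"     # Unicode target

    def export_with_unicode_conversion(rows, dump_path):
        # rows: raw records as bytes in the legacy codepage
        with open(dump_path, "wb") as dump:
            for raw in rows:
                text = raw.decode(SOURCE_CODEPAGE)                 # legacy -> str
                dump.write(text.encode(TARGET_CODEPAGE) + b"\n")   # str -> UTF-8

    # Example records containing accented characters
    export_with_unicode_conversion([b"M\xfcller", b"Fa\xe7ade"], "export.dat")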

Migration Process: Heterogeneous System Copy
CCBCC used the standard SAP method for the migration, also known as "heterogeneous system copy" or "OS/DB migration". This allowed CCBCC to perform the migration and conversion within a prearranged maintenance window, without resorting to newer SAP migration tools or services such as Zero Downtime.

The migration project for the entire SAP R/3 Enterprise environment was completed in eight weeks, including two test iterations with the 1 TB production database. The migration of the SAP system itself took just one weekend, from Saturday night to early Monday morning; the entire migration required only 26 hours of downtime.

To reduce downtime, the company used a series of dedicated SAP migration tools:
1. Unsorted export, for transparent tables

2. Package Splitter, for the group of the largest tables (the "Big Tables" group)

3. Table Splitter, for the three cluster tables

4. Migration Monitor, which runs distributed export and import processes in parallel across multiple instances

5. R3load with the Deep Compression function, which compresses the data as it is loaded during the migration

The following sections describe how CCBCC used these tools, why they were chosen, and the benefits they delivered.

Architecture Overview: The CCBCC Migration Solution
For the migration, CCBCC divided an IBM Power Systems server (Model p5-560) into four logical partitions (LPARs). Three LPARs were used to export the database from the source system, and one LPAR to import it into the target system. The export side consists of the central instance/database (CI/DB) partition, with 16 1.50 GHz CPUs and 64 GB of memory, plus two further partitions with 4 1.50 GHz CPUs and 12 GB of memory each. The import partition (the new CI/DB partition) has 16 1.50 GHz CPUs and 64 GB of memory.

During testing, the system was tuned to provide the best possible environment for the migration workload.

To meet the downtime target, the export workload was distributed across the CI/DB server and two additional servers (hosts A and B); these three servers run in the first three LPARs. The CI/DB server processes the three large cluster tables via the Table Splitter. Host A processes the small tables. Host B processes the "Big Tables" group (subgroups of tables with more than 10 million, more than 2 million, and more than 200,000 records), with the data split into smaller packages by the Package Splitter. All three hosts export their data to local disk. Each export process is controlled by a Migration Monitor (MigMon) instance.

The import side has only one server: host C (the new CI/DB server). The export disks of the CI/DB server, host A, and host B are accessed from host C via NFS. The import jobs are controlled by multiple MigMon instances.

A subset of the "Big Tables" group is exported on host B using the "sorted unload" option. Sorting requires additional CPU resources, which is why an additional server was assigned to the export phase. During the import phase, the tables of the "Big Tables" group are compressed as they are loaded.

Database Export: Migration Tools Used

Unsorted vs. sorted export
CCBCC used both sorted and unsorted exports to unload the data from the Oracle database. An unsorted export is usually faster than a sorted one, but because CCBCC also had to perform the Unicode conversion, the migration team had to export SAP cluster tables such as CDCLS, RFBLG, and EDI40 with a sorted export. Sorting the data consumes additional CPU power, which is why CCBCC used three servers in the export phase.

1. Sorted export: pool tables, cluster tables, reports, Dynpros, and nametabs
2. Unsorted export: most transparent tables

With a sorted export, the table pages are read in primary-key order. If the table's cluster ratio is poor, the data pages are not read sequentially. In addition, sort jobs inside the database can extend the runtime.

If "Unsorted Export" is used, data is read sequentially and then written directly to the file without sorting the data.

Unicode conversion considerations for cluster tables
During Unicode conversion, the content and length of records can change; a logical record may even end up with a different number of physical records. Because a logical record is composed of several physical records, the data must be read in sorted order so that all physical records belonging to one logical record are found together. Cluster tables therefore cannot be unloaded unsorted.
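To see why, consider a toy model in Python (illustrative only; the real SAP cluster format is more complex): each logical record is stored as numbered physical segments, and only a key-ordered read keeps all segments of one logical record together so it can be reassembled before conversion.

    # Toy model of a cluster table: each logical record is split into
    # physical records identified by (key, segment_no). Reading in key
    # order lets us reassemble every logical record completely before
    # converting it. Illustrative only, not the real SAP format.
    physical_records = [
        ("DOC1", 0, b"first part of "), ("DOC1", 1, b"document one"),
        ("DOC2", 0, b"document two fits in one segment"),
    ]

    def reassemble(records):
        logical, current_key, parts = {}, None, []
        for key, _, payload in sorted(records):   # sorted read: key, then segment
            if key != current_key and current_key is not None:
                logical[current_key] = b"".join(parts)
                parts = []
            current_key = key
            parts.append(payload)
        if current_key is not None:
            logical[current_key] = b"".join(parts)
        return logical

    for key, data in reassemble(physical_records).items():
        print(key, data)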

Database restrictions
DB2 supports unsorted exports, but some other databases support only sorted exports. Such databases pose serious challenges during a migration and constrain daily operations. For example, it is difficult to set up test and QA systems from a large database using only sorted exports. Being forced into a sorted export greatly prolongs the downtime and can make a database migration, or even just a Unicode conversion, impossible to complete within a reasonable timeframe.

Package and table splitting
With a database of nearly 1 TB, the sheer size and a number of very large tables were the main drivers of downtime. CCBCC therefore decided to use the Package Splitter and the Table Splitter to export the database in parallel and speed up the entire migration.

The Package Splitter splits the tables of the source database into packages, each of which is exported by a dedicated R3load process. Because these processes run simultaneously, they use the available CPU resources effectively. The Table Splitter (R3ta) generates multiple WHERE conditions for a single table so that its data can be exported by several R3load processes running in parallel, each using one WHERE condition to select a subset of the table's data.
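The sketch below shows the idea behind these WHERE conditions (a simplified Python illustration; the key column name and the ranges are assumptions, and R3ta works against the table's actual keys):

    # Simplified illustration of table splitting: generate WHERE conditions
    # that partition a table by key range, so several R3load processes can
    # each export one slice in parallel. Column name and ranges are assumed.
    def split_where_conditions(key_col, min_key, max_key, n_slices):
        step = (max_key - min_key + 1) // n_slices
        conditions = []
        for i in range(n_slices):
            lo = min_key + i * step
            hi = max_key if i == n_slices - 1 else lo + step - 1
            conditions.append(f"{key_col} BETWEEN {lo} AND {hi}")
        return conditions

    # Example: split a table with keys 1..1000000 into 5 slices.
    for cond in split_where_conditions("BELNR", 1, 1000000, 5):
        print("WHERE", cond)

CCBCC applied the two splitters as follows: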

1. The 262 tables of the "Big Tables" group were each given their own package by the Package Splitter, improving parallel processing, keeping the packages appropriately sized, and making effective use of resources during the migration.

2. The 12 largest tables were split into multiple packages by the Table Splitter, allowing several R3load processes to export and import each table in parallel.

3. The remaining tables were combined into shared packages by the Package Splitter. Spread across multiple R3load processes (20 running in parallel), their data could be exported and imported in parallel, saving a great deal of time.

Migration Monitor (MigMon)
During the Unicode conversion phase of a system copy, the export generates a very heavy CPU load. Most of the CPU time is spent converting the data, especially for the cluster tables.

To avoid CPU bottlenecks, CCBCC distributed the export and import jobs across the four LPARs so that they could run efficiently in parallel, giving the database export and import additional processor resources. The Migration Monitor executes and controls the unload and load processes during the system copy and allowed 20 export and import processes to run simultaneously.
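The following minimal Python sketch shows this style of parallelism (illustrative only: MigMon itself is an SAP tool driven by property files, and the package names and worker function here are made up):

    # Illustrative parallel job control in the spirit of MigMon: run up to
    # 20 export/import jobs at a time. Package names are made up.
    from concurrent.futures import ThreadPoolExecutor, as_completed

    def run_r3load_job(package):
        # Stand-in for launching one R3load process for one package.
        return f"{package}: done"

    packages = [f"PKG{i:03d}" for i in range(60)]

    with ThreadPoolExecutor(max_workers=20) as pool:   # 20 parallel jobs
        futures = {pool.submit(run_r3load_job, p): p for p in packages}
        for fut in as_completed(futures):
            print(fut.result())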

Database Import: Using DB2 Deep Compression

DB2 9 Storage Optimization
The storage optimization feature of DB2 9, also known as Deep Compression, replaces recurring patterns with short symbols using a dictionary-based approach. The dictionary stores the most frequent patterns, assigns each a symbol, and substitutes the symbol wherever the pattern occurs. Because patterns are replaced across the whole table (not just within a single page), considerable compression rates of up to 90% can be achieved for individual tables.
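The principle can be pictured with a toy substitution scheme in Python (conceptual only; this is not DB2's actual dictionary format or algorithm):

    # Toy illustration of dictionary-based compression: the most frequent
    # patterns are replaced by short symbols. Conceptual only.
    from collections import Counter

    def build_dictionary(rows, max_entries=2):
        patterns = Counter(w for row in rows for w in row.split())
        # Map the most frequent patterns to short symbols.
        return {w: f"#{i}" for i, (w, _) in enumerate(patterns.most_common(max_entries))}

    def compress(row, dictionary):
        return " ".join(dictionary.get(w, w) for w in row.split())

    rows = ["ATLANTA GA USA", "CHARLOTTE NC USA", "ATLANTA GA USA"]
    d = build_dictionary(rows)
    print([compress(r, d) for r in rows])   # frequent values shrink to symbols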

R3load with DB2 Deep Compression
CCBCC decided to compress during the migration itself because it wanted to use the DB2 storage optimization feature from day one. Although the compression rates achievable with R3load 6.40 are known not to be optimal, CCBCC went ahead: it still reached a compression rate of 40% and a marked performance improvement, even though only 169 large tables were compressed.

When DB2 Deep Compression is used during a database migration and/or Unicode conversion, the data is compressed on the fly as it is loaded into the database. The R3load tool provides several ways to apply DB2 Deep Compression while loading data into tables, and different R3load versions (such as 6.40 and 7.00) provide different compression options, for example R3load's SAMPLED option.

The SAMPLED option provides the best data compression without spending extra time reorganizing tables. CCBCC used R3load 6.40, however, so this article focuses on that version's compression function.

Compression with R3load 6.40
To generate a compression dictionary, R3load first loads a defined number of rows into the table uncompressed. An offline reorganization then builds the compression dictionary from these rows.

The number of rows initially loaded to create the dictionary is defined by the environment variable DB6LOAD_COMPRESSION_THRESHOLD, which CCBCC set explicitly. The default threshold is 10,000 records, which is too low to yield a representative sample for large tables.

By sampling between 10% and 80% of the records (depending on the number of rows in a table), CCBCC was able to set optimal threshold values and achieve the desired compression. The two largest tables, COEP and BSIS, each contain over 100 million records, and several other tables contain tens of millions of records.
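A small Python sketch of this kind of rule (the 10%-80% band and the 10,000-row default come from the text above; the clamping logic and function are illustrative, not SAP code):

    # Illustrative threshold selection: sample a percentage of a table's
    # rows, kept within the 10%-80% band described above and never below
    # the 10,000-row default.
    def compression_threshold(row_count, pct=0.30, lo=0.10, hi=0.80):
        pct = min(max(pct, lo), hi)
        return max(10000, int(row_count * pct))

    print(compression_threshold(10000000))   # 3,000,000 for a 10-million-row table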

CCBCC used the following threshold values for the groups of transparent tables to be compressed:
1. A group of 20 tables with more than 10 million records each; threshold = 3,000,000

2. A group of 47 tables with more than 2 million records each; threshold = 200,000

3. A group of 102 tables with more than 200,000 records each; threshold = 60,000

Note that not every table above these record counts was flagged for compression and assigned to a group; only tables that showed good compression results in the test phase were selected.

After this initial load and dictionary creation, R3load imports the remaining rows into the table, and DB2 compresses the data using the dictionary.

For a table to be compressed during loading, its compression attribute must be set. Some CCBCC tables were to be compressed and others were not, so different template files were used for the Migration Monitor.
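The effect of the two template variants can be sketched as follows (a simplified illustration: the DDL generation shown is not the real template mechanism, though COMPRESS YES is the actual DB2 table attribute, and COEP and BSIS are among the tables mentioned above):

    # Simplified sketch: tables flagged for compression get COMPRESS YES in
    # their CREATE TABLE statement; others do not. The DDL generation is
    # illustrative; COMPRESS YES is the real DB2 attribute.
    COMPRESS_FLAGGED = {"COEP", "BSIS"}   # tables that compressed well in tests

    def create_table_ddl(table, columns_sql):
        ddl = f"CREATE TABLE {table} ({columns_sql})"
        if table in COMPRESS_FLAGGED:
            ddl += " COMPRESS YES"
        return ddl

    print(create_table_ddl("COEP", "BELNR VARCHAR(10) NOT NULL"))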

CCBCC imported the data through multiple Migration Monitor instances, each running with a different value of DB6LOAD_COMPRESSION_THRESHOLD.
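A minimal sketch of such a setup (assumptions: the start command import_monitor.sh and the directory layout are placeholders; only the DB6LOAD_COMPRESSION_THRESHOLD mechanism comes from the text, and the thresholds mirror the groups listed earlier):

    # Illustrative launcher: one MigMon instance per table group, each with
    # its own DB6LOAD_COMPRESSION_THRESHOLD. Command and paths are placeholders.
    import os
    import subprocess

    GROUPS = {                      # threshold per MigMon instance
        "group_10m": 3000000,
        "group_2m": 200000,
        "group_200k": 60000,
    }

    for name, threshold in GROUPS.items():
        env = dict(os.environ, DB6LOAD_COMPRESSION_THRESHOLD=str(threshold))
        subprocess.Popen(["./import_monitor.sh"], cwd=f"/migmon/{name}", env=env)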

Summary
CCBCC benefited from performing the Unicode conversion and the database migration together: the synergy between the two projects eliminated duplicated work such as backups and testing. The ERP migration project took only eight weeks from start to finish, including the Unicode conversion.

Just as important, the conversion from Oracle to DB2 proved straightforward because DB2 is approachable and easy to use. CCBCC's database administrators have strong Oracle skills, and they fully mastered DB2 within just a few weeks. This shows that experienced DBAs can move to DB2 easily, regardless of the technology they worked with before.

CCBCC benefited from DB2 immediately:
1. Lower TCO
2. Database size reduced by 40%
3. Improved performance (reductions of more than 65%)
4. Better integration of the database into SAP tools (SAP DBA Cockpit for DB2)
5. Less DBA effort needed to manage DB2

The successful adoption of DB2 also positions CCBCC well for its upcoming upgrade to SAP ERP 6.0. The SAP system already runs more smoothly and faster than before, and because the database is 40% smaller, backups and the SAP software upgrade will run faster as well.