Introduction
DB2 Universal Database® (UDB) is the first relational database management system with multimedia and Web support that is powerful enough to meet the needs of large corporations, yet flexible enough to serve small and medium-sized businesses. The combination of the DB2 product family and Internet technology makes it easy to access, use, and secure information across different platforms. Worldwide, more than 60 million DB2 users at over 300,000 companies rely on IBM data management solutions.
DB2 UDB supports many of the applications that e-business requires, such as e-commerce, enterprise resource planning, customer relationship management, supply chain management, Web self-service, and business intelligence. It is a scalable, industrial-strength database that is ideal as the data management foundation for developing e-business applications.
Online transaction processing (OLTP) is a class of applications that facilitate and manage transaction-oriented work, typically data entry and retrieval transactions in industries such as banking, airlines, mail order, supermarkets, and manufacturing. A typical OLTP workload consists of many short transactions running concurrently. Today's online transaction processing increasingly requires support across networks, and a transaction may span multiple companies. For that reason, newer OLTP software uses client/server processing and brokering software that allows transactions to run on different computer platforms in a network.
Performance is one of the most important factors for any database system. This article presents a number of DB2 performance tuning techniques based on experience gained from benchmarking programs that run OLTP-type workloads (TPC-C, TPC-W, Trade2, and so on). Although the performance of a database application can be affected by many factors, we focus on configuration rather than on areas such as capacity planning, database design, or application design.
This article is organized as follows:
Some basic elements of performance
Updating catalog statistics, which emphasizes the importance of collecting and maintaining up-to-date database statistics; out-of-date statistics are often the source of many performance problems.
Monitoring and tuning database configuration parameters, which describes a list of database manager and database parameters in order of importance. In general, you do not need to try every parameter in the list to reach your performance goals; try the ones near the top first and see whether performance improves.
With these techniques, and the short command sketch that follows as a starting point, you can get your OLTP application up and running well.
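As a quick preview of the kinds of commands the rest of the article relies on, here is a minimal sketch of refreshing catalog statistics and inspecting a configuration parameter. The database name SAMPLE, the table DB2ADMIN.ORDERS, and the BUFFPAGE value shown are placeholder examples, not recommendations.

    # Minimal sketch; SAMPLE and DB2ADMIN.ORDERS are placeholder names.
    db2 connect to SAMPLE
    # Refresh catalog statistics so the optimizer sees current data volumes and distributions.
    db2 "runstats on table DB2ADMIN.ORDERS with distribution and detailed indexes all"
    # View the current database configuration, then change one parameter (BUFFPAGE shown here).
    db2 get db cfg for SAMPLE
    db2 update db cfg for SAMPLE using BUFFPAGE 50000
    # (BUFFPAGE takes effect only for buffer pools whose size is defined as -1.)
    db2 connect reset

The later sections discuss how to choose sensible values for this and the other configuration parameters.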
Some basic elements of performance
Have enough memory.
For 32-bit systems, use at least MB of RAM per CPU, with up to 4 GB per machine, to support the buffer pools, DB2 agents, and other shared memory objects required for a large number of concurrent users. (See the "Buffer pool size (BUFFPAGE)" section for more about buffer pools.) More memory may be needed to support applications that run locally or as stored procedures. On AIX®, the JFS file cache can use additional memory to supplement the buffer pools.
For 64-bit systems, the buffer pool can be practically any size. For most e-business OLTP applications with large databases, however, the buffer pool does not really need to exceed 8 GB. Bigger is better, of course, but at some point, once the buffer pool hit ratio reaches 98% or more, the return on additional memory diminishes. The number of concurrent users, which drives the number of DB2 agents, determines how much memory is needed beyond that.
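To see whether you are near that point, you can compute the buffer pool hit ratio from a buffer pool snapshot. The following is a minimal sketch, assuming a database named SAMPLE (a placeholder) and the authority to turn on the buffer pool monitor switch; the ratio is worked out by hand from the snapshot counters.

    # Minimal sketch; SAMPLE is a placeholder database name.
    db2 update monitor switches using bufferpool on
    db2 connect to SAMPLE
    db2 get snapshot for bufferpools on SAMPLE
    # From the snapshot counters, compute:
    #   hit ratio = (1 - (data physical reads + index physical reads)
    #                  / (data logical reads + index logical reads)) * 100
    # A ratio of 98% or higher suggests that adding more buffer pool memory
    # will bring little further benefit.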
The amount of memory needed by each user connection to the database (that is, each DB2 agent) depends on the nature of the SQL statements the application executes, such as the number of cursors open concurrently and the amount of sort and temporary space required. OLTP applications need less sort and temporary space and usually have only a few cursors open at a time.
Rule of thumb: allow at least 1 MB of RAM per DB2 agent on UNIX and a minimum of KB per agent on Windows. If fenced stored procedures are used, there are two DB2 agents per user connection, in addition to the memory needed to run the stored procedure application itself.
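As a rough, purely illustrative calculation (the connection count below is invented for this example, not taken from any benchmark):

    # 250 concurrent UNIX connections x 1 MB per agent            = ~250 MB
    # with fenced stored procedures (2 agents per connection)     = ~500 MB,
    #   plus the memory used by the stored procedure processes themselves
    # Check the agent limit currently configured for the instance:
    db2 get dbm cfg | grep -i maxagents

Whatever is left of the memory budget after the agents are accounted for can go to the buffer pools and other shared memory objects.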
Have sufficient I/O handling capability.
There must be enough disk devices to provide adequate I/O parallelism for a high volume of concurrent transactions. You should have at least 5 to 10 disks per CPU for a moderate workload and at least 20 disks for high-I/O OLTP workloads. The operating system (including paging space), the DB2 logs, and the DB2 table spaces should each have their own dedicated disks, and there should be multiple disks for the DB2 logs, for tables, and for indexes.
The proper way to estimate the I/O handling capability needed for good performance is to prototype the transactions: find out how many I/Os each transaction requires and how many transactions per second you need, then obtain the I/O rates of the disk controllers and the disk subsystem to determine how many controllers and disks are required.
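For example, here is a purely hypothetical calculation of the kind this prototyping produces; every number below is invented for illustration.

    # target throughput:   100 transactions per second
    # measured cost:       10 physical I/Os per transaction
    # required I/O rate:   100 x 10 = 1,000 I/Os per second
    # At roughly 100 I/Os per second per disk, that is on the order of 10 disks
    # for data alone, spread over enough controllers to sustain the rate.
    # While the prototype runs, watch actual disk utilization (UNIX):
    iostat 5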
Have sufficient network bandwidth.
You must have enough network bandwidth to support the workload. Make sure that no network segment or intermediate hub becomes a bottleneck; this is especially important when supporting remote access. For example, a T1 line supports 1.544 Mbits/sec, which is only about 0.193 MB/sec, whereas a typical 10 Mbits/sec Ethernet LAN can support 1.25 MB/sec, more than six times the throughput of a T1 line. You can use commands such as netstat on UNIX to monitor the traffic on your connections.
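A minimal check with netstat might look like the following; the interval form varies by platform, so treat this as a sketch rather than exact syntax for your system.

    # Per-interface packet and error counts (UNIX):
    netstat -i
    # Many platforms also accept an interval argument (for example, "netstat 5" on AIX)
    # to report traffic repeatedly while the workload runs.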
Use the Performance Configuration Wizard, available from the DB2 Control Center, to set the initial database manager and database configuration parameters.
This tool asks you a series of questions about the nature of your workload in order to determine starting values for the configuration parameters. You can modify these parameters later to meet the requirements of the production workload.
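If you prefer the command line, later DB2 releases (V8 and up) expose the same kind of workload-based recommendations through the AUTOCONFIGURE command. The following is a minimal sketch, assuming such a release and a placeholder database named SAMPLE.

    # Minimal sketch; requires DB2 V8 or later and a connection to the database.
    db2 connect to SAMPLE
    # Generate recommendations and apply them to both the database
    # and the database manager configuration.
    db2 autoconfigure apply db and dbm
    db2 connect reset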
Index the table columns appropriately.
Make sure that the columns joined in your queries are indexed.
Performance can improve if the columns involved in ORDER BY and GROUP BY are indexed.
You can also add frequently accessed data as INCLUDE columns in an index (the sketch after this list shows the syntax).
Based on the tables and SQL statements used, the Index Advisor (also known as the Index Wizard), which can be invoked from the DB2 Control Center, can help you determine a good set of indexes.
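Here is a minimal sketch that ties these points together; the SAMPLE database, the ORDERS table, and its columns are hypothetical names used only for illustration.

    # Minimal sketch; SAMPLE and ORDERS are placeholder names.
    db2 connect to SAMPLE
    # Support a join on CUSTOMER_ID plus an ORDER BY on ORDER_DATE.
    db2 "create index ORD_CUST_IX on ORDERS (CUSTOMER_ID, ORDER_DATE)"
    # A unique index can carry frequently read columns in its INCLUDE clause,
    # so some queries can be answered from the index alone.
    db2 "create unique index ORD_PK_IX on ORDERS (ORDER_ID) include (ORDER_STATUS)"
    # Ask the Index Advisor (db2advis) for recommendations for a given statement.
    # (db2advis uses the explain tables; create them first if it reports they are missing.)
    db2advis -d SAMPLE -s "select ORDER_ID, ORDER_STATUS from ORDERS where CUSTOMER_ID = 1000 order by ORDER_DATE"
    db2 connect reset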