The DB2 database provides a high level of data availability, integrity, security, and recoverability, and scales from small to large workloads. Of all its tuning options, the simplest and most effective is the buffer pool.
A buffer pool is an area of memory used to cache database pages (table rows or index entries) as they are read and modified. Buffer pools improve database performance because accessing data in memory is much faster than accessing it on disk: the fewer times the database manager must read from or write to disk, the better the performance. Configuring one or more buffer pools is the single most important tuning task, because nearly all operations on the data of applications connected to the database (excluding large objects and long field data) take place in the buffer pool.
By default, applications use the buffer pool IBMDEFAULTBP, which is created when the database is created. When the NPAGES value for a buffer pool in the SYSCAT.BUFFERPOOLS catalog table is -1, the database configuration parameter BUFFPAGE controls the buffer pool size; otherwise BUFFPAGE is ignored and the buffer pool is created with the number of pages specified by NPAGES.
For applications that use only one buffer pool, we recommend changing NPAGES to -1 so that BUFFPAGE controls the buffer pool size. This makes it easier to update and report the buffer pool size along with the other DB2 database configuration parameters.
Make sure you can use the BUFFPAGE parameter in the database configuration to control the buffer pool size, and set it to an appropriate value. A size that is reasonable for the database size and the nature of the application is a safe choice. The default value is usually far too small to be adequate.
db2 "get snapshot for all bufferpools"
In the output of the database snapshot or buffer pool snapshot, find the "logical reads" and "physical reads" counters and use them to calculate the buffer pool hit ratio, which can help in tuning the buffer pool:
The buffer pool hit ratio indicates the percentage of time the database manager could service a page request without loading the page from disk (that is, the page was already in the buffer pool). The higher the hit ratio, the lower the frequency of disk I/O. The hit ratio is calculated as follows:
Hit ratio = (1 - (Buffer pool data physical reads + Buffer pool index physical reads)
              / (Buffer pool data logical reads + Buffer pool index logical reads)) * 100%
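As a quick sanity check, the hit-ratio arithmetic can be scripted against the four counters taken from the snapshot output. This is a minimal sketch; the counter values below are invented for illustration:

```python
def bufferpool_hit_ratio(data_logical, data_physical, index_logical, index_physical):
    """Buffer pool hit ratio in percent, from the four snapshot counters."""
    logical = data_logical + index_logical
    physical = data_physical + index_physical
    if logical == 0:  # no page requests yet, avoid division by zero
        return 0.0
    return (1 - physical / logical) * 100

# Hypothetical counters read from: db2 "get snapshot for all bufferpools"
ratio = bufferpool_hit_ratio(data_logical=120_000, data_physical=3_000,
                             index_logical=80_000, index_physical=1_000)
print(f"{ratio:.1f}%")  # 98.0%
```

A ratio like this, above the 95% target, would suggest the buffer pool is adequately sized for the workload observed in the snapshot.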
This calculation accounts for all pages (index and data) cached by the buffer pool. Ideally the ratio should be above 95%, and as close to 100% as possible. To raise the hit ratio, try the following:
1. Increase the buffer pool size.
2. Consider allocating multiple buffer pools: if possible, one for the tablespace of each frequently accessed large table and one for the small tables. Then experiment with buffer pools of different sizes to see which combination gives the best performance.
3. Avoid allocating more memory to the buffer pools if the extra memory does not improve performance. Buffer pool sizes should be decided based on snapshot information taken from the test environment.
4. A buffer pool that is too small causes excessive, unnecessary physical I/O. One that is too large puts the system at risk of operating-system paging and wastes CPU cycles managing the over-allocated memory. The right size lies somewhere between "too small" and "too large", at the point where returns begin to diminish.
SQL for optimal performance
A bad SQL statement can ruin everything. A single, relatively simple SQL statement can bring a well-tuned database and machine to their knees. For many such statements, no amount of tuning the DB2 UDB configuration parameters can correct the high cost caused by the bad SQL.
Worse, DBAs are often unable to change the SQL (typically because it is supplied by an application vendor). That leaves the DBA only three options:
1. Change or add indexes
2. Change clustering
3. Change catalog statistics
A robust application may consist of thousands of different SQL statements, executed with frequencies that vary with the application's functions and the day's business needs. The actual cost of a SQL statement is the cost of one execution multiplied by the number of executions.
A major task for every DBA is to identify the statements with the highest "actual cost" and to reduce that cost.
The resource cost of executing a SQL statement can be measured with the native DB2 Explain utility, with tools from third-party vendors, or from DB2 UDB SQL Event Monitor data. Execution frequency, however, can only be learned through careful, time-consuming analysis of the DB2 UDB SQL Event Monitor data.
Optimal performance requires not only fixing high-cost SQL statements but also making sure the underlying physical infrastructure is adequate. Peak performance is reached only when all the tuning knobs are set properly, memory is allocated effectively to the pools and heaps, and I/O is spread evenly across the disks.
A mistake that cannot be overlooked -- locks
These lock-related controls are database configuration parameters:
LOCKLIST specifies the amount of storage allocated to the lock list. There is one lock list per database, containing the locks held by all applications concurrently connected to that database. Locking is the mechanism the database manager uses to control concurrent access to data by multiple applications. Both rows and tables can be locked. Each lock requires 32 or 64 bytes of lock-list space, depending on whether other locks are already held on the object:
1. 64 bytes are required to hold a lock on an object on which no other locks are held.
2. 32 bytes are required to record a lock on an object on which a lock is already held.
MAXLOCKS defines the percentage of the lock list that one application may fill before the database manager performs lock escalation. When the share of the lock list used by one application reaches MAXLOCKS, the database manager escalates that application's locks, replacing row locks with table locks and thereby reducing the number of locks in the list. Escalation also occurs when the lock list runs out of space. To decide which locks to escalate, the database manager scans the application's lock list for the tables holding the most row locks. If replacing those row locks with a single table lock brings the application back under the MAXLOCKS value, escalation stops; otherwise it continues until the share of the lock list held falls below MAXLOCKS. MAXLOCKS multiplied by MAXAPPLS cannot be less than 100.
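The escalation trigger described above can be sketched numerically. The sketch below assumes, for illustration only, a LOCKLIST of 1000 4 KB pages and a MAXLOCKS of 20, and uses the 32/64-byte lock record sizes given earlier:

```python
PAGE_SIZE = 4096        # LOCKLIST is configured in 4 KB pages
FIRST_LOCK_BYTES = 64   # first lock on an object
EXTRA_LOCK_BYTES = 32   # each additional lock on the same object

def escalation_threshold_locks(locklist_pages, maxlocks_pct):
    """Rough upper bound on the locks one application may hold before
    row locks are escalated to a table lock. Worst case assumed: every
    lock is the first lock on its object (64 bytes each)."""
    locklist_bytes = locklist_pages * PAGE_SIZE
    app_budget = locklist_bytes * maxlocks_pct / 100
    return int(app_budget // FIRST_LOCK_BYTES)

# Hypothetical configuration: LOCKLIST = 1000 pages, MAXLOCKS = 20 (%)
print(escalation_threshold_locks(1000, 20))  # 12800
```

So under these illustrative settings, an application holding roughly 12,800 first-on-object row locks would hit the MAXLOCKS share and trigger escalation.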
Although the escalation process itself does not take long, locking entire tables (as opposed to individual rows) reduces concurrency, and overall database performance may degrade because of subsequent accesses to the tables affected by the escalation.
The default value of LOCKTIMEOUT is -1, which means no lock timeouts occur (for OLTP applications this can be disastrous). Surprisingly many DB2 users nevertheless run with LOCKTIMEOUT = -1. Set LOCKTIMEOUT instead to a short value such as 10 or 15 seconds: waiting a long time on locks produces an avalanche effect on other locks.
First, run the following command to check the value of LOCKTIMEOUT:
db2 "get db cfg for DBNAME"
and look for the line containing the following text:
Lock timeout (sec) (LOCKTIMEOUT) = -1
If the value is -1, consider changing it to 15 seconds with the following command (but first check with the application developers or vendor that the application can handle lock timeouts):
db2 "update db cfg for DBNAME using LOCKTIMEOUT 15"
You should also monitor the number of lock waits, the lock wait time, and the amount of lock list memory in use. Run the following command:
db2 "get snapshot for database on DBNAME"
If "Lock list memory in use (Bytes)" exceeds 50% of the defined LOCKLIST size, increase the LOCKLIST database configuration parameter (measured in 4 KB pages).
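The 50% rule of thumb is easy to script against the snapshot value. A minimal sketch, with invented sample numbers:

```python
PAGE_SIZE = 4096  # LOCKLIST is configured in 4 KB pages

def locklist_needs_increase(lock_memory_in_use_bytes, locklist_pages):
    """True if 'Lock list memory in use (Bytes)' from the database
    snapshot exceeds 50% of the configured LOCKLIST size."""
    return lock_memory_in_use_bytes > 0.5 * locklist_pages * PAGE_SIZE

# Hypothetical snapshot: 2.5 MB in use against LOCKLIST = 1000 pages (~4 MB)
print(locklist_needs_increase(2_500_000, 1000))  # True
```

A True result here would suggest adding pages to LOCKLIST before lock escalations start eating into concurrency.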
Sorting -- SORTHEAP
SORTHEAP is a database configuration parameter that defines the maximum number of private memory pages used by a private sort, or the maximum number of shared memory pages used by a shared sort. For private sorts this parameter affects agent private memory; for shared sorts it affects database shared memory.
Each sort has its own sort heap, allocated by the database manager as needed, in which the data is sorted. If directed by the optimizer, a sort heap smaller than the size specified by this parameter may be allocated.
SHEAPTHRES is a database manager configuration parameter. Private and shared sorts draw their memory from different sources. The size of the shared sort memory area is statically predetermined from the SHEAPTHRES value at the time of the first connection to the database; the private sort memory area is unrestricted in size. SHEAPTHRES applies differently to private and shared sorts:
For private sorts, SHEAPTHRES is an instance-wide "soft" limit on the total memory that private sorts may consume at any given time. When total private-sort memory consumption for the instance reaches this limit, the memory allocated to subsequent private sort requests is considerably reduced.
For shared sorts, SHEAPTHRES is a database-wide "hard" limit on the total memory that shared sorts may consume at any given time. When this limit is reached, further shared-sort memory requests are refused until total shared-sort memory consumption falls back below the SHEAPTHRES limit.
Operations that use the sort heap include hash joins and operations on tables in memory. Defining the threshold explicitly prevents the database manager from using excessive amounts of memory for large numbers of sorts.
Suggestions
Use the database system monitor to track sorting activities.
Use appropriate indexes to minimize the usage of the sorting heap.
Increase the SORTHEAP value when large sorting is required frequently.
If you increase SORTHEAP, determine whether the SHEAPTHRES parameter in the database manager configuration file also needs adjusting.
The optimizer uses the sort heap size when choosing access paths, so consider rebinding applications (with the REBIND PACKAGE command) after changing this parameter.
Ideally, SHEAPTHRES should be set to a reasonable multiple of the largest SORTHEAP in the database manager instance, and it must be at least twice the largest SORTHEAP defined for any database in the instance.
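The sizing rule above (SHEAPTHRES at least twice the largest SORTHEAP in the instance) can be checked in a few lines. The parameter values below are illustrative, not recommendations:

```python
def sheapthres_ok(sheapthres_pages, sortheap_pages_per_db):
    """Check that SHEAPTHRES is at least twice the largest SORTHEAP
    defined for any database in the instance (all values in pages)."""
    return sheapthres_pages >= 2 * max(sortheap_pages_per_db)

# Hypothetical instance: three databases with SORTHEAP 256, 512, and 1024 pages
print(sheapthres_ok(4096, [256, 512, 1024]))  # True
print(sheapthres_ok(1500, [256, 512, 1024]))  # False
```

In the second case, SHEAPTHRES would need to be raised to at least 2048 pages to satisfy the rule.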
How to change these parameters
To change the values of SORTHEAP and SHEAPTHRES, run the following command:
-- SORTHEAP should be changed for each individual database --
db2 "update db cfg for DB_NAME using SORTHEAP a_value"
-- SHEAPTHRES is a database manager parameter --
db2 "update dbm cfg using SHEAPTHRES b_value"
Investigation steps
OLTP applications should not be performing large sorts; they are too costly in CPU and I/O. Usually the default SORTHEAP size (256 4 KB pages) is sufficient, and for high-concurrency OLTP you may even want to lower this default. When further investigation is needed, issue the following command:
db2 "update monitor switches using sort on"
Then let the application run for a while, and enter:
db2 "get snapshot for database on DBNAME"
From this output you can calculate the number of sorts per transaction and the percentage of sorts that overflowed the memory available for sorting:
SortsPerTransaction = (Total sorts) / (Commit statements attempted + Rollback statements attempted)
PercentSortOverflow = (Sort overflows * 100) / (Total sorts)
Rule of thumb: if SortsPerTransaction is greater than 5, it probably indicates too many sorts per transaction. If PercentSortOverflow is greater than 3%, serious, unexpected large sorts may be occurring. When that happens, increasing SORTHEAP merely hides the performance problem rather than fixing it; the correct fix is to improve the access plans of the problematic SQL statements by adding the right indexes.
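The two ratios and the rules of thumb above can be combined into a small checker. A sketch only; the snapshot numbers are invented:

```python
def sort_health(total_sorts, sort_overflows, commits, rollbacks):
    """Compute SortsPerTransaction and PercentSortOverflow from the
    database snapshot counters and flag them against the rules of thumb
    (> 5 sorts per transaction, > 3% sort overflows)."""
    transactions = commits + rollbacks
    sorts_per_txn = total_sorts / transactions if transactions else 0.0
    pct_overflow = sort_overflows * 100 / total_sorts if total_sorts else 0.0
    return {
        "sorts_per_transaction": sorts_per_txn,
        "percent_sort_overflow": pct_overflow,
        "too_many_sorts": sorts_per_txn > 5,
        "overflow_problem": pct_overflow > 3,
    }

# Hypothetical snapshot: 12000 sorts, 600 overflows, 1800 commits, 200 rollbacks
print(sort_health(12_000, 600, 1_800, 200))
```

With these sample numbers both flags come out True (6 sorts per transaction, 5% overflow), which per the rule of thumb would point at missing indexes rather than at SORTHEAP.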
These DB2 performance-tuning tips are very practical. If you doubt them, try them against your own system and compare the results.