DB2 Optimization (simple edition)

Preparation: turn the monitor switches on
DB2 "Update monitor switches using
Lock on sort on bufferpool on UOW on
Table ON statement on "
Turning on the monitor switches lets the database manager collect the performance information used in the rest of this tutorial.
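As a quick check (a minimal sketch; it only reports the switch settings for the current session), you can confirm that the switches are now active:

# lists the current state of each monitor switch for this session
DB2 "Get monitor switches"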
The simplest and most effective: the buffer pool
A buffer pool is an area of memory used to temporarily hold database pages (table rows or index entries) as they are read and changed. The purpose of the buffer pool is to improve database performance: accessing data in memory is much faster than accessing it on disk, so the fewer times the database manager has to read from or write to disk, the better the performance. Configuring one or more buffer pools is the single most important tuning task, because almost all data access for applications connected to the database (excluding large objects and long field data) goes through the buffer pool.
By default, applications use the buffer pool IBMDEFAULTBP, which is created when the database is created. When the NPAGES value for a buffer pool in the SYSCAT.BUFFERPOOLS catalog view is -1, the DB2 database configuration parameter BUFFPAGE controls the size of that buffer pool. Otherwise, BUFFPAGE is ignored and the buffer pool is created with the number of pages specified by NPAGES.
For applications that use only one buffer pool, it is recommended that you change NPAGES to -1 so that BUFFPAGE controls the buffer pool size. This makes it easier to update and report the buffer pool size together with the other DB2 database configuration parameters.
Once you have made sure that BUFFPAGE really does control the buffer pool size, set it to an appropriate value. Based on the size of the database and the nature of the application, it is safe to set it to a reasonably large value; the default is usually very small and is unlikely to meet the requirements.
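As a minimal sketch of handing control to BUFFPAGE and resizing the default buffer pool (the database name dbname and the value 50000 are placeholders, and on some DB2 versions the new size only takes effect when the database is next activated):

# connect so that ALTER BUFFERPOOL can run; dbname is a placeholder
DB2 "Connect to dbname"
# NPAGES = -1 hands control of the size to the BUFFPAGE parameter
DB2 "Alter bufferpool IBMDEFAULTBP size -1"
# 50000 4 KB pages is only an illustrative value; size it to your memory
DB2 "Update db cfg for dbname using BUFFPAGE 50000"
DB2 "Terminate"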
DB2 "Get snapshot to all Bufferpools"
In the output of a database snapshot or buffer pool snapshot, look for the "logical reads" and "physical reads" counters so that you can compute the buffer pool hit ratio, which helps you tune the buffer pool:
The buffer pool hit ratio is the percentage of page requests for which the database manager did not need to load the page from disk (that is, the page was already in the buffer pool). The higher the hit ratio, the less often disk I/O is needed. Calculate the buffer pool hit ratio as follows:
Buffer pool hit ratio = (1 - (buffer pool data physical reads + buffer pool index physical reads) / (buffer pool data logical reads + buffer pool index logical reads)) * 100%
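For example, with hypothetical snapshot values of 1,000,000 data logical reads, 100,000 index logical reads, 40,000 data physical reads and 4,000 index physical reads, the hit ratio is (1 - (40,000 + 4,000) / (1,000,000 + 100,000)) * 100% = 96%.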
This calculation takes into account all the pages (index and data) cached in the buffer pool. Ideally, the ratio should be over 95% and as close to 100% as possible. To increase the buffer pool hit ratio, try the following approaches:
Increase the buffer pool size.
Consider allocating multiple buffer pools: if possible, give each table space that holds a frequently accessed large table its own buffer pool, and assign another buffer pool to a set of small tables; then experiment with different buffer pool sizes to see which combination gives the best performance (a sketch follows this list).
Avoid allocating more memory to the buffer pools if the extra memory does not improve performance. Buffer pool sizes should be decided on the basis of snapshot information taken from the test environment.
Too small a buffer pool generates excessive, unnecessary physical I/O. Too large a buffer pool puts the system at risk of operating system paging and wastes CPU cycles managing the over-allocated memory. The right size lies somewhere between "too small" and "too large", at the point where the returns start to diminish.
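As a minimal sketch of the multiple-buffer-pool approach (the names BP_BIGTAB and TS_BIGTAB and the size 20000 are hypothetical, and on some DB2 versions a new buffer pool only becomes active after the database is restarted):

# a dedicated pool, at the database's default page size, for the table space
# that holds a frequently accessed large table (size is illustrative only)
DB2 "Create bufferpool BP_BIGTAB size 20000"
# bind the existing table space to the new pool; their page sizes must match
DB2 "Alter tablespace TS_BIGTAB bufferpool BP_BIGTAB"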
Get the best performance from SQL
One bad SQL statement can ruin everything. A single, relatively simple SQL statement can bring a well-tuned database and machine to their knees. For many such statements, there is no DB2 UDB configuration parameter anywhere that can correct the high cost of bad SQL.
Worse, DBAs are often constrained by the fact that the SQL cannot be changed (for example, because it is supplied by an application vendor). That leaves the DBA only three paths:
1. Change or add indexes
2. Change clustering
3. Change catalog statistics
A robust application consists of thousands of different SQL statements. How often each statement executes depends on the functionality of the application and the day-to-day needs of the business. The real cost of an SQL statement is the cost of executing it once multiplied by the number of times it is executed.
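For example (hypothetical figures), a statement that costs 0.1 seconds per execution but runs 100,000 times a day accumulates 10,000 seconds of work, far more than a 30-second statement that runs only 10 times (300 seconds).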
The major challenge facing every DBA is to identify the statements with the highest "real cost" and to reduce the cost of those statements.
The resource cost of executing an SQL statement once can be calculated with the native DB2 Explain utility, with tools from some third-party vendors, or from DB2 UDB SQL Event Monitor data. However, execution frequency can only be determined through careful and time-consuming analysis of DB2 UDB SQL Event Monitor data.
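As a minimal sketch of capturing that data with an SQL statement event monitor (the monitor name stmtmon, the output path /tmp/stmtmon and the database name dbname are placeholders; the target directory must already exist):

DB2 "Connect to dbname"
# define and start a statement event monitor that writes to files
DB2 "Create event monitor stmtmon for statements write to file '/tmp/stmtmon'"
DB2 "Set event monitor stmtmon state 1"
# ... run the workload, then stop the monitor and format its output
DB2 "Set event monitor stmtmon state 0"
db2evmon -path /tmp/stmtmon > stmtmon.txt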
Optimal performance requires not only weeding out high-cost SQL statements, but also making sure that the underlying physical infrastructure is appropriate. The best performance is achieved when all the tuning knobs are set properly, memory is allocated effectively to the pools and heaps, and I/O is spread evenly across the disks.
Locks: not to be overlooked
The following lock-related controls are database configuration parameters:
LOCKLIST indicates the amount of storage allocated to the lock list. Each database has one lock list, which contains the locks held by all applications concurrently connected to the database. Locking is the mechanism the database manager uses to control concurrent access to data by multiple applications. Both rows and tables can be locked. Each lock requires 32 or 64 bytes of the lock list, depending on whether other locks are already held on the object:
64 bytes are required to hold a lock on an object on which no other locks are held.
32 bytes are required to record a lock on an object that already has a lock held on it.
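For example (hypothetical figures), a LOCKLIST of 100 4 KB pages provides 409,600 bytes, enough for roughly 6,400 first locks on distinct objects at 64 bytes each, or 12,800 additional locks at 32 bytes each.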
MAXLOCKS defines the percentage of the lock list that one application may fill before the database manager performs lock escalation. When the share of the lock list used by an application reaches MAXLOCKS, the database manager escalates that application's locks, which means replacing its row locks with table locks, thereby reducing the number of locks in the list. Lock escalation also occurs if the lock list simply runs out of space. The database manager decides which locks to escalate by looking through the application's lock list for the table with the most row locks. If replacing those row locks with a single table lock brings the application back below the MAXLOCKS value, escalation stops; otherwise it continues until the share of the lock list held by the application falls below MAXLOCKS. MAXLOCKS multiplied by MAXAPPLS cannot be less than 100.
Although the escalation process itself does not take much time, locking entire tables (rather than individual rows) reduces concurrency, and subsequent access to tables affected by lock escalation can drag down overall database performance.
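As a minimal sketch of checking and adjusting these parameters from a UNIX-style shell (dbname and the values 200 and 20 are placeholders, not recommendations):

# show the current lock-related settings
DB2 "Get db cfg for dbname" | grep -i -E "LOCKLIST|MAXLOCKS|LOCKTIMEOUT"
# raise the lock list to 200 4 KB pages and let one application use up to 20% of it
DB2 "Update db cfg for dbname using LOCKLIST 200 MAXLOCKS 20"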
The default value of LOCKTIMEOUT is -1, which means there is no lock timeout at all (potentially disastrous for OLTP applications), yet many DB2 users leave LOCKTIMEOUT at -1. Instead, set LOCKTIMEOUT to a short value such as 10 or 15 seconds; waiting too long on a lock can produce an avalanche effect of lock waits.
First, check the value of LOCKTIMEOUT with the following command:
DB2 "Get DB CFG for dbname"
and find the line that contains the following text:
Lock timeout (sec) (LOCKTIMEOUT) = -1
If the value is -1, consider using the following command to change it to 15 seconds (be sure to ask the application developer or vendor first to make sure the application can handle a lock timeout):
DB2 "Update db CFG for dbname using LOCKTIMEOUT 15"
You should also monitor the number of lock waits, the lock wait time, and the amount of lock list memory in use. Issue the following command:
DB2 "Get snapshot to database on dbname"
If the "Lock list memory in use (Bytes)" value exceeds 50% of the defined LOCKLIST size, increase the number of 4 KB pages in the LOCKLIST database configuration parameter.
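As a quick way to pull the relevant lines out of that snapshot (a sketch for a UNIX-style shell; dbname is a placeholder):

# shows lock waits, lock wait time, lock list memory in use and lock escalations
DB2 "Get snapshot for database on dbname" | grep -i "lock"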