Top 10 DB2 Optimization Techniques to Make Your Database Fly


To help DB2 DBAs avoid performance disasters and achieve high performance, I have summarized a troubleshooting process for our customers, users, and DB2 experts. The following describes in detail the 10 most important performance-improvement techniques for DB2 UDB e-commerce OLTP applications in Unix, Windows, and OS/2 environments, with a summary at the end. Optimization is an important part of administering a DB2 database: it keeps the database running smoothly. I hope the techniques below will help you.

10. Monitoring Switch

Make sure that the monitor switches are enabled. If they are not, you cannot obtain the performance information you need. To enable them, run the following command:

1 db2 "update monitor switches using2 lock ON sort ON bufferpool ON uow ON3 table ON statement ON"

9. Agent

Ensure that there are sufficient DB2 agents to handle the workload. To find information about the agent, run the following command:

1 db2 "get snapshot for database manager" and find the following lines: 1 High water mark for agents registered = 72 High water mark for agents waiting for a token = 03 Agents registered = 74 Agents waiting for a token = 05 Idle agents = 56 Agents assigned from pool = 1587 Agents created from empty Pool = 78 Agents stolen from another application = 09 High water mark for coordinating agents = 710 Max agents overflow = 0 if you find Agen Ts waiting for a token or Agents stolen from another application is not 0, so please increase the number of proxies available to the Database Manager (MAXAGENTS and/or MAX_COORDAGENTS ).

8. Maximum number of open files

DB2 tries to be a good citizen within the constraints of operating system resources. One of its good-citizen acts is to place an upper limit on the number of files open at any one time. The database configuration parameter MAXFILOP restricts how many files DB2 can have open simultaneously. When the number of open files reaches this limit, DB2 starts closing and reopening its tablespace files (including raw devices). Constantly opening and closing files slows SQL response time and burns CPU cycles. To check whether DB2 is closing files, run the following command:

1 db2 "get snapshot for database on DBNAME" and find the following rows:

Database files closed = 0

If the value of this counter is not 0, increase MAXFILOP until the constant opening and closing of files stops:

1 db2 "update db cfg for DBNAME using maxfilop n"

7. Lock

The default value of LOCKTIMEOUT is -1, which means no lock timeouts will occur (for OLTP applications, this can be disastrous). Even so, I often find DB2 users running with LOCKTIMEOUT = -1. Set LOCKTIMEOUT to a short value such as 10 or 15 seconds: waiting too long on a lock can produce an avalanche effect on other locks.

First, run the following command to check the value of LOCKTIMEOUT:

1 db2 "get db cfg for DBNAME" and search for rows containing the following text:

Lock timeout (sec) (LOCKTIMEOUT) = -1

If the value is -1, consider using the following command to change it to 15 seconds (be sure to first ask the application developers or your vendor to confirm that the application can handle lock timeouts):

1 db2 "update db cfg for DBNAME using LOCKTIMEOUT 15" you should also monitor the number of lock waits, lock wait times, and the amount of memory using the lock list memory. Run the following command:

1 db2 "get snapshot for database on DBNAME:

Locks held currently = 0
Lock waits = 0
Time database waited on locks (ms) = 0
Lock list memory in use (Bytes) = 576
Deadlocks detected = 0
Lock escalations = 0
Exclusive lock escalations = 0
Agents currently waiting on locks = 0
Lock Timeouts = 0

If Lock list memory in use (Bytes) exceeds 50 percent of the defined LOCKLIST size, increase the number of 4 KB pages in the LOCKLIST database configuration parameter.
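
For example, if lock list memory in use were approaching half of the configured LOCKLIST, you might enlarge the lock list; the value below is in 4 KB pages and purely illustrative:

db2 "update db cfg for DBNAME using LOCKLIST 200"

Raising LOCKLIST (and, where appropriate, MAXLOCKS) also reduces the chance of the lock escalations that the snapshot counters above would reveal.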

6. Temporary tablespace

To get good parallel I/O performance when DB2 uses TEMPSPACE for sorts, hash joins, and other database operations, the temporary tablespace should have at least three containers on three different disk drives.

To know how many containers your temporary tablespace has, run the following command:

1 db2 "list tablespaces show detail" queries the TEMPSPACE tablespace definition similar to the following example:

Tablespace ID = 1
Name = TEMPSPACE1
Type = System managed space
Contents = Temporary data
State = 0x0000
  Detailed explanation: Normal
Total pages = 1
Useable pages = 1
Used pages = 1
Free pages = Not applicable
High water mark (pages) = Not applicable
Page size (bytes) = 4096
Extent size (pages) = 32
Prefetch size (pages) = 96
Number of containers = 3

Note that Number of containers is 3 and that Prefetch size is three times the Extent size. For optimal parallel I/O performance, it is important that the Prefetch size be a multiple of the Extent size, with the multiple equal to the number of containers.

To find the container definition, run the following command:

1 db2 "list tablespace containers for 1 show detail" 1 refers to tablespace ID #1, which is TEMPSPACE1 In the example just given.

5. Memory sorting

OLTP applications should not perform large sorts; they are extremely expensive in CPU, I/O, and elapsed time, and they will slow any OLTP application. Therefore, the default SORTHEAP size of 256 4 KB pages (1 MB) should be more than adequate. You should also track the number of sort overflows and the number of sorts per transaction.

Run the following command:

1 Db2 "get snapshot for database on DBNAME" and find the following lines:

Total sort heap allocated = 0
Total sorts = 1
Total sort time (ms) = 8
Sort overflows = 0
Active sorts = 0
Commit statements attempted = 3
Rollback statements attempted = 0

From these counters, compute:

Transactions = Commit statements attempted + Rollback statements attempted
SortsPerTX = Total sorts / Transactions
PercentSortOverflows = (Sort overflows * 100) / Total sorts

If PercentSortOverflows is greater than 3 percent, the application SQL has serious, perhaps unexpected, sorting problems. Because the mere presence of overflows indicates that large sorts are occurring, the ideal is to find no sort overflows at all, or at least a percentage below 1 percent.

If too many sort overflows occur, the emergency fix is to increase SORTHEAP. However, that only masks the real performance problem. Instead, identify the SQL that causes the sorting and change the SQL, the indexes, or the clustering to avoid or reduce the sort cost.
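
If you must apply the emergency fix, a sketch follows (the value is in 4 KB pages and illustrative only; 512 pages is 2 MB):

db2 "update db cfg for DBNAME using SORTHEAP 512"

Remember that SORTHEAP is allocated per concurrent sort, so over-sizing it multiplies memory consumption across the whole workload.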

If SortsPerTX is greater than 5 (as a rule of thumb), the number of sorts per transaction may be too high. Although some application transactions perform many small in-memory sorts (which never overflow and execute quickly), they still consume too much CPU. In my experience, when SortsPerTX is large, these machines are usually CPU-bound. Identifying the SQL that causes the sorting and improving the access plans (through indexing, clustering, or changing the SQL) is extremely important for improving transaction throughput.

4. Table Access

For each table, determine the number of rows that DB2 reads for each transaction. You must issue two commands:

1 db2 "get snapshot for database on DBNAME" 2 db2 "get snapshot for tables on DBNAME" after the first command is issued, determine how many transactions have occurred (by taking the sum of Commit statements attempted and Rollback statements attempted-see Tip 3 ).

After issuing the second command, divide the number of rows read by the number of transactions (RowsPerTX). OLTP applications should generally read 1 to 20 rows from each table per transaction. If you find hundreds of rows being read per transaction, scans are occurring and you may need to create an index. (Sometimes running runstats with distribution and detailed indexes also provides a solution.)
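
A sketch of both remedies follows, using the table from the sample output below (the column and index names are hypothetical, not from the article):

db2 "runstats on table INST1.DGI_SALES_LOGS_TB with distribution and detailed indexes all"
db2 "create index INST1.DGI_SALES_IX1 on INST1.DGI_SALES_LOGS_TB (SALE_DATE)"

Try fresh statistics first; if the access plan does not improve, an index on the scanned predicate column is the usual next step (followed by another runstats).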

Sample output of db2 "get snapshot for tables on DBNAME":

Snapshot timestamp = 09-25-2000 4:47:09.970811
Database name = DGIDB
Database path = /fs/inst1/inst1/NODE0000/SQL00001/
Input database alias = DGIDB
Number of accessed tables = 8

Table List
Table Schema = INST1
Table Name = DGI_SALES_LOGS_TB
Table Type = User
Rows Written = 0
Rows Read = 98857
Overflows = 0
Page Reorgs = 0

A large number of Overflows may mean that you need to reorganize the table. An overflow occurs when DB2 must place a row on a page other than its original one because an update changed the row's width.
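
If Overflows grows large, reorganizing the table and then refreshing its statistics usually clears the overflow pointers. A minimal sketch using the table from the sample output:

db2 "reorg table INST1.DGI_SALES_LOGS_TB"
db2 "runstats on table INST1.DGI_SALES_LOGS_TB with distribution and detailed indexes all"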

3. Tablespace Analysis

Table space snapshots are extremely valuable for understanding what data is accessed and how it is accessed. To get a tablespace snapshot, run the following command:

1 db2 "get snapshot for tablespaces on DBNAME"

For each tablespace, answer the following questions:

What is the average read time (ms)?

What is the average write time (ms)?

What is the percentage of asynchronous (prefetch) physical I/O relative to synchronous (random) physical I/O?

What is the buffer pool hit rate for each tablespace? (A formula for computing it from snapshot counters follows this list.)

How many physical pages are read per minute?

How many physical and logical pages are read per transaction?
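
To answer the hit-rate question, a commonly used formula, computed from the bufferpool counters in the same tablespace snapshot, is:

hit rate (%) = (1 - (pool data physical reads + pool index physical reads) / (pool data logical reads + pool index logical reads)) * 100

You can pull the raw counters with, for example:

db2 "get snapshot for tablespaces on DBNAME" | grep -i "buffer pool"

This assumes the bufferpool monitor switch from tip 10 is on; otherwise these counters report zero.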

For all tablespaces, answer the following questions:

Which tablespace has the slowest read and write times? Why? Are its containers on a slow disk? Are the container sizes equal? Comparing asynchronous access with synchronous access, are the access patterns consistent with what you expected? Randomly read tables should be placed in tablespaces set up for random reads, showing a high percentage of synchronous reads, a high buffer pool hit rate, and a low rate of physical I/O.

For each tablespace, make sure that the prefetch size equals the extent size multiplied by the number of containers. Run the following command:

1 db2 "list tablespaces show detail" if needed, you can change the prefetch size for a given tablespace. Run the following command to check the container definition:

1 db2 "list tablespace containers for N show detail" here, N is the tablespace ID number.

2. Buffer Pool Optimization

I often find DB2 UDB sites where, although the machine has 2, 4, or even 8 GB of memory, the DB2 database has only one buffer pool (IBMDEFAULTBP) of just 16 MB!

If this is the case at your site, create a buffer pool for the SYSCATSPACE catalog tablespace, create one for the TEMPSPACE tablespace, and create at least two more: BP_RAND and BP_SEQ. Tablespaces accessed randomly should be assigned to the buffer pool for random access (BP_RAND); tablespaces accessed sequentially (with asynchronous prefetch I/O) should be assigned to the buffer pool for sequential access (BP_SEQ). You can create additional buffer pools based on the performance objectives of particular transactions; for example, you can make a buffer pool large enough to hold an entire "hot" (frequently accessed) table. When large tables are involved, some DB2 users have had great success placing the indexes of important tables in a separate index buffer pool (BP_IX).
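
A minimal sketch of such a layout (sizes are in 4 KB pages and purely illustrative; USERSPACE1 stands in for whichever of your tablespaces is randomly accessed):

db2 "create bufferpool BP_RAND size 25000 pagesize 4k"
db2 "create bufferpool BP_SEQ size 25000 pagesize 4k"
db2 "alter tablespace USERSPACE1 bufferpool BP_RAND"

Depending on your DB2 version, newly created buffer pools may not become active until the database is deactivated and reactivated.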

A buffer pool that is too small generates excessive, unnecessary physical I/O. A buffer pool that is too large puts the system at risk of operating-system paging and wastes CPU cycles managing the over-allocated memory. The right size lies at a balance point between the two, at the point where returns begin to diminish. If you do not have a tool that automates diminishing-returns analysis, you should methodically test buffer pool performance (hit rates, I/O times, and physical I/O read rates) at increasing buffer pool sizes until you reach the optimal size. Because businesses change and grow, re-evaluate the "optimal size" decision periodically.
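
A sketch of one step in that measured approach (the new size is illustrative):

db2 "alter bufferpool BP_RAND size 50000"

After each increase, re-measure the hit rate and physical I/O rate, and stop growing the pool when the improvement flattens out.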

1. SQL Cost Analysis

One bad SQL statement can ruin your whole day. More than once, I have seen a relatively simple SQL statement mess up an otherwise well-tuned database and machine. For many of these statements, no amount of tuning DB2 UDB configuration parameters can correct the high cost incurred by bad SQL.

Worse, DBAs are often constrained from changing the SQL (usually because it is supplied by an application vendor such as SAP, PeopleSoft, or Siebel). This leaves the DBA three options:

1. Change or add indexes

2. Change the clustering

3. Change the catalog statistics

In addition, today's robust applications consist of thousands of different SQL statements whose execution frequency varies with the application's functions and daily business needs. The real cost of an SQL statement is the cost of one execution multiplied by its execution frequency.

A major task for every DBA is to identify the statements with the highest real cost and reduce the cost of those statements.

You can compute the resource cost of executing a single SQL statement by using the native DB2 Explain utility, tools from third-party vendors, or DB2 UDB SQL Event Monitor data. However, statement execution frequency can only be learned through careful, time-consuming analysis of DB2 UDB SQL Event Monitor data.
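
A sketch of gathering Explain data for one suspect statement with the native facilities (the query text is a placeholder, and the explain tables must exist first, e.g. created from the EXPLAIN.DDL file shipped under sqllib/misc):

$ db2 -tf ~/sqllib/misc/EXPLAIN.DDL       # one-time creation of the explain tables
$ db2 "set current explain mode explain"
$ db2 "SELECT ..."                         # the statement under study (explained, not executed, in this mode)
$ db2 "set current explain mode no"
$ db2exfmt -d DBNAME -1 -o explain.txt     # format the captured plan using db2exfmt's defaults (-1)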

When studying SQL statements, the DBA uses the following standard process:

1. Create an SQL Event Monitor that writes to a file:

$ Db2 "create event monitor SQLCOST for statements write ..."

2. Activate the event monitor (make sure sufficient free disk space is available):

$ Db2 "set event monitor SQLCOST state = 1"

3. Run the application program.

4. Deactivate the event monitor:

$ Db2 "set event monitor SQLCOST state = 0"

5. Use the db2evmon tool shipped with DB2 to format the raw SQL Event Monitor data (hundreds of megabytes of free disk space may be needed, depending on SQL throughput):

$ db2evmon -db DBNAME -evm SQLCOST > sqltrace.txt

6. Browse the formatted file, looking for notably large costs (a time-consuming process):

$ more sqltrace.txt

7. Perform a more complete analysis of the formatted file that identifies each unique statement (independent of literal values), each unique statement's frequency (how many times it appears), and its total CPU, sort, and other resource costs. Thoroughly analyzing even a 30-minute sample of an application's SQL activity this way can take a week or more.
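
One crude way to approximate the frequency count before investing in a full analysis (this assumes, as a guess about the db2evmon output layout, that statement text appears on lines labeled "Text"):

$ grep "Text" sqltrace.txt | sort | uniq -c | sort -rn | head -25

This prints the 25 most frequent statement-text lines; it will not merge statements that differ only in literal values, which a full analysis (or a vendor tool) handles.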

To reduce the time required to determine high-cost SQL statements, you can consider many available information sources:

From tip 4, compute the number of rows read from each table per transaction. If that number looks large, search the SQL Event Monitor formatted output for the table name in question (this narrows the search and saves some time); you may find the problem statement this way.

From tip 3, compute the asynchronous read percentage and the physical I/O read rate for each tablespace. If a tablespace shows a very high asynchronous read percentage together with a physical I/O read rate far above average, one or more tables in that tablespace are probably being scanned. Query the catalog to find which tables are assigned to the suspect tablespace (assigning one table per tablespace provides optimal performance detection), then search for those tables in the SQL Event Monitor formatted output. These steps can also narrow the search for high-cost SQL statements.

Observe the DB2 Explain information for each SQL statement the application executes. However, I have found that high-frequency, low-cost statements often compete for machine capacity and limit the performance that can be achieved. If analysis time is short and maximum performance is critical, consider vendor-supplied tools, which can quickly and automatically identify resource-intensive SQL. For example, Database-GUYS Inc.'s SQL-GUY tool provides precise, real-time, ranked cost analysis of SQL statements.

Ongoing Tuning

Optimal performance requires not only eliminating high-cost SQL statements but also making sure the underlying physical infrastructure is appropriate. Peak performance comes only when all the tuning knobs are set correctly, memory is allocated effectively to the pools and heaps, and I/O is spread evenly across the disks. Although measurement and tuning take time, DBAs who follow these 10 recommendations will be very successful at satisfying internal and external DB2 customers. Because e-business changes and grows, even the best-managed database needs regular tuning. A DBA's work is never done!

Quick review of the top 10 tips

Use sufficient agents for the workload.

Do not let DB2 close and open files unnecessarily.

Do not allow long lock waits.

Ensure parallel I/O capability for the database's TEMPSPACE tablespace.

Manage DB2 sort memory conservatively, and do not mask sort problems with a large SORTHEAP.

Analyze table access activity and identify tables with unusually high rows read per transaction or high overflows.

Analyze the performance characteristics of each tablespace, and work to improve performance in the tablespaces with the slowest read times, longest wait times, highest physical I/O read rates, worst hit rates, and access patterns that deviate from expectations.

Create multiple buffer pools and assign tablespaces to them according to shared access patterns.

Examine DB2 UDB SQL Event Monitor information to find the SQL statements that consume the most computing resources, and take the appropriate corrective action.

Once high-cost SQL is eliminated, promptly re-evaluate the configuration and physical design settings.

That concludes these DB2 database optimization techniques. Master the techniques described above and I believe they will serve you well in your future work. I hope you have gained something useful from them.
