Optimizing a database makes it faster. This article introduces six commonly used DB2 optimization techniques for your reference. If you are not yet familiar with DB2, I believe this article will be of great help to you.
1. Create indexes on tables that will be queried. An index can be created either before or after the data is inserted, but in both cases RUNSTATS must be run afterwards.
Note: if the index is created before the data is inserted, the index must be maintained during every insert. This requires a large amount of log space and slows the load down, unless the data is inserted without logging. If the index is created after the load completes, the table's statistics are not yet updated, so the optimizer produces a poor execution plan that does not use the index. Running RUNSTATS ON TABLE asiainfo.aaaa AND INDEXES ALL refreshes the statistics so that the execution plan considers the index, and queries become fast.
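The load-then-index sequence described above can be sketched as the following DB2 commands (the index name and column are illustrative; the table asiainfo.aaaa is the one from the note):

```sql
-- Bulk-load first, then build the index once, then refresh statistics
-- so the optimizer knows the index exists and how selective it is.
CREATE INDEX asiainfo.idx_aaaa_id ON asiainfo.aaaa (id);
RUNSTATS ON TABLE asiainfo.aaaa AND INDEXES ALL;
```

Without the RUNSTATS step, the catalog still reports the pre-load statistics and the new index is unlikely to be chosen.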
2. Create large tables in a multi-node tablespace, and index them as well.
Note: each node in our existing DB2 data warehouse has 2 CPUs and 4 GB of memory, and the DIM tablespace, which stores the dimension tables, is a single-node tablespace. A query against a table in that tablespace can therefore use at most 2 CPUs and 4 GB of memory, and it competes for those same resources with every other table in the node, so resources are tight. We recommend not placing large tables in this tablespace; instead, define a partitioning key and store them in a multi-node tablespace. That way the table can be scanned by all 32 nodes in parallel, with the results finally gathered at node 0 for display, which is certainly much faster. And although 32-way parallelism is already good, adding an index makes it faster still.
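A minimal sketch of placing a large table in a multi-partition tablespace with a partitioning key, then indexing it. All names here (the tablespace ts_multi_node, the dw schema, and the columns) are illustrative, and the DISTRIBUTE BY HASH clause is the DB2 LUW syntax for choosing the partitioning key (older versions use PARTITIONING KEY):

```sql
-- Large fact table spread across all data nodes by hashing the key,
-- so a scan runs on every node in parallel.
CREATE TABLE dw.fact_sales (
    sale_id  BIGINT       NOT NULL,
    cust_id  INTEGER,
    amount   DECIMAL(12,2)
)
IN ts_multi_node               -- tablespace defined on the multi-node group
DISTRIBUTE BY HASH (sale_id);  -- partitioning key: pick a high-cardinality column

-- Index on the common join/filter column for additional speed
CREATE INDEX dw.idx_fact_sales_cust ON dw.fact_sales (cust_id);
```

Choosing a high-cardinality column as the partitioning key keeps the data evenly spread, so no single node becomes a hot spot.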
3. Insert into heavily loaded tables without logging.
Note: to guarantee consistency and the ability to roll back, the database writes log records for every insert, update, and delete so that a failed operation can be undone. Under high concurrency, or for very large operations, this leads to contention for the log and makes the operation very slow. If you insert, update, or delete without logging, very little log is written, but if the operation fails it cannot be rolled back: consistency is no longer guaranteed, and the table can only be dropped and rebuilt.
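In DB2 the usual way to do this is the NOT LOGGED INITIALLY table attribute, which suppresses logging for the remainder of the current unit of work. A sketch, with illustrative table names:

```sql
-- Suppress logging for this unit of work. If any statement in it fails,
-- the whole table is marked unusable and must be dropped and recreated,
-- so use this only for data you can rebuild from its source.
ALTER TABLE dw.stage_sales ACTIVATE NOT LOGGED INITIALLY;
INSERT INTO dw.stage_sales SELECT * FROM dw.src_sales;
COMMIT;   -- logging resumes after the commit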
4. Use table-level locks to reduce the number of locks.
Note: the database limits the maximum number of locks, and each lock occupies a certain amount of memory. A large number of row locks therefore consumes a lot of memory and can exhaust the lock list.
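DB2 offers two ways to get a single table lock instead of many row locks; both are sketched below with an illustrative table name:

```sql
-- Option 1: make the table always lock at table granularity
ALTER TABLE dw.stage_sales LOCKSIZE TABLE;

-- Option 2: take an explicit table lock for one batch operation only
LOCK TABLE dw.stage_sales IN EXCLUSIVE MODE;
```

The trade-off is concurrency: a table lock uses one lock-list entry but blocks other writers (and, in exclusive mode, readers) for the duration, so it suits batch windows rather than OLTP workloads.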
5. When populating a temporary table, insert only the data that will actually be used.
Note: programs often copy working data into a temporary table to improve speed, but if a lot of unneeded data is inserted as well, the temporary table becomes very large and the benefit is lost. Insert only the rows and columns you will use, and index the temporary table where possible; this can improve speed dramatically.
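A sketch using a DB2 declared global temporary table, keeping only the needed columns and a recent date range, then indexing it. All names, columns, and the 30-day filter are illustrative:

```sql
-- Declare a session-private temporary table shaped like the data we need
DECLARE GLOBAL TEMPORARY TABLE session.tmp_cust (
    cust_id  INTEGER,
    amount   DECIMAL(12,2)
) ON COMMIT PRESERVE ROWS NOT LOGGED;

-- Insert only the columns and rows that will actually be used
INSERT INTO session.tmp_cust (cust_id, amount)
SELECT cust_id, amount
FROM dw.fact_sales
WHERE sale_date >= CURRENT DATE - 30 DAYS;

-- An index on the temporary table speeds up the later lookups/joins
CREATE INDEX session.idx_tmp_cust ON session.tmp_cust (cust_id);
```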
6. Use LEFT JOIN correctly.
In the ON condition, write only the join condition and any restrictions on the left-joined (right-side) table. Do not put restrictions on the primary (left) table there; those belong in the WHERE clause. Writing them in the ON condition is not only slow, it can also produce unexpected results.
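A sketch of the rule, with illustrative table and column names:

```sql
SELECT c.cust_id, s.amount
FROM dw.customer c
LEFT JOIN dw.fact_sales s
       ON s.cust_id = c.cust_id   -- join condition: ON clause
      AND s.status  = 'PAID'      -- restriction on the joined table: ON clause
WHERE c.region = 'EAST';          -- restriction on the primary table: WHERE clause
```

If c.region = 'EAST' were moved into the ON clause, the left join would still return every customer row (with NULLs for non-matching sales columns), which is usually not what was intended.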