This article describes several basic strategies and solutions worth understanding when optimizing a DB2 database in practice. I hope it will help you in your future work.
1. Create indexes on the tables that will be queried (note that the index can be created before the data is inserted, or after the data is inserted, in which case RUNSTATS is required):
Note: if the index is created before the data is inserted, the index is also maintained during the insert, which requires a large amount of log space and makes the insert slow; in that case you can insert the data in a non-logged manner. If the index is created after the insert is complete but the table's statistics are not updated, the optimizer has poor information, the execution plan is bad, and the index is not used. Running RUNSTATS ON TABLE asiainfo.aaaa AND INDEXES ALL refreshes the table and index statistics, so the execution plan takes the index into account and queries run fast.
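A minimal sketch of this sequence (the table name asiainfo.aaaa comes from the example above; the column and index names here are hypothetical):

    -- create the index after the bulk insert has finished
    CREATE INDEX asiainfo.idx_aaaa_cust ON asiainfo.aaaa (cust_id);

    -- refresh table and index statistics so the optimizer will consider the new index
    RUNSTATS ON TABLE asiainfo.aaaa AND INDEXES ALL;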
2. Create large tables in a multi-node tablespace and index them:
Note: in the existing DB2 data warehouse each node has 2 CPUs and 4 GB of memory, and the DIM tablespace is planned as the tablespace that stores the dimension tables, so it resides on a single node. A query against a table in this tablespace can therefore use at most those 2 CPUs and 4 GB of memory, which the other tablespaces on that node also compete for, so resources are tight. We recommend not placing large tables in this tablespace; instead, define a partitioning key and store them in a multi-node tablespace. The table can then be scanned on all 32 nodes in parallel and the results finally gathered on node 0 (the coordinator) for output, which is certainly much faster. Moreover, although 32-node parallelism is already good, adding an index makes it faster still; a sketch follows.
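As a sketch, assuming DB2 9.5+ syntax and hypothetical table, column, and tablespace names (older releases use a PARTITIONING KEY clause instead of DISTRIBUTE BY HASH):

    -- hypothetical fact table placed in a multi-node (partitioned) tablespace
    CREATE TABLE asiainfo.fact_sales (
        cust_id    INTEGER NOT NULL,
        sale_date  DATE,
        amount     DECIMAL(12,2)
    ) IN ts_fact_multi
      DISTRIBUTE BY HASH (cust_id);

    -- index on the distribution/join column so each node can look rows up quickly
    CREATE INDEX asiainfo.idx_fact_sales_cust ON asiainfo.fact_sales (cust_id);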
3. Perform bulk inserts into a table in a non-logged manner:
Note: to guarantee consistency and the ability to roll back, the database writes log records when data is inserted, updated, or deleted, so that a failed operation can be rolled back. However, under high concurrency or for very large operations this leads to contention for the log and makes everything very slow. If you insert, update, or delete in a non-logged manner, very little log is written, but if the operation fails it cannot be rolled back; consistency is not guaranteed, and the table can only be dropped and rebuilt.
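A minimal sketch using DB2's NOT LOGGED INITIALLY attribute (the staging table and its columns are hypothetical; the attribute must be set when the table is created and then reactivated inside the unit of work that does the bulk change):

    -- hypothetical staging table created with the NOT LOGGED INITIALLY attribute
    CREATE TABLE asiainfo.stg_sales (
        cust_id  INTEGER,
        amount   DECIMAL(12,2)
    ) NOT LOGGED INITIALLY;

    -- later, inside the unit of work that does the bulk insert (autocommit off):
    -- changes in this unit of work are not logged, so a failure means drop and reload
    ALTER TABLE asiainfo.stg_sales ACTIVATE NOT LOGGED INITIALLY;
    INSERT INTO asiainfo.stg_sales
        SELECT cust_id, amount FROM asiainfo.fact_sales;
    COMMIT;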
4. Use table-level locks to reduce the number of locks:
Note: the number of locks a database can hold is limited, and each lock occupies a certain amount of memory, so a very large number of row locks consumes a lot of memory and can exhaust the resource. Switching a table to table-level locking replaces many row locks with a single lock; see the sketch below.
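Two hedged examples, reusing the hypothetical staging table from above:

    -- make the table use a single table-level lock instead of many row locks
    ALTER TABLE asiainfo.stg_sales LOCKSIZE TABLE;

    -- or take an explicit table lock just for the current unit of work
    LOCK TABLE asiainfo.stg_sales IN EXCLUSIVE MODE;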
5. When filling a temporary table, try to insert only the data that will actually be used; do not insert data that is not needed:
Note: in programs, the data to be used is often inserted into a temporary table first to improve speed, but a lot of unused data gets inserted as well, which makes the temporary table very large. Try to insert only the data that is actually used, and index the temporary table where possible; this can greatly improve the speed.
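A sketch using a declared global temporary table (all names are hypothetical; a USER TEMPORARY tablespace must exist, and the 30-day filter merely illustrates inserting only the rows that are needed):

    DECLARE GLOBAL TEMPORARY TABLE session.tmp_active_cust (
        cust_id  INTEGER NOT NULL,
        amount   DECIMAL(12,2)
    ) ON COMMIT PRESERVE ROWS NOT LOGGED;

    -- insert only the rows and columns that will actually be used
    INSERT INTO session.tmp_active_cust
        SELECT cust_id, amount
        FROM   asiainfo.fact_sales
        WHERE  sale_date >= CURRENT DATE - 30 DAYS;

    -- an index on the temporary table can speed up later joins against it
    CREATE INDEX session.idx_tmp_cust ON session.tmp_active_cust (cust_id);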
6. Some experience with LEFT JOIN:
In the ON condition, try to write only the join condition and restrictions on the left-joined (secondary) table; do not put restrictions on the primary table there. If you do, the query will not only be very slow, it may also produce inexplicable results, because the ON clause of a LEFT JOIN does not filter rows out of the primary table. An example follows.
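A hedged example with hypothetical tables (asiainfo.cust as the primary table, and the asiainfo.fact_sales table from the earlier sketches as the left-joined table):

    -- recommended: restriction on the joined table in ON, restriction on the primary table in WHERE
    SELECT c.cust_id, s.amount
    FROM   asiainfo.cust c
    LEFT JOIN asiainfo.fact_sales s
           ON s.cust_id = c.cust_id
          AND s.sale_date >= CURRENT DATE - 30 DAYS  -- restriction on the left-joined table
    WHERE  c.status = 'ACTIVE';                      -- restriction on the primary table

    -- not recommended: moving c.status = 'ACTIVE' into the ON clause does not filter the
    -- primary table (its rows still return, padded with NULLs from the joined side),
    -- which is why the results can look inexplicable.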
The above is an introduction to the basic strategies for optimizing DB2 databases; I hope it brings you some benefit.