Improve insert performance in DB2 database


An overview of INSERT processing

First, let's take a quick look at the processing steps when a row is inserted. Each of these steps has potential for optimization, which we will discuss later.

1. The statement is prepared on the client. For dynamic SQL this happens just before the statement executes, so its performance matters; for static SQL the statement is prepared in advance, so the cost of this step is negligible.

2. On the client, the column values of the row to be inserted are assembled and sent to the DB2 server.

3. The DB2 server determines which page the row will be inserted into.

4. DB2 reserves a slot in the buffer pool for that page. If an existing page is chosen, it may have to be read from disk; if a new page is used, space is physically allocated in the table space (in the case of system-managed (SMS) storage). Every page that receives a new row is eventually written from the buffer pool to disk.

5. The row is formatted into the target page and an X (exclusive) row lock is obtained on it.

6. A record reflecting the insert is written to the log buffer.

7. When the transaction containing the insert finally commits, any of its log records still in the log buffer are written to the log file.

In addition, many kinds of extra processing may occur depending on the database configuration, for example when the table has an index or a trigger. This extra processing also has a significant performance cost, as we will discuss later.

Alternatives to INSERT

Before we discuss optimizing INSERT, let's consider two alternatives to it: LOAD and IMPORT. The IMPORT utility is really just a front end to SQL INSERT, although some of its options may be useful to you. LOAD also has some useful extra features, but the main reason to use LOAD instead of INSERT is improved performance.

LOAD formats data pages directly, avoiding most of the per-row overhead of insert processing (for example, logging is effectively eliminated). Moreover, LOAD can better exploit parallelism on multiprocessor machines. Two new features in the V8 LOAD utility make it particularly useful as a replacement for INSERT: loading from a cursor and loading from a call level interface (CLI) application.

Loading from a cursor

This method can be used from application code (via the db2Load API) or from DB2 scripts. Here is an example of the latter:

DECLARE staffcursor CURSOR FOR SELECT * FROM staff;

LOAD FROM staffcursor OF CURSOR INSERT INTO myschema.new_staff;

These two lines can be replaced by the following line:

INSERT INTO myschema.new_staff SELECT * FROM staff

Compared with the equivalent INSERT ... SELECT statement, loading from a cursor can improve performance by almost 20%.

Loading from the CLI

This approach is obviously limited to call level interface (CLI) applications, but it is fast. The technique is similar to array insert, and DB2 ships with a sample in which using CLI LOAD is about twice as fast as a fully optimized array insert and almost 10 times faster than non-optimized inserts.

Where any insert can be improved

Let's take a look at some of the necessary steps for insert processing, along with the techniques we can use to optimize these steps.

1. Statement preparation

Like any SQL statement, an INSERT statement must be compiled by DB2 before it is executed. This step can occur automatically (for example, in the CLP, or in a CLI SQLExecDirect call) or explicitly (for example, through an SQL PREPARE, a CLI SQLPrepare, or a JDBC prepareStatement call). Compilation involves authorization checking, optimization, and the other activities needed to translate the statement into an executable form. When a statement is compiled, its access plan is stored in the package cache.

If the same INSERT statement is executed repeatedly, its access plan can (usually) be found in the package cache, which eliminates the compilation overhead. However, if the INSERT statement has different literal values for each row, each statement is considered unique and must be compiled separately. Therefore, rather than repeating statements such as:

INSERT INTO mytable VALUES (1, 'abc')

INSERT INTO mytable VALUES (2, 'def')

and so on, it is better to use a single statement with parameter markers, which is prepared once and then executed repeatedly:

INSERT INTO mytable VALUES (?, ?)

Using parameter markers can make a series of inserts run several times faster. (Using host variables in a static SQL program achieves similar benefits.)

2. Sending column values to the server

Several optimization techniques fall into this category. The most important one is to include multiple rows in each INSERT statement, which avoids client-server communication for every row and reduces DB2 overhead. The techniques available for multi-row insertion are:

Include multiple rows in the VALUES clause. For example, the following statement inserts three rows:

INSERT INTO mytable VALUES (1, 'abc'), (2, 'def'), (3, 'ghi')

Use array insert in the CLI. This involves preparing an INSERT statement with parameter markers, defining an array to hold the values to be inserted, binding the array to the parameter markers, and executing the insert once for each set of array contents. The sample program sqllib/samples/cli/tbload.c provides the basic framework for array insertion (although it actually executes a CLI LOAD). Going from no array to an array of 100 rows can reduce elapsed time by a factor of about 2.5, so you should use an array of at least 100 rows whenever possible.

Use batch operations in JDBC. This is based on the same concept as array insert in the CLI, but the implementation details differ. After the INSERT statement has been prepared with the prepareStatement method, the remaining steps are to call the appropriate setXXXX method for each column (for example, setString or setInt), followed by addBatch. Repeat these steps for each row to be inserted, then call executeBatch to perform the inserts. To see examples of this, see the JDBC tutorial in the "Resources" section.

Use LOAD to quickly load data into a staging table, then use INSERT ... SELECT to populate the main table (see the sketch below). The savings with this method come from the speed of LOAD, plus the fact that the INSERT ... SELECT transfers the data within DB2 (on the server), eliminating the communication cost. Generally you would use this method only if the INSERT ... SELECT also has to do other processing that LOAD cannot perform.
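
As a hedged illustration of this staging-table approach (the schema, table, and file names are invented for the sketch), the sequence in a DB2 script might look like this:

-- create a staging table with the same definition as the target
CREATE TABLE myschema.staging LIKE myschema.main_table;
-- fast-load the external data into the staging table
LOAD FROM newrows.del OF DEL INSERT INTO myschema.staging;
-- move the rows into the main table entirely on the server
INSERT INTO myschema.main_table SELECT * FROM myschema.staging;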

If it is not possible to pass multiple rows in one INSERT statement, the next best thing is to group several INSERT statements together and pass them from the client to the server in one go. (This implies that each INSERT has different values and must be prepared, so its performance is actually worse than using parameter markers, but it still reduces communication.) Combining several statements into one can be done through compound SQL. In SQL, a compound statement is created with BEGIN ATOMIC or BEGIN COMPOUND. In the CLI, a compound statement can be built through SQLExecDirect and SQLExecute calls. For DB2 V8 FixPak 4, another way to produce the same effect is to set the statement attribute SQL_ATTR_CHAINING_BEGIN before issuing multiple SQLExecute calls (for a prepared statement) and to set the statement attribute SQL_ATTR_CHAINING_END afterward.
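
For example, a dynamic compound statement along these lines sends several inserts to the server as one unit (a sketch only; the table and values are illustrative, and a non-default statement terminator such as @ is needed to run it from the CLP, e.g. db2 -td@ -vf script.sql):

BEGIN ATOMIC
  INSERT INTO mytable VALUES (1, 'abc');
  INSERT INTO mytable VALUES (2, 'def');
  INSERT INTO mytable VALUES (3, 'ghi');
END@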

Here are some other tips on this topic:

If possible, have the client and the database it accesses use the same code page, to avoid conversion costs on the server. The code page of the database can be determined by running GET DB CFG FOR <database-name>.

In some cases, the CLI performs data type conversions automatically, but this also incurs an invisible (if small) performance cost. Therefore, try to provide the insert values directly in the format matching the corresponding columns.

Minimize the setup overhead associated with inserts in the application. For example, when using array inserts in the CLI, try to ensure that SQLBindParameter is executed only once per column for the entire set of inserts, rather than once for each batch of array contents. Individually these calls are not expensive, but the costs add up.

3. Finding where to store the row

DB2 uses one of three algorithms to determine where to insert a row. (If multi-dimensional clustering (MDC) is used, the situation is different, but we will not discuss it here.)

The default mode is that DB2 searches the free space control records (FSCRs) scattered throughout the table's pages to find a page with enough free space to hold the new row. Obviously, if the free space on each page is small, a lot of search time is wasted. To cope with this, DB2 provides the DB2MAXFSCRSEARCH registry variable, which allows the search to be limited to fewer than the default of 5 FSCRs.
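
For example (the value 1 is only an illustration; registry variable changes take effect after the instance is restarted):

db2set DB2MAXFSCRSEARCH=1
db2stop
db2start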

The second algorithm is used when the table has been put into APPEND mode through ALTER TABLE. This avoids FSCR searches altogether, because rows are simply placed directly at the end of the table.
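
A minimal sketch of using APPEND mode around a bulk insert (the table name is hypothetical):

ALTER TABLE myschema.mytable APPEND ON;
-- ... perform the bulk insert ...
ALTER TABLE myschema.mytable APPEND OFF;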

The last algorithm is used when the table has a clustering index. In this case, DB2 tries to insert each row into a page containing rows with similar key values. If there is no room on that page, DB2 tries nearby pages, and if those are also full, DB2 falls back to an FSCR search.

If you consider only the optimization of insert time, using APPEND mode is the fastest way to do bulk inserts, but its benefit is far smaller than that of many of the other methods discussed here. The next best choice is the default algorithm with a reduced DB2MAXFSCRSEARCH value; the effect of this change is small in a well-tuned environment, but it can be considerable in an environment that is constrained by I/O.

It is not surprising that a clustering index has a large negative impact on insert performance, because the whole point of a clustering index is to improve query (that is, SELECT) performance by doing extra work at insert time. If you do need a clustering index, you can minimize its impact on inserts by ensuring there is enough free space: use ALTER TABLE to increase PCTFREE, then use REORG to reserve the free space. If you allow too much free space, however, queries may have to read extra pages, which defeats the purpose of the clustering index. Another option is to drop the clustering index before the bulk insert and recreate it afterward; this may be the best approach (the overhead of creating a clustering index is about the same as creating a regular index, which is not very large, whereas maintaining it during the insert adds overhead to every row).
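
For instance, reserving free space for a table with a clustering index might look like this (the names and the 20% figure are only examples):

ALTER TABLE myschema.mytable PCTFREE 20;
-- REORG re-clusters the table and leaves the requested free space on each page
REORG TABLE myschema.mytable INDEX myschema.myclustidx;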

4. Buffer pool, I/O, and page cleanup

Each insert stores the new row in a page and eventually writes that page to disk. Once the page has been chosen, as discussed above, it must be in the buffer pool before the row can be added to it. For bulk inserts, most pages are newly allocated to the table, so let's look at how new pages are handled.

If the table is in a system managed storage (SMS) table space, then by default, when a new page is required, each page is allocated from the file system individually. However, if you have run the db2empfa command against the database, each SMS table space allocates a full extent of new pages at a time. We recommend running db2empfa and using 32-page extents.
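
A sketch of both steps (the database name, table space name, and path are placeholders; db2empfa is run once per database from the command line):

db2empfa mydb
db2 "CREATE TABLESPACE mytbsp MANAGED BY SYSTEM USING ('/data/mytbsp') EXTENTSIZE 32"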

For database managed storage (DMS) table spaces, space is pre-allocated when the table space is created, but extents of pages are assigned to the table during insert processing. DMS's pre-allocation of space can improve performance by about 20% compared with SMS; with DMS, changing the extent size does not have a noticeable effect.

If there are indexes on the table, an entry is added to each index for every inserted row. This requires that the appropriate index pages be present in the buffer pool. We will discuss index maintenance later, but for now just remember that when inserting, the buffer pool and I/O considerations apply to index pages just as they do to data pages.

As the insert proceeds, more and more pages are filled with inserted rows; however, DB2 does not need to write any of the newly inserted or updated data or index pages to disk at insert or commit time. (This is due to DB2's write-ahead logging algorithm. There is one exception, which is discussed in the section on logging.) Nevertheless, these pages need to be written to disk at some point, and that point might not come until the database is shut down.

In general, for bulk inserts you will want aggressive asynchronous page cleaning, so that there is always room in the buffer pool for new pages. The page-cleaning rate, or the total page-write rate, can produce very different timings and make performance comparisons misleading. For example, if you use a 100,000-page buffer pool and no page cleaning occurs, a bulk insert writes none of its new or changed ("dirty") pages to disk until the end, but subsequent operations (such as selects, or even closing the database) are greatly delayed, because up to 100,000 dirty pages produced by the insert must then be written to disk. On the other hand, if aggressive page cleaning is performed in the same situation, the bulk insert itself may take longer, but afterwards there are fewer dirty pages in the buffer pool, so subsequent tasks perform better. It is not always clear which outcome is better, but in general you cannot keep all the dirty pages in the buffer pool forever, so effective page cleaning is necessary to get the best overall performance.

To make page cleaning as effective as possible (a command sketch follows these recommendations): reduce the CHNGPGS_THRESH database configuration parameter from its default of 60 to something as low as 5. This parameter is the percentage of dirty pages in the buffer pool at which the page cleaners are started.

Also consider enabling the registry variable DB2_USE_ALTERNATE_PAGE_CLEANING (first available in DB2 V8 FixPak 4). Setting this variable to ON provides a more aggressive approach to page cleaning than the default method (which is based on CHNGPGS_THRESH and LSN gap triggers). I have not evaluated its effect; see the FixPak 4 Release Notes for details. Finally, ensure that the NUM_IOCLEANERS database configuration parameter is at least equal to the number of physical storage devices in the database.
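
A hedged example of these settings (the database name and the value 4 for NUM_IOCLEANERS are placeholders; the registry change needs an instance restart):

db2 update db cfg for mydb using CHNGPGS_THRESH 5
db2 update db cfg for mydb using NUM_IOCLEANERS 4
db2set DB2_USE_ALTERNATE_PAGE_CLEANING=ON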

As for the I/O itself, I/O activity can be minimized by using the largest possible buffer pool when indexes are involved. If there are no indexes, a larger buffer pool helps little; it only postpones the I/O. That is, it allows new pages to sit temporarily in the buffer pool, but those pages will eventually have to be written to disk anyway.
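
For example, the size of the default buffer pool can be raised like this (the 100,000-page figure is only illustrative and must fit in real memory; depending on the DB2 release the change may not take effect until the database is reactivated):

db2 "ALTER BUFFERPOOL IBMDEFAULTBP SIZE 100000"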

Some general I/O tuning steps can speed up the writing of pages to disk, such as the following (a table space sketch follows the list):

Distribute tablespaces across multiple containers (these containers map to different disks).

Use the fastest hardware and storage management configuration possible, which includes factors such as disk and channel speed, write caching, and parallel write.

Avoid RAID5 (unless used with a storage device such as Shark).
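
As an illustration of spreading a DMS table space across containers on different disks (the paths and sizes are invented for the sketch; container sizes are in pages):

CREATE TABLESPACE bulk_tbsp
  MANAGED BY DATABASE
  USING (FILE '/disk1/db2/bulk01' 25600,
         FILE '/disk2/db2/bulk02' 25600,
         FILE '/disk3/db2/bulk03' 25600)
  EXTENTSIZE 32;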

5. Locking

By default, an X lock is obtained on each inserted row; it begins when the row is created and lasts until the insert is committed. There are two performance issues related to inserts and locking:

The CPU overhead incurred to obtain and release the lock.

Concurrency issues that may be caused by a lock conflict.

For well-optimized bulk inserts, the CPU overhead of obtaining an X lock on each row, plus releasing it later, is considerable. The only alternative to a lock on each new row is a table lock (there are no page locks in DB2). When table locks are used, elapsed time drops by about 3%. There are three situations that lead to the use of table locks; before discussing the drawbacks of table locks, let's take a moment to look at them (a statement sketch follows the list):

Run ALTER TABLE ... LOCKSIZE TABLE. This causes DB2 to use a table lock for all subsequent SQL statements that use the table, until the LOCKSIZE parameter is changed back to ROW.

Run LOCK TABLE ... IN EXCLUSIVE MODE. This obtains an X lock on the table immediately. Note that the lock is released at the next commit (or rollback), so if you run a test in which you commit every N rows, the LOCK TABLE must be repeated after each commit.

Use the default locking, but with small values for the LOCKLIST and MAXLOCKS database configuration parameters. When a small number of row locks have been obtained, they are automatically escalated to a table lock.
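
A quick sketch of the first two options (the table name is hypothetical; remember that the explicit table lock is released at each commit):

-- Option 1: make table-level locking the default for this table
ALTER TABLE myschema.mytable LOCKSIZE TABLE;
-- ... bulk insert ...
ALTER TABLE myschema.mytable LOCKSIZE ROW;

-- Option 2: take an explicit exclusive table lock inside the unit of work
LOCK TABLE myschema.mytable IN EXCLUSIVE MODE;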

Of course, the drawback of any of these is the effect on concurrency: if there is an X lock on the table, no other application can access the table unless it uses the UR (uncommitted read) isolation level. If you know that exclusive access will not cause problems, you should use table locks where possible. However, even if you stick with row locks, keep in mind that during a bulk insert there may be X locks on thousands of new rows in the table, so there can be conflicts with other applications that use the table. There are ways to minimize these conflicts (a configuration sketch follows the list):

Ensure that lock escalation does not occur unnecessarily. You may need to increase the values of LOCKLIST and/or MAXLOCKS to allow the inserting application to hold enough row locks. For the other applications, use the UR isolation level.

With V8 FixPak 4, you may also be able to reduce lock conflicts through the DB2_EVALUNCOMMITTED registry variable: if you set this variable to YES, in many cases locks are obtained only on rows that satisfy a predicate, rather than on every row that is examined.

Locks are released when a COMMIT is issued, so committing more frequently can be enough to reduce the burden of lock conflicts.
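
For example (the database name and the LOCKLIST/MAXLOCKS values are placeholders, and DB2_EVALUNCOMMITTED requires V8 FixPak 4 or later):

db2 update db cfg for mydb using LOCKLIST 1000 MAXLOCKS 30
db2set DB2_EVALUNCOMMITTED=YES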

Notes

In V7 there were concurrency issues involving inserts and key locks, but in V8 these problems have virtually disappeared thanks to the type-2 indexes it provides. If you are migrating to V8, be sure to use the REORG INDEXES command with the CONVERT keyword to convert your indexes from type-1 to type-2.
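
The conversion can be done per table, for example (the table name is hypothetical):

db2 "REORG INDEXES ALL FOR TABLE myschema.mytable CONVERT"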

W or NW locks can also be acquired during insert processing, but in V8 only if type-1 indexes are used or the isolation level is RR. Therefore, both of these situations should be avoided where possible.

The locks an insert acquires (usually X locks) are generally unaffected by the isolation level. For example, using the UR isolation level does not prevent locks from being obtained on inserted rows. However, if you use INSERT ... SELECT, the isolation level does affect the locks obtained by the SELECT.

6. Logging

By default, every insert is logged for recovery purposes. Log records are first written to the in-memory log buffer, which is then written to the log file, usually when the log buffer fills or a commit occurs. Optimizing the logging of bulk inserts really means minimizing the number of log writes and making each write as fast as possible.

The first thing to consider is the size of the log buffer, controlled by the database configuration parameter LOGBUFSZ. Its default value is 8 pages (32 KB), which is smaller than the ideal log buffer size for most bulk inserts. For example, for a bulk insert in which the log content for each row is 200 bytes, the log buffer fills after about 160 rows have been inserted. If 1000 rows are to be inserted, there will be about 6 log writes, because the log buffer fills several times, plus one more for the commit. If LOGBUFSZ is increased to 64 pages (256 KB) or more, the buffer does not fill, so there is only one log write for the whole bulk insert (at commit time). Using a larger LOGBUFSZ gives roughly a 13% performance improvement. The disadvantage of a larger log buffer is that crash recovery may take slightly longer.
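
For example (the database name is a placeholder; the value is in 4-KB pages):

db2 update db cfg for mydb using LOGBUFSZ 64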

Another possibility for reducing log writes is to use ALTER TABLE ... ACTIVATE NOT LOGGED INITIALLY (NLI) on the table into which the new rows will be inserted. When this is done, no insert operations in that unit of work are logged, but there are two important issues with NLI (a usage sketch follows them):

If a statement fails, the table is marked inaccessible and has to be dropped. This, along with other recovery issues (see the discussion of CREATE TABLE in the SQL Reference), makes NLI unsuitable in many cases.

The last commit in the unit of work cannot complete until every dirty page involved in that unit of work has been written to disk, which means this commit can take a long time. If page cleaning is not aggressive, the total time for insert plus commit is longer with NLI. Using NLI together with aggressive page cleaning can greatly reduce the elapsed time. If you use NLI, keep an eye on the time spent in the commit.
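
A minimal sketch of using NLI for a bulk insert, assuming the table was created with the NOT LOGGED INITIALLY attribute and that autocommit is off so the ALTER and the inserts share one unit of work (names are hypothetical):

ALTER TABLE myschema.mytable ACTIVATE NOT LOGGED INITIALLY;
-- inserts in this unit of work are not logged
INSERT INTO myschema.mytable SELECT * FROM myschema.staging;
COMMIT;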

As for increasing the speed of log writes, there are some possibilities:

Place the logs and the table receiving the new rows on separate disks.

Stripe the logs across multiple disks at the operating system level.

Consider using raw devices for the logs, but be aware that it is more difficult to manage them.

Avoid RAID 5, because it is not well suited to write-intensive activity.

7. Commit

A commit forces the log records to be written out to disk, guaranteeing that the committed inserts are definitely in the database, and it releases the locks on the new rows. These are valuable activities, but because a commit always involves synchronous I/O (for the log) while an insert does not, the cost of a commit can easily exceed the cost of the insert itself. Committing after every row of a bulk insert is therefore bad for performance, so be sure not to use autocommit (which is the default for the CLI and the CLP; see the sketch below). Committing roughly every 1000 rows is recommended: performance can improve by about 10 times compared with committing every row or two. However, committing less often than every 1000 rows saves only a small additional amount of time, while increasing the time needed for recovery if a failure occurs.
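
In the CLP, for example, autocommit can be turned off for a script run (the file name is made up; the +c option disables autocommit):

db2 +c -tvf bulk_insert.sql

where bulk_insert.sql contains an explicit COMMIT; after roughly every 1000 INSERT statements.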

One refinement to the above: if the MINCOMMIT database configuration parameter is set to a value greater than 1 (the default is 1), DB2 does not have to perform a synchronous I/O for every commit, but instead waits and tries to share the log I/O among a group of commits. This is good in some environments, but it often has no effect on bulk inserts, and can even have a negative one, so if the key task is a bulk insert, keep MINCOMMIT at 1.

Areas where improvements can be made selectively

For an insert, several kinds of processing occur automatically. If your primary goal is simply to reduce insert time, the easiest approach is to avoid the overhead of any of these, although that may not be worthwhile when you consider the system as a whole. Let's discuss them in turn.

Index maintenance

For each inserted row, an entry must be added to every index on the table, including any primary-key index. There are two main costs in this process:

The CPU overhead of traversing each index tree, searching a page at each level of the tree, to determine where the new entry must be stored (index entries are always kept in key order);

The I/O overhead of reading any searched pages into the buffer pool and eventually writing each updated page back to disk.

The worst scenario is one with a lot of random I/O during index maintenance. Suppose 10,000 rows are to be inserted, the index has 5,000 pages in the buffer pool, and the key values of the rows being inserted are randomly spread over the entire key range. Then 10,000 leaf pages (and possibly more non-leaf pages) need to be brought into the buffer pool so that they can be searched and/or updated, and for any given leaf page the chance that it is already in the buffer pool is only about 10%. With such a high probability of a disk read for each insert, this scenario tends to perform poorly.

For incremental inserts, adding a new row to an existing index is much more expensive than building the index afterwards. If you are inserting into an empty table, you should always create the indexes after the rows have been inserted. (Note that if LOAD is used, the indexes should instead be created beforehand.) If you are inserting into an already populated table, dropping the indexes before the inserts and recreating them afterwards may be fastest, but only if you are inserting a significant number of rows, probably more than 10-20% of the table (see the sketch below). If you use a large buffer pool for the index table space and order the inserts as far as possible so that the key values are sequential rather than random, index maintenance will be faster.
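
A hedged sketch of the drop-and-recreate approach (the index, table, and column names are invented):

DROP INDEX myschema.mytable_ix1;
-- ... perform the bulk insert ...
CREATE INDEX myschema.mytable_ix1 ON myschema.mytable (custno);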
