Continuing to learn about the new version. When it comes to locking, DB2 is quite strict, and it is often criticized for this within the industry: its concurrency is clearly inferior to Oracle's. The main reason is the design of the locks themselves. Oracle has rollback segments, so a few changes never tie up an entire table and other users can still read the data. For data that is being changed, Oracle reads the before image, that is, the value before the change rather than the value being changed. After all, business needs usually outweigh purist design.
Locks are used, needless to say, mainly to prevent the following three problems:
1. Dirty reads: reading uncommitted data; if that data is later rolled back, data that never really existed has been read.
2. Phantom reads: new rows inserted by other transactions show up when the query is repeated.
3. Non-repeatable reads: rows modified or deleted by other transactions return different results when read again.
DB2 provides four isolation levels (for read operations):
1. RR (Repeatable Read): the strictest level; reads are repeatable. The price is that every row scanned gets an S lock, and the locks are not released until the transaction commits or rolls back.
Advantage: the data read within the transaction is guaranteed to be stable.
Disadvantage: poor concurrency. Other transactions cannot place an X lock on any row this transaction has scanned, that is, they cannot modify it.
2. RS (Read Stability): relatively strict, stable reads. For example, when a predicate is used and the table is scanned, the rows that are merely scanned are not locked; only the rows selected by the predicate get an NS lock.
The table itself, however, is locked with an IS lock.
Advantage: the selected rows remain stable (no dirty reads and no non-repeatable reads).
Disadvantage: concurrency is still limited, and phantoms cannot be prevented. For example, if the cursor is reopened with the same search criteria, the result set may differ: newly inserted data may appear, and data modified by other transactions will also be returned if it now satisfies the criteria.
3. CS (Cursor Stability): only the row the cursor is currently positioned on is locked. It is also the default isolation level of the CLP and gives the greatest concurrency; it only guarantees that no dirty reads occur.
4. UR (Uncommitted Read): dirty reads are allowed, that is, uncommitted rows are visible.
(Note that if an updatable cursor is used in the transaction, it behaves the same as under CS.)
Even if the application is bound at the UR level, the CS level may actually be used, because the application declares an ambiguous cursor and the BLOCKING option defaults to UNAMBIG, which means an ambiguous cursor is treated as updatable.
To prevent this upgrade from UR to CS:
1. Make the cursor in the program unambiguous (read-only):
DECLARE <cursor-name> CURSOR FOR <select-statement> FOR READ ONLY
2. When binding the precompiled program, specify BLOCKING ALL and STATICREADONLY YES, so that ambiguous cursors are treated as read-only at run time.
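A minimal CLP sketch of option 2, assuming an embedded-SQL source file prog.sqc and the bind file prog.bnd it produces (both placeholder names) against the sample database:
db2 connect to sample
db2 prep prog.sqc bindfile
db2 bind prog.bnd blocking all staticreadonly yes
db2 connect reset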
As for how an application specifies its isolation level, this is usually done through a function in ODBC or JDBC; JDBC, for example, uses setTransactionIsolation(). If middleware such as WebSphere is used, the settings take priority in the order JDBC > WAS > CLP; if nothing is set anywhere, the DB2 environment default, i.e. CS (the CLP default), is used. The isolation level can also be changed at the session level:
db2 change isolation to ur
db2 connect reset
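Besides the session-level command above, DB2 also accepts an isolation clause on an individual statement; a small sketch, using the child_table that appears in the later examples:
db2 "select * from child_table with ur"   (this one statement reads uncommitted data regardless of the session's isolation level)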
Lock Conversion
Table locks: IN -> IS -> S -> IX -> U -> X -> Z
Row locks: S -> U -> X
The conversion between the S lock and the IX lock is special. When a program holds an S lock on a table and requests an IX lock, the lock is converted to SIX; when it holds an IX lock and requests an S lock, it is likewise converted to SIX. (The difference is that converting from IX to SIX may require a lock wait: since IX is compatible with IX, other applications may be modifying some data in the table. If the table already holds the S lock, no wait is needed and it converts directly to SIX.)
About the U lock design: it prevents two programs from both holding an S lock on the same table and then, when each needs an X lock, waiting for the other to release, which would end in a deadlock. The advantage of the U lock is that if a program finds it cannot obtain the X lock on a table, it can first acquire the U lock; once it holds the U lock, it has a better chance than other programs of getting the X lock. The U lock is an intermediate state of waiting to modify (the program can read all the data in the table while it waits for the X lock it needs to modify it).
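The same idea shows up at the row level when a query is declared FOR UPDATE: DB2 then takes a U lock instead of an S lock on the row the cursor is positioned on, so the later update only has to convert U to X. A minimal CLP sketch, assuming a placeholder table t1 with a column c1:
db2 +c "select c1 from t1 for update of c1"   (the fetched row is protected with a U lock rather than an S lock)
db2 +c "update t1 set c1 = c1 + 1"   (the U lock is converted to an X lock when the row is changed)
db2 "commit"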
Lock escalation
Acquiring and releasing locks also consumes CPU and memory, so DB2 performs lock escalation in the following two cases; escalating row locks to a table lock greatly reduces the number of locks held and frees those resources.
1. When the number of locks held by one application exceeds LOCKLIST multiplied by the MAXLOCKS percentage, the database escalates that application's locks until the count drops below this threshold (see the commands after this list).
2. When the number of locks held by all applications in the database approaches the LOCKLIST limit, the database picks the connection holding the most row locks on a table and converts those row locks into a table lock.
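Both thresholds are the LOCKLIST and MAXLOCKS database configuration parameters; a sketch of checking and raising them on Linux/UNIX (the database name sample and the new values are placeholders):
db2 get db cfg for sample | grep -iE "locklist|maxlocks"
db2 update db cfg for sample using LOCKLIST 4096 MAXLOCKS 20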
Generally speaking, lock escalation is not a good thing, because it hurts concurrency. If an application holds an IS lock on a table, other applications can still obtain an IX lock on that table and modify data. After escalation, however, the application holds an S lock on the table, other applications can no longer modify it, and they sit in lock-wait.
Lock escalation can also fail, which is easy to understand: continuing the example above, the first application's lock is originally IS and the other application's IX lock has not yet been released; escalating to an S lock then fails because S and IX are incompatible. In that case the application receives SQLCODE -912.
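Whether escalations are actually happening can be read from the database snapshot; a sketch, again with sample as a placeholder database name:
db2 get snapshot for database on sample | grep -i escalation
(the output contains the "Lock escalations" and "Exclusive lock escalations" counters)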
Finally, a few words about three parameters that can ease these locking problems.
When a row is inserted in session 1 without a commit, and the whole table is then selected in session 2, session 2 goes into lock-wait (except under UR).
Both an index scan and a table scan end up waiting for the lock; this is the case whether the table has an index or not.
With a predicate
Indexed (whether unique or not): rows are selected through the predicate. If the uncommitted data (insert or delete) falls within the predicate range, a lock wait occurs.
No index (table scan): in this case DB2 locks every row it is about to access (just as when selecting the whole table) before verifying whether the row satisfies the predicate, so predicate evaluation is irrelevant here regardless of whether the data is committed. Because DB2 must scan the entire table, including uncommitted rows, it blocks as soon as it reads an uncommitted row: uncommitted data is usually protected by an X lock, which is incompatible with read locks. Session 2 therefore enters lock-wait until the other transaction is rolled back or committed (the locktimeout value also comes into play). It should be emphasized that even if the uncommitted data lies outside your predicate range, the select is still held up, because a table scan is used.
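A sketch of this second case with none of the registry variables below set, using the child_table from the later examples, which has no index (session 1 and session 2 are two separate CLP connections):
Session 1: db2 +c "insert into child_table (child_join_column, child_data_column) values ('10', 'II')"
Session 2: db2 "select * from child_table where child_join_column = '1'"   (hangs in lock-wait even though the new row does not match the predicate)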
Users are particularly unhappy about the second case (no index), because they do not know exactly what is going on; usually they only care about how the system performs. SAP was the first to raise this issue with IBM, which is why, after DB2 V8, IBM introduced three registry variables to ease lock waits and improve concurrency (the principle is to evaluate the predicate in advance so that the read scan can delay or avoid taking row locks). The indexes involved must be type-2 indexes.
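These are instance-level registry variables; they are set with db2set and take effect after the instance is restarted. A sketch (enable only the ones you need):
db2set DB2_EVALUNCOMMITTED=ON
db2set DB2_SKIPDELETED=ON
db2set DB2_SKIPINSERTED=ON
db2stop
db2start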
DB2_EVALUNCOMMITTED
I think this parameter mainly applies to the second scenario: during a table scan, DB2 evaluates the predicate in advance, so rows excluded by the predicate are not locked. Index scans are not affected, since index scans were designed to avoid table scans in the first place. (Note that even if the table has an index, an index scan is not necessarily used; look at your predicate and the selected columns to see which kind of scan is actually performed.)
No index has been created on the child table. With DB2_EVALUNCOMMITTED=ON, the scenario above no longer blocks:
Session 1: db2 +c "insert into child_table (child_join_column, child_data_column) values ('10', 'II')"
Session 2: db2 "select * from child_table where child_join_column = '1'"   (executes successfully, no lock wait)
For data that falls within the predicate range, a select still waits when the uncommitted change is an insert or an update (until it is committed). For a delete, however, the uncommitted deleted data is simply ignored. Would you say that is reasonable? Isn't that just a UR read!? -_-!! So be careful when testing deletes.
DB2 + C "delete from child_table where child_join_column = '1 '"
DB2 "select * From child_table where child_join_column = '1 '"
CHILD_JOIN_COLUMN CHILD_DATA_COLUMN
----------------- -----------------

  0 record(s) selected.
DB2_SKIPDELETED
With DB2_EVALUNCOMMITTED=ON as set above, a table scan skips uncommitted deleted data (even if the table has an index). An index scan, however, does not skip the uncommitted deletes, because the pseudo-deleted index keys still exist. So if you also want index scans to skip uncommitted deleted data, you need to set DB2_SKIPDELETED.
If this value is set on its own, uncommitted deleted data is skipped regardless of whether a table scan or an index scan is performed.
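A sketch of the index-scan case, assuming DB2_SKIPDELETED=ON and an index (placeholder name idx_child_join) created on child_join_column, and assuming the optimizer chooses an index scan:
db2 "create index idx_child_join on child_table (child_join_column)"
Session 1: db2 +c "delete from child_table where child_join_column = '1'"
Session 2: db2 "select * from child_table where child_join_column = '1'"   (the uncommitted delete is skipped instead of causing a lock wait)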
DB2_SKIPINSERTED
Similarly: regardless of whether a table scan or an index scan is performed, uncommitted inserted rows are skipped.
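And the corresponding sketch for uncommitted inserts, assuming DB2_SKIPINSERTED=ON (the inserted values are placeholders):
Session 1: db2 +c "insert into child_table (child_join_column, child_data_column) values ('1', '11')"
Session 2: db2 "select * from child_table where child_join_column = '1'"   (the uncommitted row is skipped: no lock wait, and it does not appear in the result)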
Original address: http://www.db2china.net/home/space.php?uid=24415&do=blog&id=10472