In the notation Range T-K, the first part (T) is the mode of the lock taken on the index range itself: the gap covered by the range is protected in mode T. The second part (K) is the mode of the lock taken on the keys actually hit within that range: each of those keys is locked in mode K. Taken together, the range lock plus the key locks on specific rows are written as Range T-K.
Let's take an example in the previous article:
SELECT [data] FROM [MyTable] WHERE [index_column] >= 20 AND [index_column] <= 40
The usage of the lock is:
In fact, the lock generated by the preceding statement has two parts: the first is the Range S lock on the index range 20-40, and the second is the S lock on each key that was hit. In the figure we can see that three keys were hit: the keys corresponding to "25" and "30", plus the "infinity" key that bounds the range. Their Mode is Range S-S and their Type is KEY; that is, the range lock is Range S and each key lock is S.
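These locks can be observed directly. A minimal sketch, using the hypothetical table and column names from this article and the real sys.dm_tran_locks view (run the query in one window, inspect from another):

```sql
-- Session 1: hold the range query open under SERIALIZABLE
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
SELECT [data] FROM [MyTable] WHERE [index_column] >= 20 AND [index_column] <= 40;

-- Session 2: inspect the locks held by session 1; the KEY rows
-- show request_mode = RangeS-S, as described above
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = 55;  -- substitute session 1's actual spid

-- Session 1: release the locks
COMMIT TRANSACTION;
```

The session id 55 is a placeholder; use the value reported by session 1 (for example via @@SPID).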
Update and insert locks
There are two lock modes involved: the Range S-U lock and the Range X-X lock.
Range S-U takes an S lock on the selected index range, while each key hit is locked with a U lock, ready to be converted to an X lock later. When the update actually executes, the lock mode in the range becomes Range X-X. Because the columns being updated vary (they may or may not be index columns) and the indexes involved vary as well (clustered, non-clustered, unique, and so on), the behavior is not as easy to reduce to a rule as it is for the Range S-S lock. Still, several situations are consistent, and we will not experiment with each one here (for a thorough treatment, the locking chapter of SQL Server 2008 Internals is strongly recommended):
First, when the index is a unique index and an equality predicate (for example, "=") is used, a hit produces no Range T-K lock on any range; the matching record directly gets a U lock or an X lock;
Second, if the predicate does not hit any record, the Range T-K lock is taken on the next record, regardless of whether the index is unique (for what "next" means, see the previous article);
Third, under a range condition (>, <, BETWEEN), regardless of whether the index is unique, a hit means that not only does the range get a Range T-K lock, but the "next" record beyond the range gets one as well.
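A minimal sketch of the three cases, assuming a table MyTable with a unique index on [index_column] and existing keys 25 and 30 (all names are illustrative, not from a real schema):

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;

-- Case 1: equality on a unique index that hits a row:
-- no range lock; the row itself directly gets a U/X lock.
UPDATE MyTable SET [data] = 'x' WHERE [index_column] = 25;

-- Case 2: a predicate that hits nothing: the "next" key
-- (the first key above 26, here 30) gets a Range T-K lock.
SELECT [data] FROM MyTable WHERE [index_column] = 26;

-- Case 3: a range predicate that hits rows: every key in the
-- range AND the next key beyond it get Range T-K locks.
SELECT [data] FROM MyTable WHERE [index_column] BETWEEN 20 AND 40;

ROLLBACK TRANSACTION;
```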
Why does the SERIALIZABLE isolation level make deadlocks more likely?
As the figure in the first article shows, the SERIALIZABLE level guarantees the strictest data consistency, but with only a slight change these protections can turn into deadlocks. In fact, across the isolation levels, the higher the consistency guarantee, the more likely deadlocks are; the lower the guarantee, the less likely.
Among these isolation levels, SERIALIZABLE is the most deadlock-prone, because the range covered by a Range T-K lock is not limited to existing data but extends to future data as well; it is not confined to a few existing data pages, but spans a broad range.
The most troublesome aspect is the locking of the "next" record, which can easily produce large-scale deadlocks. Let's use an example to illustrate.
The code is as follows:
SELECT @findCount = COUNT(id) FROM MyTable WHERE [fk_related_id] = @Argument

IF (@findCount > 0)
BEGIN
    ROLLBACK TRANSACTION
    RETURN ERROR_CODE
END

INSERT INTO MyTable ([fk_related_id], ...) VALUES (@Argument, ...)

COMMIT TRANSACTION
RETURN SUCCESS_CODE
In this example, the fk_related_id column of MyTable carries a unique (non-clustered) index, and the transaction isolation level is SERIALIZABLE. Each execution of the stored procedure passes in a different @Argument. On the surface this looks fine, but because of the locking of the "next" record, under even moderately high concurrency roughly 80% of the calls fail, all from deadlocks. Let's pick one of them apart:
Suppose we call the stored procedure at about 15 calls per second, with @Argument drawn from the range [1, 1000]. Early in this run, a call with @Argument = 115 succeeds in inserting a record first:
Id    fk_related_id    data
1     115              ...
Next, a call with @Argument = 74 gets its chance; assume its session is Session A. It executes the SELECT statement:

Id    fk_related_id                        data
1     115 (A holds the Range S-S lock)     ...
Next, a call with @Argument = 4 gets its chance; assume its session is Session B. It executes the SELECT statement:

Id    fk_related_id                            data
1     115 (A and B hold Range S-S locks)       ...
Next, Session A executes its INSERT statement. Its Range S-S lock must attempt a conversion test (toward a Range I-N lock), but this cannot succeed because Session B also holds a Range S-S lock on the same range, so Session A waits. Session B then runs its INSERT statement and, for the same reason, also falls into a wait. Now Session A is waiting for Session B to release its range lock while Session B is waiting for Session A to release its own: a deadlock.
Worse still, any call whose @Argument is less than 115 will try to acquire a Range S-S lock reaching up to that same "next" record, and so joins the wait. In the worst case, the calls for 1 through 115 all deadlock with one another; in the end 114 of them must be abandoned and only one succeeds. This is why the SERIALIZABLE isolation level not only causes deadlocks but, in some cases, very large ones.
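The interleaving above can be reproduced by hand in two query windows; a minimal sketch, assuming the same hypothetical MyTable with a unique index on fk_related_id and an existing row with fk_related_id = 115:

```sql
-- Step 1 (Session A):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
SELECT COUNT(id) FROM MyTable WHERE fk_related_id = 74;  -- Range S-S up to key 115

-- Step 2 (Session B):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
SELECT COUNT(id) FROM MyTable WHERE fk_related_id = 4;   -- also Range S-S up to key 115

-- Step 3 (Session A): blocks, because B holds Range S-S over the same gap
INSERT INTO MyTable (fk_related_id) VALUES (74);

-- Step 4 (Session B): blocks on A in turn; the deadlock monitor then
-- terminates one of the two sessions with error 1205
INSERT INTO MyTable (fk_related_id) VALUES (4);
```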
In short: at the SERIALIZABLE isolation level, any pattern that first reads and then writes through the same index as its condition has a high probability of deadlocking under heavy concurrency. Moreover, if the existing record's index value sits far toward the end of the sort order, a very large deadlock can occur.
So how can we solve this problem? Well, lowering the isolation level is certainly one way; for example, if you can accept phantom reads, then REPEATABLE READ is a good choice. But I recently saw a blog suggesting SELECT WITH (UPDLOCK) as a remedy; in fact, this makes deadlocks even easier to produce.
For example, if one stored procedure locks record B and then record A, while another locks A and then B, the differing order means that taking such restrictive locks merely to read is enough to produce a deadlock.
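A minimal sketch of that ordering hazard, assuming a hypothetical table T with two rows keyed 'A' and 'B':

```sql
-- Stored procedure 1: reads B first, then A
BEGIN TRANSACTION;
SELECT * FROM T WITH (UPDLOCK) WHERE k = 'B';
SELECT * FROM T WITH (UPDLOCK) WHERE k = 'A';
COMMIT TRANSACTION;

-- Stored procedure 2: reads A first, then B
BEGIN TRANSACTION;
SELECT * FROM T WITH (UPDLOCK) WHERE k = 'A';
SELECT * FROM T WITH (UPDLOCK) WHERE k = 'B';
COMMIT TRANSACTION;

-- If the two run concurrently, procedure 1 holds the U lock on B and waits
-- for A, while procedure 2 holds the U lock on A and waits for B: deadlock.
```

With plain shared reads this order inversion would be harmless, because S locks are compatible with each other; UPDLOCK removes that compatibility.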
So why is REPEATABLE READ so much better? Because REPEATABLE READ tightly locks only the existing records it reads, rather than taking range locks. Returning to our stored procedure: a deadlock is now possible only when the two locked rows sit on the same data page (when page-level locks are used) or close enough together; the deadlock cannot spread beyond the records on that page, so its probability drops sharply.
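As a sketch, the earlier stored procedure body rewritten under REPEATABLE READ (same hypothetical names; @Argument and @findCount are declared elsewhere in the procedure) keeps locks on existing keys only:

```sql
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;

-- Only the rows actually read are locked; no range
-- or "next"-record lock is taken.
SELECT @findCount = COUNT(id) FROM MyTable WHERE [fk_related_id] = @Argument;

IF (@findCount > 0)
BEGIN
    ROLLBACK TRANSACTION;
    RETURN;  -- duplicate found
END

-- Note: without range locks, a concurrent session could insert the same
-- key between our check and our insert (a phantom); the unique index on
-- fk_related_id is what ultimately guards correctness here.
INSERT INTO MyTable ([fk_related_id]) VALUES (@Argument);

COMMIT TRANSACTION;
```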