Before talking about the locking mechanism of SQL Server, consider the following scenario. You are driving to your destination in the summer heat; isn't it depressing to hit a string of long red lights along the way? Sometimes you can barely resist running one. On a nearly empty road that might cause no harm, but when the road is full of cars it is another story: if too many people ignore the rules in order to take the shortest path to their own destination, chaos follows. That is where the traffic police come in; their role is to enforce the rules of the traffic lights. The lights are like locks that hold you up, or lengthen the time it takes you to reach your destination. But what would happen if there were no traffic police and nobody felt obliged to obey the lights? You can imagine the result.
The lock manager inside the transaction manager, introduced in an earlier article in this series, is the traffic policeman here: it maintains the locks in SQL Server. As the scenario above suggests, locks are rarely a noticeable problem while a system's transaction volume is small. But unless you know your system will only ever be used by a handful of people, you need to understand the locking mechanism to avoid the data-safety and efficiency problems that appear once concurrency grows. Before studying locks, this article assumes you already understand the ACID properties of transactions, which are the essence of SQL Server; without transactions there would be no need to talk about locks at all, since it is transactions that need locking to keep SQL statements from trampling on one another. In the final analysis, locking is a mechanism that balances concurrency against data safety. Without locks, any SQL statement could overwrite the data another statement is working on, and the data would become inconsistent; if locking is too aggressive, it hurts the concurrency and efficiency of the database system (including the extra overhead of the locks themselves). It is a trade-off, and the SQL Server lock manager is the arbiter between the two, as shown in the following illustration:
There are a great many topics under the heading of locks in SQL Server. By lock mode: shared locks (S), update locks (U), exclusive locks (X), schema locks (Sch-S, Sch-M), intent locks (IS, IU, IX), conversion locks (SIX, SIU, UIX), and bulk update locks (BU). By granularity: database, file, table, heap or index, page, key, extent, row, application, and metadata locks. Beyond that there are compatibility between lock modes, lock escalation, deadlocks (the result of poor lock control), the pessimistic isolation levels (read uncommitted, read committed, repeatable read, serializable), the optimistic ones (read committed snapshot isolation, snapshot isolation), latches, and more. Covering all of these clearly would take a lot of space, so, in the style of the previous articles, this one carefully analyzes a single specific problem: lock escalation.
1. Preparation
There is a dynamic management view that shows all current locks, sys.dm_tran_locks, and another that shows which requests are blocking other requests, sys.dm_os_waiting_tasks.
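For example, a couple of simple queries against these views (the column choices here are mine, not from the original article) show the current locks and the current blocking:

-- All locks currently granted or requested, per session:
SELECT request_session_id, resource_type, resource_database_id,
       request_mode, request_status
FROM sys.dm_tran_locks;

-- Requests that are currently blocked, and which session is blocking them:
SELECT session_id, blocking_session_id, wait_type, wait_duration_ms
FROM sys.dm_os_waiting_tasks
WHERE blocking_session_id IS NOT NULL;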
2. What is lock escalation
Lock escalation means that the granularity of a lock is converted from fine to coarse, for example from row locks to a table lock.
3. Why is lock escalation needed?
Generally speaking, the finer the granularity of the locks, the better the concurrency, but locking more individual things requires more locks, and that consumes SQL Server's CPU and memory. A single lock takes roughly 96 bytes of memory; you can work out how much memory is needed to lock a table of millions of rows with row locks alone. Managing locks (creating, maintaining, and destroying them) also has a cost and consumes CPU. If a coarser lock is used instead, those millions of locks collapse into a single lock, which is cheaper both to hold and to manage.
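As a rough sanity check of that figure: one million row locks at about 96 bytes each is on the order of 90 MB just for the lock structures. The query below is a sketch of how to see what the lock manager is actually using; the clerk name is the documented one for the lock manager, and the pages_kb column applies to SQL Server 2012 and later (older versions split it into single_pages_kb and multi_pages_kb):

-- Memory currently consumed by the lock manager's object store:
SELECT type, SUM(pages_kb) AS lock_memory_kb
FROM sys.dm_os_memory_clerks
WHERE type = 'OBJECTSTORE_LOCK_MANAGER'
GROUP BY type;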
4. When does lock escalation occur
It occurs when SQL Server realizes that too many pages or rows are being locked. How does it realize that? By two criteria: the number of locks requested by a single statement exceeds a lock-count threshold (commonly documented as roughly 5,000 locks on a single table or index), or the memory the lock manager consumes for a single query exceeds a memory threshold. If either value crosses its threshold, SQL Server attempts an escalation. Note that both the lock count and the memory figure are measured per query, not as totals for the whole instance. The thresholds are not entirely fixed; SQL Server adjusts them dynamically using heuristics.
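One way to watch a statement approach that threshold (a sketch, using only the views shown earlier) is to count its locks per resource type while it runs:

-- Number of locks held per session and resource type:
SELECT request_session_id, resource_type, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
GROUP BY request_session_id, resource_type
ORDER BY lock_count DESC;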
5. Controlling lock escalation
SQL Server provides a few options that let us control lock escalation. Starting with SQL Server 2008, you can use:
ALTER TABLE Test
SET (LOCK_ESCALATION = AUTO | TABLE | DISABLE);
We can also force SQL Server to use coarser locks by explicitly specifying the PAGLOCK or TABLOCK hints in our code. If such a setting is unreasonable, however, it reduces concurrency, so it is generally not recommended unless you understand the impact.
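For example (assuming the Test table created in the next section), the current escalation setting of a table can be read from sys.tables, and a TABLOCK hint forces a table-level lock for a single statement:

-- Current LOCK_ESCALATION setting of the table:
SELECT name, lock_escalation_desc
FROM sys.tables
WHERE name = 'Test';

-- Force a table-level shared lock for this statement only:
SELECT COUNT(*) FROM Test WITH (TABLOCK);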
6. A worked example
6.1 Create the database and table:
CREATE DATABASE Test;
GO
USE Test;
GO
CREATE TABLE Test
(
    ID int IDENTITY(1,1) PRIMARY KEY,
    [Name] varchar(50) NOT NULL DEFAULT '',   -- length assumed; the original omitted it
    CreatedTime datetime NOT NULL DEFAULT GETDATE()
);
To view the current lock condition:
By default, each connection holds a shared lock on the entire database.
6.2 Insert several hundred thousand records in a loop:
WHILE 1 = 1
    INSERT INTO Test ([Name]) VALUES ('KK');
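The WHILE 1 = 1 loop above runs until it is cancelled manually; a bounded variant (the row count of 100,000 is my choice, not from the original article) does the same job and stops on its own:

DECLARE @i int = 0;
WHILE @i < 100000
BEGIN
    INSERT INTO Test ([Name]) VALUES ('KK');
    SET @i = @i + 1;
END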
A snapshot of the locks taken while the insert loop was running:
As you can see from the picture above, there are three database shared locks, one page-level intent exclusive lock, one table-level intent exclusive lock, and two row-level exclusive locks.
The three database shared locks: as mentioned earlier, each connection holds a shared lock on the entire database by default, and there are three connections open here;
The page-level intent exclusive lock and the table-level intent exclusive lock indicate that some resource lower in the hierarchy (a row on that page, in that table) is protected by an exclusive lock. This lets other requests check for conflicts at the table or page level instead of examining every finer-grained lock, which reduces unnecessary lock requests and improves performance. For example, if an ALTER TABLE were issued at this point, it would have to wait: the table-level intent exclusive lock tells it that the table is being actively modified.
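A minimal sketch of that blocking behaviour (the session split and the added column are illustrative, not from the original article):

-- Session 1: keep a row-level X lock (and the IX locks above it) open:
BEGIN TRAN;
INSERT INTO Test ([Name]) VALUES ('KK');
-- ...do not commit yet...

-- Session 2: the ALTER needs a schema modification (Sch-M) lock on the table,
-- which conflicts with session 1's table-level IX lock, so it waits:
ALTER TABLE Test ADD Remark varchar(10) NULL;

-- Session 3: observe the blocking:
SELECT session_id, blocking_session_id, wait_type
FROM sys.dm_os_waiting_tasks
WHERE blocking_session_id IS NOT NULL;

-- Back in session 1, COMMIT releases the locks and lets the ALTER proceed.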
6.3 Trace the Lock:Escalation event
In Profiler, set up a trace that captures only the Lock:Escalation (lock escalation) event.
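If Profiler is not available, the same event can be captured with an Extended Events session instead (a minimal sketch; the session name is mine, and lock_escalation is the event name in the sqlserver package):

CREATE EVENT SESSION LockEscalationTrace ON SERVER
ADD EVENT sqlserver.lock_escalation
    (ACTION (sqlserver.session_id, sqlserver.sql_text))
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION LockEscalationTrace ON SERVER STATE = START;
GO
-- The captured events can then be read from sys.dm_xe_session_targets
-- or watched live in Management Studio.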
6.4 Update the records in the table:
UPDATE Test SET [Name] = 'name' WHERE [Name] = 'KK';
The Lock:Escalation event was triggered in Profiler:
The lock snapshots taken during the update are, in order:
As shown above: at this point the update operation locks the rows it is updating with exclusive (X) locks.
As shown above: at this point the update operation has escalated to an exclusive lock on the entire table, and holds a schema stability lock (Sch-S) on the related metadata tables.
As shown above: the schema stability lock (Sch-S) on the metadata tables has now been released, leaving the exclusive lock on the entire table.
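If the update is wrapped in an open transaction, the escalated lock can be confirmed from another connection with a query like this (a sketch; run it in the Test database so OBJECT_NAME resolves):

-- Table-level exclusive locks currently granted:
SELECT request_session_id, resource_type, request_mode, request_status,
       OBJECT_NAME(resource_associated_entity_id) AS locked_object
FROM sys.dm_tran_locks
WHERE resource_type = 'OBJECT'
  AND request_mode = 'X';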
From the analysis above, the SQL Server locking mechanism turns out to be a bit complicated, but also interesting; the more you study it, the smarter it proves to be. This concludes today's analysis. If anything is described improperly, please point it out; making progress together is what counts. (Source: Blog Park)