From: http://www.cnblogs.com/kissdodog/p/3170036.html
A lock is a mechanism that prevents a process performing an action on an object from conflicting with another process that is already acting on that object. In other words, if someone else is manipulating an object, you may not be able to manipulate it at the same time; whether you can perform your action depends on what other users are doing.
Issues that can be prevented by locking
Locks can prevent the following four main problems:
- Dirty reads
- Non-repeatable reads
- Phantom reads
- Lost updates
1. Dirty reads
A dirty read occurs when a transaction reads a record that is part of another, unfinished transaction. If that other transaction commits normally, there is rarely a problem. But if it is rolled back, the first transaction has read information about a change that, as far as the database is concerned, never happened.
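As a rough illustration, here is a minimal two-session sketch of a dirty read, assuming a hypothetical dbo.Accounts table; session 2 only sees the uncommitted value because it has lowered its isolation level:
-- Session 1: change a balance, but do not commit yet.
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;

-- Session 2: reads the uncommitted value.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Balance FROM dbo.Accounts WHERE AccountID = 1;   -- sees the reduced balance

-- Session 1: roll back; the value session 2 read never really existed.
ROLLBACK TRANSACTION;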
2. Non-repeatable reads
It is easy to confuse non-repeatable reads with dirty reads. A non-repeatable read occurs when a record is read twice within one transaction and another transaction changes the data in between those reads.
For example, suppose a bank account balance is not allowed to fall below 0. One transaction reads a balance of 125, and then another transaction reads the same 125. If each transaction then deducts 100, the balance stored in the database ends up at -75.
There are two ways to prevent this problem:
- Create a CHECK constraint and monitor for error 547
- Set the isolation level to REPEATABLE READ or SERIALIZABLE
The CHECK constraint looks fairly straightforward. Admittedly, it is a reactive rather than a proactive approach. However, in many situations allowing a non-repeatable read is acceptable, so the constraint approach is the preferred one in many cases.
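A minimal sketch of both options, assuming a hypothetical dbo.Accounts table with a Balance column:
-- Option 1: a CHECK constraint; any update that would drive the balance
-- below zero fails with error 547, which the application can watch for.
ALTER TABLE dbo.Accounts
    ADD CONSTRAINT CK_Accounts_Balance CHECK (Balance >= 0);

-- Option 2: raise the isolation level for the current connection so that
-- rows read inside a transaction cannot change underneath it.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;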
3. Phantom reads
Phantom reads are relatively rare and occur only under fairly specific circumstances.
For example, suppose you want to raise the salary of everyone earning less than 100 in a salary table up to 100. You might execute the following SQL statement:
UPDATE Tb_money SET Salary = 100 WHERE Salary < 100
Normally such a statement causes no problem. But if, while the update is running, someone happens to insert a row with a salary below 100, that row is completely new data, it is not covered by the existing locks, and it misses the update.
The only way to solve this problem is to set the transaction isolation level to SERIALIZABLE. In that case, no other connection can insert or update rows that would fall within the WHERE clause of any statement in the transaction; such changes are blocked until the transaction completes.
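A minimal sketch of that approach, reusing the Tb_money table from the example above:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
    -- The key range covered by this WHERE clause is locked, so a concurrent
    -- INSERT of a row with Salary < 100 must wait until this transaction ends.
    UPDATE Tb_money SET Salary = 100 WHERE Salary < 100;
COMMIT TRANSACTION;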
4. Lost updates
A lost update occurs when an update that was successfully written to the database is accidentally overwritten by another transaction. How can that happen? If two transactions each read the entire record, one then writes its update to the record, and the other then writes its own update over it, the first update is lost.
An example illustrates this well. Suppose you are a credit analyst at a company. You receive a call from customer X, who says he has reached his credit limit and wants to apply for an increase. You look up the customer's information and see that his credit limit is 5,000 and that he has always paid on time.
While you are looking at it, another employee in the credit department, Sally, reads customer X's record in order to change the customer's address. The record she reads also shows a credit limit of 5,000.
You decide to increase customer X's credit limit to 10,000 and press Enter. The database now shows a credit limit of 10,000 for customer X.
Sally now saves the updated address for customer X, but she is using the same editing screen as you, which means she writes back the entire record. Remember the credit limit displayed on her screen? It was 5,000. The database now shows a credit limit of 5,000 for customer X again. Your update has been lost.
The solution to this problem depends on how your code recognizes that another connection has updated the record between the time you read the data and the time you write your update back. How that is detected depends on the data access method you are using.
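One common detection technique is optimistic concurrency with a rowversion column; the following is only a sketch, and the dbo.Customers table and its columns are hypothetical:
DECLARE @OriginalVersion binary(8);

-- Read the record and remember its version at the time of the read.
SELECT @OriginalVersion = RowVer
FROM dbo.Customers
WHERE CustomerID = 1;

-- Write the update only if nobody has changed the row in the meantime.
UPDATE dbo.Customers
SET CreditLimit = 10000
WHERE CustomerID = 1
  AND RowVer = @OriginalVersion;

IF @@ROWCOUNT = 0
    RAISERROR('The record was changed by another user; reload and retry.', 16, 1);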
Resources that can be locked
SQL Server has six types of lockable resources, and they form a hierarchy. The higher the level of a lock, the coarser its granularity. Sorted from coarse to fine granularity, the resources are:
- Database: locks the entire database. This usually happens only while the database schema is being changed.
- Table: locks the entire table. This covers all the data in the table, including every data row and the keys in every index associated with the table.
- Extent: locks an entire extent. An extent consists of 8 pages, so an extent lock controls the extent, the 8 data or index pages in it, and all the rows on those 8 pages.
- Page: locks all the data rows or index keys on the page.
- Key: a lock on a particular key or range of keys in an index. Other keys in the same index page are not affected.
- Row or row identifier (RID): technically the lock is placed on the row identifier, but in essence it locks the entire data row.
Lock escalation and the effect of locks on performance
Escalation is based on the recognition that maintaining a finer granularity (for example, row locks instead of page locks) only makes sense while the number of items locked is fairly small. As more and more items are locked, the overhead of maintaining those locks actually starts to hurt performance, and locks can end up being held longer.
When the number of locks being maintained reaches a certain threshold, the lock is promoted to the next higher level of the hierarchy. The higher-level lock does not need to be managed as intensively as the many lower-level locks it replaces, which frees resources and helps speed things up.
Note that escalation is based on the number of locks, not the number of users. This matters because a single connection performing a large number of updates can, by itself, end up locking a whole table: row locks can be promoted to page locks, and page locks to a table lock. That means all other users may be locked out of the table, and if the query uses more than one table, it may well lock everyone out of all of those tables.
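Escalation itself is automatic, but from SQL Server 2008 onward it can be influenced per table; a minimal sketch on a hypothetical dbo.Orders table:
-- AUTO allows escalation to the partition level on a partitioned table
-- (otherwise to the table level); DISABLE prevents escalation in most cases.
ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = AUTO);
-- ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = DISABLE);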
Lock mode
Besides the hierarchy of resources being locked, you also have to consider the lock mode the query will acquire. Just as there are different resources to lock, there are different lock modes. Some modes are mutually exclusive; some do little on their own and merely modify other modes. Whether modes can be used together depends on whether they are compatible.
1. Shared lock
This is the most basic kind of lock. A shared lock is used only when data is being read: while a shared lock is held, nothing is done to change the content, and other users are still allowed to read the data.
Shared locks are compatible with other shared locks. Although shared locks do not mind the presence of other shared locks, some lock types cannot coexist with them.
A shared lock mostly tells other locks that a user is already there. It does not provide much functionality on its own, but it cannot be ignored; in particular, shared locks are what keep a user from performing dirty reads.
2. Exclusive lock
As the name implies, an exclusive lock is not compatible with any other lock. An exclusive lock cannot be acquired while any other lock exists on the resource, and while an exclusive lock is held, no new lock of any kind can be created on the resource. This prevents two people from updating, deleting, or otherwise acting on the same data at the same time.
3. Update lock
Update locks are a hybrid of shared and exclusive locks. An update lock is a special kind of placeholder. Before an update can be performed, the WHERE clause must be evaluated to identify the specific rows to be updated; for that, only a shared lock is needed. Only while the physical update is actually being made is an exclusive lock required.
- In the first phase, the rows that satisfy the WHERE clause are identified; this part of the update query runs under an update lock.
- In the second phase, if the row is actually updated, the lock is promoted to an exclusive lock; otherwise it is converted to a shared lock.
The advantage of this arrangement is that it helps prevent deadlocks. A deadlock is not itself a lock type but a conflicting situation: two locks wait on each other, or several locks form a ring, each waiting for the transaction ahead of it to release its resources.
Without update locks, deadlocks would occur constantly. Picture two update queries running in shared mode. Query A finishes its read phase and prepares for the physical update; it wants to escalate to an exclusive lock, but it cannot, because query B is still completing its read. Query B cannot finish until it performs its own physical update, for which it must also escalate to an exclusive lock, but it cannot do so because query A is waiting. The result is an impasse.
An update lock prevents other update locks from being established. As soon as a second transaction tries to acquire an update lock, it goes into a wait state; the lock is not granted. If the first lock is released before the lock timeout expires, the lock is granted to the new requester and processing continues; if not, an error occurs.
Update locks are only compatible with shared locks and intent shared locks.
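A minimal read-then-update sketch, assuming a hypothetical dbo.Accounts table; taking an UPDLOCK on the initial read is what keeps two such sessions out of the shared-to-exclusive impasse described above:
BEGIN TRANSACTION;
    DECLARE @Balance money;

    -- Read the row under an update lock; a second session doing the same
    -- SELECT must wait here instead of both sessions holding shared locks.
    SELECT @Balance = Balance
    FROM dbo.Accounts WITH (UPDLOCK)
    WHERE AccountID = 1;

    -- During the physical update the update lock is promoted to exclusive.
    UPDATE dbo.Accounts
    SET Balance = @Balance - 100
    WHERE AccountID = 1;
COMMIT TRANSACTION;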
4. Intent Lock
What does an intent lock mean? It means that when you lock a row, an intent lock is also placed on the table, so that nobody can block you by taking a table-level lock.
An intent lock is a true placeholder and exists to deal with the object hierarchy. Suppose a lock has been established on a row, but someone else wants to lock the page or the extent, or modify the table. You certainly do not want another transaction to get in your way by locking at a higher level.
Without intent locks, higher-level objects would not even know that a lock exists at a lower level. Intent locks improve performance because SQL Server only needs to check for intent locks at the table level (rather than checking every row or page lock on the table) to determine whether a transaction can safely lock the entire table.
Intent locks come in the following three varieties:
- Intent shared lock (IS): a shared lock has been, or will be, established at some lower point in the hierarchy.
- Intent exclusive lock (IX): like the intent shared lock, but with an exclusive lock placed on the lower-level item.
- Shared with intent exclusive lock (SIX): a shared lock has been, or will be, established lower in the hierarchy, but the intention is to modify data, so at some point it will become an intent exclusive lock.
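One way to observe this hierarchy is to update a single row inside an open transaction and then look at the sys.dm_tran_locks DMV from the same session; a sketch assuming a hypothetical dbo.Orders table:
BEGIN TRANSACTION;
    UPDATE dbo.Orders SET Quantity = 5 WHERE OrderID = 1;

    -- Typically shows an X lock on the KEY (or RID), an IX lock on the PAGE,
    -- and an IX lock on the OBJECT (the table itself) for this session.
    SELECT resource_type, request_mode, request_status
    FROM sys.dm_tran_locks
    WHERE request_session_id = @@SPID;
ROLLBACK TRANSACTION;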
5. Schema lock
Schema locks come in the following two types.
- Schema modification lock (Sch-M): taken while an object's schema is being changed. While a Sch-M lock is held, the object cannot be queried or modified by other CREATE, ALTER, or DROP statements.
- Schema stability lock (Sch-S): similar to a shared lock; its sole purpose is to prevent a schema modification while other queries (or CREATE, ALTER, and DROP statements) are already running against the object. It is compatible with all other lock modes except Sch-M.
6. Bulk update lock
A bulk update lock (BU) is just a slightly different variant of a table lock. Bulk update locks allow data to be loaded in parallel: the table is locked against any other normal (T-SQL) activity, but multiple BULK INSERT or bcp operations can be run against it at the same time.
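A minimal sketch of a parallel bulk load, assuming a hypothetical dbo.TargetTable and data file; several sessions can run a statement like this against the same table at the same time:
-- TABLOCK requests a bulk update (BU) lock, so multiple bulk loads can run
-- in parallel while ordinary T-SQL access to the table is blocked.
BULK INSERT dbo.TargetTable
FROM 'C:\load\file1.dat'
WITH (TABLOCK);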
Compatibility of Locks
The table below shows compatibility between lock modes on the same resource: existing locks are shown in the columns, and the lock being requested is shown in the rows.
| Requested lock mode | IS | S | U | IX | SIX | X |
| --- | --- | --- | --- | --- | --- | --- |
| Intent shared lock (IS) | Yes | Yes | Yes | Yes | Yes | No |
| Shared lock (S) | Yes | Yes | Yes | No | No | No |
| Update lock (U) | Yes | Yes | No | No | No | No |
| Intent exclusive lock (IX) | Yes | No | No | Yes | No | No |
| Shared with intent exclusive lock (SIX) | Yes | No | No | No | No | No |
| Exclusive lock (X) | No | No | No | No | No | No |
In addition:
- Sch-S is compatible with all lock modes except Sch-M.
- Sch-M is incompatible with all lock modes.
- BU is compatible only with schema stability locks and other bulk update locks.
Sometimes you want more control over locking within a single query or throughout a transaction. You can get it by using optimizer hints.
An optimizer hint explicitly tells SQL Server to use a specific lock type or level. The hint is placed after the name of the table it affects.
Optimizer hints are an advanced topic; experienced SQL Server developers use them regularly and take them quite seriously.
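A minimal sketch of the syntax, using a hypothetical dbo.Orders table; the hint goes in a WITH (...) clause directly after the table name:
-- Read without taking shared locks (the same risk profile as READ UNCOMMITTED).
SELECT OrderID, Quantity
FROM dbo.Orders WITH (NOLOCK);

-- Force an exclusive table-level lock for the duration of the statement.
SELECT OrderID, Quantity
FROM dbo.Orders WITH (TABLOCKX);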
Determining locks with Management Studio
The best way to view locks is to use Management Studio. Through the Activity Monitor, Management Studio can display locks in two ways: by process ID or by object.
To display locks in Management Studio, simply navigate to the Activity Monitor of <Server>, where <Server> is the top-level node of the server whose activity you want to monitor.
Expand the section of interest (the Overview section is expanded by default) and you can scroll through a large number of metrics, including the locks currently active in the system.
Setting the isolation level
Transactions and locks are very closely connected. By default, once a lock related to a data modification is created, it exists for the duration of the transaction. With a long transaction, that means other processes can be blocked from accessing the locked objects for a long time, which is obviously a problem.
Transactions have five isolation levels:
- READ COMMITTED
- READ UNCOMMITTED
- REPEATABLE READ
- SERIALIZABLE
- SNAPSHOT
The syntax for switching between these isolation levels is also fairly straightforward:
SET TRANSACTION ISOLATION LEVEL < READ COMMITTED | READ UNCOMMITTED | REPEATABLE READ | SERIALIZABLE | SNAPSHOT >
Changing the isolation level affects only the current connection, so you do not have to worry about affecting other users, and other users' settings do not affect you.
1. READ COMMITTED
READ COMMITTED is the default. Under it, any shared locks that are created are automatically released as soon as the statement that created them completes. That is, if you start a transaction, run some statements, run a SELECT statement, and then run some other statements, the locks associated with the SELECT are released as soon as the SELECT finishes; SQL Server does not wait for the end of the transaction.
Action queries (UPDATE, DELETE, INSERT) are a bit different: if a transaction runs a query that modifies data, those locks remain in effect for the duration of the transaction.
The default isolation level of READ COMMITTED gives you enough data integrity to prevent dirty reads. However, non-repeatable reads and phantom reads can still occur.
2. READ UNCOMMITTED
READ UNCOMMITTED is the most dangerous of all the isolation levels, but it also offers the best performance in terms of raw speed.
Setting the isolation level to READ UNCOMMITTED tells SQL Server not to set any locks and not to honor any existing locks.
Locks are both your protector and your enemy: they prevent data integrity problems, but they also often block or delay access to the data you need. Because this level carries the risk of dirty reads, it should be used only in environments where exact accuracy is not required.
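A minimal sketch of the two equivalent ways of getting this behaviour, using a hypothetical dbo.Orders table:
-- Per connection: every subsequent read on this connection ignores locks.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT OrderID, Quantity FROM dbo.Orders;

-- Per table reference: only this one read behaves this way.
SELECT OrderID, Quantity FROM dbo.Orders WITH (NOLOCK);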
3. REPEATABLE READ
REPEATABLE READ raises the isolation level a notch and provides an extra layer of concurrency protection: it prevents not only dirty reads but also non-repeatable reads.
Preventing non-repeatable reads is a big advantage, but holding shared locks until the end of the transaction keeps other users away from the objects involved, which hurts efficiency. It is often recommended to use other data integrity options, such as CHECK constraints, rather than this isolation level.
The optimizer hint equivalent to the REPEATABLE READ isolation level is REPEATABLEREAD (identical except for the missing space).
4. SERIALIZABLE
SERIALIZABLE is the fortress among isolation levels. It prevents every form of concurrency problem except lost updates, and it even prevents phantom reads.
Setting the isolation level to SERIALIZABLE means that no other connection's UPDATE, DELETE, or INSERT against the tables your transaction uses can produce rows that would satisfy the WHERE clause of any statement in that transaction. In essence, if another user wants to do something your transaction is interested in, it must wait until your transaction completes.
The SERIALIZABLE isolation level can also be simulated in a query with the SERIALIZABLE or HOLDLOCK optimizer hint. The trade-off is the same as with READ UNCOMMITTED versus NOLOCK: the isolation level does not have to be repeated on every statement but must be set back afterwards, while the hint applies only to the query that carries it.
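A minimal sketch of the hint form, again on a hypothetical dbo.Orders table:
BEGIN TRANSACTION;
    -- HOLDLOCK keeps the shared locks (and key ranges) from this read until
    -- the end of the transaction, the same effect SERIALIZABLE would give.
    SELECT OrderID, Quantity
    FROM dbo.Orders WITH (HOLDLOCK)
    WHERE Quantity < 100;
    -- ...other work in the same transaction...
COMMIT TRANSACTION;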
5. SNAPSHOT
SNAPSHOT is the newest isolation level and is, interestingly, something of a combination of READ COMMITTED and READ UNCOMMITTED. Note that SNAPSHOT is not available by default; it can be used only after the ALLOW_SNAPSHOT_ISOLATION option has been turned on for the database.
Like READ UNCOMMITTED, SNAPSHOT does not create any locks, nor does it honor existing ones. The main difference is in which changes each level can see. Changes in the database, whether or not they have been committed, are visible to queries running under READ UNCOMMITTED. Under SNAPSHOT, you can see only the changes that were committed before the snapshot transaction started: from the moment the snapshot transaction begins, all the data you see is the data as it was committed at that point in time.
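A minimal sketch of enabling and using the level, assuming a hypothetical database named MyDb and a dbo.Orders table:
-- One-time database setting (requires appropriate permissions).
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- On a connection:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    -- Every read in this transaction sees the data as it was committed at
    -- the moment the transaction started, without taking shared locks.
    SELECT OrderID, Quantity FROM dbo.Orders;
COMMIT TRANSACTION;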
Handling deadlocks
The error number of the deadlock is 1205.
A deadlock occurs when one lock cannot finish its work and release its resources because another lock holds resources it needs, and vice versa. When a deadlock occurs, one side has to win, so SQL Server chooses a deadlock victim, rolls back the victim's transaction, and notifies it of the deadlock with a 1205 error. The other transaction continues to run normally.
1. How deadlocks are detected
Every 5 seconds, SQL Server checks all current transactions for locks they are waiting on that have not yet been granted. It then checks the status of all the open lock requests again, and if one of the earlier requests still has not been granted, it recursively walks all open transactions looking for a circular chain of lock requests. If SQL Server finds such a circular chain, it selects one or more deadlock victims.
2. How deadlock victims are chosen
By default, the deadlock victim is chosen according to the "cost" of the transactions involved: SQL Server rolls back the transaction with the lowest cost. To some extent, you can override this with the SET DEADLOCK_PRIORITY option in SQL Server.
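A minimal sketch: marking the current session as the preferred victim, so that if it ends up in a deadlock it is the one rolled back:
-- Sessions with a lower deadlock priority are chosen as victims before
-- sessions with a higher one, regardless of rollback cost.
SET DEADLOCK_PRIORITY LOW;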
3. Avoiding deadlocks
Common rules for avoiding deadlocks:
- Use objects in the same order.
- Keep transactions as short as possible and in one batch.
- Use the lowest transaction isolation level possible.
- Do not allow open-ended interruptions (such as waiting for user input) within a transaction.
- In controlled environments, use bound connections.
1. Use objects in the same order
For example, suppose there are two tables, Suppliers and Products, and two processes that use both of them. Process 1 accepts inventory input: it updates the Products table with the new quantity on hand, then updates the Suppliers table with the total quantity purchased. Process 2 records sales: it updates the total quantity sold in the Suppliers table, then reduces the inventory quantity in Products.
If both processes run at the same time, you can run into trouble. Process 1 acquires an exclusive lock on the Products table, and process 2 acquires an exclusive lock on the Suppliers table. Process 1 then tries to acquire a lock on the Suppliers table, but it must wait until process 2 releases its lock; meanwhile, process 2 is waiting for process 1 to release its own lock.
In this example the two processes lock the two tables in opposite orders, which makes a deadlock easy to trigger.
If process 2 is changed to first reduce the inventory quantity in Products and only then update the total quantity sold in Suppliers, both processes access the two tables in the same order, and the likelihood of a deadlock drops sharply.
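The following sketch assumes hypothetical dbo.Products and dbo.Suppliers tables; both processes now touch Products first and Suppliers second:
DECLARE @Qty int = 10, @ProductID int = 1, @SupplierID int = 1;

-- Process 1: receive inventory (Products first, then Suppliers).
BEGIN TRANSACTION;
    UPDATE dbo.Products  SET QuantityOnHand = QuantityOnHand + @Qty WHERE ProductID  = @ProductID;
    UPDATE dbo.Suppliers SET TotalPurchased = TotalPurchased + @Qty WHERE SupplierID = @SupplierID;
COMMIT TRANSACTION;

-- Process 2: record a sale, rewritten to use the same table order.
BEGIN TRANSACTION;
    UPDATE dbo.Products  SET QuantityOnHand = QuantityOnHand - @Qty WHERE ProductID  = @ProductID;
    UPDATE dbo.Suppliers SET TotalSold = TotalSold + @Qty           WHERE SupplierID = @SupplierID;
COMMIT TRANSACTION;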
2. Make the transaction as brief as possible
Keeping transactions short costs you nothing. Simply put into a transaction only what belongs there and take out what does not. The principle is not complicated: the longer a transaction stays open and the more objects it touches, the greater the likelihood that some other process will want one or more of the objects it is using. Keeping transactions short minimizes the number of objects that could cause a deadlock and reduces the time locks are held on those objects. It is as simple as that.
3. Use the lowest transaction isolation level possible
With a lower isolation level (as opposed to a higher one), shared locks are held for a shorter time, which reduces lock contention.
4. Do not allow open-ended interruptions within a transaction
Once a process is open, do not create locks that occupy resources indefinitely. Typically this means user interaction in the middle of a transaction, but it applies to any step that can wait without limit.