Locks are an important concept in networked databases; they are chiefly used to guarantee database integrity and consistency in a multi-user environment. The basic theory of locking is the same across the major database products, but the concrete implementations differ. Most database management systems now have some degree of self-regulation and self-management, so many users are in fact unclear about locking theory and about how locks are implemented in their database.
Microsoft SQL Server (hereafter SQL Server), a widely used small-to-medium database management system, emphasizes system-managed locking. When a user issues a SQL request, the system analyzes it and automatically applies locks that satisfy the locking requirements while preserving performance; during execution it also optimizes the processing and adjusts locks dynamically. For most users this automatic lock management is sufficient, but when there are special requirements for data security or for database integrity and consistency, you must control locking and unlocking yourself, which requires understanding SQL Server's lock mechanism and mastering its locking methods.
Multi-granularity of locks and lock escalation
A lock in a database is a software mechanism indicating that a user (also called a process session; the same below) is using a resource, so that other users cannot make modifications that would affect that user or leave the database incomplete or inconsistent. The resources in question are chiefly the data rows, indexes, and tables that users manipulate. Because different resources can be locked, locks have the concept of multiple granularities (multigranular locking), that is, the levels of resource that can be locked. In SQL Server the lockable granularities include the database, table, extent (range), page, key (an index row), and row identifier (RID, a single row in a table).
The main purpose of multi-granularity locking is to support concurrent operations while preserving data integrity. SQL Server automatically adds the appropriate locks based on the user's request. If a user operates on only a few rows of one table, the system may take only a few row locks (RIDs) or page locks so that as many other users as possible can work concurrently. However, if a user transaction repeatedly manipulates many records in a table, row-level locks are taken on many of its rows, the number of locks in the system rises sharply, the system load grows, and performance suffers. Database systems therefore generally support lock escalation: adjusting the granularity of locks by replacing many low-granularity locks with a few higher-granularity locks to reduce system load. In SQL Server, when a transaction holds many locks and the lock escalation threshold is reached, the system automatically escalates row-level and page locks to a table-level lock. Note in particular that in SQL Server the escalation threshold and the escalation itself are determined automatically by the system and require no user setting.
Mode and compatibility of locks
When locking resources in a database, you can lock not only different resources but also to different degrees; that is, there are several lock modes. The lock modes in SQL Server are:
1. Shared locks
In SQL Server, shared locks are used for all read-only data operations. Shared locks are non-exclusive: they allow many concurrent transactions to read the locked resource. By default, SQL Server releases a shared lock as soon as the data has been read. For example, when the query SELECT * FROM My_table is executed, the first page is locked, the lock is released once the page has been read, and then the second page is locked. A page that has already been read and unlocked may therefore be modified while the query is still running. However, the transaction isolation level connection option and lock hints in the SELECT statement can change this default. For example, "SELECT * FROM My_table (HOLDLOCK)" keeps the shared lock until the statement (or the enclosing transaction) completes instead of releasing it page by page.
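As a minimal sketch, reusing the My_table name from the example above, the two statements below differ only in how long the shared locks are held:

-- default: each shared lock is released as soon as the page has been read
SELECT * FROM My_table

-- HOLDLOCK: the shared lock is kept until the enclosing transaction ends
BEGIN TRAN
SELECT * FROM My_table WITH (HOLDLOCK)
-- ... other work; the table cannot be modified by other sessions in the meantime ...
COMMIT TRAN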
2. Modify (update) locks
Modify (update) locks are used during the initialization phase of a modification to lock resources that may be modified, which avoids the deadlocks that arise from using shared locks for this purpose. With shared locks, a modification is performed in two steps: first obtain a shared lock and read the data, then upgrade the shared lock to an exclusive lock and perform the modification. If two or more transactions each hold a shared lock on the same resource and each then requests the upgrade to an exclusive lock, none of them releases its shared lock and each waits for the others to do so, causing a deadlock. If instead a modify lock is taken when the data is first read and is upgraded to an exclusive lock only when the data is actually modified, the deadlock is avoided. A modify lock is compatible with shared locks, so a resource already locked with a shared lock can still be given a modify lock.
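A minimal sketch of this pattern, assuming a counter table like the keytab table used later in this article: the UPDLOCK hint takes a modify (update) lock on the row as it is read, so two sessions cannot both hold shared locks and then deadlock while upgrading.

BEGIN TRAN
SELECT new_id FROM keytab WITH (UPDLOCK)   -- read under a modify (update) lock
UPDATE keytab SET new_id = new_id + 1      -- upgraded to an exclusive lock here
COMMIT TRAN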
3. Exclusive lock
Exclusive locks are reserved for modifying data. Other transactions can neither read nor modify a resource locked with an exclusive lock. An exclusive lock is not compatible with any other lock.
4. Structure (schema) locks
Structure (schema) locks are divided into schema modification locks (Sch-M) and schema stability locks (Sch-S). SQL Server uses a Sch-M lock when a data definition language operation is performed on a table, and a Sch-S lock when compiling a query.
5. Intent lock
An intent lock indicates that SQL Server intends to acquire a shared or exclusive lock at a lower level of the resource. For example, a table-level shared intent lock indicates that a transaction intends to place shared locks on pages or rows of that table. Intent locks are divided into shared intent locks, exclusive intent locks, and shared-with-exclusive intent locks. A shared intent lock indicates that the transaction intends to read data by placing shared locks on resources below the one that holds the shared intent lock. An exclusive intent lock indicates that the transaction intends to modify data by placing exclusive locks on resources below the one that holds the exclusive intent lock. A shared-with-exclusive intent lock indicates that the transaction allows other transactions to read the top-level resource with shared locks while it places exclusive locks on lower-level resources.
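One way to see intent locks at work is to hold a row-level lock open and inspect the session's locks with the system procedure sp_lock. This is only a sketch: the table and column names are hypothetical, and the output format depends on the SQL Server version.

BEGIN TRAN
UPDATE My_table SET col1 = col1 WHERE id = 1   -- takes an exclusive lock on one row
EXEC sp_lock @@SPID   -- shows the row-level X lock plus IX (intent) locks on its page and table
ROLLBACK TRAN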
6. Bulk Modify Locks
Bulk modify locks are used when data is bulk copied into a table. You can request one either with the TABLOCK hint on the table or by enabling the "table lock on bulk load" option for the table with the system stored procedure sp_tableoption.
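For example, the option can be turned on for a single table with sp_tableoption (the table name here is only an illustration):

EXEC sp_tableoption 'My_table', 'table lock on bulk load', 'true'
-- bulk loads into My_table now take a table-level bulk modify lock instead of many row locks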
In addition, which SQL Server statement is executed affects which locks are taken, and combinations of statements can produce different locks, as detailed in the following table:
Lock conflict and its prevention methods
In a database system, a deadlock occurs when multiple users (processes) each lock a resource and then request a lock on a resource the other has already locked, producing a cycle of lock requests in which each user (process) waits for the other to release its locked resource.
In SQL Server, the system periodically searches for and handles deadlocks automatically. In each search it identifies all process sessions that are waiting on lock requests, and if a process identified in one search is still waiting in the next, SQL Server begins a recursive deadlock search.
When a search detects a lock-request cycle, the system ends the transaction with the lowest priority according to the deadlock priority of each process session, rolls that transaction back, and returns error 1205 to the process. The other transactions can then usually continue to run. The SET statement for deadlock priority is:
SET DEADLOCK_PRIORITY { LOW | NORMAL }
Here LOW indicates that the process session has a lower priority and that its transaction will be broken first when a deadlock occurs. In addition, by setting the LOCK_TIMEOUT option in each process you can set the maximum time the process waits on a lock request. The statement for this setting is:
SET LOCK_TIMEOUT timeout_period
Where the timeout_period is in milliseconds.
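A brief sketch of both settings at the start of a session:

SET DEADLOCK_PRIORITY LOW   -- this session is chosen as the victim first if a deadlock occurs
SET LOCK_TIMEOUT 5000       -- wait at most 5000 milliseconds for any lock instead of blocking indefinitely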
With the concept of deadlocks understood, you can use the following methods in your applications to try to avoid them:
(1) Arrange the order in which tables are accessed sensibly.
(2) Avoid user intervention inside a transaction as far as possible, and keep each transaction's workload as small as possible.
(3) Use dirty reads. A dirty read avoids lock conflicts because it places no lock on the table being read. In a client/server application environment some transactions cannot tolerate dirty data, but under certain conditions dirty reads can be used.
(4) Discretize data access in time. Time-domain discretization of data access is a client/server technique for controlling when objects in the database are accessed. It is achieved mainly by scheduling background transactions sensibly and by using a workflow to manage them centrally. When managing tasks, the workflow limits the number of threads for any one task type (often to 1) to prevent excessive resource consumption, and it schedules the timing and duration of different tasks so that several background tasks do not run at the same time; background tasks should also be kept out of the foreground transaction peak period.
(5) Discretize data storage space. Space discretization means using various techniques to spread data that logically belongs to one table across several discrete storage areas in order to improve access to that table. The main methods are, first, decomposing a large table into several smaller tables by row or by column, and second, decomposing it according to different user groups.
(6) Use the lowest isolation level possible. The isolation level is the degree to which multi-user transactions are isolated from one another to preserve the integrity and consistency of the data; SQL92 defines four levels: read uncommitted, read committed, repeatable read, and serializable. Choosing too high a level, such as serializable, gives better isolation and stronger guarantees of integrity and consistency, but greatly increases the chance of conflicts and deadlocks between transactions and noticeably hurts system performance.
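For example, a reporting connection that only needs committed data can run at the read committed level rather than serializable:

SET TRANSACTION ISOLATION LEVEL READ COMMITTED
-- rather than the stricter: SET TRANSACTION ISOLATION LEVEL SERIALIZABLE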
(7) Use bound connections. Bound connections allow two or more connections to share a transaction and its locks; a lock requested by any one of the connections is treated as if requested by the others, so these connections can share data without lock conflicts.
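The usual pattern is sketched below: the first connection obtains a bind token for its transaction with sp_getbindtoken and passes it (by whatever means the application chooses) to the second connection, which joins the transaction with sp_bind_session. The token value shown is a placeholder.

-- connection 1
BEGIN TRAN
DECLARE @token varchar(255)
EXEC sp_getbindtoken @token OUTPUT
SELECT @token   -- hand this value to the second connection

-- connection 2
EXEC sp_bind_session '<token value from connection 1>'
-- both connections now share the same transaction and its locks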
(8) Consider using optimistic locking, or have the transaction take an exclusive lock first. One of the most common deadlocks occurs in serial-number generators, which are usually written like this:
BEGIN TRAN
SELECT new_id FROM keytab (HOLDLOCK)
UPDATE keytab SET new_id = new_id + 1
COMMIT TRAN
If two users run this transaction at the same time, each obtains a shared lock and holds it. When both then try to obtain an exclusive lock on the keytab table, they deadlock. To avoid this, rewrite the transaction in the following form:
BEGIN TRAN
UPDATE keytab SET new_id = new_id + 1
SELECT new_id FROM keytab
COMMIT TRAN
Rewritten this way, only one transaction at a time can obtain the exclusive lock on keytab, and other processes must wait until the first transaction completes. This increases execution time but avoids the deadlock.
If you need repeatable reads within a transaction, consider writing it so that it acquires an exclusive lock on the resource before reading the data. For example, suppose a transaction must compute the average price of all books in the titles table and guarantee that the result does not change before an update is applied; an exclusive table lock from the optimizer is what we want. Consider the following SQL code:
BEGIN TRAN
UPDATE titles SET title_id = title_id
WHERE 1 = 2
IF (SELECT AVG(price) FROM titles) > $15
BEGIN
    /* Perform some additional processing */
END
UPDATE titles SET price = price * 1.10
WHERE price < (SELECT AVG(price) FROM titles)
COMMIT TRAN
In this transaction it is essential that no other process modify the price of any row in the table; otherwise the value retrieved at the end of the transaction would differ from the value retrieved at the start. The first WHERE clause looks strange, but believe it or not it is about the cheapest WHERE clause the optimizer will ever see, even though it always evaluates to false. Because the optimizer cannot find any valid search argument (SARG), the query plan is a table scan that requires an exclusive lock. When the transaction executes, the WHERE clause is immediately false, so no rows are actually scanned, but the process still acquires the exclusive table lock.
Because the process now holds an exclusive table lock, no other transaction can modify any rows, reads are repeatable, and the potential deadlock caused by HOLDLOCK is avoided. Avoiding deadlock this way is not free, however: minimizing deadlocks with a table lock also increases contention for the table. Before adopting this approach you must weigh whether avoiding deadlocks matters more than allowing concurrent access to the table.
Manual lock
SQL Server recommends letting the system manage locks automatically: it analyzes the user's SQL statements, adds the appropriate locks for each request, and, when the number of locks grows too large, escalates them automatically. As mentioned earlier, the escalation threshold is configured automatically by the system and requires no user configuration.
In practice, however, it is sometimes necessary to lock a table in the database manually so that the application runs correctly and the data stays consistent. For example, a transaction in an application may need to compute statistics over several tables based on some number; to guarantee that the statistics are consistent and refer to the same point in time, no other application or transaction may write to those tables from the moment the first table is read until the last one has been processed. The application therefore wants to use manual (also called explicit) locking to lock the tables explicitly, either when the first table begins to be counted or at the start of the whole transaction.
SQL Server supports explicit locks in its SQL statements (SELECT, INSERT, DELETE, UPDATE). The explicit-lock syntax is similar for all four statements; the syntax below is given only for the SELECT statement:
SELECT <select_list> FROM <table_name> [ WITH ( <table_hint> ) ]

Here <table_hint> refers to the type of lock to be added to the table when the statement is executed. The lock types that can be specified are:
1. HOLDLOCK: Keep the shared lock on the table until the entire transaction ends, rather than immediately releasing the added lock after the statement has been executed.
2. NOLOCK: Do not take shared or exclusive locks. When this option is in effect, uncommitted data ("dirty data") may be read; the option applies only to SELECT statements.
3. PAGLOCK: Take page locks where otherwise a single table lock would usually be taken.
4. READCOMMITTED: Perform the operation with the locking behavior of the READ COMMITTED isolation level.
5. READPAST: Skip rows that are already locked. This lets the transaction read data while skipping rows locked by other transactions instead of blocking until those locks are released; READPAST applies only to SELECT operations in transactions running at the READ COMMITTED isolation level.
6. READUNCOMMITTED: Equivalent to NOLOCK.
7. REPEATABLEREAD: Perform the operation with the locking behavior of the REPEATABLE READ isolation level.
8. ROWLOCK: Use row-level locks.
9. SERIALIZABLE: Sets the level of isolation for a transaction to be serializable.
10. TABLOCK: Use a table-level lock instead of row-level or page-level locks. SQL Server releases this lock when the statement completes; if HOLDLOCK is also specified, the lock is held until the end of the transaction.
11. TABLOCKX: Use an exclusive lock on the table, which prevents other transactions from reading or updating the table's data until the statement or the entire transaction ends.
12. UPDLOCK: Take modify (update) locks instead of shared locks while reading the table, and hold them until the end of this statement or of the whole transaction. UPDLOCK lets a user read data without blocking other readers while guaranteeing that the data has not been changed by other users when it is updated later.
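A brief sketch of how these hints appear in statements (the table and column names are illustrative only):

SELECT * FROM orders WITH (NOLOCK)       -- dirty read: no shared locks are taken
SELECT * FROM orders WITH (TABLOCKX)     -- exclusive table lock for the duration of the statement
BEGIN TRAN
SELECT qty FROM orders WITH (UPDLOCK, HOLDLOCK) WHERE order_id = 5
UPDATE orders SET qty = qty + 1 WHERE order_id = 5
COMMIT TRAN                              -- the UPDLOCK/HOLDLOCK locks are released here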
As the above shows, SQL statements in SQL Server can be locked in flexible and varied ways; used appropriately, this lets us meet special program requirements and guarantee the consistency and integrity of the data. For the ordinary user, understanding the lock mechanism does not mean it must be used: SQL Server recommends letting the system manage locks automatically, and some lock settings are deliberately not exposed to users or database administrators. Special users who need high data consistency and reliability can meet those requirements by explicitly locking resources in the database; they simply have to take particular care to avoid deadlocks.