How to master the lock mechanism of SQL Server

Locks in the database

A lock is a very important concept in databases; it is mainly used to guarantee database integrity and consistency in a multi-user environment. When multiple users can manipulate data in the same database at the same time, data inconsistency can occur. That is, if there are no locks and multiple users access a database simultaneously, problems may arise when their transactions use the same data at the same time. These problems include lost updates, dirty reads, non-repeatable reads, and phantom reads:

1. A lost update occurs when two or more transactions select the same row and then update the row based on the value originally selected. Each transaction is unaware of the existence of the others, so the last update overwrites the updates made by the other transactions, and that data is lost. For example, two editors make electronic copies of the same document. Each editor changes his copy independently and then saves it, overwriting the original document. The editor who saves his copy last overwrites the changes made by the first editor. This problem can be avoided if the second editor cannot make changes until the first editor has finished.

2. A dirty read means that while one transaction is accessing data and has modified it without yet committing the modification to the database, another transaction accesses and then uses that data. Because the modification has not been committed, the data read by the second transaction is dirty data, and operations based on it may be incorrect. For example, an editor is changing an electronic document. During the change, another editor copies the document (the copy contains all the changes made so far) and distributes it to the intended audience. The first editor then decides that the changes made so far are wrong, deletes the edits, and saves the document. The distributed document contains edits that no longer exist and should be treated as if they had never existed. This problem can be avoided if no one can read the changed document until the first editor has saved the final version.

3. A non-repeatable read refers to reading the same data more than once within one transaction. Before that transaction has finished, another transaction accesses and modifies the same data. Because of the modification made by the second transaction between the two reads of the first transaction, the first transaction may read a different value each time. The data read twice within one transaction is thus not the same, which is why this is called a non-repeatable read. For example, an editor reads the same document twice, but between the two reads the author rewrites the document. When the editor reads the document the second time, it has changed, and the original read cannot be repeated. This problem can be avoided if the editor can read the document only after the author has completely finished writing it.

4. A phantom read is a phenomenon that occurs when transactions are not executed independently. For example, the first transaction modifies data in a table, and the modification involves all of the rows in the table. At the same time, a second transaction also modifies data in that table by inserting a new row. Later, the user of the first transaction finds that the table still contains unmodified rows, as if an illusion had occurred. For example, an editor changes a document submitted by the author, but when the production department merges the changes into the master copy of the document, it discovers that the author has added new, unedited material to the document. This problem can be avoided if no one can add new material to the document until the editor and production department have finished processing the original document. A short T-SQL sketch of these four problems, and of the isolation levels that prevent them, follows below.
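
As a minimal, hedged sketch of these four problems, using the authors table of the pubs sample database (which this article itself queries later); the "sessions" in the comments stand for separate connections, and the sample values are illustrative only:

    -- Lost update: both sessions read the same row, then each writes back its own
    -- value; the UPDATE that runs last silently overwrites the other editor's change.
    SELECT phone FROM authors WHERE au_id = '172-32-1176';                 -- sessions A and B
    UPDATE authors SET phone = '408 496-0001' WHERE au_id = '172-32-1176'; -- session A
    UPDATE authors SET phone = '408 496-0002' WHERE au_id = '172-32-1176'; -- session B wins

    -- Dirty read: session A changes a row but has not committed ...
    BEGIN TRANSACTION;
    UPDATE authors SET phone = 'UNKNOWN' WHERE au_id = '172-32-1176';
    -- ... while session B, at READ UNCOMMITTED, already sees the uncommitted value.
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    SELECT phone FROM authors WHERE au_id = '172-32-1176';                 -- dirty data
    ROLLBACK TRANSACTION;   -- session A rolls back: the value session B read never existed

    -- Non-repeatable read: REPEATABLE READ holds the shared lock until the end of the
    -- transaction, so another session's UPDATE blocks and both reads return the same value.
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    BEGIN TRANSACTION;
    SELECT phone FROM authors WHERE au_id = '172-32-1176';  -- first read
    SELECT phone FROM authors WHERE au_id = '172-32-1176';  -- second read, same value
    COMMIT TRANSACTION;

    -- Phantom read: SERIALIZABLE takes range locks, so another session cannot INSERT
    -- a new row into the range until this transaction ends; the two counts match.
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRANSACTION;
    SELECT COUNT(*) FROM authors WHERE state = 'CA';
    SELECT COUNT(*) FROM authors WHERE state = 'CA';
    COMMIT TRANSACTION;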

Locking, therefore, is the method for handling concurrent access by multiple users. Locks are the main means of preventing other transactions from accessing a specified resource and of implementing concurrency control. When a user locks an object in the database, other users can no longer access that object. The effect a lock has on concurrency depends on its granularity. To understand which resources can be locked, you first need to understand how the system manages space. In SQL Server 2000, the smallest unit of space management is the page, which is 8 KB in size. All data, logs, and indexes are stored on pages. In addition, a row of data in a table must fit on a single page and cannot span pages. The unit of space management above the page is the extent, which consists of 8 contiguous pages; the smallest unit of space a table or index can occupy is an extent. A database is made up of one or more tables and indexes, that is, of many extents. A lock placed on a table restricts concurrent access to the entire table; a lock placed on an extent restricts access to the entire extent; a lock placed on a data page restricts access to the entire page; and a lock placed on a row restricts concurrent access to that row only.
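
As a small illustration, SQL Server 2000 ships the sp_lock system procedure, whose Type column reports the granularity of each lock currently held; the sketch below simply lists the locks for the whole server and then for the current connection:

    -- List the locks currently held; Type shows the granularity:
    -- RID/KEY (row), PAG (page), EXT (extent), TAB (table), DB (database).
    EXEC sp_lock;

    -- Restrict the output to the current connection.
    DECLARE @spid INT;
    SET @spid = @@SPID;
    EXEC sp_lock @spid;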

Lock mechanism of SQL Server 2000

SQL Server 2000 uses multigranular locking, which allows one transaction to lock different types of resources. To minimize the cost of locking, SQL Server automatically locks resources at a level appropriate to the task. Locking at a smaller granularity, such as rows, increases concurrency but has higher overhead, because more locks must be held and managed when many rows are locked. Locking at a larger granularity, such as a table, is expensive in terms of concurrency, because locking an entire table restricts access by other transactions to any part of the table, but the overhead is lower because fewer locks need to be maintained. SQL Server can lock resources such as rows, pages, extents, tables, and databases.
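
As a hedged sketch of these granularities, the locking hints below suggest a particular lock size to the engine (the optimizer normally chooses the level on its own; the authors table is again assumed from the pubs sample database):

    BEGIN TRANSACTION;
    SELECT * FROM authors WITH (ROWLOCK, HOLDLOCK);   -- ask for row-level shared locks
    SELECT * FROM authors WITH (PAGLOCK, HOLDLOCK);   -- ask for page-level shared locks
    SELECT * FROM authors WITH (TABLOCK, HOLDLOCK);   -- take a table-level shared lock
    -- Running sp_lock from this connection now would show the different Type values.
    COMMIT TRANSACTION;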

The row is the smallest unit of space that can be locked. Row-level locks lock the smallest amount of data, so other transactions can continue to manipulate other data in the same table, or even on the same page, while the transaction is being processed. This greatly reduces the time other transactions spend waiting and increases the concurrency of the system.

Page-level locking means that a whole page is locked each time a transaction manipulates data on it, regardless of how much of that data the transaction actually touches, so none of the data on that page can be manipulated by other transactions. Before SQL Server 7.0, page-level locks were the norm. Page-level locks lock more resources than row-level locks: even if a transaction manipulates only one row on a page, the other rows on that page cannot be used by other transactions. Using page-level locks therefore wastes data, in the sense that data on the same page is occupied but not used; the amount of data wasted in this way never exceeds the rows on a single page.

Table-level locks are also very important. A table-level lock means that when a transaction manipulates data in a table, it locks the entire table in which that data resides, and other transactions cannot access any other data in that table. Table-level locks are generally used when a transaction processes a large amount of data. They are characterized by consuming few system resources but occupying a large amount of data: compared with row-level and page-level locks, a table-level lock uses less of system resources such as memory, but occupies the most data. With a table-level lock there is a risk of wasting a large amount of data, because locking the entire table prevents other transactions from manipulating any of its other data.

An extent lock is a special type of lock that is used only in particular situations. An extent-level lock means that a transaction occupies an extent, and that extent cannot be occupied by other transactions at the same time. The system uses this type of lock when it allocates physical space, for example when creating a database or creating a table. Space is allocated in units of extents, and while the system is allocating space it uses extent locks to prevent other transactions from using the same extent at the same time. Once the system has finished allocating the space, the extent lock is no longer used. In particular, extent locks are not used by transactions that merely operate on data.

Database-level locking locks the entire database and prevents any user or transaction from accessing it. Database-level locks are a very special kind of lock used only during database restore operations. This is the highest level of lock because it controls the operation of the whole database. Whenever a database is being restored, it needs to be set to single-user mode, so that the system can prevent other users from performing any operations on it.
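
A hedged sketch of the single-user step, using the SQL Server 2000-era sp_dboption procedure and the pubs sample database purely as a stand-in name:

    -- Put the database into single-user mode before the restore work ...
    EXEC sp_dboption 'pubs', 'single user', 'true';

    -- ... perform the restore / recovery operations here ...

    -- ... and return it to multi-user mode afterwards.
    EXEC sp_dboption 'pubs', 'single user', 'false';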

Row-level locks are the optimal lock because they minimize the waste of data that is occupied but unused. However, if a user transaction frequently manipulates many records in a table, row-level locks are placed on many of the table's rows, the number of locks in the database system rises sharply, the system load increases, and performance suffers. For this reason, SQL Server also supports lock escalation. Lock escalation means adjusting the granularity of locks, replacing many low-granularity locks with a few higher-granularity locks in order to reduce system load. When a transaction in SQL Server holds many locks and the lock escalation threshold is reached, the system automatically escalates row-level and page-level locks to table-level locks. Note in particular that in SQL Server both the escalation threshold and the escalation itself are determined automatically by the system and require no user configuration.
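
One hedged way to watch this behaviour is to update many rows of a big table in one transaction and inspect the locks from another connection; the LargeTable name and Flag column below are purely hypothetical:

    -- Connection 1: touch a large number of rows without committing.
    BEGIN TRANSACTION;
    UPDATE LargeTable SET Flag = 1;          -- hypothetical large table
    -- (leave the transaction open for a moment)

    -- Connection 2: inspect the locks held by connection 1.
    -- If the row/page lock count crossed the internal threshold, sp_lock reports
    -- a single TAB lock instead of thousands of RID/PAG locks.
    EXEC sp_lock;

    -- Connection 1: release everything.
    ROLLBACK TRANSACTION;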

Lock modes of SQL Server 2000

When locking in a SQL Server database, in addition to locking different resources you can lock to different degrees; that is, there are multiple lock modes. The lock modes in SQL Server include:

1. Shared locks. In SQL Server, shared locks are used for all read-only data operations. Shared locks are non-exclusive: they allow multiple concurrent transactions to read the resources they lock. By default, SQL Server releases a shared lock as soon as the data has been read. For example, when the query SELECT * FROM authors is executed, the first page is locked, the lock on the first page is released, and then the second page is locked; this allows the first page, already unlocked, to be modified while the read operation continues. However, the transaction isolation level connection option and locking hints in the SELECT statement can change this default behaviour. For example, SELECT * FROM authors (HOLDLOCK) asks SQL Server to hold the shared locks until the transaction completes instead of releasing them as each page is read.

2. Update locks. Update locks are used during the initialization phase of a modify operation to lock resources that may be modified, which avoids the deadlocks caused by using shared locks for this purpose. With shared locks, a modify operation takes two steps: first obtain a shared lock and read the data, then upgrade the shared lock to an exclusive lock and perform the modification. If two or more transactions simultaneously hold shared locks on the same resource and both then try to modify the data, each attempts to upgrade its shared lock to an exclusive lock. Neither transaction releases its shared lock; each waits for the other to release, and a deadlock results. If an update lock is applied to the data directly before modification and is upgraded to an exclusive lock only when the data is actually modified, the deadlock is avoided, because only one transaction at a time can hold the update lock on a resource.

3. Exclusive locks. Exclusive locks are reserved for modifying data. Resources locked by an exclusive lock cannot be read or modified by other transactions.

4. Schema locks. Schema modification (Sch-M) locks are used when performing data definition language (DDL) operations on a table (for example, adding a column or dropping the table). Schema stability (Sch-S) locks are used when compiling queries. Schema stability (Sch-S) locks do not block any transactional locks, including exclusive locks; as a result, other transactions, including those holding exclusive locks on the table, can continue to run while a query is being compiled. However, DDL operations cannot be performed on the table while an Sch-S lock is held. A combined T-SQL sketch of these lock modes follows below.
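
As a hedged sketch of these lock modes against the pubs sample authors table (the au_id value is a sample row and the notes column added at the end is purely illustrative):

    -- Shared locks held for the whole transaction instead of being released page by page:
    BEGIN TRANSACTION;
    SELECT * FROM authors WITH (HOLDLOCK);
    COMMIT TRANSACTION;

    -- Update lock for a read-then-modify pattern; two sessions running this block
    -- serialize on the update lock instead of deadlocking while both try to upgrade
    -- shared locks to exclusive locks:
    BEGIN TRANSACTION;
    SELECT phone FROM authors WITH (UPDLOCK) WHERE au_id = '172-32-1176';
    UPDATE authors SET phone = '408 496-7223' WHERE au_id = '172-32-1176';  -- exclusive lock
    COMMIT TRANSACTION;

    -- Schema modification (Sch-M) lock taken by DDL on the table:
    ALTER TABLE authors ADD notes VARCHAR(255) NULL;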
