How to Master the SQL Server Lock Mechanism

Source: Internet
Author: User

Database lock

Locks are a very important concept in databases: they are the main mechanism for ensuring integrity and consistency in multi-user environments. When multiple users can manipulate data in the same database at the same time, data inconsistency can occur. That is, if transactions are not locked and several users access the database simultaneously, problems can arise when their transactions use the same data at the same time. These problems include lost updates, dirty reads, non-repeatable reads, and phantom reads:

1. A lost update occurs when two or more transactions select the same row and then update it based on the value originally selected. Because each transaction is unaware of the others, the last update overwrites the updates made by the other transactions, and data is lost. For example, two editors each make an electronic copy of the same document. Each editor independently changes his copy and then saves it, overwriting the original document. The editor who saves last overwrites the changes made by the first editor. This problem can be avoided if the second editor cannot make changes until the first editor has finished.
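The read-then-update pattern above can be protected in T-SQL with an UPDLOCK hint so a second session waits instead of reading a value that is about to be overwritten. This is a minimal sketch, assuming the SQL Server 2000 pubs sample database and its authors table; the au_id value is one of the sample rows.

```sql
-- Sketch: preventing a lost update (pubs sample database assumed).
-- If two sessions run this pattern concurrently, the second session's
-- SELECT waits for the first to commit, instead of reading stale data.
BEGIN TRANSACTION;

SELECT au_lname
FROM authors WITH (UPDLOCK)       -- take an update lock while reading
WHERE au_id = '172-32-1176';

UPDATE authors
SET au_lname = 'NewName'          -- illustrative value only
WHERE au_id = '172-32-1176';

COMMIT TRANSACTION;
```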

2. A dirty read occurs when one transaction is modifying data that has not yet been committed to the database and another transaction reads that data. Because the data has not been committed, what the second transaction reads is dirty data, and operations based on it may be incorrect. For example, an editor is revising an electronic document. During the revision, another editor copies the document (the copy contains all changes made so far) and distributes it to its intended readers. The first editor then decides the changes are wrong, deletes the edits, and saves the document. The distributed copies now contain edits that no longer exist and should never have been treated as existing. This problem can be avoided if no one can read the document until the first editor has finalized the changes.
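A dirty read can be demonstrated with the READ UNCOMMITTED isolation level. The sketch below assumes the pubs sample database; the two halves run in two separate sessions.

```sql
-- Session 1: modifies a row but does not commit yet.
BEGIN TRANSACTION;
UPDATE authors SET au_lname = 'Draft' WHERE au_id = '172-32-1176';
-- ...transaction still open...

-- Session 2: READ UNCOMMITTED reads the uncommitted value 'Draft'.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT au_lname FROM authors WHERE au_id = '172-32-1176';
```

Under the default READ COMMITTED level, Session 2's SELECT would instead block until Session 1 commits or rolls back, which is exactly how shared locks prevent dirty reads.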

3. A non-repeatable read occurs when a transaction reads the same data more than once and, before the transaction completes, another transaction modifies that data. The two reads within the first transaction can therefore return different values, which is why it is called a non-repeatable read. For example, an editor reads the same document twice, but between the two reads the author rewrites it. When the editor reads the document the second time, it has changed, so the original read cannot be repeated. This problem can be avoided if the editor can read the document only after the author has finished writing.
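The REPEATABLE READ isolation level prevents this by holding shared locks until the transaction ends. A minimal sketch, again assuming the pubs sample database:

```sql
-- Sketch: avoiding non-repeatable reads with REPEATABLE READ.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
SELECT au_lname FROM authors WHERE au_id = '172-32-1176';  -- first read
-- Shared locks are now held until COMMIT, so another session's UPDATE
-- of this row blocks, and the second read returns the same value.
SELECT au_lname FROM authors WHERE au_id = '172-32-1176';  -- second read
COMMIT TRANSACTION;
```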

4. A phantom read is a phenomenon that occurs when transactions do not execute in isolation. For example, the first transaction modifies data across all the rows of a table while, at the same time, a second transaction inserts a new row into that table. The user running the first transaction later finds that the table still contains rows that were never modified, as if they had appeared out of an illusion. For example, an editor revises a document submitted by an author, but when the production department merges the revisions into the master copy, the author has already added new, unedited material. This problem can be avoided if no one can add new material until the editors and the production department have finished processing the original document.
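The SERIALIZABLE isolation level prevents phantoms by locking the range a query covers, not just the rows it found. A sketch under the same pubs-database assumption:

```sql
-- Sketch: blocking phantom rows with SERIALIZABLE.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
SELECT COUNT(*) FROM authors WHERE state = 'CA';
-- Range locks now prevent other sessions from inserting a new 'CA'
-- author until this transaction ends, so the repeated count cannot
-- see a phantom row.
SELECT COUNT(*) FROM authors WHERE state = 'CA';
COMMIT TRANSACTION;
```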

Locking is therefore used to handle concurrent access by multiple users. A lock is the primary means of preventing other transactions from accessing a specified resource, and thus of implementing concurrency control. When a user locks an object in the database, other users cannot access that object. The impact of locking on concurrency depends on lock granularity. To understand which resources can be locked, first consider how the system manages space. In SQL Server 2000, the smallest unit of space management is the page, which is 8 KB; all data, logs, and indexes are stored on pages. One restriction follows from this: a row of data must fit on a single page and cannot span pages. Above the page, the unit of space management is the extent, which is eight contiguous pages; tables and indexes allocate space in whole extents, so a table or index is made up of one or more extents. A lock placed on a table restricts concurrent access to the entire table; a lock on an extent restricts access to the whole extent; a lock on a data page restricts access to that page; and a lock on a row restricts concurrent access only to that row.
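In SQL Server 2000 the locks a session holds, and their granularity, can be inspected with the sp_lock system procedure. A minimal sketch, assuming the pubs sample database:

```sql
-- Sketch: inspecting held locks with sp_lock (SQL Server 2000).
-- The Type column shows the granularity: RID (row), KEY (index key),
-- PAG (page), EXT (extent), TAB (table), DB (database).
BEGIN TRANSACTION;
UPDATE authors SET au_lname = au_lname WHERE au_id = '172-32-1176';
EXEC sp_lock @@SPID;    -- locks held by the current session
ROLLBACK TRANSACTION;
```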

SQL Server 2000 Lock Mechanism

SQL Server 2000 uses multi-granularity locking, allowing a transaction to lock different types of resources. To minimize the cost of locking, SQL Server automatically locks resources at a level appropriate to the task. Locking at a small granularity (such as rows) increases concurrency but has higher overhead, because if many rows are locked, more locks must be managed. Locking at a large granularity (such as a table) is expensive in terms of concurrency, because locking the whole table blocks other transactions from accessing any part of it, but its overhead is low because few locks need to be maintained. SQL Server can lock rows, pages, extents, tables, databases, and other resources.
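Although SQL Server normally chooses the granularity itself, table hints can suggest one explicitly. A sketch assuming the pubs sample database; these hints are requests the optimizer may honor, not guarantees:

```sql
-- Sketch: granularity hints (SQL Server normally decides on its own).
SELECT * FROM authors WITH (ROWLOCK) WHERE au_id = '172-32-1176';
SELECT * FROM authors WITH (PAGLOCK) WHERE au_id = '172-32-1176';
SELECT * FROM authors WITH (TABLOCK);  -- shared lock on the whole table
```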

The row is the smallest unit that can be locked, and row-level locks occupy the least data. During transaction processing they allow other transactions to continue operating on other data in the same table, or even on the same page, which greatly reduces the waiting time of other transactions and improves system concurrency.

A page-level lock locks an entire page at a time, regardless of how much of its data the transaction actually touches, and data on that page cannot be manipulated by other transactions. Before SQL Server 7.0, the page was the smallest lock granularity available. A page-level lock occupies more data than a row-level lock: even if a transaction operates on only one row of the page, the other rows on that page cannot be used by other transactions. Page-level locking therefore wastes data, in the sense that data on the page is occupied but not used, although the waste can never exceed the rows of a single page.

Table-level locks are also very important. A table-level lock locks the entire table in which a transaction is operating, so other transactions cannot access any data in that table. Table-level locks are generally used when a transaction processes a large volume of data. Compared with row-level and page-level locks, they consume the fewest system resources, such as memory, but occupy the most data, and a large amount of data may be wasted: because the entire table is locked, other transactions cannot manipulate any of its data.

An extent lock is a special type of lock used only in a few specific situations. An extent lock means that a transaction occupies an extent, which cannot be occupied by other transactions at the same time. This type of lock is used when the system allocates physical space, for example while creating a database or a table: space is allocated extent by extent, and the extent lock prevents other transactions from using the same extent at the same time. Once the space has been allocated, the extent lock is no longer used; in particular, extent locks are not involved in ordinary data-manipulation transactions.

A database-level lock locks the entire database and prevents any other user or transaction from accessing it. It is a very special lock, used only during database restore operations, and it is the highest-level lock because it controls the operation of the whole database. Whenever a database is being restored, it must be set to single-user mode so that the system can prevent other users from performing operations on it.

Row-level locks are optimal in the sense that no data is occupied without being used. However, if a transaction frequently operates on many records in a table, row-level locks are placed on many rows, the number of locks in the database system rises sharply, and the added load degrades performance. SQL Server therefore also supports lock escalation: adjusting lock granularity by replacing many low-granularity locks with a few higher-granularity locks in order to reduce system load. When a transaction holds many locks and the escalation threshold is reached, the system automatically escalates row-level and page-level locks to a table-level lock. Both escalation and its threshold are determined automatically by the system and need not be configured by the user.
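Escalation can be observed with sp_lock after an update that touches many rows. The sketch below assumes the pubs sample database; note that its authors table is small, so on a real server the row count may stay below the escalation threshold and this remains illustrative of the pattern, not a guaranteed demonstration.

```sql
-- Sketch: provoking lock escalation (threshold chosen by the server).
BEGIN TRANSACTION;
UPDATE authors SET au_lname = au_lname;   -- touches every row
-- With enough rows, the many RID/PAG entries are replaced by one TAB lock.
EXEC sp_lock @@SPID;
ROLLBACK TRANSACTION;
```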

SQL Server 2000 lock mode

When locking a SQL Server database, in addition to locking different resources, different degrees of locking can be applied; that is, there are multiple lock modes. The lock modes in SQL Server include:

1. Shared locks. In SQL Server, shared locks are used for all read-only data operations. Shared locks are not exclusive: they allow multiple concurrent transactions to read the locked resource. By default, SQL Server releases a shared lock as soon as the data has been read. For example, when "SELECT * FROM authors" is executed, the first page is locked, read, and released before the second page is locked, so the first page can be modified while the rest of the read is still in progress. However, both the transaction isolation level set on the connection and locking hints in the SELECT statement can change this default. For example, "SELECT * FROM authors WITH (HOLDLOCK)" keeps the shared locks for the entire operation, until the transaction completes.
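The HOLDLOCK behavior described above can be sketched as follows, assuming the pubs sample database. HOLDLOCK is equivalent to running the statement at the SERIALIZABLE isolation level.

```sql
-- Sketch: holding shared locks to the end of the transaction.
BEGIN TRANSACTION;
SELECT * FROM authors WITH (HOLDLOCK);  -- shared locks kept until COMMIT
-- ...other statements; writers to authors block until this commits...
COMMIT TRANSACTION;
```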

2. Update locks are used during the initialization phase of a modification to lock resources that may be modified, which avoids the deadlocks that arise from using shared locks for this purpose. With shared locks, a modification proceeds in two steps: first a shared lock is acquired and the data is read; then the shared lock is upgraded to an exclusive lock and the modification is performed. If two or more transactions simultaneously hold shared locks on the same resource and each then tries to upgrade to an exclusive lock, neither will release its shared lock while waiting for the other to do so, and a deadlock results. If an update lock is acquired before the data is read and upgraded to an exclusive lock only when the data is actually modified, the deadlock is avoided.
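The read-then-modify pattern that deadlocks under shared locks can be rewritten with the UPDLOCK hint. Because only one transaction at a time can hold an update lock on a resource, the later upgrade to exclusive cannot deadlock in this way. A sketch assuming the pubs sample database and its titles table:

```sql
-- Sketch: deadlock-free read-then-modify using an update lock.
BEGIN TRANSACTION;
SELECT royalty
FROM titles WITH (UPDLOCK)        -- update lock instead of shared lock
WHERE title_id = 'BU1032';
UPDATE titles
SET royalty = royalty + 1          -- illustrative change only
WHERE title_id = 'BU1032';         -- lock upgrades to exclusive here
COMMIT TRANSACTION;
```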

3. Exclusive locks are reserved for data modification. Resources locked exclusively can be neither read nor modified by other transactions.
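An ordinary UPDATE acquires exclusive locks that persist to the end of the transaction. A sketch assuming the pubs sample database:

```sql
-- Sketch: exclusive locks taken by an UPDATE are held until COMMIT.
BEGIN TRANSACTION;
UPDATE authors SET phone = '408 496-7223' WHERE au_id = '172-32-1176';
-- Another session's SELECT of this row (at the default READ COMMITTED
-- level) blocks here until this transaction commits or rolls back.
COMMIT TRANSACTION;
```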

4. Schema locks. A schema modification (Sch-M) lock is used while executing a Data Definition Language (DDL) operation on a table, such as adding a column or dropping the table. A schema stability (Sch-S) lock is used while compiling a query. The Sch-S lock does not block any transactional lock, including exclusive locks, so while a query is being compiled, other transactions, including those holding an exclusive lock on the table, can continue to run; DDL operations on the table, however, cannot.
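A DDL statement takes its Sch-M lock for the duration of the transaction. The sketch below adds a column to the sample authors table purely for illustration, assuming the pubs sample database.

```sql
-- Sketch: DDL holds a schema modification (Sch-M) lock on the table.
BEGIN TRANSACTION;
ALTER TABLE authors ADD note VARCHAR(100) NULL;  -- Sch-M on authors
-- All other access to authors, even a plain SELECT, blocks here
-- until the transaction ends.
COMMIT TRANSACTION;
```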
