SQL Server lock mechanism (IV) - Overview (impact of various transaction isolation levels)


Previous articles in this series:

SQL Server lock mechanism (I) - Overview (lock types and ranges)

SQL Server lock mechanism (II) - Overview (lock compatibility and lockable resources)

SQL Server lock mechanism (III) - Overview (locks and transaction isolation levels)

 

 

VI. Impact of various transaction isolation levels

Users who modify data affect other users who are reading or modifying the same data at the same time; that is, these users access the data concurrently. If the data storage system has no concurrency control, you may see the following negative effects:

· Uncommitted dependency (dirty read)

· Inconsistent analysis (non-repeatable read)

· Phantom read
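
The examples below use a sample table named tbUnRead. Its definition is not given in the original; a minimal sketch consistent with the statements that follow (the column names and types are assumptions) might be:

-- Hypothetical schema for the sample table used in the examples below
CREATE TABLE tbUnRead
(
    ID   int,
    name nvarchar(20)
);
-- Seed rows consistent with the later examples (the values are assumptions)
INSERT tbUnRead SELECT 1, 'Zhang San' UNION SELECT 2, 'Jack';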

(1) Dirty read:

For example, suppose Zhang is executing the following transaction:

BEGIN TRAN
INSERT tbUnRead
SELECT 3, 'Zhang San' UNION
SELECT 4, 'Li Si'
-- Delay 5 seconds to simulate a real transaction processing business logic
WAITFOR DELAY '00:00:05'
ROLLBACK TRAN

-- Meanwhile, Li queries the data in the table with the following statements:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
-- Query the data
SELECT * FROM tbUnRead WHERE name LIKE 'Zhang%'

 

 

Li can see the rows inserted by Zhang's still-uncommitted INSERT statement, as if they were already in the database.

However, Zhang eventually rolled back the transaction instead of committing it, so these rows were never actually inserted into the database. Li has therefore taken dirty data as a real query result.

To solve this problem, change the transaction isolation level from uncommitted read (READ UNCOMMITTED) to committed read (READ COMMITTED). Uncommitted read is only suitable when the correctness of a query result is not very important, or when the data is queried repeatedly, so that an occasionally incorrect result is corrected by the next query and has little impact.
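
As a sketch of the fix (assuming the same table and sessions as above, and the default locking form of committed read, i.e. READ_COMMITTED_SNAPSHOT off), Li would raise his isolation level, and his query would then wait for Zhang's transaction to finish instead of reading the uncommitted rows:

-- Li's session: committed read no longer sees uncommitted rows;
-- the query blocks until Zhang's exclusive locks are released
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SELECT * FROM tbUnRead WHERE name LIKE 'Zhang%'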

 

(2) Non-repeatable read

For example, Zhang is querying data as follows:

SET TRANSACTION ISOLATION LEVEL READ COMMITTED
BEGIN TRAN
SELECT * FROM tbUnRead WHERE ID = 2
-- Delay 5 seconds to simulate a real transaction processing business logic
WAITFOR DELAY '00:00:05'
SELECT * FROM tbUnRead WHERE ID = 2
COMMIT TRAN

-- Meanwhile, Li updates the data in the table with the following statement:
UPDATE tbUnRead SET name = 'Jack_upd' WHERE ID = 2

 

As a result, Zhang issues two identical queries within the same transaction yet gets different results.

This happens because at the committed read isolation level a shared lock is held only while the query runs: it is released as soon as the query completes, not when the transaction completes. Although Zhang's transaction is still open, only the exclusive locks acquired in it are retained until the transaction ends; the shared locks taken while reading are released immediately, so others can still modify the data. Two identical queries within one transaction can therefore return different results. The solution is to raise the isolation level to repeatable read (REPEATABLE READ).

The "Repeatable read" transaction isolation level keeps the shared locks established in the transaction process to the completion of the transaction, although it can avoid the problem of "unrepeatable read, however, it may also cause data to be locked for too long, while others cannot read data, which affects the concurrency rate and even increases the "deadlock" rate.

 

(3) Phantom read

For example, Zhang is querying data as follows:

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
SELECT * FROM tbUnRead WHERE ID = 3
-- Delay 5 seconds to simulate a real transaction processing business logic
WAITFOR DELAY '00:00:05'
SELECT * FROM tbUnRead WHERE ID = 3
COMMIT TRAN

-- Meanwhile, Li adds a record with the following statement:
INSERT tbUnRead
SELECT 3, 'phantom read'

 

 

 

Zhang has set the isolation level to repeatable read, so the rows he has already read stay locked until the end of the transaction. However, neither shared locks nor exclusive locks can prevent others from inserting new rows. As a result, no matching row exists during the first query, but one appears during the second. This is called a "phantom read".

To avoid this problem, set the isolation level to serializable (SERIALIZABLE). Range locks then prevent others from inserting rows into the range covered by the query until the transaction completes.
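
A minimal sketch of the fix in Zhang's session (note that key-range locking relies on an index covering the ID column; without one, the engine falls back to locking a wider range):

-- Zhang's session: serializable takes key-range locks on the range
-- covered by WHERE ID = 3, so Li's INSERT of a row with ID = 3 blocks
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT * FROM tbUnRead WHERE ID = 3
WAITFOR DELAY '00:00:05'
SELECT * FROM tbUnRead WHERE ID = 3   -- no phantom row appears
COMMIT TRAN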

 

 

VII. Access errors at different transaction isolation levels

("Yes" means the error can occur at that level; "No" means the level prevents it.)

Isolation level     Dirty read   Non-repeatable read   Phantom read
Uncommitted read    Yes          Yes                   Yes
Committed read      No           Yes                   Yes
Repeatable read     No           No                    Yes
Snapshot            No           No                    No
Serializable        No           No                    No

 

 

VIII. Interaction between common locks and transaction isolation levels

 

Shared lock

· Committed read: released as soon as the data has been read
· Repeatable read: released only after the transaction ends
· Snapshot: no lock taken; controlled by row versioning
· Serializable: released only after the transaction ends

Update lock

· Committed read: after the data is read, released or upgraded to an exclusive lock
· Repeatable read: after the data is read, released or upgraded to an exclusive lock
· Snapshot: no lock taken; controlled by row versioning
· Serializable: after the data is read, released or upgraded to an exclusive lock

Exclusive lock

· Committed read: released only after the transaction ends
· Repeatable read: released only after the transaction ends
· Snapshot: no lock taken; controlled by row versioning
· Serializable: released only after the transaction ends

 

IX. Dynamic Lock Management

The database engine uses a dynamic lock management strategy to balance locking against its cost, dynamically adjusting both the granularity and the type of lock. Using the lowest-level row locks instead of wider page locks reduces the chance that two transactions will request locks on the same range of data; this improves concurrency, serves more users at the same time, and reduces the chance of deadlock. Conversely, converting low-level locks into a higher-level lock reduces the system's resource burden but increases the likelihood of concurrency contention.

This mechanism is managed by the lock manager: every lock must be recorded in memory, and every data access operation must cooperate with the lock manager. Imagine a table with 1,000,000 records. When you execute an UPDATE command without a WHERE clause, the database engine uses row locks by default, but recording 1,000,000 row locks plus the related intent locks would consume a large amount of system resources. When system resources run short, the database engine automatically escalates the lock level: row locks are escalated to page locks, and if resources are still insufficient, escalated again to a table lock.

In the example above, if each page holds 200 records, the 1,000,000 row locks on the table become 5,000 page locks, saving a large number of lock records and intent locks. If resources are still tight, the lock level can be escalated again to a table lock, so that only a single lock is needed.

The larger the scope of a lock, the fewer resources are spent managing locks, but the lower the concurrency on the resource. For example, with row locks, you can access your records and I can access mine without affecting each other. With page locks, whoever grabs a page first forces everyone else who wants a record on that page to wait until the page is released. With a table lock, only one session at a time can access the table's records, and everyone else must wait.
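
To see which granularity the engine actually chose, you can inspect the lock manager's records while a transaction holds its locks; here is a sketch using the sys.dm_tran_locks view (available since SQL Server 2005):

-- Count current locks by resource type (RID, PAGE, OBJECT, ...) and mode;
-- after escalation, many RID/PAGE entries collapse into a single OBJECT lock
SELECT resource_type, request_mode, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
GROUP BY resource_type, request_mode;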

 

 

 

Generally, you do not need to set the lock scope manually; the dynamic locking strategy lets the Microsoft SQL Server database engine determine the most economical lock for each situation. When a query executes, the database engine automatically chooses the most appropriate lock based on the schema and the characteristics of the query. For example, to reduce locking overhead, the optimizer may choose page locks in an index when performing an index scan.

Dynamic locking has the following advantages:

· Simplified database administration. Database administrators do not have to adjust lock escalation thresholds.

· Improved performance. The database engine minimizes system overhead by using locks appropriate to the task.

· Application developers can focus on development, because the database engine handles locking automatically.

In SQL Server 2008, lock escalation behavior changed and the table-level LOCK_ESCALATION option was introduced.
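
For reference, the table-level option looks like this (a sketch; the valid values are TABLE, the default, AUTO for partition-level escalation, and DISABLE):

-- Allow escalation to the partition level instead of the whole table
ALTER TABLE tbUnRead SET (LOCK_ESCALATION = AUTO);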
