Lost Modifications, Non-Repeatable Reads, Dirty Reads, and Phantom Reads

Source: Internet
Author: User


Common concurrency consistency problems include lost modifications, non-repeatable reads, dirty reads, and phantom reads (some materials classify phantom reads as a special case of non-repeatable reads).
Lost Modification
Next, let's take an example to illustrate the data inconsistency caused by concurrent operations.
Consider the following sequence of activities in an airline booking system:
Ticket agent A (transaction T1) reads the seat balance A of a flight and finds A = 16.
Ticket agent B (transaction T2) reads the same balance A, which is also 16.
Agent A sells a ticket and updates the balance: A = A - 1 = 15, then writes A back to the database.
Agent B also sells a ticket and updates the balance: A = A - 1 = 15, then writes A back to the database.
As a result, two tickets were sold, but the balance decreased by only 1.
To sum up, two transactions T1 and T2 read and modify the same data, and the result committed by T2 overwrites the result committed by T1, so T1's modification is lost. The problems and solutions mentioned in the previous article (data deletion and update in 2.1.4) often target this kind of concurrency problem. However, several problems remain that the above method cannot solve:
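The booking sequence above can be sketched as a lost update, with a plain integer variable standing in for the database row (the variable names are illustrative, not from any real API):

```python
# A minimal sketch of the lost-update anomaly: both transactions
# read the balance before either one writes its result back.
balance = 16          # tickets remaining, as stored in the database

t1_read = balance     # T1 (agent A) reads 16
t2_read = balance     # T2 (agent B) also reads 16

balance = t1_read - 1   # T1 sells a ticket and writes back 15
balance = t2_read - 1   # T2 sells a ticket and writes back 15,
                        # silently overwriting T1's update

print(balance)  # 15: two tickets sold, but the balance dropped by only 1
```

The final value should have been 14; T1's committed modification was lost.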
Non-Repeatable Read
A non-repeatable read means that after transaction T1 reads data, transaction T2 performs an update, so that T1 cannot reproduce its previous read result. Specifically, non-repeatable reads cover three situations:
After transaction T1 reads a data item, transaction T2 modifies it. When T1 reads the item again, it gets a value different from the first read. For example, T1 reads B = 100 for a calculation; T2 reads the same item B, modifies it, and writes B = 200 back to the database; when T1 re-reads B to verify the value, B is already 200, inconsistent with the first read.
After transaction T1 reads some records from the database based on certain conditions, transaction T2 deletes some of those records. When T1 reads again with the same conditions, some records have vanished.
After transaction T1 reads some records from the database based on certain conditions, transaction T2 inserts some new records. When T1 reads again with the same conditions, it finds additional records. (This case is also called a phantom read.)
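The first situation (T2 modifies B between T1's two reads) can be reproduced with Python's sqlite3 module. This sketch relies on one assumption: T1 issues each SELECT in autocommit mode, holding no transaction open between the two reads, so T2's committed update becomes visible in between. The table and column names are made up for the example:

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file play the roles of T1 and T2.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
t1 = sqlite3.connect(path)
t2 = sqlite3.connect(path)

t1.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, b INTEGER)")
t1.execute("INSERT INTO account VALUES (1, 100)")
t1.commit()

# T1's first read: B = 100.
first = t1.execute("SELECT b FROM account WHERE id = 1").fetchone()[0]

# T2 modifies B and commits between T1's two reads.
t2.execute("UPDATE account SET b = 200 WHERE id = 1")
t2.commit()

# T1 re-reads B with the same query and gets a different value.
second = t1.execute("SELECT b FROM account WHERE id = 1").fetchone()[0]

print(first, second)  # 100 200: the read was not repeatable
```

If T1 instead held a read lock on B until it finished (as under the level 3 locking protocol described below), T2's update would have to wait and both reads would return 100.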
Reading "Dirty" Data
Reading "dirty" data means that transaction T1 modifies a data item and writes it back to disk; transaction T2 then reads the same item; afterwards T1 is rolled back for some reason, restoring the item to its original value. The data T2 read is now inconsistent with the data in the database: it is "dirty", that is, incorrect.
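A minimal sketch of the dirty read, assuming T2 is allowed to see T1's uncommitted write (as it would be under the ReadUncommitted level discussed later); the variables are illustrative stand-ins for the stored and in-flight values:

```python
# The value durably committed in the database.
committed = 100

# T1 writes a new value but has not committed yet.
in_flight = 50

# T2 reads the uncommitted value: a dirty read.
t2_read = in_flight

# T1 is rolled back; the stored value reverts to 100.
in_flight = committed

print(t2_read, committed)  # 50 100: T2 holds a value that never existed
```

T2 now proceeds with a value (50) that, after T1's rollback, never officially existed in the database.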
The root cause of the three kinds of data inconsistency above is that concurrent operations undermine the isolation of transactions. Concurrency control schedules concurrent operations correctly, so that the execution of one user's transaction is not interfered with by other transactions, thereby avoiding data inconsistency.
Solutions to concurrency consistency problems
Locking
Locking is a very important technique for implementing concurrency control. Locking means that before transaction T operates on a data object, such as a table or a record, it sends a request to the system to lock that object. After locking, transaction T has certain control over the data object; before T releases its lock, other transactions cannot update the object.
There are two basic types of lock: exclusive locks (X locks) and shared locks (S locks).
An exclusive lock is also called a write lock. If transaction T places an X lock on data object A, only T may read and modify A, and no other transaction can place any type of lock on A until T releases its lock. This guarantees that other transactions can neither read nor modify A before T releases the lock on A.
A shared lock is also called a read lock. If transaction T places an S lock on data object A, other transactions may place only S locks on A, not X locks, until T releases its S lock. This guarantees that other transactions can read A, but cannot modify A, before T releases the S lock on A.
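The two rules above boil down to a small compatibility matrix: S is compatible only with S, and X is compatible with nothing. A toy check, written for illustration only:

```python
# A sketch of X/S lock compatibility: given the strongest lock another
# transaction currently holds on an object (or None), may a new request
# be granted?
def compatible(held, requested):
    """Return True if `requested` can be granted while `held` is held."""
    if held is None:
        return True                      # unlocked objects accept any lock
    return held == "S" and requested == "S"  # only S with S is compatible

print(compatible(None, "X"))  # True:  nobody holds a lock
print(compatible("S", "S"))   # True:  many readers may share
print(compatible("S", "X"))   # False: cannot write while someone reads
print(compatible("X", "S"))   # False: cannot read past an exclusive lock
```

Requests that are not compatible must wait until the conflicting lock is released.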
Locking Protocols
When using X locks and S locks to lock data objects, some rules must be agreed upon: when to apply for an X lock or an S lock, how long to hold a lock, when to release it, and so on. These rules are called locking protocols; specifying different rules yields different locking protocols. The following describes the three-level locking protocol. The three levels solve, to varying degrees, the inconsistency problems of lost modifications, non-repeatable reads, and reading "dirty" data, and guarantee correct scheduling of concurrent operations. The section below only defines the three levels without discussing them in depth.
Level 1 Locking Protocol
Level 1 locking protocol: transaction T must place an X lock on data item R before modifying it, and hold the lock until the transaction ends. The end of a transaction includes normal termination (COMMIT) and abnormal termination (ROLLBACK). The level 1 protocol prevents lost modifications and guarantees that transaction T is recoverable. Under the level 1 protocol, a transaction that only reads data without modifying it needs no locks, so the protocol cannot guarantee repeatable reads or prevent reading "dirty" data.
Level 2 Locking Protocol
Level 2 locking protocol: the level 1 protocol, plus the rule that transaction T must place an S lock on data item R before reading it, releasing the S lock after the read completes. In addition to preventing lost modifications, the level 2 protocol further prevents reading "dirty" data.
Level 3 Locking Protocol
Level 3 locking protocol: the level 1 protocol, plus the rule that transaction T must place an S lock on data item R before reading it and hold it until the transaction ends. In addition to preventing lost modifications and reading "dirty" data, the level 3 protocol further prevents non-repeatable reads.
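Returning to the ticket example, the level 1 protocol fixes the lost update: each transaction takes an exclusive lock before its read-modify-write and holds it until it finishes. A sketch using a `threading.Lock` as a stand-in for the X lock on the balance row (the names are illustrative):

```python
import threading

balance = 16                 # tickets remaining
x_lock = threading.Lock()    # stands in for the X lock on the balance row

def sell_ticket():
    """One 'transaction': lock, read, modify, write, then release."""
    global balance
    with x_lock:             # acquire the X lock before touching the data
        current = balance    # the read now cannot see a stale value
        balance = current - 1
                             # lock released here: 'transaction end'

t1 = threading.Thread(target=sell_ticket)
t2 = threading.Thread(target=sell_ticket)
t1.start(); t2.start()
t1.join(); t2.join()

print(balance)  # 14: both sales are reflected, no lost update
```

Because the second transaction cannot acquire the lock until the first has written its result, the interleaving that lost an update in the earlier example can no longer occur.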
Transaction Isolation Levels
Although database theory provides a sound solution to concurrency consistency problems, it is obviously difficult for programmers to control when to lock, which lock to use, and when to release it. The vast majority of databases and development tools therefore provide transaction isolation levels, allowing users to handle concurrency consistency problems in an easier way. Common transaction isolation levels are ReadUncommitted, ReadCommitted, RepeatableRead, and Serializable. Under different isolation levels, the same database access may return different results. We will explore the transaction isolation levels through several experiments, and see how SQL Server converts them into locks behind the scenes.
Serializable
Serializable is the highest transaction isolation level. At this level, dirty reads, non-repeatable reads, and phantom reads cannot occur. Before explaining why in detail, let's first look at what a phantom read is.
A phantom read occurs when transaction T1 reads certain records from the database based on certain conditions, transaction T2 then inserts new records that meet T1's search conditions, and when T1 reads again with the same conditions it finds additional records.
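The definition can be sketched in a few lines, assuming no isolation at all; a Python list stands in for the table, and a filter stands in for T1's search condition (all names are invented for the example):

```python
# The "table": (flight, seats_remaining) rows.
rows = [("flight-101", 16), ("flight-202", 9)]

# T1's query: flights with more than 5 seats remaining.
first = [r for r in rows if r[1] > 5]      # T1's first read: 2 rows

# T2 inserts a row matching T1's condition and commits.
rows.append(("flight-303", 12))

# T1 repeats the exact same query and finds an extra row.
second = [r for r in rows if r[1] > 5]     # T1's second read: 3 rows

print(len(first), len(second))  # 2 3: a "phantom" row appeared
```

Note that row-level locks on the two existing rows would not have helped T1: the phantom is a new row, which is why Serializable needs to lock the search range, not just the rows already read.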

