Concurrency consistency issues

Source: Internet
Author: User

Lost update, non-repeatable read, dirty read, phantom read

Let's look at some examples of the data inconsistencies that concurrent operations can cause.

Lost update

Consider the following sequence of activities in an airline booking system:
Ticketing point A reads the seat balance a for a flight; a = 16.
Ticketing point B reads the same balance a; it is also 16.
Ticketing point A sells a ticket and updates the balance a ← a - 1, so a = 15, then writes a back to the database.
Ticketing point B also sells a ticket and updates the balance a ← a - 1, so a = 15, then writes a back to the database.
As a result, two tickets were sold, but the balance in the database decreased by only 1.

In summary: two transactions T1 and T2 read the same data and modify it; the result committed by T2 overwrites the result committed by T1, so T1's modification is lost.
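The interleaving above can be replayed deterministically in a few lines. This is a minimal sketch, not real database code: the dict `db` stands in for the database, and the variable names follow the booking example.

```python
# A deterministic replay of the lost-update interleaving: both "transactions"
# read the balance a = 16 before either writes back, so one decrement is lost.
db = {"a": 16}

# Steps 1 and 2: ticketing points A and B each read the balance.
a_read_by_A = db["a"]   # 16
a_read_by_B = db["a"]   # 16

# Step 3: A sells a ticket and writes its locally computed value back.
db["a"] = a_read_by_A - 1   # 15

# Step 4: B sells a ticket and writes back, overwriting A's update.
db["a"] = a_read_by_B - 1   # still 15, not 14

print(db["a"])  # 15 -- two tickets sold, balance reduced by only 1
```

Had B re-read the balance (or been forced to wait for a lock) before writing, the final value would have been 14.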

Non-repeatable read

A non-repeatable read occurs when, after transaction T1 has read some data, transaction T2 performs an update, so that T1 cannot reproduce its earlier read. Specifically, non-repeatable reads cover three cases:
After transaction T1 reads a data item, transaction T2 modifies it; when T1 reads the item again, it gets a value different from the first. For example, T1 reads b = 100 and begins working with it; T2 reads the same item b, modifies it, and writes b = 200 back to the database. When T1 rereads b to verify its value, b is now 200, inconsistent with the first read.
After transaction T1 reads some records from the database by a certain condition, transaction T2 deletes some of them; when T1 reads with the same condition again, it finds that some records have disappeared.
After transaction T1 reads some records from the database by a certain condition, transaction T2 inserts some records; when T1 reads with the same condition again, it finds extra records. (This is also called a phantom read.)

Reading "dirty" data

Reading "dirty" data occurs when transaction T1 modifies a data item and writes it back to disk, transaction T2 reads the same item, and then T1 is rolled back for some reason. The modified item is restored to its original value, so the value T2 read is now inconsistent with the database: the data T2 read is "dirty", that is, incorrect.
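The sequence can be replayed with a toy model. This is purely illustrative: `committed` stands for the durable database state, `working` for T1's uncommitted write that briefly reaches disk, and the item name `c` is made up for the example.

```python
# A toy replay of the dirty-read sequence: T1 writes an uncommitted value,
# T2 reads it, then T1 rolls back and the value reverts.
committed = {"c": 50}
working = dict(committed)        # on-disk state, including uncommitted writes

working["c"] = 70                # T1 modifies c and writes it back (not yet committed)
value_seen_by_T2 = working["c"]  # T2 reads the same item and sees 70

working = dict(committed)        # T1 is rolled back; c is restored to 50

print(value_seen_by_T2, committed["c"])  # 70 vs 50: T2 read "dirty" data
```

The Level 2 locking protocol below prevents exactly this: T2 would need an S lock on c, which conflicts with T1's X lock, so T2 could not read the uncommitted value.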

The root cause of all three kinds of inconsistency is that concurrent operations break the isolation of transactions. Concurrency control schedules concurrent operations correctly, so that the execution of one user's transaction is not interfered with by other transactions, thereby avoiding data inconsistency.

Solutions to concurrency consistency issues

Locking

Locking is a very important technique for implementing concurrency control. With locking, before a transaction T operates on a data object (a table, a record, and so on), it first asks the system to lock the object. Once the lock is granted, transaction T has certain control over the data object, and other transactions cannot update it until T releases its lock.

There are two basic types of lock: exclusive locks (abbreviated X locks) and shared locks (abbreviated S locks).

An exclusive lock is also called a write lock. If transaction T places an X lock on data object A, then only T may read and modify A, and no other transaction can place a lock of any type on A until T releases its lock. This guarantees that no other transaction can read or modify A while T holds the lock.

A shared lock is also called a read lock. If transaction T places an S lock on data object A, other transactions may only place S locks on A, not X locks, until T releases its S lock. This guarantees that other transactions can read A but cannot modify it while T holds the S lock.
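The compatibility rules in the two paragraphs above can be captured in a small lock table. This is a minimal sketch; the class and method names (`LockTable`, `request`, `release`) are illustrative and not taken from any real DBMS API.

```python
from collections import defaultdict

# Minimal sketch of a lock table enforcing S/X compatibility:
# S is compatible only with S; X is compatible with nothing.
class LockTable:
    def __init__(self):
        self.holders = defaultdict(dict)   # data object -> {txn: mode}

    def request(self, txn, obj, mode):
        """Grant txn an 'S' or 'X' lock on obj if compatible; return True/False."""
        others = {m for t, m in self.holders[obj].items() if t != txn}
        if mode == "S":
            granted = "X" not in others    # S conflicts only with a held X
        else:
            granted = len(others) == 0     # X conflicts with any held lock
        if granted:
            self.holders[obj][txn] = mode
        return granted

    def release(self, txn, obj):
        self.holders[obj].pop(txn, None)

lt = LockTable()
print(lt.request("T1", "A", "S"))   # True: first S lock granted
print(lt.request("T2", "A", "S"))   # True: S locks can be shared
print(lt.request("T3", "A", "X"))   # False: X conflicts with the held S locks
```

A real lock manager would also queue waiting requests and detect deadlocks; here a denied request simply returns False.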

Locking protocols

When using the two basic lock types, X locks and S locks, we also need rules for locking data objects: when to request an X lock or an S lock, how long to hold it, when to release it, and so on. These rules are called a locking protocol. Different rules for applying locks give rise to different locking protocols. The three-level locking protocols are described below. They solve the inconsistency problems of lost updates, non-repeatable reads, and dirty reads to different degrees, and provide certain guarantees for the correct scheduling of concurrent operations. Only the definitions of the three levels are given here, without further discussion.

Level 1 locking protocol

Under the Level 1 locking protocol, transaction T must X-lock data item R before modifying it, and hold the lock until the end of the transaction. The end of a transaction is either a normal end (COMMIT) or an abnormal end (ROLLBACK). The Level 1 protocol prevents lost updates and guarantees that transaction T is recoverable. Under it, a transaction that only reads data without modifying it need not lock at all, so the protocol guarantees neither repeatable reads nor the absence of dirty reads.

Level 2 locking protocol

The Level 2 locking protocol is the Level 1 protocol plus the rule that transaction T must S-lock data item R before reading it, and may release the S lock after reading. The Level 2 protocol prevents lost updates and, in addition, prevents dirty reads.

Level 3 locking protocol

The Level 3 locking protocol is the Level 1 protocol plus the rule that transaction T must S-lock data item R before reading it and hold the lock until the end of the transaction. In addition to preventing lost updates and dirty reads, the Level 3 protocol prevents non-repeatable reads.
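A toy schedule shows why holding the S lock to end of transaction matters. This is a self-contained sketch with a deliberately simplified single-holder lock model; the names (b, T1, T2) follow the earlier b = 100 example.

```python
# Under the Level 3 protocol, T1 S-locks b before reading and holds the lock
# until it ends, so T2's X-lock request is denied and T1's two reads agree.
locks = {}            # data object -> (mode, holder); one holder for simplicity
db = {"b": 100}

def s_lock(txn, obj):
    mode, holder = locks.get(obj, (None, None))
    if mode == "X" and holder != txn:
        return False                      # S conflicts with another txn's X
    locks[obj] = ("S", txn)
    return True

def x_lock(txn, obj):
    mode, holder = locks.get(obj, (None, None))
    if mode is not None and holder != txn:
        return False                      # X conflicts with any other lock
    locks[obj] = ("X", txn)
    return True

granted = s_lock("T1", "b")               # T1 S-locks b before reading
r1 = db["b"]                              # first read: 100
blocked = not x_lock("T2", "b")           # T2 must wait; it cannot update b
r2 = db["b"]                              # second read: still 100

print(granted, blocked, r1, r2)
```

Under the Level 2 protocol, T1 would release the S lock right after the first read, T2's update could slip in between, and r1 and r2 could differ.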

Transaction isolation levels

Although database theory provides a complete solution to concurrency consistency, it is difficult for programmers to control the timing of locking and unlocking themselves. The vast majority of databases and development tools therefore provide transaction isolation levels, which let users handle concurrency consistency issues in an easier way. The four common transaction isolation levels are Read Uncommitted, Read Committed, Repeatable Read, and Serializable.
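The relationship between the four levels and the anomalies discussed above can be written down as a lookup table, following the ANSI SQL-92 definitions. The helper function name `prevents` is illustrative.

```python
# Which anomalies each ANSI SQL isolation level still permits (per SQL-92).
PERMITTED = {
    "READ UNCOMMITTED": {"dirty read", "non-repeatable read", "phantom read"},
    "READ COMMITTED":   {"non-repeatable read", "phantom read"},
    "REPEATABLE READ":  {"phantom read"},
    "SERIALIZABLE":     set(),             # permits none of the anomalies
}

def prevents(level, anomaly):
    """Return True if the given isolation level rules out the given anomaly."""
    return anomaly not in PERMITTED[level.upper()]

print(prevents("READ COMMITTED", "dirty read"))     # True
print(prevents("REPEATABLE READ", "phantom read"))  # False
```

Note the parallel with the locking protocols: Read Committed behaves like the Level 2 protocol, and Repeatable Read like the Level 3 protocol, while Serializable additionally rules out phantoms.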
