Concurrency consistency issues


Common concurrency consistency issues include: lost modifications, non-repeatable reads, dirty reads, and phantom reads (in some materials, phantom reads are grouped with non-repeatable reads).

Lost modifications

Let's look at an example that shows how concurrent operations can leave the data inconsistent.

Consider an activity sequence in an airline booking system:

1. Ticketing point A reads the seat balance a for a flight and gets a = 16.
2. Ticketing point B reads the same balance a, also getting 16.
3. Point A sells a ticket, computes a ← a - 1 (so a is 15), and writes a back to the database.
4. Point B also sells a ticket, computes a ← a - 1 (so a is 15), and writes a back to the database.

As a result, two tickets were sold, but the balance in the database decreased by only 1.
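The interleaving can be reproduced in SQL Server with two sessions. Below is a minimal T-SQL sketch; the Tickets table, with columns FlightNo and Balance, is hypothetical, and ticketing point B runs the same batch concurrently:

    -- Session A (ticketing point A); Session B runs the same batch at the same time.
    BEGIN TRANSACTION;
    DECLARE @a INT;
    SELECT @a = Balance FROM Tickets WHERE FlightNo = 'CA1234';  -- both sessions read 16
    -- ... Session B reads the same value here ...
    UPDATE Tickets SET Balance = @a - 1 WHERE FlightNo = 'CA1234';  -- both write 15
    COMMIT;

A single atomic statement, UPDATE Tickets SET Balance = Balance - 1 WHERE FlightNo = 'CA1234', avoids this read-modify-write race, because the read and the write then happen under one X lock.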

To sum up: two transactions T1 and T2 read the same data and modify it; the result committed by T2 destroys the result committed by T1, so T1's modification is lost. The problems and workarounds mentioned in the previous article (2.1.4 Data deletion and update) usually address this kind of concurrency problem. But several types of problems remain that the above method cannot solve:
Non-repeatable read

A non-repeatable read means: after transaction T1 reads some data, transaction T2 performs an update, so that T1 cannot reproduce its earlier read. Specifically, a non-repeatable read covers three cases (a sketch of the first case follows the list):
1. After transaction T1 reads a data item, transaction T2 modifies it, and when T1 reads the item again it gets a value different from the first read. For example, T1 reads b = 100 and performs some operation; T2 reads the same item b, modifies it, and writes b = 200 back to the database; when T1 rereads b to verify its earlier value, it finds b = 200, inconsistent with the first read.
2. After transaction T1 reads some records from the database by a certain condition, transaction T2 deletes some of them, and when T1 reads with the same condition again, it finds that some records have disappeared.
3. After transaction T1 reads some records from the database by a certain condition, transaction T2 inserts some records, and when T1 reads with the same condition again, it finds extra records. (This case is also called a phantom read.)
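The first case can be observed in SQL Server under the default Read Committed level. A minimal sketch, assuming a hypothetical Accounts table with columns Id and B:

    -- Session T1
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
    BEGIN TRANSACTION;
    SELECT B FROM Accounts WHERE Id = 1;   -- returns 100
    -- Session T2 meanwhile: UPDATE Accounts SET B = 200 WHERE Id = 1; (commits)
    SELECT B FROM Accounts WHERE Id = 1;   -- now returns 200: a non-repeatable read
    COMMIT;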

Solutions to concurrency consistency issues
Locking

Locking is a very important technique for implementing concurrency control. Before operating on a data object such as a table or a record, a transaction T first requests that the system lock it. Once the lock is granted, transaction T has a certain degree of control over the data object, and other transactions cannot update the object until T releases its lock.

There are two basic types of locks: exclusive locks (abbreviated X locks) and shared locks (abbreviated S locks).

An exclusive lock is also called a write lock. If transaction T places an X lock on data object A, then only T may read and modify A, and no other transaction can place a lock of any type on A until T releases its lock. This guarantees that no other transaction can read or modify A before T releases the lock on A.

A shared lock is also called a read lock. If transaction T places an S lock on data object A, other transactions may place only S locks on A, not X locks, until T releases its S lock. This guarantees that other transactions can read A but cannot modify it in any way before T releases the S lock on A.
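In SQL Server, both lock types can be requested explicitly with table hints. A minimal sketch, using the same hypothetical Tickets table as above:

    BEGIN TRANSACTION;
    -- S lock held to end of transaction: other readers may also take S locks,
    -- but writers are blocked until COMMIT.
    SELECT Balance FROM Tickets WITH (HOLDLOCK) WHERE FlightNo = 'CA1234';
    COMMIT;

    BEGIN TRANSACTION;
    -- X lock on the row: no other transaction can lock it in any mode until COMMIT.
    SELECT Balance FROM Tickets WITH (XLOCK, ROWLOCK) WHERE FlightNo = 'CA1234';
    COMMIT;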
Locking protocols

When the two basic lock types, X locks and S locks, are used, some rules must also be laid down for locking data objects: when to request an X or S lock, how long to hold it, when to release it, and so on. These rules are called a locking protocol. Different rules for applying locks give rise to different locking protocols. Three levels of locking protocol are described below; to different degrees, they solve the inconsistency problems of lost modification, non-repeatable read, and dirty read, and they provide certain guarantees for the correct scheduling of concurrent operations. Only the definitions of the three levels are given here, without further discussion.
Level 1 Locking Protocol

The Level 1 locking protocol: transaction T must place an X lock on data R before modifying it and hold the lock until the end of the transaction, which is either a normal end (COMMIT) or an abnormal end (ROLLBACK). The Level 1 protocol prevents lost modifications and guarantees that transaction T is recoverable. Under this protocol, a transaction that only reads data without modifying it need not lock it, so the protocol guarantees neither repeatable reads nor protection against reading "dirty" data.
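As a sketch of Level 1 behavior in SQL Server: any data-modifying statement inside a transaction takes an X lock that is held until the transaction ends (the Tickets table is again hypothetical):

    BEGIN TRANSACTION;
    UPDATE Tickets SET Balance = Balance - 1 WHERE FlightNo = 'CA1234';
    -- the X lock on the modified row is held here ...
    COMMIT;  -- ... and released only when the transaction ends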
Level 2 Locking Protocol

The Level 2 locking protocol: the Level 1 protocol, plus the rule that transaction T must place an S lock on data R before reading it and may release the S lock as soon as the read completes. The Level 2 protocol prevents lost modifications and additionally prevents reading "dirty" data.
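This matches SQL Server's default Read Committed behavior, where a SELECT takes an S lock only for the duration of the read. A sketch:

    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
    BEGIN TRANSACTION;
    SELECT Balance FROM Tickets WHERE FlightNo = 'CA1234';  -- S lock taken, then released
    -- another transaction may now update the row, so a second read here could differ
    COMMIT;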
Level 3 Locking Protocol

The Level 3 locking protocol: the Level 1 protocol, plus the rule that transaction T must place an S lock on data R before reading it and hold the lock until the end of the transaction. In addition to preventing lost modifications and the reading of "dirty" data, the Level 3 protocol prevents non-repeatable reads.
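In SQL Server, holding S locks until the end of the transaction corresponds to the Repeatable Read level (or the HOLDLOCK table hint). A sketch:

    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    BEGIN TRANSACTION;
    SELECT Balance FROM Tickets WHERE FlightNo = 'CA1234';  -- S lock held ...
    -- a concurrent UPDATE of this row now blocks, so the re-read returns the same value
    SELECT Balance FROM Tickets WHERE FlightNo = 'CA1234';
    COMMIT;  -- ... until here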
Transaction Isolation Levels

Although database theory provides a complete solution to concurrency consistency, it is very difficult for programmers to control the timing of locking and unlocking by hand. The vast majority of databases and development tools therefore provide transaction isolation levels, which let users handle concurrency consistency issues in an easier way. The four common transaction isolation levels are Read Uncommitted, Read Committed, Repeatable Read, and Serializable. Under different isolation levels, the way the database is accessed and the results it returns may differ. We will go through several experiments to learn more about transaction isolation levels and how SQL Server translates them into locks in the background.
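In T-SQL the level is chosen per session with SET TRANSACTION ISOLATION LEVEL; the statement itself is standard, while the locking behavior behind each level is SQL Server specific:

    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;   -- SQL Server's default
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;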
Serializable

Serializable is the highest transaction isolation level; dirty reads, non-repeatable reads, and phantom reads cannot occur under it. Before explaining why in detail, let's first look at what a phantom read is.

A so-called phantom read means: after transaction 1 reads some records from the database by certain conditions, transaction 2 inserts new records that match transaction 1's retrieval criteria; when transaction 1 reads again with the same condition, it finds several extra records.
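A minimal sketch of the phantom read, and of how Serializable prevents it, again using the hypothetical Tickets table. Under Serializable, SQL Server takes key-range locks (given a suitable index on the searched column), so the concurrent insert blocks:

    -- Session 1
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRANSACTION;
    SELECT COUNT(*) FROM Tickets WHERE Balance > 0;  -- takes key-range locks
    -- Session 2 meanwhile:
    --   INSERT INTO Tickets (FlightNo, Balance) VALUES ('MU5678', 10);
    --   -- blocks until Session 1 ends, so no phantom row can appear
    SELECT COUNT(*) FROM Tickets WHERE Balance > 0;  -- same count as the first read
    COMMIT;

Under Repeatable Read the same insert would succeed immediately, and Session 1's second SELECT would see one more row than the first: the phantom.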
