Summary of notes from online sources: database transaction concurrency problems, the lock mechanism, and the corresponding 4 isolation levels

Source: Internet
Author: User

Database Transaction Concurrency Problems

Database operations are mostly writes and reads, i.e. CRUD: Create, Read, Update, and Delete.
A transaction is a complete unit of work.
Transactions are the basic unit of recovery and concurrency control.
A transaction must always keep the system in a consistent state, regardless of how many transactions run concurrently at any given time.
In a relational database, a transaction can be a single SQL statement, a group of SQL statements, or an entire program; it is the unit of program execution that operates on the various data items in the database.
A transaction is a user-defined sequence of operations (reads and writes across multiple tables). These operations are either all performed or none of them are; they form an indivisible unit of work.
A transaction usually begins with BEGIN TRANSACTION and ends with COMMIT or ROLLBACK.
COMMIT: the transaction completes and commits, i.e. all of the transaction's operations are committed; specifically, all of the transaction's updates to the database are written back to the physical database on disk, and the transaction ends normally.
ROLLBACK: the transaction is rolled back, i.e. a failure occurred while the transaction was running and it cannot continue, so the system undoes all of the operations the transaction has already performed on the database, rolling back to the state before the transaction began or to a previously set rollback point (savepoint).
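
A minimal, illustrative sketch (assuming a hypothetical account table and MySQL syntax) of what such a transaction looks like in SQL:

START TRANSACTION;
UPDATE account SET balance = balance - 100 WHERE id = 1;   -- debit one account
UPDATE account SET balance = balance + 100 WHERE id = 2;   -- credit another
COMMIT;       -- write both updates back to the database permanently
-- If anything goes wrong before COMMIT, undo everything instead:
-- ROLLBACK;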

When multiple transactions access the database concurrently, the following 5 kinds of problems can occur: 3 kinds of data-reading problems (dirty read, non-repeatable read, phantom read) and 2 kinds of data-updating problems (first-category lost update, second-category lost update):

(The blog post at http://blog.csdn.net/zhangzeyuaaa/article/details/46400419 contains specific examples.)

1. Dirty read: transaction A reads data that transaction B has modified but not yet committed, and operates on that data. If transaction B is then rolled back, the data transaction A read was never valid; this is called a dirty read. In Oracle, dirty reads do not occur because of its version control (multi-versioning).
2. Non-repeatable read: transaction A reads data that transaction B has already changed (or deleted) and committed. For example, transaction A reads the data for the first time, transaction B then changes the data and commits, and when transaction A reads the data again the two reads return different results.
3. Phantom read: transaction A reads data that transaction B has newly inserted and committed. Note the difference from a non-repeatable read: here the data is newly inserted, whereas a non-repeatable read concerns data that was changed (or deleted). The two are prevented differently: a non-repeatable read only requires a row-level lock to keep the record from being changed or deleted, but a phantom read requires a table-level lock to keep new rows from being inserted into the table.
4. First-category lost update: when transaction A is rolled back, it overwrites data already committed by transaction B. Such errors can have very serious consequences.
5. Second-category lost update: when transaction A commits, it overwrites data already committed by transaction B. Such errors can have very serious consequences.

A comparative analysis of first-category and second-category lost updates:

First-category lost update

When transaction A is rolled back, it overwrites the updated data that transaction B has already committed. This error can cause serious problems, as the following withdrawal/transfer example shows:

Time | Withdrawal transaction A                     | Transfer transaction B
T1   | Start transaction                            |
T2   |                                              | Start transaction
T3   | Query account balance: 1000 yuan             |
T4   |                                              | Query account balance: 1000 yuan
T5   |                                              | Deposit 100 yuan; balance becomes 1100 yuan
T6   |                                              | Commit transaction
T7   | Withdraw 100 yuan; balance becomes 900 yuan  |
T8   | Roll back (revoke) transaction               |
T9   | Balance restored to 1000 yuan (lost update)  |

When transaction A is rolled back, it "carelessly" erases the amount that transaction B has already transferred into the account.

Second-category lost update

When transaction A commits, it overwrites data that transaction B has already committed, causing transaction B's operation to be lost:

Time | Transfer transaction A                       | Withdrawal transaction B
T1   | Start transaction                            |
T2   |                                              | Start transaction
T3   | Query account balance: 1000 yuan             |
T4   |                                              | Query account balance: 1000 yuan
T5   |                                              | Withdraw 100 yuan; balance becomes 900 yuan
T6   |                                              | Commit transaction
T7   | Deposit 100 yuan                             |
T8   | Commit transaction                           |
T9   | Balance changed to 1100 yuan (lost update)   |

In the example above, the transfer transaction overwrites the withdrawal transaction's update to the account balance, so the bank loses 100 yuan; conversely, if the transfer transaction had committed first, the user's account would lose 100 yuan.
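
As a rough sketch (again using the hypothetical account table), the lost update stems from the read-then-write pattern shown in the tables above; expressing the change as a single relative UPDATE avoids it:

-- Read-then-write: each transaction computes a new balance from its own stale read,
-- so the last writer blindly overwrites the other transaction's committed result.
SELECT balance FROM account WHERE id = 1;               -- both transactions read 1000
UPDATE account SET balance = 1100 WHERE id = 1;         -- overwrites the committed 900

-- A relative update applies both changes correctly regardless of interleaving:
UPDATE account SET balance = balance + 100 WHERE id = 1;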

Solving database transaction concurrency problems: the lock mechanism and the 4 isolation levels

Shared and exclusive locks

In order to solve these concurrency problems, database systems introduce the lock mechanism.

There are two basic types of locks: exclusive locks (abbreviated as X locks) and shared locks (abbreviated as S locks).

    • An exclusive lock is also called a write lock. If transaction T places an X lock on data object A, only T may read and modify A, and no other transaction may place any kind of lock on A until T releases the X lock on A. This guarantees that other transactions cannot read or modify A before T releases its lock on A.
    • A shared lock is also called a read lock. If transaction T places an S lock on data object A, T can read A but cannot modify A, and other transactions may only place S locks on A, not X locks, until T releases the S lock on A. This guarantees that other transactions can read A but cannot modify A before T releases its S lock on A.
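
An illustrative sketch of how these two kinds of locks can be requested explicitly, assuming MySQL/InnoDB and the hypothetical account table:

-- Shared (S) lock: other transactions may still read the row and take S locks,
-- but cannot modify it until this transaction ends.
START TRANSACTION;
SELECT * FROM account WHERE id = 1 LOCK IN SHARE MODE;
COMMIT;

-- Exclusive (X) lock: other transactions cannot take any lock on the row
-- or modify it until this transaction commits or rolls back.
START TRANSACTION;
SELECT * FROM account WHERE id = 1 FOR UPDATE;
COMMIT;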

Lock granularity

Locking techniques are used in databases to implement concurrency control.

The size of the object being locked is called the locking granularity.

The locked object can be either a logical unit or a physical unit. Taking a relational database as an example, the locked object can be a logical unit: an attribute value, a set of attribute values, a tuple, a relation, an index entry, an entire index, or even the whole database; or it can be a physical unit: a page (a data page or an index page), a physical record, and so on.
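
A rough illustration of two granularities in MySQL (hypothetical account table): a row-level lock on a single record versus an explicit table-level lock:

-- Row-level granularity: InnoDB locks only the matching row.
START TRANSACTION;
SELECT * FROM account WHERE id = 1 FOR UPDATE;
COMMIT;

-- Table-level granularity: the whole table is locked for writing;
-- other sessions can neither read nor write it until UNLOCK TABLES.
LOCK TABLES account WRITE;
UPDATE account SET balance = balance + 100 WHERE id = 1;
UNLOCK TABLES;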

Locking protocols and isolation levels

When exclusive and shared locks are used to lock data objects, additional rules are needed, such as when to request an X or S lock, how long to hold it, and when to release it. These rules are called locking protocols (Locking Protocol). Different rules about how locks are imposed produce different kinds of locking protocols, and different locking protocols correspond to different isolation levels.

First-level locking protocol (corresponds to READ UNCOMMITTED)

The first-level locking protocol requires that transaction T place an X lock on data R before modifying it and hold the lock until the end of the transaction. The end of a transaction is either a normal end (COMMIT) or an abnormal end (ROLLBACK).

The first-level locking protocol prevents lost updates and guarantees that transaction T is recoverable.

Under the first-level locking protocol, a transaction that only reads data without modifying it does not need any lock, so this protocol guarantees neither repeatable reads nor protection against reading "dirty" data.

Second-level locking protocol (corresponds to READ COMMITTED)

The second-level locking protocol is the first-level protocol plus the requirement that transaction T place an S lock on data R before reading it and release the S lock as soon as the read finishes (a short-duration S lock).

In addition to preventing lost updates, the second-level locking protocol also prevents reading "dirty" data.

Third-level locking protocol (corresponds to REPEATABLE READ)

The third-level locking protocol is the first-level protocol plus the requirement that transaction T place an S lock on data R before reading it and hold the lock until the end of the transaction.

In addition to preventing lost updates and reads of "dirty" data, the third-level locking protocol also prevents non-repeatable reads.

Fourth-level locking protocol (corresponds to SERIALIZABLE)

The fourth-level locking protocol strengthens the third-level protocol, and its mechanism is the simplest: every table whose data is read or modified in the transaction is locked at the table level, so other transactions can neither read nor write any data in those tables. With it, all five of the concurrency problems listed above can be avoided.

Note: locking protocols and isolation levels do not correspond strictly one to one.

The ANSI SQL-92 standard defines 4 transaction isolation levels: READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, and SERIALIZABLE. SQL-92 recommends REPEATABLE READ to guarantee read consistency. To view MySQL's transaction isolation level (default, global, and session):

SELECT @@tx_isolation;
SELECT @@global.tx_isolation;
SELECT @@session.tx_isolation;
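
To change the level, MySQL provides SET TRANSACTION; note that on MySQL 8.0 and later the variable above is @@transaction_isolation, since @@tx_isolation has been removed:

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;   -- current session only
SET GLOBAL TRANSACTION ISOLATION LEVEL REPEATABLE READ;   -- new sessions from now on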

A database transaction has 4 isolation levels; from low to high they are READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, and SERIALIZABLE.
The SQL standard defines these 4 isolation levels by specifying which changes made inside a transaction are visible to other transactions and which are not. Lower isolation levels generally support higher concurrency and incur lower system overhead.
READ UNCOMMITTED ("read uncommitted data"): allows a transaction to read data modifications that other transactions have not yet committed. Dirty reads, non-repeatable reads, and phantom reads can all occur.
READ COMMITTED ("read committed data"): allows a transaction to read only data modifications that other transactions have already committed. This is the default level for Oracle and SQL Server; it avoids dirty reads, but non-repeatable reads and phantom reads can still occur.
REPEATABLE READ ("repeatable read"): the default transaction isolation level for MySQL. It ensures that multiple reads of the same rows within one transaction see the same data even while other transactions run concurrently, but the phantom read problem is not eliminated.
SERIALIZABLE ("serializable"): the highest isolation level. It resolves the phantom read problem by forcing transactions to execute in order so that they cannot conflict with one another; in effect it places a shared lock on every row that is read. At this level, a large number of timeouts and lock contention can occur.
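
An illustrative two-session sketch under READ COMMITTED (hypothetical account table) showing the non-repeatable read that this level still permits:

-- Session 1
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
SELECT balance FROM account WHERE id = 1;   -- returns 1000

-- Session 2, while session 1's transaction is still open
UPDATE account SET balance = 900 WHERE id = 1;   -- autocommit applies the change

-- Session 1, same transaction as above
SELECT balance FROM account WHERE id = 1;   -- now returns 900: a non-repeatable read
COMMIT;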

Problems associated with each isolation level ("Yes" means the problem can occur, "No" means there is no such concurrency conflict):

Isolation level   | Dirty read | Non-repeatable read | Phantom read
READ UNCOMMITTED  | Yes        | Yes                 | Yes
READ COMMITTED    | No         | Yes                 | Yes
REPEATABLE READ   | No         | No                  | Yes
SERIALIZABLE      | No         | No                  | No

The difference between optimistic locks and pessimistic locks

A pessimistic lock, as the name implies, is pessimistic: every time it reads data it assumes someone else will modify that data, so it locks the data on every access, and anyone else who wants the data blocks until the lock is acquired. Traditional relational databases make heavy use of such locking mechanisms, such as row locks, table locks, read locks, and write locks, all of which are taken before the operation.

An optimistic lock, as the name implies, is optimistic: every time it reads data it assumes no one else will modify it, so it takes no lock; instead, at update time it checks whether anyone else has updated the data in the meantime, for example by using a version number mechanism. Optimistic locking suits read-heavy applications and can improve throughput; for example, a database may provide an optimistic lock similar to the write_condition mechanism.
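
A minimal sketch of version-number-based optimistic locking, assuming the hypothetical account table also has a version column:

-- Read the row together with its current version number.
SELECT balance, version FROM account WHERE id = 1;   -- e.g. balance = 1000, version = 5

-- Update only if nobody else has changed the row since we read it.
UPDATE account
   SET balance = 900, version = version + 1
 WHERE id = 1 AND version = 5;
-- If the statement reports 0 affected rows, another transaction got there first:
-- re-read the row and retry, or report the conflict to the caller.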

Each kind of lock has advantages and disadvantages; neither is simply better than the other. Optimistic locking suits workloads with few writes, where conflicts really are rare, because it saves the overhead of locking and increases overall system throughput. However, if conflicts happen frequently, the application layer keeps retrying, which hurts performance; in that case pessimistic locking is more appropriate.
