Transaction isolation levels

Problems caused by transaction concurrency

  1. Dirty read: a transaction reads data that another transaction has modified but not yet committed.

  2. Non-repeatable read: the same transaction reads the same record twice and gets different results.

  3. Phantom read: the same transaction runs the same conditional query twice, but the number of records in the result set differs (records have appeared or disappeared).

Supplement: the terms above, explained.

  1. Dirty read: transaction 1 reads data that transaction 2 has not yet committed. If transaction 2 then fails and rolls back, transaction 1 has read a value that never officially existed. For example, James's salary is 1000. Transaction 2 changes it to 2000 but does not commit. Transaction 1 queries James's salary and gets 2000. Transaction 2 then fails and rolls back, so the 2000 that transaction 1 read is wrong.

  2. Non-repeatable read: transaction 1 reads a record. Transaction 2 then changes that same record. When transaction 1 reads the record again, the second read returns different data from the first.

  3. Phantom read: transaction 1 queries with some condition and obtains a result set. Transaction 2 then inserts or deletes a record that matches that condition. When transaction 1 runs the same query a second time, the result set contains more or fewer records than before.
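The dirty-read scenario above (James's salary) can be sketched with a toy single-row store. This is not a real database: `Database`, `read_uncommitted`, and `read_committed` are names invented here to make the uncommitted/committed distinction explicit.

```python
class Database:
    """Toy single-value store that exposes uncommitted writes (READ UNCOMMITTED)."""

    def __init__(self, committed):
        self.committed = committed   # last committed value
        self.uncommitted = None      # pending (not yet committed) write, if any

    def write(self, value):          # a transaction writes but does not commit yet
        self.uncommitted = value

    def read_uncommitted(self):      # dirty read: sees the pending write if there is one
        return self.uncommitted if self.uncommitted is not None else self.committed

    def read_committed(self):        # sees only committed data
        return self.committed

    def rollback(self):              # discard the pending write
        self.uncommitted = None

    def commit(self):                # make the pending write permanent
        if self.uncommitted is not None:
            self.committed = self.uncommitted
            self.uncommitted = None

# The salary example from the text:
db = Database(committed=1000)
db.write(2000)                   # transaction 2 changes the salary, not yet committed
dirty = db.read_uncommitted()    # transaction 1 reads 2000 -- a dirty read
db.rollback()                    # transaction 2 fails and rolls back
print(dirty, db.read_committed())  # 2000 1000: transaction 1 saw a value that never existed
```

A `read_committed()` call in place of `read_uncommitted()` would have returned 1000 both before and after the rollback, which is exactly what the READ COMMITTED level guarantees.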

To solve these concurrency problems, databases provide four transaction isolation levels.

Four transaction isolation levels

  1. TRANSACTION_READ_UNCOMMITTED: prevents lost updates but allows dirty reads, non-repeatable reads, and phantom reads.

  2. TRANSACTION_READ_COMMITTED: also prevents dirty reads, but allows non-repeatable reads and phantom reads. This is the default isolation level of most databases.

  3. TRANSACTION_REPEATABLE_READ: prevents dirty reads and non-repeatable reads, but allows phantom reads.

  4. TRANSACTION_SERIALIZABLE: prevents dirty reads, non-repeatable reads, and phantom reads. (Transactions are fully serialized and execute one after another, so no concurrency issues arise.)


                   Lost update   Dirty read   Non-repeatable read   Phantom read
Read uncommitted   N             Y            Y                     Y
Read committed     N             N            Y                     Y
Repeatable read    N             N            N                     Y
Serializable       N             N            N                     N
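The matrix above can be encoded directly as data. The sketch below is illustrative only: the level and anomaly names are informal labels chosen here, not identifiers from any database API, and `weakest_level_preventing` is a hypothetical helper that answers "which is the least strict level that rules out a given anomaly?"

```python
# True means the anomaly CAN occur at that level (the Y cells in the table).
ALLOWS = {
    "read_uncommitted": {"lost_update": False, "dirty_read": True,  "non_repeatable_read": True,  "phantom_read": True},
    "read_committed":   {"lost_update": False, "dirty_read": False, "non_repeatable_read": True,  "phantom_read": True},
    "repeatable_read":  {"lost_update": False, "dirty_read": False, "non_repeatable_read": False, "phantom_read": True},
    "serializable":     {"lost_update": False, "dirty_read": False, "non_repeatable_read": False, "phantom_read": False},
}

# Levels ordered from least to most strict.
LEVELS = ["read_uncommitted", "read_committed", "repeatable_read", "serializable"]

def weakest_level_preventing(anomaly):
    """Return the least strict isolation level at which the anomaly cannot occur."""
    for level in LEVELS:
        if not ALLOWS[level][anomaly]:
            return level
    raise ValueError(f"no level prevents {anomaly!r}")

print(weakest_level_preventing("dirty_read"))           # read_committed
print(weakest_level_preventing("non_repeatable_read"))  # repeatable_read
print(weakest_level_preventing("phantom_read"))         # serializable
```

Note how each stricter level only removes anomalies, never reintroduces one: the Y/N matrix is monotone down each column.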

Locks

A lock prevents other transactions from accessing a specified resource. Locking is the main mechanism for implementing concurrency control: it is what allows multiple users to manipulate the same data simultaneously without producing inconsistent data. Used appropriately, locks can prevent dirty reads, non-repeatable reads, and phantom reads.

Using locks to prevent dirty reads, non-repeatable reads, and phantom reads

An exclusive (write) lock alone prevents lost updates, but still allows dirty reads, non-repeatable reads, and phantom reads. While transaction 1 is updating a record, any other transaction that wants to update the same record must block and wait for the lock; only after transaction 1 releases the lock can transaction 2 acquire it and perform its own update, so no update is lost. Dirty reads still occur, however, because only writes take the exclusive lock: this lock only blocks other write transactions, and reads are not locked at all, so other transactions can still read the record while it is being modified. For the same reason, non-repeatable reads and phantom reads can also occur.
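A minimal sketch of "exclusive lock on writes only": the `Record` class below is invented for illustration. Writers serialize on a mutex, so two concurrent increments cannot be lost, while `read()` deliberately takes no lock at all, which is exactly why dirty reads remain possible under this scheme.

```python
import threading

class Record:
    """A record protected only by an exclusive write lock."""

    def __init__(self, value):
        self.value = value
        self.write_lock = threading.Lock()

    def update(self, fn):
        # Writers must hold the exclusive lock, so concurrent updates serialize
        # and neither one is lost.
        with self.write_lock:
            self.value = fn(self.value)

    def read(self):
        # Readers take no lock: they may observe an in-flight, uncommitted value
        # (a dirty read) while some writer holds the exclusive lock.
        return self.value

# Two concurrent raises of 100 on a salary of 1000: with the exclusive lock,
# the result is always 1200 -- no lost update.
rec = Record(1000)
threads = [threading.Thread(target=lambda: rec.update(lambda v: v + 100))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(rec.value)  # 1200
```

Without the lock, both threads could read 1000 and write 1100, losing one update; with it, the second writer always sees the first writer's result.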

To also prevent dirty reads (while still allowing non-repeatable reads and phantom reads), combine exclusive and shared locks: read transactions take a shared lock that is released as soon as the query finishes (it is not held until the transaction ends), and write transactions take an exclusive lock. For example, while transaction 2 is modifying a record, it holds an exclusive lock on it and does not release it until the modification succeeds. Transaction 1 therefore cannot obtain a shared lock on that record and blocks until transaction 2 completes its modification and releases the lock; only then can transaction 1 take the shared lock and read the data. Transaction 1 can never read uncommitted data, so dirty reads are prevented. Why does this not prevent non-repeatable reads? When transaction 1 reads a record the first time, it takes a shared lock for the duration of the read and releases it as soon as the read finishes. Transaction 2 can then take an exclusive lock, modify the record, and release the lock when done. When transaction 1 reads the record a second time, it again obtains the shared lock without difficulty, but now reads the modified data: the two reads return different values, which is a non-repeatable read.

To also prevent non-repeatable reads (while still allowing phantom reads), use the same shared/exclusive locks but hold both until the transaction ends: read transactions take a shared lock, write transactions take an exclusive lock, and each lock is released only when its transaction completes. For example, transaction 1 reads a record and takes a shared lock on it, which it keeps until the transaction finishes. If transaction 2 wants to modify that record, it must request an exclusive lock; since transaction 1 has not completed and still holds its shared lock, transaction 2 blocks and waits, and only obtains the exclusive lock after transaction 1 completes and releases its lock. Therefore, no matter how many times transaction 1 queries the record, it gets the same result: non-repeatable reads are prevented. Why does this not prevent phantom reads? The locks apply only to the records that were actually read or modified; a newly inserted record is not covered by any existing lock. Two queries in the same transaction can therefore still return different result sets (more or fewer records).
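The "hold shared locks until the transaction ends" protocol (strict two-phase locking) can be sketched with a toy lock manager. `RWLockTable` and its method names are invented for illustration; real lock managers queue waiters instead of returning False.

```python
class RWLockTable:
    """Toy lock manager for one record: S locks are mutually compatible,
    X conflicts with everything held by other transactions."""

    def __init__(self):
        self.shared = set()     # transaction ids holding shared (S) locks
        self.exclusive = None   # transaction id holding the exclusive (X) lock

    def acquire_shared(self, txn):
        if self.exclusive is not None and self.exclusive != txn:
            return False        # a writer holds X: the reader must wait
        self.shared.add(txn)
        return True

    def acquire_exclusive(self, txn):
        if self.exclusive not in (None, txn) or self.shared - {txn}:
            return False        # readers or another writer present: must wait
        self.exclusive = txn
        return True

    def release_all(self, txn):
        # Called only at COMMIT/ROLLBACK -- strict two-phase locking.
        self.shared.discard(txn)
        if self.exclusive == txn:
            self.exclusive = None

locks = RWLockTable()
assert locks.acquire_shared("T1")         # T1's first read locks the record
assert not locks.acquire_exclusive("T2")  # T2's update must wait...
assert locks.acquire_shared("T1")         # ...so T1's re-read sees the same data
locks.release_all("T1")                   # T1 commits and releases its lock
assert locks.acquire_exclusive("T2")      # only now can T2 modify the record
```

Because T1 keeps its shared lock for its whole lifetime, T2's write is delayed past T1's commit, which is exactly what rules out the non-repeatable read.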

To prevent phantom reads as well, a range lock is used (SQL Server, for example, calls this key-range mode RangeS_S): the range covered by the query condition is locked in shared mode, making it read-only, so no record within that range can be inserted or deleted. This prevents phantom reads.
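A range lock can be sketched by extending the toy lock-manager idea to key ranges. `RangeLockTable` is again an invented illustration: a shared range lock here simply vetoes inserts by other transactions whose key falls inside a locked range.

```python
class RangeLockTable:
    """Toy key-range lock: a shared range lock blocks inserts/deletes
    inside the range by other transactions until it is released."""

    def __init__(self):
        self.ranges = []   # list of (lo, hi, holder) shared range locks

    def lock_range(self, txn, lo, hi):
        # Taken when a transaction queries e.g. WHERE key BETWEEN lo AND hi.
        self.ranges.append((lo, hi, txn))

    def can_insert(self, txn, key):
        # An insert is blocked if the key falls inside a range locked
        # by a different transaction (it would create a phantom).
        return not any(lo <= key <= hi and holder != txn
                       for lo, hi, holder in self.ranges)

    def release(self, txn):
        # At COMMIT/ROLLBACK, drop all range locks held by txn.
        self.ranges = [r for r in self.ranges if r[2] != txn]

rl = RangeLockTable()
rl.lock_range("T1", 1000, 2000)   # T1 queried salaries between 1000 and 2000
print(rl.can_insert("T2", 1500))  # False: would appear in T1's second query
print(rl.can_insert("T2", 3000))  # True: outside the locked range
rl.release("T1")
print(rl.can_insert("T2", 1500))  # True: T1 has completed
```

The key point is that the lock covers the *predicate's range*, not just existing records, which is precisely the gap that plain record locks leave open.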

(PS: the purely textual descriptions above of how locks handle dirty reads, non-repeatable reads, and phantom reads may seem a little difficult, but do try to work through them. It took me a long time of studying locks before this made sense to me.)
