Tutorial: MySQL InnoDB Transaction Isolation Levels and Locks

This article explains the relationship between InnoDB's transaction isolation levels and locks in MySQL. It is based on the experience of the Meituan technical team; refer to it as needed.

Preface

We all know the ACID properties of transactions. To maintain these properties, especially consistency and isolation, databases generally rely on locking. At the same time, a database is a highly concurrent application that handles many simultaneous accesses, and if locking is too heavy-handed, concurrency suffers badly. Lock handling is therefore the essence of a database's transaction processing. Here we analyze the locking mechanism of MySQL's InnoDB engine so that readers can better understand what the database actually does during transaction processing.

Lock everything at once, or two-phase locking?
Because there is a large amount of concurrency, applications are often advised to avoid deadlocks by locking everything at once: at the start of a method you already know which data will be used, so you lock all of it up front and release it all when the method finishes. This approach effectively avoids circular deadlocks, but it does not work in a database, because at the start of a transaction the database does not know which data will be touched.
Databases instead follow the two-phase locking protocol, which divides a transaction into two phases: a locking phase and an unlocking phase (hence the name "two-phase").

Locking phase: locks may only be acquired in this phase. Before reading any data, the transaction must request and obtain an S lock (shared lock; other transactions may still acquire shared locks on the same data, but not exclusive locks). Before writing any data, it must request and obtain an X lock (exclusive lock; other transactions can then obtain no lock at all on that data). If a lock request fails, the transaction waits until the lock is granted.
Unlocking phase: once the transaction releases a lock, it enters the unlocking phase. In this phase it may only release locks; it may not acquire any new ones.
Transaction lock/unlock processing:
begin;
insert into test ...    -- acquire the locks needed by the insert
update test set ...     -- acquire the locks needed by the update
delete from test ...    -- acquire the locks needed by the delete
commit;                 -- when the transaction commits, the locks taken by insert, update, and delete are all released together
Although this protocol cannot prevent deadlocks, two-phase locking does guarantee that concurrent schedules of transactions are serializable (serializability matters a great deal, especially for data recovery and backup).

Locking method in transactions
Four isolation levels of transactions
Transaction isolation levels were introduced to guarantee the correctness of concurrent reads, and the database's locks exist to implement these isolation levels.

Isolation level    | Dirty Read   | Non-Repeatable Read | Phantom Read
Read Uncommitted   | possible     | possible            | possible
Read Committed     | not possible | possible            | possible
Repeatable Read    | not possible | not possible        | possible
Serializable       | not possible | not possible        | not possible
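As a quick way to try these levels yourself, the snippet below checks and changes the isolation level of the current session. This is a minimal sketch; the variable is named transaction_isolation in MySQL 8.0, while older versions call it tx_isolation.

-- Show the isolation level of the current session (MySQL 8.0; use @@tx_isolation on older versions)
SELECT @@transaction_isolation;
-- Switch only this session to another level
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT @@transaction_isolation;   -- now reports READ-COMMITTED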

Read Uncommitted: dirty reads are allowed; that is, a session may read data that other sessions have modified but not yet committed.

Read Committed: only committed data can be read. Most databases, such as Oracle, use this level by default (it does not prevent non-repeatable reads).
Repeatable Read: all queries within the same transaction are consistent with the state at the start of the transaction. This is InnoDB's default level. In the SQL standard, this isolation level eliminates non-repeatable reads, but phantom reads may still occur.
Serializable: fully serialized reads. Every read requires a table-level shared lock, and reads and writes block each other.
Databases are rarely run at the Read Uncommitted level, where reads take no locks at all, so we will not discuss it further here.

Types of locks in MySQL
There are many kinds of locks in MySQL, including the familiar table locks and row locks, as well as the newer metadata locks. A table lock locks an entire table; although it can be divided into read locks and write locks, locking the whole table inevitably reduces concurrency, so it is generally only used for DDL operations.

A row lock locks individual data rows. This locking method is more complex, but because only a limited set of rows is locked and other data is left untouched, it allows much higher concurrency. MySQL generally uses row locks to handle concurrent transactions, and row locks are what we discuss here; a small sketch contrasting the two kinds of lock follows.
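For intuition, here is a minimal sketch of the difference. The table t is a hypothetical example, not part of the article's demo schema; LOCK TABLES and SELECT ... FOR UPDATE are standard MySQL statements.

-- Hypothetical demo table
CREATE TABLE t (id INT PRIMARY KEY, val VARCHAR(20)) ENGINE=InnoDB;
INSERT INTO t VALUES (1, 'a'), (2, 'b');

-- Table lock: the whole table is locked; no other session can write to any row
LOCK TABLES t WRITE;
UPDATE t SET val = 'x' WHERE id = 1;
UNLOCK TABLES;

-- Row lock: inside a transaction InnoDB locks only the matching row
BEGIN;
SELECT * FROM t WHERE id = 1 FOR UPDATE;   -- X lock on the row with id = 1 only
UPDATE t SET val = 'y' WHERE id = 1;
COMMIT;                                    -- the row lock is released at commit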

Read Committed (RC)
At the RC level, reads are not locked, but writes, updates, and deletes are. The effect is shown below:

mysql> show create table class_teacher \G
Table: class_teacher
Create Table: CREATE TABLE `class_teacher` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `class_name` varchar(100) COLLATE utf8mb4_unicode_ci NOT NULL,
  `teacher_id` int(11) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `idx_teacher_id` (`teacher_id`)
) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci

1 row in set (0.02 sec)

mysql> select * from class_teacher;

+----+-----------------+------------+
| id | class_name      | teacher_id |
+----+-----------------+------------+
|  1 | Grade 3 Class 1 |          1 |
|  2 | Grade 3 Class 2 |          1 |
|  3 | Grade 2 Class 1 |          2 |
|  4 | Grade 2 Class 2 |          2 |
+----+-----------------+------------+

Because MySQL's InnoDB uses the RR level by default, we first switch this session to the RC level and set the binlog format:

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
SET SESSION binlog_format = 'ROW';   -- or 'MIXED'
Transaction A: begin;
Transaction A: update class_teacher set class_name = 'Grade 3 Class 2' where teacher_id = 1;
Transaction B: update class_teacher set class_name = 'Grade 3 Class 3' where teacher_id = 1;
               -- blocks waiting for transaction A's row locks, and eventually fails with:
               ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
Transaction A: commit;

To prevent conflicting concurrent modifications, MySQL locks the rows with teacher_id = 1 inside transaction A. As long as transaction A has not committed (and therefore not released the locks), transaction B can never obtain those row locks, so it waits until the lock wait times out (see the note on innodb_lock_wait_timeout below).
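The timeout in the error above is governed by the innodb_lock_wait_timeout variable, which defaults to 50 seconds. A minimal sketch of checking it and, for experiments only, shortening it in the current session:

SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';
-- Make lock-wait failures surface faster in this session (useful when reproducing the demo)
SET SESSION innodb_lock_wait_timeout = 5;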

Note that teacher_id is indexed. What happens if we filter on class_name, which has no index?

update class_teacher set teacher_id = 3 where class_name = 'Grade 3 Class 1';

MySQL then locks every row in the table. That may sound surprising, but while the SQL is running, MySQL has no way of knowing which rows satisfy class_name = 'Grade 3 Class 1' (there is no index). When a condition cannot be filtered quickly through an index, the storage engine locks all records and returns them, and the MySQL Server layer does the filtering afterwards.

In practice, however, MySQL has made an improvement: when a record does not satisfy the MySQL Server layer's filter, the unlock_row method is called to release the lock on that record (which technically violates the constraints of the two-phase locking protocol). This ensures that in the end only locks on matching records are held, although the act of locking each record cannot be skipped. So even MySQL will bend the rules for the sake of efficiency. (See High Performance MySQL, 3rd edition, Chinese version, p. 181.)

The same applies at MySQL's default isolation level, RR. So when you batch-modify a table with a large amount of data and cannot use a suitable index, MySQL Server will filter the data very slowly, and rows that are never modified still get locked; a hedged sketch of one way to avoid this follows.
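One possible remedy, assuming class_name is a column the workload really does filter on (this index is a hypothetical addition, not part of the original schema), is to index it so the storage engine can locate and lock only the matching rows:

-- Hypothetical secondary index on the filter column
ALTER TABLE class_teacher ADD KEY idx_class_name (class_name);
-- With the index in place, this update can lock just the matching rows
-- instead of every row in the table.
update class_teacher set teacher_id = 3 where class_name = 'Grade 3 Class 1';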

Repeatable Read (RR)
This is the default isolation level for InnoDB in MySQL. Let's explain it in two parts: "read" and "write".

Read
Reads are repeatable: however many times the same transaction reads the data, it sees the same rows. That is a bit abstract, so let's look at the effect.

Behavior at the RC (Read Committed) level:

Transaction A: begin;
Transaction B: begin;
Transaction A: select id, class_name, teacher_id from class_teacher where teacher_id = 1;
               -> 1 | Grade 3 Class 2 | 1
                  2 | Grade 3 Class 1 | 1
Transaction B: update class_teacher set class_name = 'Grade 3 Class 3' where id = 1;
Transaction B: commit;
Transaction A: select id, class_name, teacher_id from class_teacher where teacher_id = 1;
               -> 1 | Grade 3 Class 3 | 1
                  2 | Grade 3 Class 1 | 1

The row that transaction B modified no longer matches the result of the first query: the read is not repeatable.

Transaction A: commit;

After transaction B commits its change to the row with id = 1, transaction A runs the same query and gets a different result from before. This is a non-repeatable read (reading the same data again produces a different result), and it can cause problems for some applications. Now let's look at how MySQL behaves at the RR level:


Transaction A: begin;
Transaction A: select id, class_name, teacher_id from class_teacher where teacher_id = 1;
               -> 1 | Grade 3 Class 2 | 1
                  2 | Grade 3 Class 1 | 1
Transaction B: update class_teacher set class_name = 'Grade 3 Class 3' where id = 1;
Transaction B: commit;
Transaction C: insert into class_teacher values (null, 'Grade 3 Class 3', 1);
Transaction C: commit;
Transaction A: select id, class_name, teacher_id from class_teacher where teacher_id = 1;
               -> 1 | Grade 3 Class 2 | 1
                  2 | Grade 3 Class 1 | 1

The row modified by transaction B reads exactly as it did in the first query: the read is repeatable.

The row newly inserted by transaction C is not read at all.

Transaction A: commit;

Notice what happened: transaction A read the teacher_id = 1 rows first, transaction B modified the row with id = 1 in the middle and committed, yet transaction A's second read is identical to the first. The read is repeatable. How does MySQL achieve this? Let's take a look.

What is the difference between non-repeatable reads and phantom reads?
Many people confuse non-repeatable reads with phantom reads, and they are indeed similar. But non-repeatable reads center on update and delete, while phantom reads center on insert.

If a pure locking mechanism were used to implement these isolation levels, then for repeatable reads, once a row has been read for the first time it is locked so that no other transaction can modify it, which achieves repeatable reads. But this approach cannot lock rows that do not exist yet, so inserts cannot be blocked: while transaction A has read (or even modified) all the existing rows, transaction B can still commit an insert, and transaction A will then find a row that was not there before. That is a phantom read, and it cannot be avoided with row locks alone; it requires the Serializable isolation level, where reads take read locks, writes take write locks, and the two are mutually exclusive. That effectively prevents phantom reads, non-repeatable reads, and dirty reads, but it drastically reduces the database's concurrency.

So the biggest difference between non-repeatable reads and phantom reads lies in the locking mechanism needed to solve each of them.

As described above, a pessimistic locking mechanism could be used to handle both problems. For performance reasons, however, mature databases such as MySQL, Oracle, and PostgreSQL use MVCC (multi-version concurrency control), which is rooted in optimistic-locking ideas, to avoid them.

Pessimistic lock and optimistic lock
Pessimistic lock
As the name implies, a pessimistic lock takes a conservative attitude toward the possibility that data will be modified by the outside world (including other transactions in the current system as well as transactions from external systems): the data is kept locked throughout its processing. Pessimistic locking usually relies on locks provided by the database itself (only database-level locks can truly guarantee exclusive access; a locking scheme implemented purely in the application cannot stop an external system from modifying the data).

With pessimistic locks, consistent reads must also take locks in order to guarantee transaction isolation: a lock is taken when data is read so that other transactions cannot modify it, and a lock is also taken when data is modified or deleted so that other transactions cannot read it.

Optimistic lock
Compared with pessimistic locks, optimistic locks use a looser locking mechanism. Pessimistic locking is in most cases implemented through the database's own lock mechanism to guarantee maximum exclusiveness, but it carries a substantial performance cost, especially for long transactions, which is often unacceptable.

Optimistic locking solves this problem to some extent. It is usually implemented through a data version mechanism. What is a data version? Each piece of data carries a version identifier; in the table-based variant, a "version" column is added to the table. When the data is read, the version number is read along with it, and when the data is updated the version number is incremented by one. At commit time, the version of the submitted data is compared with the version currently recorded in the table: if the submitted version is greater than the version stored in the table, the update is applied; otherwise the data is considered stale. (A minimal sketch follows.)
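Here is a minimal sketch of that version-column scheme. The account table and its columns are hypothetical illustrations, not part of the article's class_teacher example:

-- Hypothetical table carrying a version column for optimistic locking
CREATE TABLE account (
  id      INT PRIMARY KEY,
  balance DECIMAL(10,2) NOT NULL,
  version INT NOT NULL DEFAULT 0
) ENGINE=InnoDB;

-- 1. Read the row together with its version number
SELECT balance, version FROM account WHERE id = 1;   -- suppose this returns balance = 100, version = 7

-- 2. Write back only if nobody has changed the row in the meantime
UPDATE account
SET    balance = 90, version = version + 1
WHERE  id = 1 AND version = 7;
-- If the affected-row count is 0, another session got there first,
-- so the application re-reads the row and retries instead of overwriting.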

It should be noted that there are no fixed specifications for the implementation of MVCC, and each database will have different implementation methods. Here we discuss the MVCC of InnoDB.

Implementation of MVCC in MySQL InnoDB
In InnoDB, MVCC is implemented by adding two hidden values to each row: one records when the row was created, the other records when the row expires (or was deleted). In practice these are not stored as timestamps but as transaction version numbers; each newly started transaction gets an incremented version number. At the Repeatable Read isolation level:

  • SELECT: a row is returned only if its creation version is <= the current transaction's version and its deletion version is either empty or > the current transaction's version.
  • INSERT: the current transaction's version number is saved as the creation version of the new row.
  • DELETE: the current transaction's version number is saved as the deletion version of the row.
  • UPDATE: a new row is inserted whose creation version is the current transaction's version number, and that same version number is saved as the deletion version of the original row.
  • With MVCC, each row needs extra storage space, more checks per row, and some additional maintenance work, but the use of locks drops sharply: most reads need no locks at all, reading stays simple and performs well, and only the rows that really need it are locked, while still guaranteeing that only the rows visible to the transaction are read. (A small sketch of the hidden version numbers follows this list.)
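To make the bullets concrete, here is an illustration with made-up version numbers. The hidden columns cannot be queried through SQL, so every number below is an assumption purely for illustration:

-- Assume the inserting transaction is assigned version number 100
insert into class_teacher values (null, 'Grade 3 Class 5', 40);
-- hidden values of the new row:  creation version = 100, deletion version = empty

-- Assume a later transaction with version number 102 updates that row
update class_teacher set class_name = 'Grade 3 Class 6' where teacher_id = 40;
-- old row version:  creation version = 100, deletion version = 102
-- new row version:  creation version = 102, deletion version = empty
-- A transaction whose version number is 101 therefore still sees only the old row.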

Whether in database textbooks or on the Internet, most sources present the four isolation levels as in the table above: the RR level gives repeatable reads but cannot prevent phantom reads, which are only eliminated at the Serializable level. That is why transaction C was added to the demonstration: it inserts and commits a new row with teacher_id = 1, so by that definition a phantom read should occur at the RR level and transaction A should see the new row when it queries teacher_id = 1 again. Testing shows, however, that this does not happen in MySQL: even after transaction C commits, transaction A still does not read that row. It follows that MySQL's RR level already solves the phantom-read problem for these reads.

With reads taken care of, one question remains: according to MVCC, conflicts may still arise when data is committed concurrently. How is that resolved? Let's look at how InnoDB handles writes at the RR level.

The difference between "snapshot read" and "current read"
Some readers may wonder: isn't the transaction isolation level by definition about how data is read? Why split the transaction into "read" and "write" modules here? The reason is that the read MySQL performs by default is not the same read that the isolation levels describe.

We have also seen that at the RR level the MVCC mechanism makes reads repeatable, but the data we read may be historical: it is not necessarily up to date, and it is not the current data in the database. This can cause problems for business logic that is especially sensitive to data freshness.

We call this way of reading historical data a snapshot read, and reading the current version of the data in the database a current read. Clearly, under MVCC:

Snapshot read: plain select statements

select * from table ....;

Current read: special (locking) read operations plus insert/update/delete; they operate on the current data and must take locks.

select * from table where ? lock in share mode;
select * from table where ? for update;
insert;
update;
delete;

The transaction isolation levels in fact define the behavior of current reads. To shorten lock handling (including waiting for other locks) and improve concurrency, MySQL introduced the concept of the snapshot read, so that plain select statements need no locks. For "current reads" such as update and insert, another mechanism is needed, discussed in the next section. (The sketch below shows the two kinds of read side by side.)
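A minimal two-session sketch of the difference, using the class_teacher table from earlier (the value 'Grade 3 Class 9' is invented for the demonstration):

-- Session 1, at the RR level
begin;
select class_name from class_teacher where id = 1;                       -- snapshot read

-- Session 2, meanwhile (autocommit)
update class_teacher set class_name = 'Grade 3 Class 9' where id = 1;

-- Back in session 1
select class_name from class_teacher where id = 1;                       -- snapshot read: still the old value
select class_name from class_teacher where id = 1 lock in share mode;    -- current read: sees 'Grade 3 Class 9'
commit;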

Write ("current read ")
Although only the requirements for reading data are defined at the transaction isolation level, this can be said to be the requirement for writing data. The above "read" refers to the snapshot read, and the "write" mentioned here is the current read.
To solve the Phantom read problem in the current read, MySQL transactions use the Next-Key lock.

Next-Key lock
The Next-Key lock is the combination of a row (record) lock and a gap lock. Row locks were described above; now let's talk about gap locks.

Row locks prevent conflicts when different transactions modify and commit the same rows, but how do we keep other transactions from inserting new rows? That is the problem gap locks address. Let's take a look.

Comparison between the RR level and the RC level

RC level:

Transaction A: begin;
Transaction A: update class_teacher set class_name = 'Grade 3 Class 4' where teacher_id = 30;
Transaction B: insert into class_teacher values (null, 'Grade 3 Class 2', 30);
Transaction B: commit;
Transaction A: select id, class_name, teacher_id from class_teacher where teacher_id = 30;
               -> 2  | Grade 3 Class 4 | 30
                  10 | Grade 3 Class 2 | 30
Transaction A: commit;

RR level:

Transaction A: begin;
Transaction B: begin;
Transaction A: select id, class_name, teacher_id from class_teacher where teacher_id = 30;
               -> 2 | Grade 3 Class 2 | 30
Transaction A: update class_teacher set class_name = 'Grade 3 Class 4' where teacher_id = 30;
Transaction B: insert into class_teacher values (null, 'Grade 3 Class 2', 30);
               -> waiting...
Transaction A: select id, class_name, teacher_id from class_teacher where teacher_id = 30;
               -> 2 | Grade 3 Class 4 | 30
Transaction A: commit;

Only after transaction A commits is transaction B's insert executed.
Comparing the two, we can see that at the RC level transaction A updates every row with teacher_id = 30, yet once transaction B's insert adds a new row, transaction A finds a teacher_id = 30 row that its earlier update did not touch. This is the phantom read of the "current read".

At the RR level, transaction A's update takes locks that prevent transaction B from inserting the new row, so the data transaction A reads before and after its update stays consistent and the phantom read is avoided. The lock that achieves this is the gap lock.

MySQL is implemented in this way:

In the class_teacher table, teacher_id is indexed, and the index is maintained as a B+ tree. To keep things simple, we will describe it as a linked list (it is really a tree, but the principle is the same).

InnoDB uses clustered indexes. As a secondary index, teacher_id maintains a structure of index values and primary-key ids (represented here as a linked list), kept in sorted order.

InnoDB divides this ordered data into several intervals:

(negative infinity, 5], (5, 30], (30, positive infinity)

When executing

update class_teacher set class_name = 'Grade 3 Class 4' where teacher_id = 30;

InnoDB not only takes row locks on the matching rows, it also places gap locks on the neighbouring intervals (5, 30] and (30, positive infinity). As a result, transaction B cannot insert new rows into either of those intervals.
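If you want to see these locks for yourself, MySQL 8.0 exposes them through the performance_schema.data_locks table while transaction A is still open (this inspection step is an addition for illustration; older versions expose similar information through information_schema tables):

-- Run in another session while the update above is still uncommitted (MySQL 8.0)
select engine_transaction_id, index_name, lock_type, lock_mode, lock_data
from performance_schema.data_locks
where object_name = 'class_teacher';
-- Rows whose lock_mode contains GAP on index idx_teacher_id are the gap locks;
-- the plain X rows are the record locks on the matching rows.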

Because of this implementation, InnoDB often takes locks that are not strictly needed. For example:

Transaction A: begin;
Transaction A: select id, class_name, teacher_id from class_teacher;
               -> 1 | Grade 3 Class 1 | 5
                  2 | Grade 3 Class 2 | 30
Transaction A: update class_teacher set class_name = 'Grade 3 Class 1' where teacher_id = 20;
Transaction B: insert into class_teacher values (null, 'Grade 3 Class 5', 10);
               -> waiting...
Transaction C: insert into class_teacher values (null, 'Grade 3 Class 5', 40);
Transaction C: commit;   -- this insert succeeds immediately
Transaction A: commit;   -- only after transaction A commits is transaction B's insert executed

The update on teacher_id = 20 falls inside the interval (5, 30]. Even though it modifies no rows at all, InnoDB still places a gap lock on that interval, so transaction B's insert at teacher_id = 10 has to wait. The other intervals are not affected, which is why transaction C can insert at teacher_id = 40 without any problem.

If you filter on a column without an index, such as update class_teacher set teacher_id = 7 where class_name = 'Grade 3 Class 8' (even if it matches no rows), gap locks are applied to the entire table. And unlike the row-lock case described above, MySQL Server cannot filter out and release the unnecessary locks afterwards, because without an index the values are not ordered and there are no intervals. Until the transaction commits, no other transaction can insert any data.

Row locks prevent other transactions from modifying or deleting existing rows; gap locks prevent other transactions from inserting new ones. Their combination, the Next-Key lock, is what solves the phantom-read problem when writing data ("current reads") at the RR level.

Serializable
This level is simple: reads take shared locks, writes take exclusive locks, and read and write locks are mutually exclusive. It follows the pessimistic-locking philosophy: simple to implement and safe for the data, but with very poor concurrency. Use this mode only if your business has little or no concurrency and requires data that is always up to date and reliable.

One more point worth making: we said earlier that plain select statements take no locks, but at the Serializable level even they are locked!
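A minimal sketch of what that means in practice (at the Serializable level InnoDB implicitly turns plain selects into locking reads):

-- Session 1
set session transaction isolation level serializable;
begin;
select * from class_teacher where teacher_id = 1;    -- implicitly takes shared locks

-- Session 2 now blocks until session 1 commits or rolls back
update class_teacher set class_name = 'Grade 3 Class 7' where teacher_id = 1;

-- Session 1
commit;    -- session 2's update can now proceed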
