Analysis of MySQL Locking and Deadlock Handling



"MySQL Lock processing analysis"

Background

Lock analysis in MySQL/InnoDB has always been a difficult topic. In my work, colleagues often consult me about it, and on Weibo I regularly receive direct messages asking me to help with deadlock problems. This article takes a deeper look at locking in MySQL/InnoDB and, above all, introduces a way of thinking about it: given any SQL statement, you should be able to work out exactly which locks it will take, what the risks of running it are, and even analyze a deadlock that occurred in production and understand why it happened.

Note: MySQL supports pluggable storage engines. Everything below is based on the InnoDB storage engine; other engines behave quite differently.

MVCC: Snapshot Read vs. Current Read

The MySQL InnoDB storage engine implements multi-version concurrency control (MVCC). (The alternative to MVCC is lock-based concurrency control.) MVCC's best-known advantage is that reads take no locks, so reads and writes do not conflict. In read-heavy, write-light OLTP workloads, the absence of read-write conflicts greatly improves concurrency, which is why nearly every modern RDBMS supports MVCC.

Under MVCC, read operations fall into two categories: snapshot reads and current reads. A snapshot read returns a visible version of the record (possibly a historical version) and takes no lock. A current read returns the latest version of the record and locks it, so that no other transaction can modify it concurrently.

In a system that supports MVCC concurrency control, which reads are snapshot reads and which are current reads? Take MySQL InnoDB as an example:

Snapshot read: a plain select is a snapshot read and does not lock (with exceptions, which are analyzed below).

select * from table where ?;

Current read: special read operations, as well as insert/update/delete, are current reads and must lock the records they touch.

select * from table where ? lock in share mode;

select * from table where ? for update;

insert into table values (...);

update table set ? where ?;

delete from table where ?;

All of the statements above are current reads: they read the latest version of the record and, after reading it, lock it so that no other concurrent transaction can modify it. Of these, only the first (lock in share mode) places an S lock (shared lock) on the records it reads; all the others place an X lock (exclusive lock).
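
A minimal two-session sketch of the difference, assuming a table t1(id, name) that contains a row (10, 'a') and the default REPEATABLE READ isolation level:

-- session 1
begin;
update t1 set name = 'b' where id = 10;              -- takes an X lock on the row; not yet committed

-- session 2
select * from t1 where id = 10;                      -- snapshot read: returns the old version 'a', no lock, does not block
select * from t1 where id = 10 lock in share mode;   -- current read: requests an S lock and blocks until session 1 commits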

Why are insert/update/delete classified as current reads? Consider how an update executes inside the database:

[Figure: execution flow of an update statement between the MySQL server layer and the InnoDB engine]

The diagram shows the specific flow of an update. When the update arrives at MySQL, the MySQL server asks the storage engine for the first record that satisfies the where condition; InnoDB returns that record and locks it (a current read). Once the MySQL server receives the locked record, it issues an update request to modify it. When that record has been processed, the next record is read, and so on until no more records satisfy the condition. An update therefore contains a current read internally, and a delete works the same way. An insert is slightly different: in short, an insert may trigger a uniqueness conflict check on a unique key, which is also a current read.

Note: for a SQL statement doing current reads, InnoDB and the MySQL server interact as in the diagram above, so locking happens one record at a time: lock a record that satisfies the condition, return it to the MySQL server, apply the DML, then read and lock the next record, until the scan is finished.

Clustered Index

InnoDB organizes data as a clustered index table: the complete record is stored in the primary key index, so a lookup through the primary key index retrieves all the columns of the record. For how a clustered index table is organized, see the official MySQL documentation on Clustered and Secondary Indexes; this article assumes the reader is already familiar with it and does not introduce it further. In the sections that follow, the terms primary key index and clustered index are used interchangeably.

2PL: Two-Phase Locking

One of the principles traditional RDBMSs follow when taking locks is 2PL, two-phase locking. 2PL is relatively easy to understand: a transaction's locking work is divided into two phases, a locking phase and an unlocking phase, which must not overlap. Again taking MySQL as an example, here is how 2PL plays out in MySQL:

[Figure: the locking and unlocking phases of 2PL over a transaction's lifetime]

As the figure shows, 2PL splits locking and unlocking into two completely disjoint phases. Locking phase: locks are only acquired, never released. Unlocking phase: locks are only released, never acquired.

Isolation Levels

Isolation levels are another key feature of an RDBMS. Anyone with some database background knows the four levels: Read Uncommitted, Read Committed, Repeatable Read, and Serializable. This article does not discuss how database theory defines them; it describes how MySQL/InnoDB defines these four isolation levels.

The four isolation levels as defined by MySQL/InnoDB:

Read Uncommitted

Uncommitted records can be read. This level is essentially never used and is ignored here.

Read Committed (RC)

Snapshot reads are ignored; this article does not consider them.

For current reads, the RC isolation level locks the records that are read (record locks), and phantom reads can occur.

Repeatable Read (RR)

Snapshot reads are ignored; this article does not consider them.

For current reads, the RR isolation level locks the records that are read (record locks) and also locks the range that is read, so that new records satisfying the query condition cannot be inserted (gap locks); phantom reads do not occur.

Serializable

Concurrency control degrades from MVCC to lock-based concurrency control. There is no distinction between snapshot reads and current reads: all reads are current reads, reads take read locks (S locks), and writes take write locks (X locks).

At the Serializable isolation level, reads and writes conflict, so concurrency drops dramatically; it is not recommended with MySQL/InnoDB.
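
For reference, the session isolation level can be inspected and changed directly; a minimal sketch (the system variable is transaction_isolation in MySQL 5.7.20+ and 8.0, tx_isolation in older releases):

select @@transaction_isolation;                          -- e.g. REPEATABLE-READ, the InnoDB default
set session transaction isolation level read committed;  -- applies to transactions started afterwards in this session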

Locking Analysis of a Simple SQL Statement

With the background covered, this article now walks through several representative examples of MySQL's lock handling in detail, starting with the simplest. People often send me a SQL statement and ask: what locks does it take? For example, what locks do the following two simple statements take?

SQL1: select * from t1 where id = 10;

SQL2: delete from t1 where id = 10;

What is the answer? One answer I can imagine:

SQL1: no locks, because MySQL uses multi-version concurrency control and reads do not lock.

SQL2: an X lock on the record with id = 10 (on the primary key index).

Is that correct? Maybe, or maybe not: the information given is insufficient, so the question has no single answer. To answer it I would need to know a few preconditions, and different preconditions lead to different answers. What are those preconditions?

Premise 1: Is the id column the primary key?

Premise 2: What is the isolation level of the current system?

Premise 3: If the id column is not the primary key, is there an index on it?

Premise 4: If there is a secondary index on the id column, is it a unique index?

Premise 5: What is the execution plan of each statement? An index scan? A full table scan?

Without these preconditions, handing over a bare SQL statement and asking what locks it will take is an amateur move. Once these questions have clear answers, the locks the statement takes become obvious. Next, I will go through the combinations of answers, from easy to hard, and analyze which locks the SQL takes in each one.

Note: in all the combinations below I assume that when an index exists, the execution plan chooses to filter through that index (an index scan). Reality is more complicated; the actual plan must be taken from MySQL's own output.

Combination 1: the id column is the primary key, RC isolation level

Combination 2: the id column has a unique secondary index, RC isolation level

Combination 3: the id column has a non-unique secondary index, RC isolation level

Combination 4: the id column has no index, RC isolation level

Combination 5: the id column is the primary key, RR isolation level

Combination 6: the id column has a unique secondary index, RR isolation level

Combination 7: the id column has a non-unique secondary index, RR isolation level

Combination 8: the id column has no index, RR isolation level

Combination 9: Serializable isolation level

The enumeration is not exhaustive, but it already looks like a lot. Is it really necessary to be this elaborate? For lock analysis, yes. Looked at another way, once a combination is fixed, the locks the SQL needs are fully determined. Next, let's analyze the locking strategy of each of the nine combinations one by one.

Note: in the first eight combinations, that is, under the RC and RR isolation levels, SQL1 (the select) takes no locks because it is a snapshot read, so it is ignored below; the discussion focuses on the locks taken by SQL2 (the delete).

Combination 1: id is the primary key + RC

This combination is the simplest and easiest to analyze. The id column is the primary key and the isolation level is Read Committed. For the given statement, delete from t1 where id = 10;, only an X lock on the primary key record with id = 10 is needed, as shown below:

[Figure: a single X lock on the clustered-index (primary key) record id = 10]

Conclusion: when id is the primary key, this SQL needs only an X lock on the record with id = 10.
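
A minimal sketch of this combination; the schema is assumed for illustration, since the article's own test table is not reproduced here:

-- id is the primary key, so the delete locks exactly one clustered-index record
create table t1 (
  id   int not null,
  name varchar(10),
  primary key (id)
) engine = innodb;

set session transaction isolation level read committed;
begin;
delete from t1 where id = 10;   -- one X record lock on the primary-key entry id = 10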

Combination 2: unique index on id + RC

In this combination id is not the primary key but the key of a unique secondary index. Under the RC isolation level, what locks does delete from t1 where id = 10; need? See the figure below:

[Figure: X locks on the unique-index entry id = 10 and on the corresponding clustered-index record name = 'd']

In this combination, id is a unique index and the primary key is the name column. The locking now differs from combination 1. Because id is a unique index, the delete chooses the index on id to filter the where condition. After finding the record with id = 10, it first places an X lock on the id = 10 entry of the unique index, then uses the primary key value (the name column) read from that entry to go back to the clustered index and places an X lock on the clustered-index record with name = 'd'. Why must the record on the clustered index be locked as well? Imagine a concurrent SQL statement that updates the same row through the primary key index: update t1 set id = ... where name = 'd';. If the delete did not lock the record on the primary key index, the concurrent update would not notice the delete at all, violating the rule that updates and deletes of the same record must execute serially.

Conclusion: if the id column has a unique index, the SQL needs two X locks: one on the entry id = 10 of the unique index on id, and one on the clustered-index record [name = 'd', id = 10].
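
A sketch of why the clustered-index record must be locked as well, with an assumed schema for this combination in which name is the primary key and id carries a unique secondary index:

-- assumed schema for combination 2
create table t1 (
  name varchar(10) not null,
  id   int,
  primary key (name),
  unique key uk_id (id)
) engine = innodb;

-- session 1: X-locks the unique-index entry id = 10 AND the clustered-index record name = 'd'
begin;
delete from t1 where id = 10;

-- session 2: reaches the same row through the primary key; it blocks on the clustered-index
-- record lock, so the delete and the update are serialized
update t1 set id = 11 where name = 'd';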

Combination 3: non-unique index on id + RC

Compared with combinations 1 and 2, combination 3 changes again: the isolation level is still RC, but the constraint on the id column is weaker, it is no longer unique and carries only an ordinary index. Assume that delete from t1 where id = 10; still chooses the index on id to filter the where condition. What locks are held now? See the figure below:

[Figure: X locks on every id = 10 entry of the secondary index and on the corresponding clustered-index records]

As the figure shows, every entry in the index on id that satisfies id = 10 is locked, and the corresponding records on the primary key index are locked as well. The only difference from combination 2 is that combination 2 matches at most one record for an equality query, while combination 3 locks every record that satisfies the query condition.

Conclusion: if there is a non-unique index on the id column, all records satisfying the query condition are locked, and so are the corresponding records on the primary key index.

Combination 4: no index on id + RC

Compared with the previous three combinations this is a special case: there is no index on the id column, so the filter where id = 10 cannot go through an index and only a full table scan is possible. What locks does the SQL take in this combination, or put differently, what does a full table scan lock? People give various answers: some say an X lock on the whole table, others say X locks only on the clustered-index records with id = 10. What actually happens? See the figure below:

[Figure: X locks on every record of the clustered index, whether or not it satisfies id = 10]

Because there is no index on id, the only option is a full scan of the clustered index. As the figure shows, only two records satisfy the delete condition, yet every record on the clustered index receives an X lock: every row is locked whether it matches or not. The result is neither a table lock nor row locks on just the matching records.

You might ask: why not lock only the records that satisfy the condition? That is a consequence of how MySQL is implemented. If a condition cannot be filtered quickly through an index, the storage engine locks all the records it returns, and the filtering is then done by the MySQL server layer; as a result, every record is locked.

Note: in the actual implementation MySQL has an improvement here. When the MySQL server's filtering finds that a row does not match, it calls the unlock_row method to release the lock on that row (which violates the 2PL constraint). This ensures that only the locks on matching records are held in the end, but the lock/unlock work on every record cannot be skipped.

Conclusion: if there is no index on id, the SQL filters through a full clustered-index scan, and because the filtering happens at the MySQL server layer, every record receives an X lock whether or not it matches. For efficiency, MySQL releases the locks on non-matching rows after checking them, so only the locks on matching records are held in the end; but the lock/unlock action on each record cannot be omitted, and this optimization also violates the 2PL constraint.
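
To verify which locks a statement is actually holding, MySQL 8.0 exposes them in performance_schema.data_locks (older versions only show conflicting locks, via information_schema.innodb_locks or SHOW ENGINE INNODB STATUS); a sketch, run from another session while the deleting transaction is still open inside an explicit begin:

select object_name, index_name, lock_type, lock_mode, lock_status, lock_data
from performance_schema.data_locks
where object_name = 't1';
-- shows the table-level IX intention lock plus RECORD locks in mode X; under RC the locks on
-- non-matching rows may already have been released by the unlock_row optimization described above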

Combination 5: id is the primary key + RR

The four combinations above describe the locking behavior at the Read Committed isolation level; the next four describe the locking behavior at Repeatable Read.

In combination 5 the id column is the primary key and the isolation level is Repeatable Read. For delete from t1 where id = 10;, the locking is identical to combination 1 [id is the primary key, Read Committed].

Combination 6: unique index on id + RR

Similar to combination 5, the locking of combination 6 is identical to combination 2 [unique index on id, Read Committed]: two X locks, one on the matching entry of the unique index on id and one on the corresponding clustered-index record.

Combination 7: non-unique index on id + RR

Remember the difference between MySQL's four isolation levels described earlier? RC allows phantom reads while RR does not, yet in combinations 5 and 6 the locking behavior is exactly the same as under RC. So how does the RR level prevent phantom reads? The answer is revealed in combination 7.

Combination 7: Repeatable Read isolation level, with a non-unique index on id, executing delete from t1 where id = 10;. Assuming the index on id is chosen for filtering, what does the final locking look like? See the picture below:

[Figure: X locks on the matching index and clustered-index records, plus gap locks on the gaps surrounding the id = 10 entries]

Compared with combination 3 [non-unique index on id, Read Committed] this figure looks similar, but there is a big difference: it also contains gap locks, and a gap lock is not placed on a record; it sits in the position between two records. What is a gap lock for?

This extra gap lock is exactly what keeps the RR isolation level, unlike RC, free of phantom reads. As noted, the gap lock's position is not a record but the gap between two records. Preventing phantoms means that when the same transaction performs two consecutive current reads (for example: select * from t1 where id = 10 for update;), the two reads return exactly the same records, in both count and content; the second current read must not return more records than the first (phantoms).

To guarantee that the two current reads return the same records, no other transaction may insert and commit a new record satisfying the condition between the first and the second read. Gap locks exist precisely to achieve this.

As the figure shows, there are several positions where a new record satisfying id = 10 could be inserted. Given the ordering of the B+ tree index, the records satisfying the condition must be stored contiguously: no id = 10 record can be inserted before [6,c]; between [6,c] and [10,b] a new [10,aa] could be inserted; between [10,b] and [10,d] a new [10,bb] or [10,c] could be inserted; between [10,d] and [11,f] a new [10,e] or [10,z] could be inserted; and after [11,f] no record satisfying the condition can be inserted. Therefore, to guarantee that no new matching record appears between [6,c] and [10,b], between [10,b] and [10,d], or between [10,d] and [11,f], MySQL locks these three gaps with gap locks.

An insert, say of [10,aa], first locates its position, between [6,c] and [10,b], and before inserting checks whether that gap is locked; if it is, the insert cannot proceed. So the first current read not only places X locks on the matching records (similar to combination 3) but also adds three gap locks, locking the three gaps into which a matching record could be inserted. This guarantees that later inserts cannot add a new id = 10 record, and thus eliminates phantoms in the same transaction's second current read.

A careful reader might ask at this point: if preventing phantoms relies on gap locks, why do combinations 5 and 6, which are also at the RR isolation level, not need gap locks?

First, that is a good question. Second, the answer is simple: gap locks exist to prevent phantoms between two current reads of the same transaction. In combination 5 the id is the primary key and in combination 6 it is a unique key, and both guarantee uniqueness. An equality query can return at most one record, and no new record with the same value can ever be inserted, so gap locks can be avoided. There is a further question hidden here: in combination 6, if the first read of select * from t1 where id = 10 for update; finds no record satisfying the condition, can the gap lock still be omitted? That question is left for the reader to think about.

Conclusion: at the Repeatable Read isolation level, with a non-unique index on the id column, for delete from t1 where id = 10;: the index on id is first used to locate the first record satisfying the condition; that record gets an X lock, the gap before it gets a gap lock, and the corresponding record on the primary key (clustered) index gets an X lock; then the process repeats for the next record, until the first record that does not satisfy the condition, [11,f], is reached. That record does not need a record X lock, but the gap before it still needs a gap lock; then the scan ends.
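
A two-session sketch of a gap lock blocking an insert, assuming the table layout used in the figures (name is the primary key, id has a non-unique index) and the RR isolation level:

-- session 1: X-locks the id = 10 records and gap-locks the gaps around them
begin;
delete from t1 where id = 10;

-- session 2: tries to insert a record that falls into one of the locked gaps on the id index
insert into t1 (name, id) values ('bb', 10);   -- blocks until session 1 commits or rolls back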

Combination 8: no index on id + RR

Combination 8 is the last case at the Repeatable Read level: there is no index on the id column. Now delete from t1 where id = 10; has no other path to choose and can only do a full table scan. The final locking is shown below:

[Figure: X locks on every clustered-index record plus gap locks on every gap, including the gaps before the first and after the last record]

As the picture shows, this is a frightening situation. First, every record on the clustered index gets an X lock. Second, every gap between clustered-index records also gets a gap lock. This sample table has only 6 records, yet it needs 6 record locks and 7 gap locks. Now imagine a table with 10 million records.

In this situation, apart from lock-free snapshot reads, no other concurrent SQL that needs locks can run against the table: no updates, no deletes, no inserts. The entire table is effectively locked up.

Of course, as with combination 4 [no index on id, Read Committed], MySQL has an optimization for this case, called semi-consistent read. When semi-consistent read is active, MySQL releases the locks on records that do not satisfy the query condition early. For this example, all record locks except those on [d,10] and [g,10] are released, and no gap locks are taken. Semi-consistent read is triggered either at the Read Committed isolation level, or at Repeatable Read with the innodb_locks_unsafe_for_binlog parameter set. For a more detailed introduction, see my earlier post on the principle and implementation of MySQL+InnoDB semi-consistent read.
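
A quick way to check whether that parameter is set on a given server; note that innodb_locks_unsafe_for_binlog is deprecated in later 5.x releases and removed in MySQL 8.0, so this applies only to older versions:

show variables like 'innodb_locks_unsafe_for_binlog';
-- OFF (the default) means semi-consistent read is not used at the RR isolation level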

Conclusion: at the Repeatable Read isolation level, a current read performed via a full table scan locks every record in the table and every gap in the clustered index, shutting out all concurrent update/delete/insert operations. The locking overhead and loss of concurrency can be eased by triggering semi-consistent read, but semi-consistent read brings problems of its own and is not recommended.

Combination 9: Serializable

The last case for the simple SQL statements above is the Serializable isolation level. For SQL2, delete from t1 where id = 10;, Serializable behaves exactly like Repeatable Read, so nothing more needs to be said.

What Serializable does affect is SQL1: select * from t1 where id = 10;. Under the RC and RR isolation levels it is a snapshot read and takes no locks, but at the Serializable level SQL1 takes a read lock: snapshot reads no longer exist, and MVCC concurrency control degrades to lock-based concurrency control.

Conclusion: in MySQL/InnoDB, "reads do not lock" does not hold in every case; it depends on the isolation level. At the Serializable isolation level, lock-free reads no longer exist and every read operation is a current read.

A complex SQL

At this point, MySQL's lock handling has mostly been covered: following the approach above, you can work out which locks most SQL statements will take. Now let's look at a slightly more complex SQL statement that illustrates another piece of MySQL's locking logic. The example is as follows:

[Figure: the example table t1 and the complex delete statement]
What locks does the SQL in the figure take? Assume the Repeatable Read isolation level (the Read Committed level is left to the reader to analyze), and assume the statement is executed using the idx_t1_pu index.
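
A plausible reconstruction of that statement, inferred from the condition breakdown below (the exact column names and the definition of idx_t1_pu are assumptions):

-- assumed: idx_t1_pu is a secondary index on (pubtime, userid)
delete from t1
where pubtime > 1 and pubtime < 20   -- index key: bounds the scan on idx_t1_pu
  and userid = 'hdc'                 -- index filter: checkable on idx_t1_pu, but not part of the index key
  and comment is not null;           -- table filter: only checkable on the clustered index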

Before analyzing the locking of this SQL in detail, one more piece of background is needed: how is the where clause of a SQL statement broken up, and how is each part applied? For details, see my earlier article on extracting and applying where conditions inside the database. Here I give the result of the analysis directly:

Index key: pubtime > 1 and pubtime < 20. This condition determines the range of the scan on the idx_t1_pu index.

Index filter: userid = 'hdc'. This condition can be checked on the idx_t1_pu index, but it is not part of the index key.

Table filter: comment is not NULL. This condition cannot be checked on the idx_t1_pu index; it can only be checked on the clustered index.

Having decomposed the where condition, let's look at the locks this SQL takes under the RR isolation level, as shown below:

[Figure: locks taken on idx_t1_pu and on the clustered index over the pubtime range (1, 20)]

As the figure shows, under the Repeatable Read isolation level the range determined by the index key is covered by gap locks. Where the index filter condition (userid = 'hdc') is checked depends on the MySQL version: before 5.6, Index Condition Pushdown (ICP) is not supported, so the index filter is applied at the MySQL server layer; from 5.6 on, with ICP, it is applied within the index. Without ICP, records that fail the index filter still receive a record X lock; with ICP they do not (in the figure, whether the X locks marked with red arrows are taken depends on ICP support). The table filter condition is checked at the MySQL server layer after the row is read from the clustered index, so the corresponding clustered-index records also need an X lock. In the end only one record, [8,hdc,d,5,good], satisfies all the conditions, yet the number of locks taken is much larger than the number of matching records.

Conclusion: under the Repeatable Read isolation level, a complex SQL statement is analyzed by first decomposing its where condition. The range determined by the index key needs gap locks; whether records failing the index filter need X locks depends on whether the MySQL version supports ICP (with ICP, no X lock is needed; without ICP, an X lock is); and records reaching the table filter need an X lock whether or not they satisfy it.

Deadlock: Principles and Analysis

The preceding sections cover essentially all of MySQL/InnoDB's locking rules. A deeper understanding of how MySQL takes locks has two important uses:

writing SQL that does not deadlock, based on MySQL's locking rules;

locating the cause of deadlocks in production, based on MySQL's locking rules.

Below are two deadlock examples (in the first, each of two sessions runs two SQL statements; in the second, each of two sessions runs a single statement, and a deadlock still occurs):

[Figures: the two deadlock cases, showing the interleaved statements of the two sessions]

Consider the two deadlock cases above. The first is easy to understand and is also the most common kind of deadlock: each transaction executes two statements, holds one lock, then requests another, and a deadlock results.
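
A minimal sketch of that first pattern, with an assumed table t2 (id int primary key, v int) containing rows with id 1 and 2:

-- step 1, session 1:
begin; update t2 set v = v + 1 where id = 1;   -- holds the record lock on id = 1
-- step 2, session 2:
begin; update t2 set v = v + 1 where id = 2;   -- holds the record lock on id = 2
-- step 3, session 1:
update t2 set v = v + 1 where id = 2;          -- waits for session 2
-- step 4, session 2:
update t2 set v = v + 1 where id = 1;          -- waits for session 1: deadlock detected,
                                               -- InnoDB picks a victim and rolls it back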

The second case is different: each session runs only one statement, yet a deadlock still occurs. To analyze it, you need the locking rules discussed earlier. Session 1 starts from the index on name; both [hdc,1] and [hdc,6] satisfy the condition, so besides the record X locks on the name index it also takes record X locks on the clustered index, in the order [1,hdc,100] first, then [6,hdc,10]. Session 2 starts from the index on pubtime; [10,6] and [100,1] both satisfy its filter, so it likewise takes record X locks on the clustered index, in the order [6,hdc,10] first, then [1,hdc,100]. Notice that this is exactly the opposite of session 1's lock order. If each session happens to acquire its first lock and then requests the second, a deadlock occurs.

Conclusion: deadlocks are not determined by how many SQL statements a transaction contains; the key to deadlock is that two (or more) sessions acquire locks in inconsistent orders. Using the rules in this article to analyze the locks each SQL statement takes and the order it takes them in, you can check whether any concurrent statements lock in opposite orders; this lets you anticipate many potential deadlocks and also diagnose the cause of deadlocks in production.

Summary

This brings the article to a close. To summarize briefly, fully mastering the locking rules of MySQL/InnoDB, or indeed of any other database, requires the following knowledge:

some basic database theory: data organization (heap-organized tables vs. clustered index tables), concurrency control protocols (MVCC vs. lock-based concurrency control), two-phase locking, and the database's definitions of the isolation levels;

the execution plan of the SQL itself (primary key lookup vs. unique key lookup vs. range scan vs. full table scan);

some implementation details of the database itself (where-condition decomposition; Index Condition Pushdown; semi-consistent read);

the causes of deadlock and how to analyze it (inconsistent lock orders; working out the lock order of each SQL statement).

With these knowledge points, plus some hands-on experience, full command of MySQL/InnoDB's locking rules is within reach.

"MySQL Deadlock analysis"

Background

A reader of the post "MySQL Lock Processing Analysis" asked about a MySQL deadlock scenario that completely puzzled me. The deadlock contradicted my existing understanding of locking and left me baffled. But the machine does not lie: if a deadlock was reported, there must be a reason for it. So I went back into the relevant InnoDB source code, ran a series of experiments and, with a bit of luck, finally worked out why the deadlock happens. The rest of this post first describes the deadlock scenario as reported by the reader, then gives my analysis. For me this is a very necessary write-up; for readers, I hope that when you meet a similar deadlock in the future, you will be clear about its cause.

An Incredible Deadlock

The reader reported the following deadlock scenario.

Table structure:

CREATE TABLE dltask (
    -- the varchar lengths were dropped in this copy of the post; 16 is an assumed stand-in
    id bigint unsigned NOT NULL AUTO_INCREMENT COMMENT 'auto id',
    a  varchar(16) NOT NULL COMMENT 'uniq.a',
    b  varchar(16) NOT NULL COMMENT 'uniq.b',
    c  varchar(16) NOT NULL COMMENT 'uniq.c',
    x  varchar(16) NOT NULL COMMENT 'data',
    PRIMARY KEY (id),
    UNIQUE KEY uniq_a_b_c (a, b, c)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='deadlock test';

The three columns a, b, c together form a unique index, and the primary key is the id column.

Transaction isolation level:

RR (Repeatable Read)

Each transaction contains only one SQL statement:

delete from dltask where a=? and b=? and c=?;

The SQL's execution plan:

[Figure: execution plan (EXPLAIN output) of the delete statement]

The deadlock log:

[Figure: the deadlock log reported by InnoDB for the two transactions]

Preliminary analysis

Concurrent transactions, each containing a single SQL statement that deletes one record identified by a unique secondary index key. Each transaction deletes at most one record, so how could a deadlock possibly occur? It seems absolutely impossible, and yet a deadlock really did happen. The two deadlocking transactions delete the same record, which must be a contributing factor, but in principle even deleting the same record should not cause a deadlock. So the preliminary analysis concludes that this deadlock cannot happen, and that conclusion clearly is not good enough.

How to read a deadlock log

Before explaining the cause of this deadlock in detail, let's first look at how to read the deadlock log that MySQL produces.
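
For context, InnoDB prints the most recent deadlock in the LATEST DETECTED DEADLOCK section of the engine status output, which is where a log like the one above comes from:

show engine innodb status\G
-- the LATEST DETECTED DEADLOCK section lists, for each transaction involved, the statement
-- it was running, the locks it holds, and the lock it is waiting for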

The deadlock log above is produced by the function lock0lock.c::lock_deadlock_recursive() in the InnoDB engine. The transaction information in the deadlock is printed by lock_deadlock_trx_print(), and the locks each transaction holds or waits for are printed by lock_deadlock_lock_print().

For example, the deadlock above involves two transactions. Transaction 1 is currently operating on one table (mysql tables in use 1) and holds two lock structures (2 lock structs: a table-level intention lock and one row lock, 1 row lock(s)); the statement it is currently processing is a delete. Moreover, its only row lock is in the waiting state (waiting for this lock to be granted).

The row lock transaction 1 is waiting for covers the unique index uniq_a_b_c on page number 12713 (note: which row exactly cannot be seen from the log; what can be seen is that the lock structure's bitmap has 96 bits and can cover 96 row records on that page, n bits 96, printed by lock_rec_print()). The mode of the waited-for row lock is a next-key lock (lock_mode X). (Note: for InnoDB lock modes, see my earlier slides "InnoDB transaction/lock/multi-version implementation analysis". In short, a next-key lock means two things: an X lock on the record itself, preventing the record from being modified concurrently, and a lock on the gap before the record, preventing new records from being inserted into that gap.)

Transaction 2 can be analyzed in the same way. It has two row locks, both on records of the unique index uniq_a_b_c on page number 12713. One row lock is held, and its mode is an X lock without gap (note: a pure record lock that locks only the record, not the gap before it, "locks rec but not gap"). The other row lock is waiting, and its mode is a next-key lock (note: the same mode transaction 1 is waiting for). It is also important to note that transaction 2's two lock modes are different and are not fully compatible with each other: the held lock is an X lock without gap, while the waited-for lock is a next-key X lock, so holding the former does not mean the latter can simply be granted.

Reading the deadlock log this way, the deadlock becomes visible: transaction 1's next-key X lock is waiting on the X lock without gap held by transaction 2 (the row X locks conflict), while transaction 2's next-key X lock request is queued behind the next-key lock transaction 1 is already waiting for (note: transaction 2 waits behind transaction 1 here because lock requests are granted fairly, to keep transaction 1 from being starved). This forms a circular wait, and a deadlock results.

After the deadlock is detected, the two transactions are compared by weight; transaction 1 has the smaller weight, so it is chosen as the deadlock victim and rolled back.

The analysis of the deadlock log confirms that the deadlock really exists, and that the two deadlocking transactions were indeed running the same equality delete through a unique index. Since the deadlock is real, the next step is to pin down its cause.

Deep Analysis of the Deadlock Cause

Locking logic of the delete operation

In "MySQL Lock Processing Analysis" I analyzed the locking logic of the various SQL statements in detail. A delete statement, for example, internally contains a current read (a locking read) and then deletes the record returned by that read. Combination 6 of that article (unique index on id + RR) shows that, at the RR isolation level, a record satisfying the query condition gets an exclusive lock (X lock) on the record itself but no lock on the gap before it (no gap lock). In the deadlock above, the lock transaction 2 holds is exactly such an exclusive record lock without the preceding gap (lock_mode X locks rec but not gap), consistent with that earlier analysis.

Also, at the end of combination 7 (non-unique index on id + RR) in that article, I raised a question: in combinations 5 and 6, if the first read of select * from t1 where id = 10 for update; finds no record satisfying the condition, can the gap lock be omitted? Readers who experimented with this found the correct answer: the gap lock cannot be omitted; a gap lock is placed on the gap before the first record that does not satisfy the condition, to prevent new matching records from being inserted.

Both of those locking strategies are correct, and they correspond to two situations: 1) the record satisfying the query condition exists in the unique index and is valid; 2) no record satisfying the condition exists in the unique index. But there is actually a third situation: 3) a record satisfying the condition exists in the unique index but is no longer valid. As is well known, deleting a record in InnoDB does not physically remove it; the record is merely marked as deleted. (Note: such delete-marked records are later reclaimed and physically removed by the background purge operation, but they remain in the index for some time.) At the RR isolation level, when the unique index contains a record that satisfies the query but is delete-marked, how is it locked? Here InnoDB's strategy differs from the first two, or rather combines them: for a matching but delete-marked record, InnoDB places a next-key X lock on it (an X lock on the record itself plus a lock on the gap before it, preventing new records from being inserted). A unique-index lookup therefore has three cases, each with its own lock strategy, summarized as follows:

the matching record is found and is valid: an X record lock without gap is placed on it (lock_mode X locks rec but not gap);

the matching record is found but is not valid (it is delete-marked): a next-key lock is placed on it, locking the record itself and the gap before it (lock_mode X);

no matching record is found: a gap lock is placed on the gap before the first record that does not satisfy the condition (locks gap before rec).

Does the next-key lock here look familiar? Yes: the lock that transaction 2 is waiting for in the deadlock above is exactly a next-key lock. Once these three lock strategies are understood, the ingredients for the deadlock are almost all in place; it only remains to construct the right concurrent schedule. But first one more prerequisite must be introduced: InnoDB's internal deadlock prevention strategy.
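
A two-session sketch of case 2, the delete-marked record, using the dltask table above; whether the second delete really sees the delete-marked entry depends on the background purge not having removed it yet, so treat this as illustrative rather than deterministic:

-- session 1: delete the row and commit; the uniq_a_b_c entry is only delete-marked, not yet purged
delete from dltask where a='a' and b='b' and c='c';
commit;

-- session 2, immediately afterwards (RR isolation level):
begin;
delete from dltask where a='a' and b='b' and c='c';   -- 0 rows affected, but per the analysis above
                                                      -- it takes a next-key X lock on the delete-marked entry

-- session 3: re-inserting the same key now blocks until session 2 commits,
-- because the gap covered by that next-key lock is locked
insert into dltask (a, b, c, x) values ('a', 'b', 'c', 'data');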

Deadlock Prevention Strategy

Inside the InnoDB engine (as inside any database engine) there are several kinds of locks: transaction locks (row locks, table locks), mutexes (protecting operations on internal shared variables), and rwlocks, also called latches (protecting reads and modifications of in-memory pages).

Each InnoDB page is 16 KB. Reading a page requires an S page lock (latch) on it; modifying a page requires an X page lock. In either case, while the page lock is held, the index records stored on the page cannot be modified concurrently.

Therefore, to modify a record, InnoDB internally proceeds as follows:

locate, from the given query condition, the page containing the record;

take an X page lock (rwlock) on that page, then search the page for the record satisfying the condition;

while holding the page lock, place a transaction lock (row lock) on the matching record, choosing one of the three lock strategies above according to whether the record satisfies the query condition and whether it has been delete-marked.

Deadlock prevention strategy: compared with transaction locks, a page lock is held only briefly, while transaction locks (row locks, table locks) are held for a long time. To prevent deadlocks between page locks and transaction locks, InnoDB follows this rule: while holding a transaction lock (row lock, table lock), a thread may wait for a page lock; but while holding a page lock, it must not wait for a transaction lock.

Following this rule, when a row lock requested while holding a page lock has to wait, InnoDB releases the page lock and then waits for the row lock. At that point the row lock is acquired without page-lock protection, so by the time it is granted the record may already have been modified concurrently. InnoDB therefore re-acquires the page lock, re-checks the state of the record, and locks the record again under the protection of the page lock. If the record has not been modified concurrently, the second locking completes quickly, because a lock of the same mode is already held. But if the record has been modified concurrently, the result can be exactly the deadlock described at the beginning of this post.

The function implementing this deadlock prevention logic in InnoDB is row0sel.c::row_search_for_mysql(). Interested readers can trace through it in a debugger; it is very complex, but it is the essence of InnoDB.

Analyzing the Cause of the Deadlock

With all this groundwork laid, the three locking strategies of the delete operation and InnoDB's deadlock prevention strategy, we can return to the deadlock raised at the beginning, and the analysis becomes much easier.

First, assume that dltask contains only one record: (1, 'a', 'b', 'c', 'data'). Three concurrent transactions execute the following SQL:

delete from dltask where a='a' and b='b' and c='c';

and the following interleaving occurs, producing a deadlock:

[Figure: step-by-step interleaving of the three deleting transactions that produces the deadlock]

The interleaving analyzed above fully reproduces the cause shown in the deadlock log. In fact, depending on the ordering between transaction 1's step 6 and transaction 0's steps 3/4, the deadlock log can take another form, in which the lock transaction 1 waits for is a record X lock without gap (lock_mode X locks rec but not gap waiting). This second form is what the reader who reported the case reproduced when testing with MySQL 5.6.15.

Summary

This completes the analysis of how concurrent single-record deletes through a unique index can deadlock in MySQL. The difficulty of this deadlock lies in understanding MySQL/InnoDB's lock modes, how the lock mode differs across the different situations, and InnoDB's deadlock prevention strategy for page locks versus transaction locks. Once those are understood, the analysis of the deadlock becomes clear.

Finally, the preconditions for this kind of deadlock can be summarized as follows:

the delete is an equality delete on a unique index (range deletes can also deadlock, but the scenario differs from the one analyzed here);

there are at least 3 (or more) concurrent delete operations;

the concurrent deletes may target the same record, and the record being deleted must exist;

the transaction isolation level is Repeatable Read and the innodb_locks_unsafe_for_binlog parameter is not set (it defaults to false); at the Read Committed isolation level, gap locks and next-key locks are not taken, so this deadlock does not occur;

the InnoDB storage engine is used (obviously: the MyISAM engine has no row locks at all).
