MySQL locking mechanisms: a summary

Tags: lock, queue

Author: jin
Date: 2014-05-06

Main references:
MySQL Performance Tuning and Architecture Design, Chapter 7: MySQL locking mechanisms
http://www.cnblogs.com/ggjucheng/archive/2012/11/14/2770445.html
Understanding MySQL: architecture and concepts
http://www.cnblogs.com/yequan/archive/2009/12/24/1631703.html

I. MySQL lock types and their applications
1. Lock types and the engines that use them
Compared with other databases, MySQL's locking mechanism is relatively simple. Its most distinctive feature is that different storage engines support different locking mechanisms. Broadly speaking,
MySQL's storage engines use three kinds of locking: row-level locks, page-level locks, and table-level locks. MyISAM mainly uses table-level locks,
InnoDB mainly uses row-level locks, and BDB uses page-level locks.
① Table-level lock: low overhead, fast to acquire, no deadlocks; large locking granularity, the highest probability of lock conflicts, and the lowest concurrency. Used by MyISAM.
② Row-level lock: high overhead, slow to acquire, deadlocks can occur; the smallest locking granularity, the lowest probability of lock conflicts, and the highest concurrency. Used by InnoDB and NDB.
③ Page-level lock: overhead and locking time between table locks and row locks; deadlocks can occur; locking granularity between table locks and row locks, with moderate concurrency. Used by BDB.

Each type of lock suits particular application scenarios: table-level locks are better for query-heavy applications with few writes (such as Web applications), while row-level locks are better for OLTP systems with many concurrent updates.

Online transaction processing (OLTP) and online analytical processing (OLAP)
Today's data processing can be broadly divided into two categories: online transaction processing (OLTP) and online analytical processing (OLAP).
OLAP is the main application of data warehouse systems; it supports complex analytical operations, focuses on decision support, and provides intuitive, easy-to-understand query results.
OLTP is the main application of traditional relational databases, covering basic, day-to-day transaction processing such as bank transactions.
An OLTP system, also called a transaction-oriented processing system, is characterized by the fact that a customer's raw data can be sent immediately to the computing center for processing, with the result returned within a very short time.
OLTP systems are designed to handle many transactions entered concurrently; they have high real-time requirements, the data volumes involved are not very large, and the transactions are generally deterministic, so OLTP is deterministic data access.

II. Table-level locks
(i) Mechanism
Suited to Web applications (read-heavy, write-light).
The locking mechanism used by the MyISAM storage engine is implemented entirely with the table-level locks provided by MySQL.
There are two main kinds of table-level lock in MySQL: write locks and read locks (read-only).
For write locks, MySQL applies the following rules:
* If there is no lock on the table, put a write lock on it.
* Otherwise, put the lock request in the write-lock queue.
For read locks, MySQL applies the following rules:
* If there is no write lock on the table, put a read lock on it.
* Otherwise, put the lock request in the read-lock queue.
When a lock is released, it is made available first to the threads in the write-lock queue, and then to the threads in the read-lock queue. This means that if a table has many updates, SELECT statements wait until there are no more updates.

In MySQL, these locks are maintained mainly through four queues: two hold information about currently granted read and write locks, and the other two hold information about pending read and write lock requests, as follows:
Current read-lock queue (lock->read)
Pending read-lock queue (lock->read_wait)
Current write-lock queue (lock->write)
Pending write-lock queue (lock->write_wait)

When a client requests a write lock, MySQL first checks whether the current write-lock queue already holds a lock on the same resource. If it does not, MySQL checks the pending write-lock queue;
if the resource is found there, the new request also has to enter the pending queue. If it is not found in the pending write-lock queue either, MySQL checks the current read-lock queue;
if a lock on the resource is present there, the request likewise enters the pending write-lock queue.
If a write lock on the same resource is found in the current write-lock queue at the outset, the request goes straight into the pending write-lock queue.

The priority between read requests and the write-lock requests in the pending write-lock queue is determined mainly by the following rules:
1. Apart from READ_HIGH_PRIORITY read locks, a WRITE lock in the pending write-lock queue can block all other read locks;
2. A READ_HIGH_PRIORITY read-lock request can block all write locks in the pending write-lock queue;
3. Apart from the WRITE lock, any other write lock in the pending write-lock queue has a lower priority than read locks.
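
A minimal two-session sketch of table-level write locking in action, assuming a hypothetical MyISAM table t:
-- Session 1: acquire a table-level write lock on the hypothetical table t
LOCK TABLES t WRITE;
-- Session 2: this read request goes into the read-lock queue and waits
SELECT COUNT(*) FROM t;
-- Session 1: release the lock; session 2's SELECT now completes
UNLOCK TABLES;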

Table-level locks are more advantageous than row-level locks in the following cases:
1. Most operations on the table are reads.
2. Reads and updates mixed on a single key, where the update or delete can fetch the row with one key lookup:
   UPDATE tbl_name SET column=value WHERE unique_key_col=key_value;
   DELETE FROM tbl_name WHERE unique_key_col=key_value;
3. SELECT statements combined with concurrent INSERT statements, with very few UPDATE and DELETE statements.
4. Many scans or GROUP BY operations over the whole table, with no writers.

(ii) Viewing system lock status
You can analyze table lock contention on the system by examining the Table_locks_waited and Table_locks_immediate status variables:
mysql> SHOW GLOBAL STATUS LIKE 'table%';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Table_locks_immediate | 34    |
| Table_locks_waited    | 0     |
+-----------------------+-------+
2 rows in set (0.00 sec)
Table_locks_immediate is the number of times a table lock request was granted immediately.
Table_locks_waited is the number of times a table lock request had to wait.
The MySQL manual describes Table_locks_waited as:
"The number of times that a request for a table lock could not be granted immediately and a wait was needed.
If this is high and you have performance problems, you should first optimize your queries, and then either split your table or tables or use replication."
We mainly care about the value of Table_locks_waited:
if it is high, there is serious table-level lock contention, and the application needs further review to determine where the problem lies.


III. The InnoDB lock model
InnoDB uses row-level locks while MyISAM uses table-level locks, so InnoDB performs better for applications with heavy concurrent writes.
InnoDB has two kinds of row-level lock:
(1) Shared lock (S): allows a transaction to read a row, and prevents other transactions from obtaining an exclusive lock on the same data set.
(2) Exclusive lock (X): allows the transaction holding it to update data, and prevents other transactions from obtaining shared read locks or exclusive write locks on the same data set.
In addition, InnoDB supports multiple granularity locking, allowing row locks and table locks to coexist. To this end, InnoDB introduces intention locks, which are table-level:
(1) Intention shared lock (IS): the transaction intends to set shared locks on individual rows; it must obtain the IS lock on the table before setting an S lock on a row.
(2) Intention exclusive lock (IX): the transaction intends to set exclusive locks on individual rows; it must obtain the IX lock on the table before setting an X lock on a row.
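
A minimal two-session sketch of the S and X row lock modes, assuming a hypothetical InnoDB table t with primary key id (the IS/IX intention locks on the table are taken implicitly):
-- Session 1: take a shared (S) lock on one row; InnoDB also sets an IS lock on the table
START TRANSACTION;
SELECT * FROM t WHERE id = 1 LOCK IN SHARE MODE;
-- Session 2: requesting an exclusive (X) lock on the same row (preceded by an IX lock on the table)
-- blocks until session 1 commits or rolls back
START TRANSACTION;
SELECT * FROM t WHERE id = 1 FOR UPDATE;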

MariaDB:
MariaDB> SHOW STATUS LIKE 'innodb%lock%';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| Innodb_deadlocks              | 0     |
| Innodb_row_lock_current_waits | 0     |
| Innodb_current_row_locks      | 0     |
| Innodb_row_lock_time          | 0     |
| Innodb_row_lock_time_avg      | 0     |
| Innodb_row_lock_time_max      | 0     |
| Innodb_row_lock_waits         | 0     |
| Innodb_s_lock_os_waits        | 2     |
| Innodb_s_lock_spin_rounds     | 60    |
| Innodb_s_lock_spin_waits      | 2     |
| Innodb_x_lock_os_waits        | 0     |
| Innodb_x_lock_spin_rounds     | 0     |
| Innodb_x_lock_spin_waits      | 0     |
+-------------------------------+-------+
13 rows in set (0.00 sec)
Production server running MySQL 5.1:
mysql> SHOW STATUS LIKE 'innodb%lock%';
+-------------------------------+----------+
| Variable_name                 | Value    |
+-------------------------------+----------+
| Innodb_row_lock_current_waits | 0        |
| Innodb_row_lock_time          | 15086472 |
| Innodb_row_lock_time_avg      | 493      |
| Innodb_row_lock_time_max      | 51941    |
| Innodb_row_lock_waits         | 30562    |
+-------------------------------+----------+
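
As a sanity check, the average wait time can be derived from the counters above (an illustration using the sample values shown):
-- Innodb_row_lock_time / Innodb_row_lock_waits = 15086472 ms / 30562 waits ≈ 493 ms,
-- which matches Innodb_row_lock_time_avg above
SELECT 15086472 / 30562 AS avg_row_lock_wait_ms;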

IV. When InnoDB uses table locks
http://hi.baidu.com/bubu600/item/1528fe50b0810edcd48bac1d
http://blog.csdn.net/xiao7ng/article/details/5034013
For InnoDB tables, row-level locks should be used in most cases, because transactions and row locks are usually the very reason we choose InnoDB. However, table-level locks can be considered for a few special transactions.
The first case is a transaction that needs to update most or all of a large table. With the default row locks, not only is the transaction inefficient, it may also make other transactions wait a long time and produce lock conflicts; in this situation, a table
lock can be considered to speed up the transaction.
The second case is a transaction that involves multiple tables and is complex enough that it is likely to cause deadlocks and force many transactions to be rolled back. Here it can make sense to lock all the tables involved in the transaction at once, thereby avoiding deadlocks and reducing the cost the database pays for transaction rollbacks.
Of course, there should not be too many of these two kinds of transactions in the application; otherwise, MyISAM tables should be considered instead.
Under InnoDB, note the following two points when using table locks.
(1) Although LOCK TABLES can place a table-level lock on an InnoDB table, the table lock is not managed by the InnoDB storage engine layer but by the layer above it, the MySQL Server.
Only when autocommit=0 and innodb_table_locks=1 (the default) does the InnoDB layer know about table locks added by MySQL, and does MySQL Server notice the row locks added by InnoDB; in that case InnoDB can automatically detect deadlocks involving table-level locks.
Otherwise, InnoDB cannot automatically detect and handle such deadlocks.
(2) When using LOCK TABLES on InnoDB tables, note that autocommit must be set to 0, otherwise MySQL will not actually lock the tables; and do not call UNLOCK TABLES before the end of the transaction, because UNLOCK TABLES implicitly commits the transaction.
COMMIT or ROLLBACK does not release table-level locks added with LOCK TABLES; they must be released with UNLOCK TABLES. The correct pattern is shown below.
For example, if you need to write to table t1 and read from table t2, you can do the following:
SET AUTOCOMMIT=0;
LOCK TABLES t1 WRITE, t2 READ, ...;
[do something with tables t1 and t2 here];
COMMIT;
UNLOCK TABLES;


V. Deadlocks
MyISAM table locks are deadlock-free, because MyISAM always obtains all the locks it needs at once: either all of them are granted, or it waits; so deadlocks cannot occur. In InnoDB, however, except for transactions consisting of a single SQL statement, locks are acquired gradually, which
makes deadlocks possible.
After a deadlock occurs, InnoDB can usually detect it automatically; it rolls one transaction back and releases its locks, so that the other transaction obtains the locks and continues to completion. However, in cases involving external locks or table locks, InnoDB cannot detect the deadlock automatically, and this
has to be resolved by setting the lock wait timeout parameter innodb_lock_wait_timeout. Note that this parameter is not only for solving deadlocks: under high concurrency, if a large number of transactions hang because the locks they need cannot be granted immediately,
they consume a lot of resources and cause serious performance problems, possibly dragging down the whole database. Setting an appropriate lock wait timeout threshold can prevent this from happening.
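
For reference, a sketch of checking the wait timeout and, on versions where the variable is dynamic, adjusting it per session (in older built-in InnoDB releases it can only be changed in my.cnf; 50 is just an example value):
-- Check the current lock wait timeout (in seconds)
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';
-- Adjust it for the current session where the variable is dynamic
SET SESSION innodb_lock_wait_timeout = 50;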
In general, deadlocks are an application design problem, and most of them can be avoided by adjusting the business flow, the database object design, the transaction size, and the SQL statements that access the database. The following introduces several common ways to avoid
deadlocks.
(1) In the application, if different programs access multiple tables concurrently, agree to access the tables in the same order; this greatly reduces the chance of deadlock. In the sketch following this list, because the two sessions access the two tables in opposite order, the
chance of a deadlock is very high; if they accessed the tables in the same order, the deadlock would be avoided.
(2) When a program processes data in batches, sorting the data beforehand so that each thread processes records in a fixed order greatly reduces the possibility of deadlock.
(3) In a transaction, if you are going to update a record, request a sufficient lock level up front, that is, an exclusive lock, rather than first requesting a shared lock and upgrading it later; when the transaction asks for the exclusive lock, other transactions may already hold a shared
lock on the same record, causing lock conflicts or even deadlocks.
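
An illustrative sketch of point (1), using two hypothetical InnoDB tables a and b (each with columns id and col) accessed in opposite order by two sessions:
-- Session 1:
START TRANSACTION;
UPDATE a SET col = col + 1 WHERE id = 1;   -- locks the row in table a
-- Session 2:
START TRANSACTION;
UPDATE b SET col = col + 1 WHERE id = 1;   -- locks the row in table b
-- Session 1 (waits for session 2's lock on b):
UPDATE b SET col = col + 1 WHERE id = 1;
-- Session 2 (waits for session 1's lock on a; InnoDB detects the cycle
-- and rolls one of the two transactions back with ERROR 1213):
UPDATE a SET col = col + 1 WHERE id = 1;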
Although careful design and SQL optimization can greatly reduce deadlocks, they are difficult to avoid completely. Therefore, always catching and handling deadlock exceptions is a good programming habit.
If a deadlock occurs, the SHOW ENGINE INNODB STATUS command can be used to determine the cause of the most recent deadlock. The output includes details of the transactions involved, such as the SQL statements that caused the deadlock, the locks each transaction already held, the locks it was waiting for,
and which transaction was rolled back. The cause of the deadlock and the corresponding improvements can then be analyzed.
The above is mainly excerpted from section 20.3, "InnoDB lock problems", of the book "MySQL: Database Development, Optimization, and Management/Maintenance".

Deadlock problems are typically discovered by checking the engine status with SHOW ENGINE INNODB STATUS or by reviewing the slow query log.
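
The command itself; the "LATEST DETECTED DEADLOCK" section of its output shows the statements and locks involved in the most recent deadlock:
mysql> SHOW ENGINE INNODB STATUS\G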

Deadlock troubleshooting cases
Fault case: changing the storage engine from MyISAM to InnoDB: http://hi.baidu.com/dmkj2008/item/ab239196528fa8b9cc80e554
Improving statements to resolve deadlocks under the InnoDB engine:
http://blog.csdn.net/zhangzhao100/article/details/7981755

VI. Lock operations
LOCK TABLES
    tbl_name [AS alias] {READ [LOCAL] | [LOW_PRIORITY] WRITE}
    [, tbl_name [AS alias] {READ [LOCAL] | [LOW_PRIORITY] WRITE}] ...
UNLOCK TABLES
Single-table read-only lock
mysql> LOCK TABLES tbl_ftpuser READ;
mysql> UNLOCK TABLES;

Global read-only lock
mysql> FLUSH TABLES WITH READ LOCK;
mysql> UNLOCK TABLES;

VII. Optimization
(i) MyISAM table lock optimization recommendations
When optimizing MyISAM locking problems, the most important question is how to increase concurrency.
1. Shorten the lock time
Reduce large, complex queries; split complex queries into several smaller ones executed in steps;
Build sufficiently efficient indexes so that data retrieval is faster;
Keep MyISAM tables holding only the necessary information and control field types;
Optimize MyISAM table data files at appropriate times.
2. Separate operations that can run in parallel
The MyISAM storage engine has a parameter, concurrent_insert, that controls whether concurrent inserts are enabled. It can be set to 0, 1, or 2, as follows:
concurrent_insert=2: concurrent inserts at the tail of the data file are allowed regardless of whether the middle of the file contains free space left by deleted rows;
concurrent_insert=1: concurrent inserts at the tail of the data file are allowed only when the middle of the file contains no free space;
concurrent_insert=0: concurrent inserts are not allowed, regardless of whether the middle of the data file contains free space left by deleted rows.
3. Use read/write priorities sensibly
As the lock analysis earlier in this chapter showed, MySQL's table-level locking gives reads and writes different priorities; by default the write priority is higher than the read priority. We can therefore decide the relative priority according to the system environment.
If the system is read-oriented and query performance matters most, we can set the system parameter low_priority_updates=1, lowering the priority of writes below that of reads and telling
MySQL to try to serve read requests first. Conversely, if the system must guarantee write performance first, do not set the low_priority_updates parameter.
We can also take full advantage of concurrent inserts by setting concurrent_insert to 1; if data is rarely deleted and temporarily wasting a small amount of space is acceptable, setting concurrent_insert
to 2 is worth trying. Bear in mind that holes in the middle of the data file waste space and force queries to read more data, so if deletes are not rare, setting
concurrent_insert to 1 is more appropriate.
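
A small sketch of checking and adjusting the two options discussed above; both are dynamic global variables in recent versions (add them to my.cnf to make the settings persistent), and the values below are only examples:
-- Check the current settings
SHOW GLOBAL VARIABLES LIKE 'concurrent_insert';
SHOW GLOBAL VARIABLES LIKE 'low_priority_updates';
-- Allow concurrent inserts at the tail of the data file even when it contains holes
SET GLOBAL concurrent_insert = 2;
-- On a read-mostly system, give reads priority over writes
SET GLOBAL low_priority_updates = 1;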


(ii) InnoDB row lock optimization recommendations
Make all data retrieval go through indexes whenever possible, to avoid InnoDB locking far more rows than necessary (effectively the whole table) when it cannot lock through an index key;
Design indexes sensibly so that InnoDB locks index keys as precisely as possible, keeping the lock range small and avoiding unnecessary locks that affect other queries;
Minimize range-based retrieval conditions to avoid locking records that should not be locked because of gap locks;
Control transaction size to reduce the amount of resources locked and the time they are held;
Where the business environment allows it, use a lower transaction isolation level to reduce the extra cost MySQL incurs to implement the isolation level (a session-level example follows this list).
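
For the last point, a minimal session-level example (READ COMMITTED instead of the default REPEATABLE READ, only where the business logic tolerates it):
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;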
