MySQL Locking Mechanisms
# I. Overview
> Simply put, a database's locking mechanism exists to keep data consistent while letting concurrent access to shared resources proceed in an orderly way. Every database needs some locking mechanism, and MySQL is no exception. Because of its pluggable architecture, MySQL hosts a variety of storage engines, each targeting different application scenarios; each engine's locking mechanism is optimized for its own specific scenarios, so the mechanisms differ considerably from engine to engine. Overall, MySQL's storage engines use three types (levels) of locking: table-level locking, row-level locking, and page-level locking.
## Table-level locking (Table-level)
> Table-level locking is the coarsest-grained locking mechanism among MySQL's storage engines. Its defining features are very simple implementation logic and minimal system overhead, so acquiring and releasing locks is fast. And because a table-level lock covers the whole table at once, it neatly avoids the deadlocks that otherwise plague us.
Of course, the biggest downside of coarse lock granularity is that contention for locked resources is at its highest, which greatly reduces concurrency.
Table-level locking is used mainly by non-transactional storage engines such as MyISAM, MEMORY, and CSV.
## Row-level locking (Row-level)
> The defining feature of row-level locking is that the locked object is very small: it is the finest lock granularity implemented by the major database systems. Because the granularity is small, the probability of contention for any locked resource is minimal, which gives the application the greatest possible concurrency and improves overall performance for highly concurrent workloads.
While row-level locking has clear advantages for concurrent processing, it also brings drawbacks. Because the locked resources are fine-grained, more work has to be done each time a lock is acquired and released, so the overhead is naturally higher. In addition, row-level locking is the most prone to deadlocks.
Row-level locking is used mainly by the InnoDB storage engine.
## Page-level locking (Page-level)
> Page-level locking is one of MySQL's more unusual locking levels and is not common in other database systems. Its granularity sits between row-level and table-level locking, so both the resource overhead of acquiring a lock and the achievable concurrency are also in between. Like row-level locking, page-level locking can deadlock.
As lock granularity decreases, the memory needed to lock the same amount of data grows and the implementation algorithms become more complex; on the other hand, the likelihood that an access request hits a lock wait decreases, and overall system concurrency rises.
Page-level locking is used mainly by the BerkeleyDB storage engine.
## Summary
> In general, the characteristics of MySQL's three lock types can be broadly summarized as follows:
Table-level locks: low overhead, fast locking, no deadlocks; coarsest granularity, highest probability of lock conflicts, lowest concurrency.
Row-level locks: high overhead, slow locking, deadlocks possible; finest granularity, lowest probability of lock conflicts, highest concurrency.
Page locks: overhead and locking speed between table and row locks; deadlocks possible; granularity between table and row locks; moderate concurrency.
Application: from a locking standpoint, table-level locks suit workloads that are mostly queries, with only a small amount of data updated by index conditions, such as Web applications; row-level locks suit workloads with many concurrent updates of small amounts of different data by index conditions, plus concurrent queries, such as online transaction processing (OLTP) systems.
# II. Table-level locking (MyISAM)
> Because the MyISAM storage engine's locking is implemented entirely with the table-level locks MySQL provides, we will use MyISAM as the sample storage engine.
## MySQL table-level lock modes
> MySQL's table-level locks have two modes: table shared read locks and table exclusive write locks. Their compatibility works as follows:
A read operation on a MyISAM table does not block other sessions' read requests on the same table, but it blocks write requests to that table.
A write operation on a MyISAM table blocks other sessions' reads and writes on the same table.
Reads and writes on a MyISAM table are serialized with respect to writes: once a thread obtains a table's write lock, only the thread holding the lock can update the table; other threads' reads and writes wait until the lock is released.
## How table locks are acquired
> MyISAM automatically takes read locks on all tables involved before executing a query (SELECT), and automatically takes write locks on the tables involved before executing an update operation (UPDATE, DELETE, INSERT, and so on). No user intervention is required, so users generally do not need to lock MyISAM tables explicitly with the LOCK TABLES command.
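As a small sketch of the two locking paths described above (the `orders` table and its columns are hypothetical examples, not from the original text):

```sql
-- Implicit locking: MyISAM takes these table locks automatically per statement.
SELECT * FROM orders WHERE id = 1;               -- whole-table READ lock for the duration
UPDATE orders SET status = 'paid' WHERE id = 1;  -- whole-table WRITE lock for the duration

-- Explicit locking (rarely needed in practice):
LOCK TABLES orders READ;     -- other sessions can read, but not write, orders
SELECT COUNT(*) FROM orders;
UNLOCK TABLES;               -- release the explicit lock
```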
## MyISAM table lock optimization recommendations
> For the MyISAM storage engine, table-level locking is cheaper to implement than row-level or page-level locking, and the lock itself consumes the fewest resources. However, because of the coarse granularity, contention for locked resources is higher than at other locking levels, which can substantially reduce concurrency. Therefore, when optimizing MyISAM locking, the key question is how to increase concurrency. Since the locking level cannot be changed, we first need to keep lock durations as short as possible, and then let operations that can run concurrently actually do so.
### Querying table-level lock contention
> MySQL keeps two special status variables that record internal lock resource contention:
```sql
mysql> SHOW STATUS LIKE 'table%';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Table_locks_immediate | 100   |
| Table_locks_waited    | 11    |
+-----------------------+-------+
```
> These two status variables record MySQL's internal table-level locking; they are:
Table_locks_immediate: the number of times a table-level lock was granted immediately;
Table_locks_waited: the number of times a wait was needed because of table-level lock contention.
Both values accumulate from system startup, incrementing by 1 each time the corresponding event occurs. If the Table_locks_waited value is relatively high, table-level lock contention in the system is serious, and you need to analyze further why there is so much contention for lock resources.
### Shortening lock time
> How do we keep lock times as short as possible? The only way is to make our queries execute as quickly as possible.
a) Reduce complex queries: split a large, complex query into several smaller ones and run them in stages;
b) Build indexes as efficiently as possible so that data can be retrieved faster;
c) Keep MyISAM tables holding only the necessary information, and control field types;
d) Optimize MyISAM table data files at appropriate times.
### Separating operations that can run in parallel
> Given that MyISAM's table locks block reads against writes, one might think a MyISAM table can only be accessed fully serially, with no parallelism at all. But don't forget that the MyISAM storage engine has a very useful feature: Concurrent Insert.
MyISAM has a parameter, concurrent_insert, that controls whether the concurrent-insert feature is enabled; it can be set to 0, 1, or 2. The three values mean the following:
concurrent_insert=2: concurrent inserts at the end of the table are allowed regardless of whether the MyISAM table has holes;
concurrent_insert=1: if the MyISAM table has no holes (that is, no deleted rows in the middle of the table), MyISAM allows one session to read the table while another inserts records at its end. This is MySQL's default;
concurrent_insert=0: concurrent inserts are not allowed.
You can use MyISAM's concurrent-insert feature to resolve lock contention between queries and inserts on the same table in your application. For example, set the concurrent_insert system variable to 2 so inserts are always allowed concurrently, and periodically run the OPTIMIZE TABLE statement during the system's idle time to defragment the table and reclaim the holes left by deleted records.
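A minimal sketch of that configuration (the table name `emp` is borrowed from the later example; when to run the maintenance step is up to the operator):

```sql
-- Always allow concurrent inserts at the end of the table, even with holes.
SET GLOBAL concurrent_insert = 2;

-- Periodically, during idle hours, defragment the table and reclaim the
-- holes left in the middle by deleted records:
OPTIMIZE TABLE emp;
```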
### Using read/write priorities sensibly
> MyISAM blocks reads against writes. So when one session requests a read lock on a MyISAM table and another session requests a write lock on the same table, how does MySQL handle it?
The answer is that the writer gets the lock first. Moreover, even if the read request reaches the lock wait queue first and the write request arrives later, the write lock is inserted ahead of the read lock request.
This is because MySQL's table-level locking gives reads and writes different priorities; by default, write priority is higher than read priority.
So, if we can determine the relative priority of reads and writes based on each system's characteristics:
Execute SET LOW_PRIORITY_UPDATES=1 to make this connection's reads take priority over its writes. If the system is read-heavy, set this parameter; if it is write-heavy, do not;
Lower a specific statement's priority by adding the LOW_PRIORITY attribute to an INSERT, UPDATE, or DELETE statement.
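The two adjustment points above might be sketched as follows (the statements and values are illustrative):

```sql
-- Make this connection's reads take priority over its writes
-- (suitable for read-heavy systems):
SET LOW_PRIORITY_UPDATES = 1;

-- Or lower the priority of a single statement only:
INSERT LOW_PRIORITY INTO emp (empid) VALUES (102);
UPDATE LOW_PRIORITY emp SET empid = 103 WHERE empid = 102;
```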
Although the two methods above only choose between update-first and query-first, they can still be used to fix serious read-lock waits in applications where queries are relatively important (such as user login systems).
In addition, MySQL provides a compromise for tuning read/write conflicts: set the system parameter max_write_lock_count to a suitable value. When read locks on a table reach that count, MySQL temporarily lowers write-request priority, giving readers a necessary chance to obtain locks.
It is also important to emphasize that long-running query operations can "starve" the writing processes, so applications should try to avoid long-running queries. Don't always try to solve a problem with a single SELECT statement: such seemingly clever SQL is often complex and slow to execute. If possible, "decompose" the SQL with measures such as an intermediate table, so that each step of the query completes quickly and lock conflicts are reduced. If complex queries are unavoidable, try to schedule them during the database's idle time; for example, some periodic statistics can be scheduled to run at night.
# III. Row-level locking (InnoDB)
> Row-level locking is not MySQL's own locking method; it is implemented by the individual storage engines, such as the well-known InnoDB and MySQL's distributed storage engine NDB Cluster, both of which implement row-level locking. Since row-level locking is engine-specific and implementations differ, and since InnoDB is the most widely used transactional storage engine today, we will focus on analyzing InnoDB's locking characteristics here.
## InnoDB lock modes and implementation
> In general, InnoDB's locking mechanism has much in common with Oracle's. InnoDB row-level locks come in two kinds, shared locks and exclusive locks; and to let row-level and table-level locks coexist, InnoDB also uses the concept of intention locks (table-level), of which there are two: intention shared locks and intention exclusive locks.
When a transaction needs to lock a resource: if the resource is already held under another transaction's shared lock, the transaction can add its own shared lock, but not an exclusive lock; if the resource is already held under an exclusive lock, the transaction can only wait for that lock to be released before acquiring the resource and adding its own lock. Intention locks work as follows: before a transaction locks a row (or rows), it first places the appropriate intention lock on the row's table. If it needs a shared lock on the row, it adds an intention shared lock on the table; if it needs an exclusive lock on the row, it first adds an intention exclusive lock on the table. Intention locks do not conflict with one another, so multiple transactions can hold intention locks on the same table at the same time. InnoDB's lock modes can therefore be divided into four kinds: shared lock (S), exclusive lock (X), intention shared lock (IS), and intention exclusive lock (IX). Their coexistence logic is summarized in the following table:
|  | Shared Lock (S) | Exclusive lock (X) | Intent shared Lock (IS) | Intent exclusive Lock (IX) |
| :-: | :-: | :-: | :-: | :-: |
| Shared Lock (S) | Compatible | Conflict | Compatible | Conflict |
| Exclusive lock (X) | Conflict | Conflict | Conflict | Conflict |
| Intent shared Lock (IS) | Compatible | Conflict | Compatible | Compatible |
| Intent exclusive Lock (IX) | Conflict | Conflict | Compatible | Compatible |
> If the lock mode a transaction requests is compatible with the current lock, InnoDB grants the requested lock to the transaction; if the two are incompatible, the transaction waits for the lock to be released.
Intention locks are added by InnoDB automatically and need no user intervention. For UPDATE, DELETE, and INSERT statements, InnoDB automatically adds exclusive locks (X) on the rows involved; for ordinary SELECT statements, InnoDB adds no locks at all. A transaction can explicitly add shared or exclusive locks to a record set with the following statements:
```sql
Shared lock (S):    SELECT * FROM table_name WHERE ... LOCK IN SHARE MODE;
Exclusive lock (X): SELECT * FROM table_name WHERE ... FOR UPDATE;
```
> SELECT ... LOCK IN SHARE MODE takes a shared lock; it is mainly used, when one piece of data depends on another, to confirm that a row exists and to make sure no one else is performing an UPDATE or DELETE on it.
However, if the current transaction then needs to update that record itself, this can very easily cause a deadlock; applications that need to update a row after locking it should instead acquire an exclusive lock with SELECT ... FOR UPDATE.
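The deadlock risk described above can be sketched as a two-session sequence (the `emp` table is from the later example; the column `ename` is a hypothetical addition):

```sql
-- Session A:
BEGIN;
SELECT * FROM emp WHERE empid = 1 LOCK IN SHARE MODE;  -- A holds an S lock on the row
-- Session B:
BEGIN;
SELECT * FROM emp WHERE empid = 1 LOCK IN SHARE MODE;  -- S locks are compatible: succeeds
-- Session A:
UPDATE emp SET ename = 'x' WHERE empid = 1;  -- needs an X lock: blocks on B's S lock
-- Session B:
UPDATE emp SET ename = 'y' WHERE empid = 1;  -- needs an X lock: deadlock; InnoDB rolls one back
-- Using SELECT ... FOR UPDATE in both sessions from the start avoids this pattern:
-- the second session would simply wait instead of deadlocking.
```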
## How InnoDB row locks are implemented
> InnoDB row locks are implemented by locking index entries. InnoDB uses row-level locks only when data is retrieved through an index condition; otherwise, InnoDB uses table locks.
In practice it is very important to pay attention to this characteristic of InnoDB row locks; otherwise you may cause many lock conflicts and hurt concurrency. Some practical points illustrate this:
(1) When a query has no index condition to use, InnoDB really does use a table lock instead of row locks.
(2) Because MySQL row locks are taken on index entries rather than on records, sessions accessing different rows will still conflict if they use the same index key.
(3) When a table has multiple indexes, different transactions can lock different rows via different indexes; InnoDB uses row locks whether the index used is a primary key, a unique index, or an ordinary index.
(4) Even if an indexed column appears in the condition, whether the index is actually used to retrieve the data is decided by MySQL by costing the different execution plans. If MySQL decides a full table scan is more efficient, as it may for some small tables, it will not use the index, and InnoDB will use a table lock rather than row locks. So when analyzing lock conflicts, don't forget to check the SQL execution plan to verify that the index is really used.
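Points (1) and (4) can be sketched as a two-session experiment (the table `tab_no_index` is a hypothetical example):

```sql
CREATE TABLE tab_no_index (id INT, name VARCHAR(10)) ENGINE = InnoDB;
INSERT INTO tab_no_index VALUES (1, 'a'), (2, 'b');

-- Session A:
BEGIN;
SELECT * FROM tab_no_index WHERE id = 1 FOR UPDATE;  -- no index on id: every row is locked
-- Session B:
SELECT * FROM tab_no_index WHERE id = 2 FOR UPDATE;  -- blocks, although it is a different row

-- After adding an index, the same two statements lock different rows
-- and no longer conflict:
ALTER TABLE tab_no_index ADD INDEX idx_id (id);
```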
## Gap locks (Next-Key locks)
> When we retrieve data with a range condition rather than an equality condition and request shared or exclusive locks, InnoDB locks the index entries of the existing records that match the condition;
for key values that fall within the range but do not exist, called "gaps", InnoDB locks those too. This is the gap lock (combined with the record lock, a Next-Key lock).
Example:
Suppose the emp table has only 101 records, with empid values 1, 2, ..., 100, 101. Now run the following SQL:
```sql
mysql> SELECT * FROM emp WHERE empid > 100 FOR UPDATE;
```
> This is a range retrieval. InnoDB not only locks the record that matches the condition (empid = 101); it also locks the "gap" above empid 100, even though those records do not exist.
InnoDB uses gap locks for two purposes:
(1) To prevent phantom reads, as required by the relevant isolation levels. In the example above, without the gap lock, another transaction could insert any record with empid greater than 100; if this transaction then re-executed the statement above, a phantom read would occur;
(2) To meet the needs of recovery and replication.
Clearly, when range conditions are used to retrieve and lock records, even key values that do not exist are locked "innocently": while the lock is held, nothing can be inserted anywhere within the locked key range. In some scenarios this can seriously hurt performance.
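Continuing the emp example, the blocking effect of the gap lock can be sketched in two sessions:

```sql
-- Session A: take the range locks from the statement above.
BEGIN;
SELECT * FROM emp WHERE empid > 100 FOR UPDATE;  -- locks empid = 101 and the gap above 100
-- Session B: try to insert into the locked gap.
INSERT INTO emp (empid) VALUES (102);            -- blocks until session A commits or rolls back
```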
In addition to the negative performance impact of gap locks, InnoDB's index-based locking implementation has several other performance pitfalls:
(1) When a query cannot use an index, InnoDB abandons row-level locking in favor of table-level locking, lowering concurrency;
(2) When the index a query uses does not contain all of its filter conditions, some index keys used during retrieval may point to rows that are not in the query's result set, yet they are locked anyway, because gap locks lock a range rather than a specific index key;
(3) When a query uses an index to locate data, sessions using the same index key but accessing different data rows still lock each other out (when the index covers only part of the filter conditions).
Therefore, in real application development, especially for applications with heavy concurrent inserts, we should optimize the business logic to access and update data through equality conditions wherever possible, and avoid range conditions.
It is also worth noting that besides using gap locks when locking with range conditions, InnoDB also uses a gap lock when an equality condition requests a lock on a record that does not exist.
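A brief sketch of this equality case, again using the emp example:

```sql
-- Session A: lock a record that does not exist.
BEGIN;
SELECT * FROM emp WHERE empid = 102 FOR UPDATE;  -- no such row: the surrounding gap is locked
-- Session B:
INSERT INTO emp (empid) VALUES (102);            -- blocks on session A's gap lock
```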
## Deadlocks
> As mentioned above, MyISAM table locks are deadlock-free, because MyISAM always acquires all the locks it needs at once: either all are satisfied, or it waits. In InnoDB, however, except for transactions consisting of a single SQL statement, locks are acquired progressively; when two transactions each need an exclusive lock held by the other in order to continue, a circular lock wait arises, which is the classic deadlock.
InnoDB's transaction management and locking mechanism includes deadlock detection, which can discover a deadlock shortly after it forms. When InnoDB detects that a deadlock has occurred, it chooses, by its own criteria, to roll back the smaller of the two deadlocked transactions so that the larger one can complete successfully.
What standard does InnoDB use to judge transaction size? The official MySQL manual mentions this: after discovering a deadlock, InnoDB compares the amount of data each of the two transactions has inserted, updated, or deleted. In other words, the more records a transaction has changed, the less likely it is to be rolled back in a deadlock.
Note, however, that when a deadlock scenario involves storage engines other than InnoDB, InnoDB cannot detect the deadlock; such cases can only be resolved by the lock wait timeout parameter innodb_lock_wait_timeout.
This parameter is not only for solving deadlocks. Under high concurrency, if large numbers of transactions hang because they cannot obtain the locks they need immediately, they can consume enormous resources and cause severe performance problems, even dragging down the whole database. We can prevent this by setting a suitable lock wait timeout threshold.
In general, deadlocks are an application design problem; most deadlocks can be avoided by adjusting business processes, database object design, transaction sizes, and the SQL statements that access the database. Here are some common ways to avoid deadlocks:
(1) If different programs access multiple tables concurrently, agree to access the tables in the same order; this greatly reduces the chance of deadlock.
(2) When a program processes data in batches, sorting the data beforehand so that every thread processes records in a fixed order also greatly reduces the chance of deadlock.
(3) In a transaction, if you intend to update a record, request a sufficient lock level up front, that is, an exclusive lock, rather than a shared lock first and an exclusive lock later. When the user requests the exclusive-lock upgrade, other transactions may already hold shared locks on the same record, causing lock conflicts or even deadlock.
(4) Under the REPEATABLE READ isolation level, if two threads both run SELECT ... FOR UPDATE with the same condition and no matching record exists, both threads lock successfully. Each program then finds the record does not yet exist and tries to insert a new one; if both threads do so, a deadlock occurs. In this case, changing the isolation level to READ COMMITTED avoids the problem.
(5) Under READ COMMITTED, if two threads first run SELECT ... FOR UPDATE, find no matching record, and then insert it, only one thread can insert successfully; the other waits on a lock. When the first thread commits, the second thread fails with a duplicate-key error, but despite the error it acquires an exclusive lock on the record. If a third thread then requests an exclusive lock, a deadlock appears. In this case, either perform the insert directly and catch the duplicate-key exception, or, on a duplicate-key error, always roll back to release the acquired exclusive lock.
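Scenario (4) can be sketched as the following two-session sequence on the emp example (empid 200 is an arbitrary nonexistent value):

```sql
-- Session A:
BEGIN;
SELECT * FROM emp WHERE empid = 200 FOR UPDATE;  -- no row: takes a gap lock
-- Session B:
BEGIN;
SELECT * FROM emp WHERE empid = 200 FOR UPDATE;  -- gap locks do not conflict: succeeds
-- Session A:
INSERT INTO emp (empid) VALUES (200);            -- insert intention waits on B's gap lock
-- Session B:
INSERT INTO emp (empid) VALUES (200);            -- circular wait: InnoDB reports a deadlock
```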
## When to use table locks
> For InnoDB tables, row-level locks should be used in the vast majority of cases: transactions and row locks are usually exactly why we chose InnoDB in the first place. Still, table-level locks can be considered for a few special transactions:
(1) The transaction needs to update most or all of a relatively large table. With the default row locks, the transaction is not only slow but may also cause long lock waits and lock conflicts for other transactions; in this case a table lock can speed the transaction up.
(2) The transaction involves multiple tables and is complex enough that deadlocks are likely, causing many transaction rollbacks. Locking all the tables the transaction involves up front can avoid deadlocks and reduce the database cost of transaction rollbacks.
Of course, such transactions should not be too common in the application; otherwise, consider using MyISAM tables instead.
Under InnoDB, note the following two points when using table locks:
(1) Although LOCK TABLES can add a table-level lock to an InnoDB table, the table lock is not managed by the InnoDB storage engine layer but by the layer above it, the MySQL server. Only when autocommit=0 and innodb_table_locks=1 (the default settings) does the InnoDB layer know about table locks added by MySQL, and only then can the MySQL server sense the row locks added by InnoDB; in that case InnoDB can automatically detect deadlocks involving table-level locks. Otherwise, InnoDB cannot detect or handle such deadlocks automatically.
(2) When using LOCK TABLES on InnoDB tables, set autocommit to 0, or MySQL will not actually lock the tables. Do not call UNLOCK TABLES to release table locks before the transaction ends, because UNLOCK TABLES implicitly commits the transaction; conversely, COMMIT or ROLLBACK does not release table-level locks added with LOCK TABLES, which must be released with UNLOCK TABLES. The correct pattern is shown in the following statements.
For example, if you need to write to table T1 and read from table T2, you can do this:
```
SET autocommit=0;
LOCK TABLES T1 WRITE, T2 READ, ...;
[Do something with tables T1 and T2 here];
COMMIT;
UNLOCK TABLES;
```
## InnoDB row lock optimization recommendations
> InnoDB row-level locking is costlier to implement than MyISAM table-level locking, but it is far better in overall concurrent processing capability. When system concurrency is high, InnoDB's overall performance shows a clear advantage over MyISAM's. However, InnoDB's row-level locking has its fragile side too: used improperly, it may leave InnoDB's overall performance not just no higher than MyISAM's, but possibly even worse.
(1) To use InnoDB's row-level locking well, playing to its strengths and avoiding its weaknesses, we must do the following:
a) Make all data retrieval go through an index as much as possible, so that InnoDB does not fall back to table-level locks because it cannot lock through index keys;
b) Design indexes sensibly so that InnoDB locks index keys as precisely as possible, narrowing the locking range and avoiding unnecessary locks that affect other queries;
c) Minimize range-based retrieval conditions, to avoid the gap locks' negative effect of locking records that should not be locked;
d) Control transaction size, reducing the amount of resources locked and the length of time they are held;
e) Where the business environment allows, use a lower transaction isolation level to reduce the extra cost MySQL pays for isolation.
(2) Because InnoDB combines row-level locking with transactions, deadlocks will certainly occur. Here are some common suggestions for reducing their probability:
a) For similar business modules, access resources in the same order wherever possible, to prevent deadlocks;
b) Within one transaction, lock all needed resources at once where possible, to reduce the probability of deadlock;
c) For business parts that are very prone to deadlocks, consider upgrading the lock granularity and using table-level locks to reduce the probability of deadlock.
(3) Row-lock contention on the system can be analyzed by checking the Innodb_row_lock state variables:
```sql
mysql> SHOW STATUS LIKE 'innodb_row_lock%';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| Innodb_row_lock_current_waits | 0     |
| Innodb_row_lock_time          | 0     |
| Innodb_row_lock_time_avg      | 0     |
| Innodb_row_lock_time_max      | 0     |
| Innodb_row_lock_waits         | 0     |
+-------------------------------+-------+
```
> InnoDB's row-lock status variables record not only the number of lock waits but also the total lock wait time, the average and maximum wait times, plus one non-cumulative value showing the number of waits currently in progress. The variables are:
Innodb_row_lock_current_waits: the number of waits currently in progress;
Innodb_row_lock_time: the total lock wait time since system startup;
Innodb_row_lock_time_avg: the average time spent per wait;
Innodb_row_lock_time_max: the longest single wait since system startup;
Innodb_row_lock_waits: the total number of waits since system startup.
Of these five variables, the most important are Innodb_row_lock_time_avg (average wait time), Innodb_row_lock_waits (total number of waits), and Innodb_row_lock_time (total wait time). Especially when the number of waits is high and each wait is not short, we need to analyze why there is so much waiting in the system, and then draw up an optimization plan based on the findings.
If lock contention turns out to be serious, for instance when Innodb_row_lock_waits and Innodb_row_lock_time_avg are both high, you can also set up the InnoDB Monitors to observe further which tables, data rows, and so on are involved in lock conflicts, and analyze the cause of the contention. Here's how:
```sql
mysql> CREATE TABLE innodb_monitor (a INT) ENGINE=InnoDB;
```
> You can then view the monitor output with the following statement:
```sql
mysql> SHOW ENGINE INNODB STATUS;
```
> The monitor can be stopped by issuing the following statement:
```sql
mysql> DROP TABLE innodb_monitor;
```
> With the monitor set up, the status output includes detailed information about current lock waits, including the table name, lock type, and locked records, which makes further analysis and diagnosis easier. Readers may ask why we must first create a table called innodb_monitor. Creating that table is in effect telling InnoDB that we want to start monitoring its internal details; InnoDB then writes the more detailed transaction and locking information to the MySQL error log so that we can analyze it later. While the monitor is on, the monitored content is written to the log every 15 seconds by default, and the .err file grows very large if the monitor is left on for a long time. So once you have confirmed the cause of the problem, remember to drop the monitor table to turn monitoring off, or start the server with the "--console" option to stop writing to the log file.