Recently I ran into a failure caused by a full disk, and my colleagues have also frequently seen operations hang without responding because the disk was full, so I took some time to study these two cases.
Let's take a look at the official statement.
When a disk-full condition occurs, MySQL does the following: it checks once every minute to see whether there is enough space to write the current row. If there is enough space, it continues as if nothing had happened. Every 10 minutes it writes an entry to the log file, warning about the disk-full condition.
In fact, MySQL itself does not take any corrective action. As the official documentation says, it only checks once a minute whether there is free space and writes an entry to the error log every 10 minutes.
However, while the disk stays full, the binlog cannot be written, the redo log cannot be written, and the dirty pages in the buffer pool cannot be flushed. If the server happens to restart, or the instance gets killed, data loss is almost inevitable. Therefore, it is best to free up some space first so that the dirty data can be flushed.
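If the instance is still responsive, one possible way to free space from inside MySQL is to purge binary logs. This is only a sketch, assuming the old binlogs are no longer needed by any replica or backup; the file name below is a placeholder.

```sql
-- List the existing binlog files and their sizes.
SHOW BINARY LOGS;

-- Remove all binlogs older than the named file (placeholder name).
PURGE BINARY LOGS TO 'mysql-bin.000100';

-- Or remove binlogs older than a point in time.
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 3 DAY;
```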
1. select
First, based on experience and actual tests, select operations are not blocked by a full disk; all select statements still run normally.
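A quick sanity check of this, assuming a hypothetical table t (simple reads that do not spill temporary tables to disk need no free space):

```sql
-- Hypothetical table t; these reads do not write to disk,
-- so they still succeed when the data directory is full.
SELECT COUNT(*) FROM t;
SELECT * FROM t ORDER BY id DESC LIMIT 10;
```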
2. insert
Testing shows that when the disk is full, the insert does not hang on the first statement; it only gets stuck after the n-th insert.
Checking the error log shows that the hang is related to flushing data to disk.
/usr/local/mysql/libexec/mysqld: Disk is full writing '...' (Errcode: 28). Waiting for someone to free space...
/usr/local/mysql/libexec/mysqld: Disk is full writing '...' (Errcode: 28). Waiting for someone to free space...
To verify this inference, we set sync_binlog to 1. Now the very first insert gets stuck, and the error log reports that writing the binlog failed. So the hang is indeed related to flushing to disk.
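A rough reproduction of both tests, assuming a hypothetical table t on an instance whose data directory is already full:

```sql
-- Default settings: the first inserts return because their writes are still buffered;
-- a later insert hangs once a real flush to the full disk is needed.
INSERT INTO t (val) VALUES ('a');
INSERT INTO t (val) VALUES ('b');
-- ... repeat until a statement hangs.

-- Force the binlog to be synced on every commit; now the very first insert hangs.
SET GLOBAL sync_binlog = 1;
INSERT INTO t (val) VALUES ('c');   -- blocks waiting for the binlog write
```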
Currently, three parameters are known to be related to flushing to disk: sync_binlog, innodb_flush_log_at_trx_commit, and innodb_doublewrite.
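To check how these settings are configured on a given instance (a sketch; the first two are dynamic, while innodb_doublewrite is normally set at startup in my.cnf):

```sql
-- Inspect the flush-related settings.
SHOW GLOBAL VARIABLES WHERE Variable_name IN
    ('sync_binlog', 'innodb_flush_log_at_trx_commit', 'innodb_doublewrite');

-- sync_binlog and innodb_flush_log_at_trx_commit can be changed at runtime.
SET GLOBAL sync_binlog = 1;
SET GLOBAL innodb_flush_log_at_trx_commit = 1;
```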
3. show slave status
Testing on a replica shows that this operation gets stuck. The reason is that executing show slave status requires first acquiring the LOCK_active_mi lock and then mi->data_lock. However, when the disk is full, the io_thread cannot write the data it receives to the relay log, so it keeps holding mi->data_lock, and show slave status hangs waiting for that lock.
Therefore, when the disk is full, the show slave status operation will get stuck.
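To see what the stuck session is waiting on, you can look at the thread list from another connection; this is only a sketch, and the exact state strings differ between versions:

```sql
-- From a separate connection that is not itself blocked.
SHOW FULL PROCESSLIST;

-- With performance_schema enabled, the waiting threads can also be inspected here.
SELECT thread_id, processlist_command, processlist_state, processlist_info
FROM performance_schema.threads
WHERE processlist_state IS NOT NULL;
```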
4. show status
On its own, show status works normally in the test, but if show slave status has been executed first, show status also gets stuck. Executing show status requires the LOCK_status lock, and because the status output includes slave status, it also needs LOCK_active_mi. The stuck show slave status is already holding LOCK_active_mi while waiting on mi->data_lock, which io_thread never releases; show status then blocks on the same LOCK_active_mi lock and hangs as well.
Therefore, when the disk is full, if you execute show slave status first and then show status, both operations will get stuck.
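The blocking order can be reproduced with two sessions on a replica whose disk is full; the comments mark which session issues each statement:

```sql
-- Session A:
SHOW SLAVE STATUS;   -- hangs: holds LOCK_active_mi, waits for mi->data_lock

-- Session B, started afterwards:
SHOW STATUS;         -- hangs too: needs LOCK_active_mi, which session A still holds
```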
PS: For the results of cases 3 and 4, see also this blog post: http://my.oschina.net/llzx373/blog/224175