Solving the MySQL (MyISAM) read/write lock contention problem
Recently, as database read requests increased, we ran into a serious read/write lock contention problem. With master/slave separation, the master server completes write operations quickly; on the slaves, however, the large volume of SELECT queries is severely blocked by the UPDATE and INSERT statements replicated from the master, driving up the load on all the MySQL servers.
Since there was no way to add read servers on short notice, we tuned some MySQL settings, sacrificing data freshness in exchange for keeping all the servers alive. The specific adjustments, and the reasoning behind them, are as follows.
MyISAM read operations are very efficient, but once heavy concurrent reads and writes occur, MyISAM's efficiency drops compared with InnoDB. The two engines also store data in significantly different ways. In MyISAM, new rows are normally appended to the end of the data file; however, after some UPDATE and DELETE operations the data file is no longer contiguous and contains many holes. When new data is inserted, MySQL by default checks whether a hole can accommodate it: if so, the new row is written into the hole; otherwise it is appended to the end of the data file. This keeps the data file smaller and reduces fragmentation. InnoDB behaves differently: because the primary key is a clustered index, the data file is always sorted by primary key, and if an auto-increment ID is used as the primary key, new rows always land at the end of the data file.
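One quick way to see how fragmented a MyISAM table's data file has become is to check the Data_free column of SHOW TABLE STATUS; the table name `articles` below is just an illustrative example:

```sql
-- Data_free reports the bytes occupied by holes in a MyISAM data file.
-- A large value means many holes left behind by UPDATE/DELETE operations.
SHOW TABLE STATUS LIKE 'articles';
```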
With this background in mind, let's look at several MyISAM configuration options that are easily overlooked.
concurrent_insert:
In general, read and write operations on a MyISAM table are serialized, but when queries and inserts target the same table, MyISAM can process them in parallel to reduce lock contention, depending on the concurrent_insert setting:
When concurrent_insert = 0, concurrent insertion is not allowed.
When concurrent_insert = 1, concurrent insertion is allowed for tables without holes; new data goes to the end of the data file (this is the default).
When concurrent_insert = 2, concurrent insertion at the end of the data file is allowed regardless of whether the table has holes.
In our situation, setting concurrent_insert to 2 is very cost-effective. The resulting file fragmentation can be cleaned up periodically with the OPTIMIZE TABLE statement.
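The setting and the periodic cleanup can be sketched as follows (the table name `articles` is hypothetical):

```sql
-- Allow concurrent inserts at the end of the data file
-- even when the table contains holes.
SET GLOBAL concurrent_insert = 2;

-- Run periodically (e.g. from a cron job during off-peak hours) to
-- defragment the data file. Note that OPTIMIZE TABLE locks a MyISAM
-- table while it runs, so schedule it carefully.
OPTIMIZE TABLE articles;
```

To make the setting survive a restart, it should also go under the [mysqld] section of my.cnf.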
max_write_lock_count:
By default, write operations have a higher priority than read operations: even if a read request arrives first and a write request arrives later, the write is processed first and the read afterwards. This causes a problem: once a batch of write requests arrives, all read requests are blocked until every write has been processed. max_write_lock_count offers a way out:
max_write_lock_count = 1
With this setting, after the server grants one write lock on a table, it lets pending read requests run before granting the next write lock, so reads get a chance even under a steady stream of writes.
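A minimal sketch of applying this at runtime (to persist it, the same setting belongs under [mysqld] in my.cnf):

```sql
-- After each granted write lock, allow queued read requests to proceed.
SET GLOBAL max_write_lock_count = 1;
```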
low_priority_updates:
Alternatively, we can simply lower the priority of write operations, giving reads a higher priority:
low_priority_updates = 1
In summary, concurrent_insert = 2 is unreservedly recommended. Whether to use max_write_lock_count = 1 or low_priority_updates = 1 depends on the situation: if you can afford to lower the priority of write operations, use low_priority_updates = 1; otherwise, use max_write_lock_count = 1.
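Put together, a my.cnf fragment for the read slaves might look like the sketch below; as discussed above, pick only one of the two priority-related options:

```ini
[mysqld]
# Allow concurrent inserts at the end of the data file even with holes.
concurrent_insert = 2

# Either lower the priority of writes so reads go first...
low_priority_updates = 1

# ...or let queued reads run after each granted write lock
# (use one or the other, not both).
# max_write_lock_count = 1
```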
Source: http://www.jb51.net/article/28404.htm