MySQL MyISAM solutions for concurrent reads and writes
MySQL's MyISAM engine offers several ways to cope with concurrent reads and writes. MyISAM is very efficient when reads dominate, but once heavy read/write concurrency appears, its efficiency falls behind InnoDB's. The two engines also store data quite differently. In MyISAM, new rows are normally appended to the end of the data file; however, after UPDATE and DELETE operations the data file is no longer contiguous and contains holes. When a new row is inserted, MyISAM by default checks whether an existing hole can accommodate it: if so, the row is written into the hole; otherwise it is appended to the end of the file. This keeps the data file small and limits fragmentation. InnoDB behaves differently: because its primary key index is clustered, the data file is always ordered by primary key, so with an auto-increment ID as the primary key, new rows always land at the end of the file.
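To see how many hole bytes a MyISAM table has accumulated, you can check the Data_free column reported by SHOW TABLE STATUS (the table name below is just a placeholder):

```sql
-- Data_free reports the bytes occupied by holes left behind
-- by DELETE and UPDATE operations on a MyISAM table.
SHOW TABLE STATUS LIKE 'my_table'\G
```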
With this background in place, let's look at several MyISAM configuration options that are easy to overlook:
concurrent_insert:
In general, reads and writes on a MyISAM table are serialized. However, when queries and inserts target the same table, MyISAM can process them in parallel to reduce lock contention, depending on the value of concurrent_insert:
When concurrent_insert = 0, concurrent insertion is not allowed.
When concurrent_insert = 1, concurrent insertion is allowed for tables without holes; new rows go to the end of the data file (this is the default).
When concurrent_insert = 2, concurrent insertion at the end of the data file is allowed regardless of whether the table has holes.
In this case, setting concurrent_insert to 2 is usually a good trade-off. As for the resulting file fragmentation, you can periodically run OPTIMIZE TABLE to clean it up.
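As a sketch, concurrent_insert can be changed at runtime (it can also go in my.cnf under [mysqld]), and the fragmentation it tolerates can be reclaimed periodically; 'my_table' is a placeholder name:

```sql
-- Allow concurrent inserts at the end of the file even when holes exist:
SET GLOBAL concurrent_insert = 2;

-- Run periodically (e.g. from a scheduled job) to defragment the data file:
OPTIMIZE TABLE my_table;
```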
max_write_lock_count:
By default, write operations have a higher priority than reads: even if a read request arrives first and a write request arrives later, the write is processed first and the read afterwards. This causes a problem: once several write requests arrive in a row, all read requests are blocked until every write has been processed. max_write_lock_count mitigates this:
max_write_lock_count = 1
With this setting, after the server grants one write lock on a table, it lets pending read requests through, so reads get a chance to run between writes.
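For reference, this variable is dynamic, so it can also be changed at runtime without restarting the server:

```sql
-- Let pending reads through after each granted write lock:
SET GLOBAL max_write_lock_count = 1;
```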
low_priority_updates:
Alternatively, we can simply lower the priority of write operations and give reads higher priority:
low_priority_updates = 1
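If lowering the priority of every write is too coarse, MySQL also accepts a per-statement LOW_PRIORITY modifier on writes; the table and column names below are placeholders:

```sql
-- These statements wait until no reads are pending on my_table:
UPDATE LOW_PRIORITY my_table SET col = 1 WHERE id = 42;
INSERT LOW_PRIORITY INTO my_table (col) VALUES (2);
```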
In summary, concurrent_insert = 2 is strongly recommended. As for max_write_lock_count = 1 and low_priority_updates = 1, it depends on your situation: if you can afford to lower the priority of write operations, use low_priority_updates = 1; otherwise, use max_write_lock_count = 1.
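Putting it together, a my.cnf fragment for a read-heavy MyISAM setup might look like the sketch below; choose either low_priority_updates or max_write_lock_count, not both:

```ini
[mysqld]
concurrent_insert    = 2
low_priority_updates = 1
# max_write_lock_count = 1   # alternative if writes must keep their priority
```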