I. MySQL's commonly used storage engines include InnoDB, MyISAM, and the MEMORY engine, but the most common are InnoDB and MyISAM. The following notes record and compare them.
Here are some questions to consider:
- Does your database require foreign key support?
- Does your database require transactional support?
- Does your database require full-text indexing?
- How much data do you have in your database?
- What query patterns do you often use?
Thinking through these questions can point you in a more appropriate direction, but the answers are not absolute. If you need foreign keys, you must choose InnoDB. If you need full-text indexing, MyISAM may be a good choice because it has full-text indexing built in; in practice, however, for datasets of around two million rows or fewer, you can also use InnoDB together with an external search engine such as Sphinx to provide full-text indexing.
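As a sketch of the full-text option (the table and column names here are hypothetical): MyISAM supports FULLTEXT indexes natively, and note that since MySQL 5.6 InnoDB accepts FULLTEXT indexes as well.

```sql
-- Hypothetical articles table using MyISAM's built-in full-text index
CREATE TABLE articles (
    id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(200),
    body  TEXT,
    FULLTEXT KEY ft_title_body (title, body)
) ENGINE=MyISAM;

-- Natural-language full-text search
SELECT id, title
FROM articles
WHERE MATCH(title, body) AGAINST('storage engine');
```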
The size of your data is also an important factor in the choice. InnoDB is better suited to large volumes of data under high concurrency because of its transaction log and crash-recovery handling. The size of the database determines how long recovery from a failure takes: InnoDB can recover relatively quickly, while repairing MyISAM tables can take hours or even days, which is a disaster!
How you typically operate on your tables can also have a significant impact on performance. For example, COUNT(*) without a WHERE clause is very fast on a MyISAM table but can be painfully slow on an InnoDB table. Primary-key lookups are quite fast under InnoDB, but be careful: an overly long primary key can cause performance problems. Large volumes of INSERT statements are faster under MyISAM, but UPDATE statements are faster under InnoDB, especially under high concurrency.
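The COUNT and primary-key points can be illustrated like this (the table name and values are hypothetical):

```sql
-- On MyISAM this reads a stored row count and returns almost instantly;
-- on InnoDB it must scan an index to count the visible rows.
SELECT COUNT(*) FROM orders;

-- A conditional count requires real index/data access on either engine.
SELECT COUNT(*) FROM orders WHERE status = 'shipped';

-- A primary-key lookup is very fast under InnoDB's clustered index.
SELECT * FROM orders WHERE id = 42;
```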
II. MyISAM
- No transaction support: the MyISAM engine does not support transactions, so it is unsuitable for any scenario that requires them
- Table-level locking: its locking mechanism works at the table level, which makes locking very cheap to implement but greatly reduces concurrent performance
- Read/write blocking: reading data blocks writes, and writing data blocks reads
- Only indexes are cached: the size of the index cache can be set via key_buffer_size, but data blocks are not cached by the engine, which increases disk I/O
- Usage scenarios:
  - No transaction support required (transactions are not supported)
  - Relatively few data modifications (reads and writes block each other)
  - Read-mostly workloads
  - Relatively low concurrency (table-level locking)
  - No particularly strict requirement for data consistency
- Best Practices:
  - Index as much as possible (to take advantage of the key cache)
  - For relatively static data, using the query cache (Query_cache) can greatly improve access efficiency
  - MyISAM's COUNT is only especially efficient for an unconditional full-table count; a COUNT with conditions still requires actual data access
  - Break larger, long-running SQL statements into smaller ones to reduce execution time and blocking
  - Reduce concurrency; queue some high-concurrency scenarios at the application layer
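The caching and query-cache advice above maps to server settings roughly like this (the values are illustrative, not recommendations; depending on your MySQL version, some of these may need to be set in my.cnf rather than at runtime, and the query cache was removed entirely in MySQL 8.0):

```sql
-- Size of the MyISAM key (index) cache; data blocks are left to the OS cache.
SET GLOBAL key_buffer_size = 256 * 1024 * 1024;   -- 256 MB

-- Query cache for relatively static data.
SET GLOBAL query_cache_size = 64 * 1024 * 1024;   -- 64 MB
```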
III. InnoDB
- Good transaction support with ACID properties.
- Support for row-level locking, foreign key support
- The ability to cache both indexes and data, with a very efficient caching mechanism (the buffer pool).
- The entire table is stored as a clustered index organized by the primary key, forming a balanced tree (B+tree).
- All secondary indexes store a copy of the primary key value
- Applicable scenarios:
  - Suitable for large volumes of data under high concurrency, where data consistency requirements are particularly high
  - Transaction support is needed (InnoDB's transaction support is better)
  - Row-level locking is a good fit for high concurrency, but you need to ensure queries go through indexes
  - The server has relatively large memory, so more of the data's indexes and data blocks can be kept in memory, improving cache hit rates and reducing disk I/O.
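Keeping indexes and data in memory is governed mainly by the InnoDB buffer pool (the value here is illustrative; the buffer pool is resizable online since MySQL 5.7.5, while earlier versions require setting innodb_buffer_pool_size in my.cnf and restarting):

```sql
-- Size of InnoDB's combined data-and-index cache (the buffer pool).
SET GLOBAL innodb_buffer_pool_size = 4 * 1024 * 1024 * 1024;  -- 4 GB
```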
- Best Practices:
- Cache all data and indexes as much as possible, thus improving response times.
- Avoid primary key updates, as this can result in a lot of data movement.
  - For large batches of small inserts, try to control transactions yourself instead of relying on autocommit
  - Keep the primary key as small as possible, to avoid burdening the secondary indexes, each of which stores a copy of it
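The last two points can be sketched as follows (the table and values are hypothetical): wrap batches of small inserts in an explicit transaction so the log is flushed once per batch rather than once per row, and keep the primary key a small integer so secondary indexes stay compact.

```sql
-- Small auto-increment primary key; every secondary index
-- stores a copy of it, so keep it compact.
CREATE TABLE events (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    user_id INT UNSIGNED NOT NULL,
    payload VARCHAR(255),
    KEY idx_user (user_id)   -- internally stores (user_id, id)
) ENGINE=InnoDB;

-- Batch small inserts inside one explicit transaction instead of
-- relying on autocommit (which commits once per statement).
SET autocommit = 0;
START TRANSACTION;
INSERT INTO events (user_id, payload) VALUES (1, 'a');
INSERT INTO events (user_id, payload) VALUES (2, 'b');
INSERT INTO events (user_id, payload) VALUES (3, 'c');
COMMIT;
```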
(Reprinted from: "MySQL database engine selection, from head to toe optimization")