1. For complex queries such as multi-join queries (e.g. reports), use the query cache as much as possible. Be aware that queries containing non-deterministic functions such as NOW(), RAND(), and CURDATE() cannot use the cache. For example, SELECT username FROM user WHERE signup_date >= CURDATE() is not cacheable; replace the function with a variable computed in application code (e.g. PHP), as shown below;
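A minimal sketch of the difference (the literal date stands in for a value computed in PHP, e.g. date('Y-m-d')):

-- Not cacheable: CURDATE() is non-deterministic, so the query cache is bypassed
SELECT username FROM user WHERE signup_date >= CURDATE();
-- Cacheable: the date is computed in application code and passed as a literal
SELECT username FROM user WHERE signup_date >= '2017-05-01';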
2. For complex queries, use EXPLAIN to analyze the execution plan: how many rows will be examined, whether an index is used, and so on; see the sketch below.
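A minimal sketch, reusing the user table from tip 1:

EXPLAIN SELECT username FROM user WHERE signup_date >= '2017-05-01';
-- In the output, check "type" (ALL means a full table scan), "key" (the index chosen),
-- and "rows" (the estimated number of rows to be examined).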
3. Add LIMIT 1 to SQL that is known to return a single row, so the engine stops scanning after the first match; for example:
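-- Assuming username values are unique, at most one row can match;
-- LIMIT 1 lets the engine stop scanning as soon as that row is found.
SELECT id FROM user WHERE username = 'alice' LIMIT 1;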
4. With regard to indexes, LIKE '%username%' cannot use an ordinary index; a FULLTEXT index can be created on username to improve efficiency (see the sketch after the index tips below). When joining two tables, join on indexed columns wherever possible, and note that the joined columns must have the same data type, otherwise the index cannot be used;
For indexes, here are some common tips:
Negative condition queries (!=, <>, NOT IN, NOT EXISTS) cannot use an index;
Leading fuzzy queries such as LIKE '%username' cannot use an index (except FULLTEXT indexes), while non-leading fuzzy queries such as LIKE 'username%' can;
Calculations on an indexed column cannot use the index, for example a condition involving SUM(column) or an expression on the column;
The order of conditions in the WHERE clause does not affect whether a composite index is hit: with a composite index on (a, b, c), conditions on b, a, c, on b, a, or on a alone all hit the index. When creating a multicolumn index, put the column used most frequently in WHERE clauses leftmost, according to business requirements; see the leftmost-prefix sketch after this list;
Do not index frequently updated columns, because maintaining the index makes every modification more expensive than on an ordinary column.
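Two sketches for the tips above, with assumed table and column names. First, the FULLTEXT alternative to a leading fuzzy query (InnoDB supports FULLTEXT indexes from MySQL 5.6):

ALTER TABLE user ADD FULLTEXT INDEX ft_username (username);
-- MATCH ... AGAINST can use the fulltext index, unlike LIKE '%smith%'
SELECT username FROM user WHERE MATCH(username) AGAINST('smith');

Second, the leftmost-prefix rule for a composite index on (a, b, c):

CREATE INDEX idx_abc ON t (a, b, c);
-- These hit the index: the optimizer reorders the conditions, and each set
-- covers a leftmost prefix of (a, b, c)
SELECT * FROM t WHERE b = 2 AND a = 1 AND c = 3;
SELECT * FROM t WHERE b = 2 AND a = 1;
SELECT * FROM t WHERE a = 1;
-- This cannot hit the index: the leading column a is missing
SELECT * FROM t WHERE b = 2 AND c = 3;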
5. With respect to selecting random rows, SELECT ... FROM table ORDER BY RAND() LIMIT n is inefficient, because ORDER BY RAND() cannot use an index! A random set of rows can instead be fetched with a custom scheme, for example:
SELECT username FROM table AS t1
JOIN (SELECT ROUND(RAND() * (SELECT MAX(id) FROM table)) AS id) AS t2
WHERE t1.id >= t2.id
ORDER BY t1.id
LIMIT 20;
6. Never use SELECT * directly! Use SELECT COUNT(*) when you only need the row count, or return only the specific fields you need, e.g. SELECT username FROM ...;
7. It is recommended to use PROCEDURE ANALYSE() to get suggestions for optimizing your table structure, especially when a table has many fields; for example:
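A minimal usage sketch (note: PROCEDURE ANALYSE() was removed in MySQL 8.0, so this applies to 5.x; the table name is an assumption):

-- Reports, for each column, statistics and a suggested optimal data type.
-- The arguments limit the number of distinct values (10) and the memory
-- (2000 bytes) that ANALYSE may use when looking for suggestions.
SELECT * FROM user PROCEDURE ANALYSE(10, 2000);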
8. Split large bulk insert and delete operations into multiple smaller batches; otherwise a single huge statement can hold locks for a long time and bloat the transaction, with unpleasant consequences. For example:
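A sketch of batching a large delete (the table, column, and batch size are assumptions); repeat the statement until it affects 0 rows:

-- Each batch holds its locks only briefly and keeps the transaction small
DELETE FROM big_table WHERE created_at < '2017-01-01' LIMIT 10000;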
9. The MySQL InnoDB engine supports transactions, mainly using the redo log and undo log to implement commit and rollback;
Any modification to the database is first written to a cache called the buffer pool, and a dedicated flush thread periodically flushes the modified pages to disk to persist the data. If the database is interrupted abnormally while this flushing is in progress, data can be lost. To avoid such loss, MySQL introduces the redo log to record data modifications, so that data can be recovered to a correct state after an abnormal interruption, at the cost of the extra overhead of writing the redo log.
Take an example:
Suppose there are two values, A and B, with initial values of 1 and 2 respectively.
1. Start the transaction;
2. Record A=1 to the undo log;
3. Update A=3;
4. Record A=3 to the redo log;
5. Record B=2 to the undo log;
6. Update B=4;
7. Record B=4 to the redo log;
8. Flush the redo log to disk;
9. Commit.
If the system goes down during any of steps 1-8, the transaction has not committed and has no effect on the data on disk. If the crash happens between steps 8 and 9, after recovery you can either roll the transaction back or continue the commit, because the redo log has already been persisted. If the system goes down after step 9, before the modified pages in memory have been flushed back to disk, the data can be restored to disk from the redo log once the system recovers.
So the redo log actually guarantees the durability and consistency of the transaction, while the undo log guarantees its atomicity.
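As a user-level illustration of these guarantees (a sketch; the accounts table and values are assumptions):

START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1; -- old value goes to the undo log
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT; -- the redo log is flushed; once COMMIT returns, the change survives a crash
-- Had we issued ROLLBACK instead of COMMIT, the undo log would restore both rows,
-- so either both updates take effect or neither does (atomicity).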
10. Keep the base result set as small as possible, and avoid queries that scan the entire table.
To be continued...