Part One: Indexes
1. InnoDB indexes are B+Tree structures.
2. Keep indexed columns standalone in WHERE conditions; for example, WHERE id + 3 = 5 cannot use the index on id, but the equivalent WHERE id = 2 can.
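The point above can be demonstrated directly. This is a minimal sketch using SQLite (the table and index names are made up for illustration); the same principle applies to MySQL/InnoDB: wrapping an indexed column in an expression forces a full scan, while leaving the column bare allows an index search.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, payload TEXT)")
conn.execute("CREATE INDEX idx_t_id ON t (id)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(i, "x") for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the plan description in column 3
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

print(plan("SELECT * FROM t WHERE id + 3 = 5"))  # full table scan
print(plan("SELECT * FROM t WHERE id = 2"))      # search via idx_t_id
```

In MySQL the same comparison can be made with EXPLAIN: the expression form shows type: ALL (full scan), the rewritten form shows type: ref with the index.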
3. When the indexed values are very long, add a redundant column containing a hash of the value to simulate a hash index.
4. Very small tables do not need indexes (a full scan is cheap); for very large tables, partitioning is more appropriate than indexing alone.
5. Index selectivity = number of distinct values / total number of rows.
The higher the selectivity, the more efficient the index; a unique index has a selectivity of 1, the best possible.
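The formula above is easy to compute in SQL. A minimal sketch with a hypothetical staff table (SQLite here; in MySQL you would run the same SELECT directly on the server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (id INTEGER PRIMARY KEY, dept TEXT)")
conn.executemany("INSERT INTO staff (dept) VALUES (?)",
                 [("sales",), ("sales",), ("hr",), ("it",)])

# selectivity = COUNT(DISTINCT col) / COUNT(*)
distinct, total = conn.execute(
    "SELECT COUNT(DISTINCT dept), COUNT(*) FROM staff").fetchone()
selectivity = distinct / total
print(selectivity)  # 3 distinct values over 4 rows -> 0.75
```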
For BLOB, TEXT, and long VARCHAR columns, a prefix index must be used.
The trick is to choose a prefix long enough for high selectivity, but not so long that it wastes space.
Create a prefix index (an index on the first 7 characters of city):
ALTER TABLE Sakila.city_demo ADD KEY (city(7));
A disadvantage of prefix indexes: they cannot be used for ORDER BY or GROUP BY.
Suffix indexes: MySQL does not support reverse indexes, but you can store the string reversed and index that instead; a trigger can keep the reversed column in sync.
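Choosing the prefix length can be done empirically: compare the selectivity of growing prefixes against the full column's selectivity and stop once the prefix is close enough. A sketch with made-up city names (SQLite's substr stands in for MySQL's LEFT(city, n)):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE city_demo (city TEXT)")
conn.executemany("INSERT INTO city_demo VALUES (?)",
                 [("London",), ("Londrina",), ("Paris",), ("Parnu",),
                  ("Berlin",), ("Bern",)])

total = conn.execute("SELECT COUNT(*) FROM city_demo").fetchone()[0]
full = conn.execute(
    "SELECT COUNT(DISTINCT city) FROM city_demo").fetchone()[0] / total

# selectivity of each prefix length vs. the full column
for n in range(3, 8):
    sel = conn.execute(
        "SELECT COUNT(DISTINCT substr(city, 1, ?)) FROM city_demo", (n,)
    ).fetchone()[0] / total
    print(n, round(sel, 2), "full:", round(full, 2))
```

With this sample data a prefix of 5 already reaches the full column's selectivity, so 5 would be a reasonable prefix length here.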
6. Multi-column indexes
When a query ANDs conditions on multiple columns, a single multi-column index is usually better than several separate single-column indexes.
If you see an index merge in EXPLAIN output, review the query and the table structure.
Index merging can be disabled via the optimizer_switch parameter.
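A minimal sketch of the multi-column point (SQLite, illustrative table): a composite index on (a, b) lets the engine narrow on both AND-ed columns at once, whereas separate single-column indexes would let it use only one (or force MySQL into an index merge).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
conn.execute("CREATE INDEX idx_ab ON t (a, b)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i % 10, i % 7) for i in range(1000)])

detail = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE a = 3 AND b = 4"
).fetchone()[3]
print(detail)  # the plan searches idx_ab using both columns
```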
7. Covering indexes
If an index contains the values of all the columns a query needs, it is called a covering index.
Because MyISAM caches only indexes in memory (leaving data caching to the operating system), queries that cannot be served from the index alone can have serious performance problems, which makes covering indexes especially valuable.
Covering indexes are particularly useful for InnoDB: because of InnoDB's clustered index, a covered query on a secondary index avoids a second lookup into the clustered index.
Note that only B-Tree indexes can serve as covering indexes; hash and full-text indexes do not store the column values.
When a covering index is used, EXPLAIN shows "Using index" in the Extra column.
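A covering index can be seen in the query plan. A sketch with a hypothetical orders table (SQLite reports "COVERING INDEX" where MySQL's EXPLAIN would show "Using index" in Extra): the query reads only columns stored in the index, so the plan never touches the table itself.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id INTEGER, status TEXT, note TEXT)")
conn.execute("CREATE INDEX idx_cust_status ON orders (customer_id, status)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i % 50, "open", "n") for i in range(500)])

# Both selected columns live in idx_cust_status, so the index covers the query
detail = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT customer_id, status FROM orders WHERE customer_id = 7"
).fetchone()[3]
print(detail)
```

Selecting the note column as well would break coverage and force row lookups.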
Part Two: Query Optimization
There are two general approaches to query optimization:
1. Check whether the application is retrieving more data than it needs. Usually this means accessing too many rows,
but sometimes it means accessing too many columns.
2. Check whether the MySQL server layer is analyzing more rows than necessary.
Solution: add a LIMIT clause.
If database resources are tight, consider replacing Hibernate with MyBatis for finer control over the generated SQL.
Always retrieving all columns prevents the optimizer from applying optimizations such as covering index scans; ORMs like Hibernate tend to select every column.
On the other hand, a cached query that retrieves all columns can be reused more broadly than several independent cached queries that each fetch only some columns.
Every time you see SELECT *, ask whether all the columns are really needed.
Repeated queries for the same data: cache the data instead, for example in the session.
The three simplest metrics for measuring query cost:
response time,
number of rows scanned,
number of rows returned.
All three are recorded in MySQL's slow query log, so check the slow log.
If a query scans a large number of rows but returns only a few, try the following optimizations:
1. Use a covering index scan: put all required columns into the index.
2. Change the schema, for example by adding a separate summary table.
3. Rewrite the query so the optimizer can execute it more efficiently.
Sometimes it helps to break a complex query into smaller queries, if doing so reduces the total amount of work.
For example, when deleting old data, delete a small batch at a time to avoid locking a large number of rows at once.
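The batched-delete idea can be sketched as follows (SQLite with a made-up events table; in MySQL the loop body would simply be DELETE FROM events WHERE created < ? LIMIT 1000, repeated until no rows are affected):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, created INTEGER)")
conn.executemany("INSERT INTO events (created) VALUES (?)",
                 [(day,) for day in range(10000)])

CUTOFF, BATCH = 5000, 1000
while True:
    # delete at most BATCH old rows per statement
    cur = conn.execute(
        "DELETE FROM events WHERE id IN ("
        "  SELECT id FROM events WHERE created < ? LIMIT ?)",
        (CUTOFF, BATCH))
    conn.commit()  # release locks between batches
    if cur.rowcount < BATCH:
        break

remaining = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(remaining)  # only rows with created >= CUTOFF remain
```

Committing between batches keeps each transaction short, so other sessions are blocked only briefly.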
Benefits of decomposing join queries:
1. Higher cache efficiency.
2. Querying sequentially by the returned IDs is more efficient than the random access of a join.
Downside: one statement becomes several, increasing connection and round-trip overhead.
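A sketch of such a decomposition (SQLite, with hypothetical tags/posts tables): instead of one three-way JOIN, first fetch the tag id, then the matching post ids, then the posts by id with IN (...). Each small query caches well, and the final lookup is by primary key.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tags (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE post_tags (post_id INTEGER, tag_id INTEGER);
CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT);
INSERT INTO tags VALUES (1, 'mysql'), (2, 'python');
INSERT INTO posts VALUES (10, 'indexing'), (11, 'sorting'), (12, 'pandas');
INSERT INTO post_tags VALUES (10, 1), (11, 1), (12, 2);
""")

# Step 1: resolve the tag; Step 2: collect post ids; Step 3: fetch posts
tag_id = conn.execute("SELECT id FROM tags WHERE name = 'mysql'").fetchone()[0]
post_ids = [r[0] for r in conn.execute(
    "SELECT post_id FROM post_tags WHERE tag_id = ?", (tag_id,))]
placeholders = ",".join("?" * len(post_ids))
titles = [r[0] for r in conn.execute(
    f"SELECT title FROM posts WHERE id IN ({placeholders})", post_ids)]
print(titles)  # posts tagged 'mysql'
```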
Part Three: Sorting Optimization
Sorting is always an expensive operation, so from a performance perspective, avoid sorting, or at least avoid sorting large amounts of data, whenever possible.
MySQL Performance Tuning Notes