How can MySQL maintain query speed when processing millions of rows?
Recently, work requirements led me to look into ways of optimizing SELECT statements against MySQL databases.
In real projects I found that once a MySQL table reaches the million-row level, the efficiency of ordinary SQL queries drops sharply, and if the WHERE clause contains many conditions, the query speed becomes simply intolerable. In one test, a conditional query against a table of more than 4 million records (with indexes) took as long as 40 seconds; latency that high would drive anyone crazy. Improving SQL query efficiency is therefore very important. Below are 30 SQL query optimization methods widely circulated on the Internet:
1. Try to avoid using the != or <> operators in the WHERE clause; otherwise the engine may abandon the index and perform a full table scan.
2. To optimize queries, avoid full table scans; first consider creating indexes on the columns involved in WHERE and ORDER BY.
3. Try to avoid testing fields for NULL in the WHERE clause; otherwise the engine may abandon the index and perform a full table scan, for example:
Select id from t where num is null
You can set a default value of 0 on num to ensure the column contains no NULLs, and then query like this:
Select id from t where num = 0
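To see why the default value matters, here is a small illustration using Python's sqlite3 module as a stand-in for MySQL (the table names t_null and t_def are invented for the demo): rows whose num is NULL are invisible to a plain equality predicate, while a NOT NULL DEFAULT 0 column keeps every row reachable with num = 0.

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()

# Column allows NULL: rows left empty cannot be found with "num = 0".
cur.execute("create table t_null (id integer primary key, num integer)")
cur.executemany("insert into t_null (num) values (?)", [(None,), (None,), (5,)])
print(cur.execute("select count(*) from t_null where num = 0").fetchone()[0])  # 0

# NOT NULL DEFAULT 0: the same "empty" rows now match the equality predicate.
cur.execute("create table t_def (id integer primary key, "
            "num integer not null default 0)")
cur.executemany("insert into t_def (id) values (?)", [(1,), (2,)])
cur.execute("insert into t_def (id, num) values (3, 5)")
print(cur.execute("select count(*) from t_def where num = 0").fetchone()[0])  # 2
```

The rows missing a value can only be reached via "num is null" in the first table, but via ordinary "num = 0" in the second.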
4. Try to avoid using OR to join conditions in the WHERE clause; otherwise the engine may abandon the index and perform a full table scan, for example:
Select id from t where num = 10 or num = 20
You can query it as follows:
Select id from t where num = 10
Union all
Select id from t where num = 20
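The OR and UNION ALL forms above return the same rows because the branches are disjoint (num = 10 and num = 20 cannot both hold for one row). A quick equivalence check, sketched with Python's sqlite3 and invented demo data:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("create table t (id integer primary key, num integer)")
cur.executemany("insert into t (num) values (?)", [(10,), (20,), (30,), (10,)])
cur.execute("create index idx_num on t (num)")

or_rows = cur.execute(
    "select id from t where num = 10 or num = 20 order by id").fetchall()
union_rows = cur.execute(
    "select id from t where num = 10 "
    "union all "
    "select id from t where num = 20 order by id").fetchall()

# Disjoint branches make UNION ALL safe; overlapping branches would
# duplicate rows and would need UNION (with its deduplication cost) instead.
print(or_rows == union_rows)  # True
```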
5. The following query will also cause a full table scan (a leading '%' wildcard prevents the index from being used):
Select id from t where name like '%c%'
To improve efficiency, consider a full-text index instead.
6. Use IN and NOT IN with caution; otherwise a full table scan may occur, for example:
Select id from t where num in (1, 2, 3)
For continuous values, you can use between instead of in:
Select id from t where num between 1 and 3
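For a contiguous set of values the two predicates above are interchangeable; a minimal check with Python's sqlite3 (the demo table is invented here):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("create table t (id integer primary key, num integer)")
cur.executemany("insert into t (num) values (?)", [(n,) for n in range(6)])

in_rows = cur.execute(
    "select id from t where num in (1, 2, 3) order by id").fetchall()
between_rows = cur.execute(
    "select id from t where num between 1 and 3 order by id").fetchall()

# BETWEEN is inclusive on both ends, so it matches IN (1, 2, 3) exactly.
print(in_rows == between_rows)  # True
```

Note this rewrite only applies when the IN list really is a contiguous range.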
7. Using a parameter in the WHERE clause can also cause a full table scan. Because SQL resolves local variables only at runtime, the optimizer cannot defer the choice of access plan to runtime; it must choose at compile time. But when the access plan is built at compile time, the variable's value is still unknown, so it cannot serve as an input for index selection. The following statement performs a full table scan:
Select id from t where num = @num
You can change it to force the query to use the index:
Select id from t force index (index_name) where num = @num
8. Avoid performing expression operations on fields in the WHERE clause; this causes the engine to abandon the index and perform a full table scan. For example:
Select id from t where num/2 = 100
Should be changed to:
Select id from t where num = 100*2
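The effect of wrapping an indexed column in arithmetic can be observed directly in a query plan. The sketch below uses Python's sqlite3 and SQLite's EXPLAIN QUERY PLAN as a stand-in (MySQL's EXPLAIN reports the same idea in a different format; table and index names are invented):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("create table t (id integer primary key, num integer)")
cur.executemany("insert into t (num) values (?)", [(n,) for n in range(100)])
cur.execute("create index idx_num on t (num)")

def plan(sql):
    # Column 3 of each EXPLAIN QUERY PLAN row is the readable plan step.
    return " ".join(row[3] for row in cur.execute("explain query plan " + sql))

# Arithmetic on the column hides it from the index: the plan is a SCAN.
print(plan("select id from t where num / 2 = 50"))
# Moving the arithmetic to the constant side restores an index SEARCH.
print(plan("select id from t where num = 50 * 2"))
```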
9. Avoid performing function operations on fields in the WHERE clause; this causes the engine to abandon the index and perform a full table scan. For example:
Select id from t where substring(name, 1, 3) = 'abc'  -- ids whose name starts with 'abc'
Select id from t where datediff(day, createdate, '2017-11-30') = 0  -- ids created on '2017-11-30'
Should be changed to:
Select id from t where name like 'abc%'
Select id from t where createdate >= '2017-11-30' and createdate < '2017-12-1'
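The same rule explains why the rewrites above work: applying substring() to the column forces a scan, while a prefix condition on the bare column can use the index. A sketch with Python's sqlite3 (SQLite's LIKE optimization depends on collation settings, so the equivalent explicit range on name is shown instead; names are invented):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("create table t (id integer primary key, name text)")
cur.executemany("insert into t (name) values (?)",
                [("abcdef",), ("abczzz",), ("xyz",)])
cur.execute("create index idx_name on t (name)")

def plan(sql):
    return " ".join(row[3] for row in cur.execute("explain query plan " + sql))

# A function applied to the column defeats the index: full SCAN.
print(plan("select id from t where substr(name, 1, 3) = 'abc'"))
# The equivalent prefix range on the bare column is sargable: index SEARCH.
print(plan("select id from t where name >= 'abc' and name < 'abd'"))
```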
10. Do not perform functions, arithmetic, or other expression operations on the left side of "=" in the WHERE clause; otherwise the system may not be able to use the index correctly.
11. When using an indexed field as a condition, if the index is a composite index, the condition must include the first field of the index to guarantee the index is used; otherwise it will not be. The field order in the query should also match the index order as much as possible.
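This leftmost-prefix rule can also be checked in a query plan. A sketch with Python's sqlite3 and a made-up composite index on (a, b): filtering on a alone can use the index, filtering on b alone cannot.

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("create table t (id integer primary key, "
            "a integer, b integer, c text)")
cur.executemany("insert into t (a, b, c) values (?, ?, 'x')",
                [(n % 10, n % 7) for n in range(100)])
cur.execute("create index idx_ab on t (a, b)")

def plan(sql):
    return " ".join(row[3] for row in cur.execute("explain query plan " + sql))

# Condition on the leading column a: index SEARCH.
print(plan("select c from t where a = 3"))
# Condition on b alone skips the leading column: full SCAN.
print(plan("select c from t where b = 3"))
```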
12. do not write meaningless queries. if you need to generate an empty table structure:
Select col1, col2 into #t from t where 1 = 0
This kind of code returns no result set but still consumes system resources; it should be changed to:
Create table #t (...)
13. Replacing IN with EXISTS is often a good choice:
Select num from a where num in (select num from b)
can be replaced with:
Select num from a where exists (select 1 from b where b.num = a.num)
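The two forms above return the same rows for this kind of membership test (with the usual caveat that NOT IN and NOT EXISTS diverge when the subquery can yield NULL). A quick equivalence check with Python's sqlite3 and invented data:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("create table a (num integer)")
cur.execute("create table b (num integer)")
cur.executemany("insert into a values (?)", [(1,), (2,), (3,)])
cur.executemany("insert into b values (?)", [(2,), (3,), (4,)])

in_rows = cur.execute(
    "select num from a where num in (select num from b) "
    "order by num").fetchall()
exists_rows = cur.execute(
    "select num from a where exists "
    "(select 1 from b where b.num = a.num) order by num").fetchall()

print(in_rows == exists_rows)  # True: only 2 and 3 appear in both tables
```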
14. Not all indexes are effective for every query. SQL queries are optimized based on the data in the table; when an indexed column contains many duplicate values, a query may not use the index at all. For example, if a table has a sex field that is roughly half male and half female, indexing sex contributes nothing to query efficiency.
15. More indexes are not always better. Indexes certainly improve SELECT efficiency, but they also reduce INSERT and UPDATE efficiency, because indexes may need to be rebuilt on insert or update. How to build indexes therefore needs careful, case-by-case consideration. It is best to keep the number of indexes on a table under six; if there are more, consider whether the indexes on rarely used columns are really necessary.
16. Avoid updating clustered index columns whenever possible, because the order of the clustered index columns is the physical storage order of the table's records; once a value changes, the records of the whole table may have to be rearranged, which consumes considerable resources. If the application frequently updates clustered index columns, reconsider whether the index should be clustered at all.
17. Use numeric fields whenever possible. If a field holding only numeric information is designed as a character field instead, query and join performance suffers and storage overhead increases, because the engine compares strings character by character during queries and joins, while a numeric type needs only a single comparison.
18. Use varchar/nvarchar instead of char/nchar where possible. First, variable-length fields take less storage, saving space; second, for queries, searching within a smaller field is obviously more efficient.
19. Do not write select * from t anywhere; replace "*" with the specific list of fields, and do not return fields that are not used.
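Listing only the columns you need can also let the engine answer a query entirely from an index (a "covering index"), skipping the table altogether. A sketch with Python's sqlite3 and EXPLAIN QUERY PLAN (names invented; MySQL's EXPLAIN reports "Using index" in the Extra column for the same situation):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("create table t (id integer primary key, "
            "num integer, payload text)")
cur.executemany("insert into t (num, payload) values (?, 'x')",
                [(n,) for n in range(50)])
cur.execute("create index idx_num on t (num)")

def plan(sql):
    return " ".join(row[3] for row in cur.execute("explain query plan " + sql))

# Selecting only the indexed column: answered from the index alone.
print(plan("select num from t where num = 5"))
# Selecting * must still visit the table row to fetch payload.
print(plan("select * from t where num = 5"))
```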
20. Try to use table variables instead of temporary tables. Note, however, that if a table variable holds a large amount of data, its indexing is very limited (only a primary key index).
21. avoid frequent creation and deletion of temporary tables to reduce the consumption of system table resources.
22. Temporary tables are not forbidden; using them appropriately can make certain routines more effective, for example when you need to repeatedly reference a large table or a dataset drawn from commonly used tables. For one-off operations, however, an export table is usually better.
23. When creating a temporary table, if a large amount of data is inserted at once, use select into instead of create table to avoid generating a large volume of log records and to increase speed; if the data volume is small, then to spare system table resources, create the table first and insert afterwards.
24. If temporary tables are used, explicitly delete all of them at the end of the stored procedure: truncate the table first and then drop it, which avoids long locks on system tables.
25. Avoid cursors whenever possible, because their efficiency is poor; if a cursor operation touches more than 10,000 rows of data, consider rewriting it.
26. Before resorting to a cursor-based or temporary-table method, look first for a set-based solution to the problem; set-based methods are usually more effective.
27. Like temporary tables, cursors are not forbidden. Using a FAST_FORWARD cursor on a small dataset is often better than other row-by-row processing methods, especially when several tables must be referenced to obtain the required data. Routines that compute aggregates such as totals in the result set are usually faster than doing the same work with a cursor. If development time permits, try both the cursor-based and the set-based approach and keep whichever works better.
28. Set nocount on at the beginning of all stored procedures and triggers, and set nocount off at the end, so that a DONE_IN_PROC message is not sent to the client after every statement.
29. Avoid returning large volumes of data to the client whenever possible; if the volume is too large, reconsider whether the underlying requirement is reasonable.
30. Avoid large transactions as much as possible in order to improve system concurrency.