30 Common Methods for Optimizing SQL Queries in MySQL

Source: Internet
Author: User
This article analyzes in detail 30 methods commonly used to optimize SQL queries in MySQL, for the reference of those who need them.

1. To optimize queries, try to avoid full table scans; first consider creating indexes on the columns involved in WHERE and ORDER BY clauses.

2. Try to avoid using the != or <> operator in the WHERE clause; otherwise the engine may abandon the index and perform a full table scan.
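The effect above can be sketched with SQLite's EXPLAIN QUERY PLAN as a stand-in for MySQL's EXPLAIN (the behavior is analogous); the table and index names here are invented for the illustration, and the exact plan strings may vary by engine version.

```python
import sqlite3

# Illustrative demo: compare the query plan for '<>' vs '=' on an
# indexed column. Table and index names are made up for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("create table t (id integer, num integer)")
conn.execute("create index idx_num on t (num)")

def plan(sql):
    # Concatenate the plan's detail rows into one string for inspection.
    rows = conn.execute("explain query plan " + sql).fetchall()
    return " ".join(r[-1] for r in rows)

# '<>' cannot be answered by a single index range: full table scan.
print(plan("select id from t where num <> 10"))
# An equality predicate can seek directly into the index.
print(plan("select id from t where num = 10"))
```

Running the two prints side by side typically shows a SCAN for the inequality and a SEARCH ... USING INDEX for the equality.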

3. Try to avoid NULL checks on fields in the WHERE clause; otherwise the engine may abandon the index and perform a full table scan, such as:

select id from t where num is null

You can set a default value of 0 on num, ensure that the num column contains no NULL values, and then query like this:

select id from t where num = 0

4. Try to avoid using or to join conditions in the WHERE clause; otherwise the engine may abandon the index and perform a full table scan, such as:

select id from t where num = 10 or num = 20

You can query this way instead:

select id from t where num = 10
union all
select id from t where num = 20

5. The following query will also cause a full table scan:

select id from t where name like '%abc%'

To improve efficiency, consider full-text search.

6. in and not in should also be used with caution; otherwise they may result in a full table scan, such as:

select id from t where num in (1, 2, 3)

For a continuous range of numbers, use between instead of in:

select id from t where num between 1 and 3

7. Using parameters in the WHERE clause can also cause a full table scan. SQL resolves local variables only at run time, but the optimizer cannot defer the choice of access plan to run time; it must choose at compile time. Since the value of the variable is still unknown when the access plan is built, it cannot be used to select an index entry. The following statement performs a full table scan:

select id from t where num = @num

You can instead force the query to use an index:

select id from t force index (index_name) where num = @num

8. Try to avoid performing expression operations on fields in the WHERE clause; this causes the engine to abandon the index and perform a full table scan, such as:

select id from t where num / 2 = 100

should read:

select id from t where num = 100 * 2

9. Try to avoid performing function operations on fields in the WHERE clause; this causes the engine to abandon the index and perform a full table scan, such as:

select id from t where substring(name, 1, 3) = 'abc' -- ids whose name begins with 'abc'

select id from t where datediff(day, createdate, '2005-11-30') = 0 -- ids generated on '2005-11-30'

should read:

select id from t where name like 'abc%'

select id from t where createdate >= '2005-11-30' and createdate < '2005-12-01'
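The second rewrite above can be sketched with SQLite's EXPLAIN QUERY PLAN standing in for MySQL's EXPLAIN; the table and index names are invented for this illustration, and plan text may vary by engine version.

```python
import sqlite3

# Sketch: wrapping a column in a function hides it from the index,
# while an equivalent range predicate leaves the index usable.
conn = sqlite3.connect(":memory:")
conn.execute("create table t (id integer, createdate text)")
conn.execute("create index idx_cd on t (createdate)")

def plan(sql):
    return " ".join(r[-1] for r in conn.execute("explain query plan " + sql))

# Function on the column: the engine cannot use idx_cd, so it scans.
print(plan("select id from t where date(createdate) = '2005-11-30'"))
# Bare column with a range: the index supports a direct search.
print(plan("select id from t where createdate >= '2005-11-30' "
           "and createdate < '2005-12-01'"))
```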

10. Do not perform functions, arithmetic operations, or other expressions on the left side of the "=" in the WHERE clause; otherwise the system may not be able to use the index correctly.

11. When using an indexed field as a condition, if the index is a composite index, the first (leftmost) field of the index must appear in the condition, or the system will not use the index; the order of fields in the condition should also match the index order as much as possible.
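The leftmost-prefix rule can be sketched as follows, again using SQLite's EXPLAIN QUERY PLAN as a stand-in for MySQL's EXPLAIN; the schema and index names are invented for the example.

```python
import sqlite3

# Sketch: a composite index on (customer_id, order_date) is only usable
# when the leftmost column appears in the condition.
conn = sqlite3.connect(":memory:")
conn.execute("create table orders "
             "(customer_id integer, order_date text, amount real)")
conn.execute("create index idx_cust_date on orders (customer_id, order_date)")

def plan(sql):
    return " ".join(r[-1] for r in conn.execute("explain query plan " + sql))

# Condition on the first indexed column: the composite index is used.
print(plan("select * from orders where customer_id = 7"))
# Condition only on the second column skips the leftmost prefix: full scan.
print(plan("select * from orders where order_date = '2005-11-30'"))
```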

12. Do not write meaningless queries. For example, to generate an empty table structure:

select col1, col2 into #t from t where 1 = 0

This type of code returns no result set but still consumes system resources; it should be changed to:

create table #t (...)


13. In many cases, using exists instead of in is a good choice:

select num from a where num in (select num from b)

can be replaced with the following statement:

select num from a where exists (select 1 from b where b.num = a.num)
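A quick check that the rewrite returns the same rows, using SQLite in memory; the tables a and b follow the article's example, and the data is made up.

```python
import sqlite3

# Sketch: verify that the 'in' form and the 'exists' form of the query
# produce the same result set on sample data.
conn = sqlite3.connect(":memory:")
conn.execute("create table a (num integer)")
conn.execute("create table b (num integer)")
conn.executemany("insert into a values (?)", [(1,), (2,), (3,), (4,)])
conn.executemany("insert into b values (?)", [(2,), (4,), (9,)])

in_rows = conn.execute(
    "select num from a where num in (select num from b)").fetchall()
exists_rows = conn.execute(
    "select num from a where exists "
    "(select 1 from b where b.num = a.num)").fetchall()

print(sorted(in_rows))      # [(2,), (4,)]
print(sorted(exists_rows))  # [(2,), (4,)]
```

Note that the equivalence is clean for in/exists; the negated forms not in and not exists can differ when the subquery produces NULLs.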

14. Not all indexes are effective for every query. SQL optimizes queries based on the data in the table, and a query may not benefit from an index when the column contains a large amount of duplicated data. For example, if a table has a sex field that is almost half "male" and half "female", an index on sex does nothing for query efficiency.

15. More indexes are not always better. Indexes can improve the efficiency of the corresponding SELECTs, but they reduce the efficiency of INSERT and UPDATE, since those operations may need to rebuild the index. How to build indexes therefore requires careful consideration, depending on the circumstances. It is best not to have more than six indexes on one table; if there are more, consider whether indexes built on infrequently used columns are really necessary.

16. Avoid updating clustered-index data columns as much as possible, because the order of the clustered-index columns is the physical storage order of the table's records; a change in the column value forces a reordering of the entire table's records, which can be very expensive. If the application requires frequent updates of a clustered-index column, reconsider whether the index should be built as a clustered index.

17. Use numeric fields as much as possible. If a field contains only numeric information, do not design it as a character type; that reduces query and join performance and increases storage overhead, because the engine compares strings character by character when processing queries and joins, whereas a numeric comparison is done in a single operation.

18. Use varchar/nvarchar instead of char/nchar where possible. First, variable-length fields take less storage space; second, for queries, searching within a smaller field is noticeably more efficient.

19. Do not use select * from t anywhere; replace "*" with the specific list of fields, and do not return any fields that are not used.

20. Use table variables instead of temporary tables where possible. Note that if a table variable contains a large amount of data, its indexing options are very limited (only a primary key index).

21. Avoid frequently creating and deleting temporary tables, to reduce the consumption of system table resources.

22. Temporary tables are not unusable; using them appropriately can make some routines more efficient, for example when you need to repeatedly reference a dataset from a large table or a commonly used table. For one-off events, however, it is best to use an export table.

23. When creating a temporary table, if a large amount of data is inserted at once, use select into instead of create table, to avoid generating a large volume of log and to improve speed; if the amount of data is small, then to ease the load on system table resources, create the table first and then insert.

24. If temporary tables are used, be sure to explicitly delete all of them at the end of the stored procedure: first truncate table, then drop table. This avoids holding locks on system tables for long periods.

25. Avoid using cursors where possible, because cursors are inefficient; if a cursor operates on more than 10,000 rows of data, a rewrite should be considered.

26. Before using a cursor-based or temporary-table-based method, first look for a set-based solution to the problem; set-based approaches are usually more efficient.
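The contrast between the two styles can be sketched as follows, using SQLite in memory; the schema is invented for the example, and the point is the shape of the code, not the timing.

```python
import sqlite3

# Sketch: row-by-row accumulation (the cursor pattern) vs a single
# set-based aggregate computed entirely inside the engine.
conn = sqlite3.connect(":memory:")
conn.execute("create table sales (amount integer)")
conn.executemany("insert into sales values (?)", [(10,), (20,), (30,)])

# Cursor style: fetch each row and accumulate in application code.
total_cursor = 0
for (amount,) in conn.execute("select amount from sales"):
    total_cursor += amount

# Set-based style: one statement, one round trip, one result.
total_set = conn.execute("select sum(amount) from sales").fetchone()[0]

print(total_cursor, total_set)  # 60 60
```

Both produce the same total, but the set-based form pushes the work into the engine, which is usually where it belongs.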

27. As with temporary tables, cursors are not unusable. Using fast_forward cursors on small datasets is often better than other row-by-row methods, especially when several tables must be referenced to obtain the needed data. Routines that compute "totals" in the result set are typically faster than doing the same with cursors. If development time allows, try both the cursor-based and the set-based approach and see which works better.

28. Set SET NOCOUNT ON at the beginning of all stored procedures and triggers, and SET NOCOUNT OFF at the end. There is no need to send a DONE_IN_PROC message to the client after every statement of a stored procedure or trigger.

29. Try to avoid returning large amounts of data to the client; if the data volume is too large, consider whether the corresponding requirement is reasonable.

30. Try to avoid large transactions, to improve the system's concurrency.
