When querying database records with SQL, there are many different ways to retrieve the same content.
However, even though a variety of statements can return identical results, their execution costs can differ considerably. We therefore have to consider carefully which statement performs best for the result we want.
This is SQL statement optimization.
The following optimization guidelines are aimed at SQL databases; the examples use SQL Server syntax.
1. To optimize a query, try to avoid full table scans; first consider building indexes on the columns involved in the WHERE and ORDER BY clauses, as in the sketch below.
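A minimal sketch, reusing table t and the num and createdate columns that appear in the later examples:
CREATE INDEX ix_t_num ON t (num);
CREATE INDEX ix_t_createdate ON t (createdate);
-- both the filter column and the sort column are now backed by an index
SELECT id FROM t WHERE num = 0 ORDER BY createdate;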
2. Try to avoid testing a field for null in the WHERE clause, otherwise the engine will abandon the index and perform a full table scan, such as:
Select ID from t where num is null
You can set a default value of 0 on num, make sure the num column in the table never contains a null value, and then query instead:
Select ID from t where num=0
3. Try to avoid using the != or <> operator in the WHERE clause, otherwise the engine will abandon the index and perform a full table scan; one possible rewrite is sketched below.
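As a sketch (the value 10 is only illustrative, and whether the rewrite actually helps depends on the data distribution), a negative condition on num can be split into two ranges joined with UNION ALL, in the same spirit as item 4 below:
-- may abandon the index:
Select ID from t where num<>10
-- range form that can use an index on num:
Select ID from t where num<10
UNION ALL
Select ID from t where num>10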
4. Try to avoid using OR to join conditions in the WHERE clause, otherwise the engine will abandon the index and perform a full table scan, such as:
Select ID from t where num=10 or num=20
This can be changed to the following query:
Select ID from t where num=10
UNION ALL
Select ID from t where num=20
5. IN and NOT IN should also be used with caution, otherwise they can cause a full table scan, such as:
Select ID from t where num in (1,2,3)
For consecutive values, you can use between instead of in:
Select ID from t where num between 1 and 3
6. The following query will also cause a full table scan:
Select ID from t where name like '%abc%'
To be more efficient, consider a full-text index, sketched below.
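A rough sketch on SQL Server (the catalog name ft_catalog and the key index name PK_t are assumptions; note that a CONTAINS prefix search matches words that begin with 'abc', not arbitrary substrings):
CREATE FULLTEXT CATALOG ft_catalog AS DEFAULT;
CREATE FULLTEXT INDEX ON t(name) KEY INDEX PK_t;   -- PK_t: a unique, single-column, non-nullable index on t
Select ID from t where CONTAINS(name, '"abc*"')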
7. Using a local variable as a parameter in the WHERE clause also causes a full table scan. SQL resolves local variables only at run time, but the optimizer cannot defer the choice of access plan to run time; it must choose at compile time. At compile time, however, the value of the variable is still unknown, so it cannot be used as an input for index selection. The following statement will perform a full table scan:
Select ID from t where num=@num
You can force the query to use the index instead:
Select ID from t with (index(index_name)) where num=@num
8. Try to avoid performing expression operations on a field in the WHERE clause, which causes the engine to abandon the index and perform a full table scan. Such as:
Select ID from t where num/2=100
should read:
Select ID from t where num=100*2
9. Try to avoid performing function operations on a field in the WHERE clause, which will cause the engine to abandon the index and perform a full table scan. Such as:
Select ID from t where SUBSTRING(name,1,3)='abc'    --query the IDs in table t whose name starts with 'abc' (name is a field)
Select ID from t where DATEDIFF(day,createdate,'2010-11-30')=0    --query the IDs in table t generated on '2010-11-30' (createdate is a field)
should read:
Select ID from t where name like 'abc%'
Select ID from t where createdate>='2010-11-30' and createdate<'2010-12-1'
10. Do not perform functions, arithmetic operations, or other expression operations on the left side of the "=" in the WHERE clause, otherwise the system may not be able to use the index correctly.
11. When using an indexed field as a condition, if the index is a composite index, the first field of the index must appear in the condition for the system to use the index; otherwise the index will not be used. The field order should also match the index order as much as possible, as in the sketch below.
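A minimal sketch, assuming a hypothetical composite index on (col1, col2) of table t:
CREATE INDEX ix_t_col1_col2 ON t (col1, col2);
-- can seek on the index: the leading column col1 appears in the condition
Select ID from t where col1=5 and col2=10
Select ID from t where col1=5
-- generally cannot seek on the index: the leading column is missing
Select ID from t where col2=10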
12. Do not write meaningless queries, for example when you need to generate an empty table structure:
Select Col1,col2 into #t from T where 1=0
This type of code does not return any result sets, but consumes system resources and should be changed to this:
CREATE TABLE #t (...)
13. In many cases, using EXISTS instead of IN is a good choice:
Select num from a where num in (Select num from b)
can be replaced with the following statement:
Select num from a where exists (select 1 from b where num=a.num)
14. Not all indexes are effective for every query. SQL optimizes queries based on the data in the table; when an indexed column contains a large amount of duplicated data, the query may not use the index at all. For example, if a table has a sex field that is roughly half male and half female, building an index on sex does nothing for query efficiency.
15. More indexes are not always better. An index can improve the efficiency of the corresponding SELECT, but it also reduces the efficiency of INSERT and UPDATE, because the indexes may have to be rebuilt on INSERT or UPDATE. How to build indexes therefore needs careful consideration and depends on the situation. The number of indexes on a table should preferably not exceed 6; if there are more, consider whether the rarely used ones are really necessary.
16. Avoid updating clustered index data columns as much as possible, because the order of the clustered index columns is the physical storage order of the table records; once a column value changes, the order of the whole table's records has to be adjusted, which consumes considerable resources. If the application needs to update clustered index columns frequently, consider whether the index should be built as a clustered index at all.
17. Use numeric fields as much as possible. If a field contains only numeric values, do not design it as a character type; this reduces query and join performance and increases storage overhead. The engine compares strings character by character when processing queries and joins, whereas a numeric value needs to be compared only once.
18. Use varchar/nvarchar instead of char/nchar as much as possible. Variable-length fields take less storage space, which saves storage, and for queries, searching within a smaller field is obviously more efficient.
19. Do not use SELECT * FROM t anywhere; replace the "*" with the specific field names and do not return any fields you do not actually use.
20. Try to use table variables instead of temporary tables. If a table variable will contain a large amount of data, be aware that its indexes are very limited (only the index created by the primary key); see the sketch below.
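A minimal sketch, assuming the rows are copied from table t; the only index the table variable gets here is the one implied by its PRIMARY KEY constraint:
DECLARE @ids TABLE (id INT PRIMARY KEY);   -- PRIMARY KEY supplies the variable's only index
INSERT INTO @ids (id)
SELECT id FROM t WHERE num = 0;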
21. Avoid frequent creation and deletion of temporary tables to reduce the consumption of system table resources.
22. Temporary tables are not unusable; using them appropriately can make certain routines more efficient, for example when you need to repeatedly reference a data set in a large table or a commonly used table. For one-off operations, however, it is better to use an export table.
23. When creating a temporary table, if a large amount of data is inserted at once, use SELECT INTO instead of CREATE TABLE to avoid generating a large amount of log and to improve speed; if the amount of data is small, create the table first and then INSERT, to ease the load on the system tables. Both paths are sketched below.
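A rough sketch of both paths, reusing table t and the #t temporary table from the earlier examples (the column choices are assumptions):
-- large one-off load: SELECT INTO creates and fills the temporary table in one step
SELECT col1, col2 INTO #t FROM t WHERE createdate >= '2010-11-30';
-- small load: create the table first, then insert
CREATE TABLE #s (id INT, col1 VARCHAR(50));
INSERT INTO #s (id, col1)
SELECT id, col1 FROM t WHERE num = 0;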
24. If temporary tables are used, be sure to explicitly delete all of them at the end of the stored procedure: TRUNCATE TABLE first, then DROP TABLE, which avoids holding locks on the system tables for a long time, for example:
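Continuing the sketch above for the hypothetical #t and #s tables:
TRUNCATE TABLE #t;
DROP TABLE #t;
TRUNCATE TABLE #s;
DROP TABLE #s;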
25. Avoid using cursors as much as possible, because cursors are inefficient; if a cursor is operating on more than 10,000 rows of data, consider rewriting it.
26. Before using a cursor-based or temporary-table-based method, look for a set-based solution to the problem first; the set-based approach is usually more efficient.
27. As with temporary tables, cursors are not unusable. Using FAST_FORWARD cursors on small data sets is often preferable to other row-by-row processing methods, especially when several tables must be referenced to obtain the required data. Routines that include "totals" in the result set are usually faster than doing the same work with a cursor. If development time permits, try both the cursor-based and the set-based approach and see which works better; a comparison is sketched below.
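A minimal sketch of the two approaches for a per-num total on table t (the column choices are assumptions); the set-based form is usually the one to prefer:
-- set-based: one aggregate query
SELECT num, SUM(col1) AS total
FROM t
GROUP BY num;

-- cursor-based: FAST_FORWARD, only when row-by-row logic is unavoidable
DECLARE @id INT;
DECLARE c CURSOR FAST_FORWARD FOR
    SELECT id FROM t WHERE num = 0;
OPEN c;
FETCH NEXT FROM c INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- per-row work would go here
    FETCH NEXT FROM c INTO @id;
END;
CLOSE c;
DEALLOCATE c;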
28. Set NOCOUNT ON at the beginning of all stored procedures and triggers, and set NOCOUNT OFF at the end. There is no need to send a DONE_IN_PROC message to the client after every statement of a stored procedure or trigger, as in the sketch below.
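A minimal sketch (the procedure name and its body are assumptions):
CREATE PROCEDURE dbo.usp_example
AS
BEGIN
    SET NOCOUNT ON;    -- suppress the per-statement DONE_IN_PROC messages
    UPDATE t SET num = 0 WHERE num IS NULL;
    SELECT id, num FROM t WHERE num = 0;
    SET NOCOUNT OFF;
END;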
29. Try to avoid large transaction operations, to improve the system's concurrency.
30. Try to avoid returning large amounts of data to the client; if the data volume is too large, consider whether the corresponding requirement is reasonable.