Methods for improving query speed when processing tens of millions of rows of data

How to handle tens of millions of rows of data and improve query speed:
1. Try to avoid using the != or <> operators in the WHERE clause; otherwise the engine will abandon the index and perform a full table scan.
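As an illustrative sketch of point 1 (using the same hypothetical table t and column num as the examples below, and assuming num never contains NULL), a condition such as:
Select ID from t where num <> 10
can often be rewritten as two range conditions combined with UNION ALL, in the spirit of point 4, so each branch can use the index:
Select ID from t where num < 10
UNION ALL
Select ID from t where num > 10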
2. To optimize a query, try to avoid full table scans; first consider creating indexes on the columns involved in WHERE and ORDER BY.
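For example, a minimal sketch of such an index on the filtering column num of table t (the index name IX_t_num is made up; the columns you actually index should depend on your own queries):
Create index IX_t_num on t (num)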
3. Try to avoid testing a field for NULL in the WHERE clause; otherwise the engine will abandon the index and perform a full table scan. For example:
Select ID from t where num is null
You can set a default value of 0 on num to ensure that the num column in the table never contains NULL, and then query like this:
Select ID from t where num=0
4. Try to avoid using OR to join conditions in the WHERE clause; otherwise the engine will abandon the index and perform a full table scan. For example:
Select ID from t where num=10 or num=20
You can query this way:
Select ID from t where num=10
UNION ALL
Select ID from t where num=20
5. The following query will also result in a full table scan (a leading percent sign prevents the index from being used):
Select ID from t where name like '%abc%'
To improve efficiency, consider using full-text search instead.
6. IN and NOT IN should also be used with caution; otherwise they will result in a full table scan. For example:
Select ID from t where num in (1,2,3)
For consecutive values, you can use between instead of in:
Select ID from t where num between 1 and 3
7. Using parameters in the WHERE clause also results in a full table scan. SQL resolves local variables only at run time, but the optimizer cannot defer the choice of access plan to run time; it must be chosen at compile time. When the access plan is built at compile time, the value of the variable is still unknown, so it cannot be used as an input for index selection. The following statement will perform a full table scan:
Select ID from t where num = @num
You can instead force the query to use the index:
Select ID from t with (index(index_name)) where num = @num
8. Try to avoid performing expression operations on fields in the WHERE clause; this will cause the engine to abandon the index and perform a full table scan. For example:
Select ID from t where num/2=100
should read:
Select ID from t where num=100*2
9. Try to avoid performing function operations on fields in the WHERE clause; this causes the engine to abandon the index and perform a full table scan. For example:
Select ID from t where substring(name, 1, 3) = 'abc' -- IDs whose name begins with 'abc'
Select ID from t where datediff(day, createdate, '2005-11-30') = 0 -- IDs generated on '2005-11-30'
should read:
Select ID from t where name like 'abc%'
Select ID from t where createdate >= '2005-11-30' and createdate < '2005-12-1'
10. Do not perform functions, arithmetic operations, or other expression operations on the left side of "=" in the WHERE clause, otherwise the system may not be able to use the index correctly.
11. When using an indexed field as a condition, if the index is a composite index, the first field of the index must appear in the condition for the system to use the index; otherwise the index will not be used. The order of the fields in the condition should also be consistent with the index order as much as possible.
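As a hedged sketch of point 11, suppose a composite index exists on (num, createdate) of table t (the index name and column combination are assumed for illustration):
Create index IX_t_num_createdate on t (num, createdate)
-- can use the index: the leading column num appears in the condition
Select ID from t where num = 10 and createdate >= '2005-11-30'
-- may not use the index: the leading column num is missing from the condition
Select ID from t where createdate >= '2005-11-30'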
12. Do not write meaningless queries, for example generating an empty table structure like this:
Select col1, col2 into #t from t where 1=0
This kind of code does not return any result set but still consumes system resources; it should be changed to:
CREATE TABLE #t (...)
13. It is often a good choice to use EXISTS instead of IN. A query such as:
Select num from a where num in (select num from B)
Replace with the following statement:
Select num from a where exists (select 1 from b where num=a.num)
14. Not all indexes are effective for every query. SQL optimizes queries based on the data in the table, and a query may not use an index when the indexed column contains a large amount of duplicate data. For example, if a table has a sex field whose values are roughly half male and half female, building an index on sex does nothing for query efficiency.
15. More indexes are not always better. Although indexes can improve the efficiency of the corresponding SELECT, they also reduce the efficiency of INSERT and UPDATE, because an INSERT or UPDATE may have to rebuild the indexes. How to build indexes therefore needs careful consideration, depending on the circumstances. It is best not to have more than 6 indexes on one table; if there are more, consider whether indexes on rarely used columns are really necessary.
16. Avoid updating clustered index data columns as much as possible, because the order of the clustered index columns is the physical storage order of the table's records; a change in a column value that reorders the entire table can be very expensive. If the application frequently updates clustered index columns, consider whether the index should be built as a clustered index at all.
17. Use numeric fields as much as possible. If a field contains only numeric information, do not design it as a character type; this reduces the performance of queries and joins and increases storage overhead. The reason is that the engine compares strings one character at a time when processing queries and joins, whereas a numeric value needs only a single comparison.
18. Use varchar/nvarchar instead of char/nchar as much as possible. First, variable-length fields take less storage space; second, for queries, searching within a smaller field is clearly more efficient.
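As an illustrative sketch of points 17 and 18 (the table and column names here are made up, not from the original text):
-- numeric data stored as fixed-length character types: slower comparisons, more storage
Create table orders_char (order_no char(20), customer_name char(50), amount char(18))
-- numeric columns for numeric data, varchar for variable-length text
Create table orders_num (order_no int, customer_name varchar(50), amount decimal(18, 2))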
19. Do not use select * from t anywhere; use a specific field list instead of "*", and do not return any fields that are not used.
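For example (using the same hypothetical table t), instead of select * from t, return only the columns that are actually needed:
Select ID, num from t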
20. Use table variables instead of temporary tables as far as possible. If a table variable contains a large amount of data, note that its indexing is very limited (only a primary key index).
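A minimal sketch of the table-variable alternative (the column definitions are assumed to match the earlier examples):
Declare @t table (ID int primary key, num int)
Insert into @t (ID, num)
Select ID, num from t where num between 1 and 3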
21. Avoid frequent creation and deletion of temporary tables to reduce the consumption of system table resources.
22. Temporary tables are not unusable; using them appropriately can make certain routines more efficient, for example when you need to repeatedly reference a dataset from a large table or a commonly used table. However, for one-off events, it is better to use an export table.
23. When creating a new temporary table, if a large amount of data is inserted at once, you can use SELECT INTO instead of CREATE TABLE to avoid generating a large amount of log and improve speed; if the amount of data is small, to ease the load on system table resources, you should CREATE TABLE first and then INSERT.
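A sketch of the two approaches described in point 23 (the temporary table names and filter conditions are placeholders):
-- large amount of data: SELECT INTO creates and fills the temporary table in one step
Select ID, num into #t_big from t where num > 0
-- small amount of data: create the table first, then insert
Create table #t_small (ID int, num int)
Insert into #t_small (ID, num) Select ID, num from t where num between 1 and 3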
24. If you use temporary tables, be sure to explicitly delete all of them at the end of the stored procedure: first truncate the table, then drop it. This avoids long locks on the system tables.
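For example, at the end of a stored procedure that used a temporary table #t:
Truncate table #t
Drop table #t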
25. Avoid using cursors as much as possible, because cursors are inefficient; if a cursor operates on more than 10,000 rows of data, consider rewriting it.
26. Before using a cursor-based or temporary-table-based method, look first for a set-based solution to the problem; set-based approaches are usually more efficient.
27. As with temporary tables, cursors are not unusable. Using FAST_FORWARD cursors on small datasets is often better than other row-by-row processing methods, especially when several tables must be referenced to obtain the required data. Routines that include "totals" in the result set are usually faster than equivalents written with cursors. If development time permits, try both the cursor-based approach and the set-based approach and see which works better.
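A minimal sketch of a FAST_FORWARD cursor over a small result set (the cursor name and the per-row processing are placeholders):
Declare @ID int
Declare cur cursor fast_forward for Select ID from t where num between 1 and 3
Open cur
Fetch next from cur into @ID
While @@FETCH_STATUS = 0
Begin
    -- row-by-row processing would go here
    Fetch next from cur into @ID
End
Close cur
Deallocate cur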
28. Set SET NOCOUNT ON at the beginning of all stored procedures and triggers, and SET NOCOUNT OFF at the end. There is no need to send a DONE_IN_PROC message to the client after each statement of a stored procedure or trigger.
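A sketch of that pattern in a stored procedure (the procedure name and body are placeholders):
Create procedure usp_example
As
Begin
    Set nocount on
    Select ID from t where num = 10
    Set nocount off
End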
29. Try to avoid returning large amounts of data to the client; if the data volume is too large, consider whether the corresponding requirement is reasonable.
30. Try to avoid large transactions in order to improve the concurrency of the system.
