SQL Tuning Recommendations

1. To optimize a query, try to avoid full table scans; first consider creating indexes on the columns involved in the WHERE and ORDER BY clauses.


2. Try to avoid checking fields for NULL values in the WHERE clause; otherwise the engine will abandon the index and perform a full table scan, such as:

Select ID from t where num is null

It is best not to leave NULL values in the database; fill the database with NOT NULL columns as much as possible.

Fields such as remarks, descriptions, and comments can be set to NULL; for other fields, it is best not to use NULL.

Do not assume that NULL requires no space. For a fixed-length type such as char(100), the space is allocated when the field is created: it occupies 100 characters of space regardless of whether a value is inserted (NULL included). For a variable-length field such as varchar, NULL occupies no space.


You can set a default value of 0 on num, make sure the num column in the table never contains NULL, and then query like this:

Select ID from t where num = 0


3. Try to avoid using the != or <> operator in the WHERE clause; otherwise the engine will abandon the index and perform a full table scan.
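
As a hedged illustration, reusing the hypothetical table t and indexed column num from the examples above, a query such as

Select ID from t where num <> 10

can often be rewritten as two range conditions combined with UNION ALL, both of which the index can serve:

Select ID from t where num < 10
union all
Select ID from t where num > 10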

4. Try to avoid using OR to join conditions in the WHERE clause. If one of the fields has an index and the other does not, the engine will abandon the index and perform a full table scan, such as:

Select ID from t where num = 10 or Name = 'admin'

You can query it like this instead:

Select ID from t where num = 10
union all
Select ID from t where Name = 'admin'


5. IN and NOT IN should also be used with caution; otherwise they will cause a full table scan, such as:

Select ID from t where num in (1,2,3)

For consecutive values, you can use between instead of in:

Select ID from t where num between 1 and 3

In many cases, using EXISTS instead of IN is a good choice:

Select num from a where num in (select num from B)

Replace with the following statement:

Select num from a where exists (Select 1 from B where num = a.num)

6. The following query will also cause a full table scan:

Select ID from t where name like '%abc%'

To be more efficient, consider a full-text index instead; a sketch follows.
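
A minimal sketch, assuming a full-text catalog and a full-text index already exist on the name column of t; CONTAINS can then replace the leading-wildcard LIKE:

Select ID from t where contains(name, 'abc')

Note that full-text search matches words and prefixes rather than arbitrary substrings, so the semantics are close to, but not identical with, '%abc%'.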

7. Using a parameter (local variable) in the WHERE clause also causes a full table scan, because SQL resolves local variables only at run time, while the optimizer cannot defer the choice of access plan until run time; it must choose at compile time. At compile time, however, the value of the variable is still unknown, so it cannot be used as an input for index selection. The following statement will perform a full table scan:

Select ID from t where num = @num

You can force the query to use the index instead:

Select ID from t with (index(index_name)) where num = @num

8. Try to avoid performing expression operations on fields in the WHERE clause; this will cause the engine to abandon the index and perform a full table scan. Such as:

Select ID from t where num/2 = 100

should read:

Select ID from t where num = 100*2


9. Try to avoid performing function operations on fields in the WHERE clause; this will cause the engine to abandon the index and perform a full table scan. Such as:

Select ID from t where substring(name, 1, 3) = 'abc'    -- IDs whose name starts with 'abc'

Select ID from t where datediff(day, createdate, '2005-11-30') = 0    -- IDs generated on '2005-11-30'

should read:

Select ID from t where name like 'abc%'

Select ID from t where createdate >= '2005-11-30' and createdate < '2005-12-1'


10. Do not perform functions, arithmetic operations, or other expression operations on the left side of the "=" in the WHERE clause; otherwise the system may not be able to use the index correctly.

11. When using an indexed field as a condition, if the index is a composite index, the first field of the index must appear in the condition to guarantee that the system uses the index; otherwise the index will not be used. The order of the fields should also be consistent with the index order as much as possible.
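
For example, with a hypothetical composite index on t(num, name), the first query below can seek on the index, while the second skips the leading column and cannot:

create index ix_t_num_name on t (num, name)

Select ID from t where num = 10 and name = 'admin'   -- includes the leading column num; the index can be used
Select ID from t where name = 'admin'                -- leading column missing; the index is not used for a seek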

12. Do not write meaningless queries, for example when you only need to generate an empty table structure:

Select col1, col2 into #t from t where 1 = 0

This type of code does not return any result sets, but consumes system resources and should be changed to this:
CREATE TABLE #t (...)

13. In an UPDATE statement, if you are changing only one or two fields, do not update all fields; otherwise frequent calls will cause significant performance overhead and generate a large number of logs.
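
A minimal sketch of the point, assuming only the name column has actually changed:

-- update only the column that changed
update t set name = 'admin' where ID = 1

-- avoid rewriting every column of the row when only one value changed
-- update t set name = 'admin', num = num, createdate = createdate where ID = 1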

14. When joining several tables with large data volumes (here a few hundred rows already counts as large), paginate first and then join; otherwise logical reads will be very high and performance will be poor.
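
One possible sketch, with hypothetical table and column names: take one page of the driving table first, then join only that much smaller page to the other large table:

-- page the driving table first
Select top (100) ID
into #page
from t
order by ID

-- then join only the page
Select p.ID, b.num
from #page p
join B b on b.ID = p.ID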

15. Select count(*) from table: a count like this, without any condition, causes a full table scan; if it has no business meaning, it must be eliminated.
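
If the query was only meant to check whether any rows exist at all, a hedged alternative is an EXISTS test, which can stop at the first matching row:

if exists (Select 1 from t)
    print 'table t is not empty'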


16. Indexes are not the more the better. Although an index can improve the efficiency of the corresponding SELECT, it also reduces the efficiency of INSERT and UPDATE, because the indexes may need to be rebuilt on INSERT or UPDATE. How to build indexes therefore needs careful consideration, depending on the situation. The number of indexes on a table should preferably not exceed 6; if there are more, consider whether indexes on columns that are rarely used are really necessary.

17. Avoid updating clustered index columns as much as possible, because the order of the clustered index columns is the physical storage order of the table records; once a column value changes, the order of the entire table's records has to be adjusted, which costs considerable resources. If your application needs to update clustered index columns frequently, consider whether the index should be built as a clustered index at all.
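
A minimal sketch of the design choice, with hypothetical names: keep the clustered index on a stable key and, if the frequently updated column needs an index at all, make it nonclustered:

create clustered index ix_t_ID on t (ID)          -- stable key, rarely updated
create nonclustered index ix_t_num on t (num)     -- frequently updated column, nonclustered if indexed at all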

18. Use numeric fields as much as possible. If a field contains only numeric information, try not to design it as a character type; that degrades query and join performance and increases storage overhead. The engine compares each character of a string one by one when processing queries and joins, whereas a numeric type needs only a single comparison.

19. Use varchar/nvarchar instead of char/nchar as much as possible. First, variable-length fields have smaller storage and save space; second, for queries, searching within a relatively small field is obviously more efficient.
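
For example, in a hypothetical table definition, varchar stores only the characters actually inserted, whereas char(100) always occupies its full declared length:

create table t2
(
    ID int not null,
    description varchar(100) null   -- variable length: prefer this over char(100) for text of varying size
)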

20. Do not write SELECT * from t anywhere; replace "*" with a specific list of columns, and do not return any columns that are not actually used.

21. Try to use table variables instead of temporary tables. If a table variable contains a large amount of data, be aware that its indexes are very limited (only the primary key index).
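
A minimal sketch of a table variable standing in for a small temporary table; note that only the primary key declared on it provides an index:

declare @ids table (ID int primary key)

insert into @ids (ID)
Select ID from t where num = 0

Select ID from @ids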

22. Avoid frequently creating and dropping temporary tables, to reduce the consumption of system table resources. Temporary tables are not unusable; using them appropriately can make certain routines more efficient, for example when you need to repeatedly reference a dataset from a large table or a commonly used table. For one-off operations, however, it is better to use an export table.

23. When creating a temporary table, if a large amount of data is inserted at once, use SELECT INTO instead of CREATE TABLE, to avoid generating a large number of logs and to increase speed. If the amount of data is small, create the table first and then insert, to ease the load on system table resources.
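
A sketch of both cases, reusing the hypothetical table t:

-- large data volume: SELECT INTO avoids generating a large amount of log and is faster
Select ID, num
into #big
from t

-- small data volume: create the table first, then insert, to ease pressure on the system tables
create table #small (ID int, num int)
insert into #small (ID, num)
Select ID, num from t where num = 0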

24. If temporary tables are used, be sure to explicitly drop all of them at the end of the stored procedure: TRUNCATE TABLE first, then DROP TABLE. This avoids long locks on the system tables.
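
For example, at the end of the stored procedure:

truncate table #t
drop table #t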

25. Avoid using cursors as much as possible, because cursors are inefficient. If a cursor operates on more than 10,000 rows of data, consider rewriting it.

26. Before using a cursor-based method or a temporary-table method, first look for a set-based solution to the problem; the set-based approach is usually more efficient.

27. Like temporary tables, cursors are not unusable. Using FAST_FORWARD cursors on small datasets is often preferable to other row-by-row processing methods, especially when several tables must be referenced to obtain the required data. Routines that include "totals" in the result set are usually faster than those using cursors. If development time permits, try both the cursor-based approach and the set-based approach and see which one works better.
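
A minimal sketch of a FAST_FORWARD cursor over a small result set (names are hypothetical):

declare @ID int

declare c cursor fast_forward for
    Select ID from t where num = 0

open c
fetch next from c into @ID
while @@fetch_status = 0
begin
    print cast(@ID as varchar(20))   -- row-by-row work would go here
    fetch next from c into @ID
end
close c
deallocate c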


28. Add SET NOCOUNT ON at the beginning of all stored procedures and triggers, and SET NOCOUNT OFF at the end. There is no need to send a DONE_IN_PROC message to the client after each statement of a stored procedure or trigger.
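
For example, a stored procedure skeleton following this recommendation (the procedure name is hypothetical):

create procedure usp_get_ids
as
begin
    set nocount on

    Select ID from t where num = 0

    set nocount off
end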

29. Try to avoid large transaction operations, to improve system concurrency.

30. Try to avoid returning large amounts of data to the client; if the data volume is too large, consider whether the corresponding requirement is reasonable.
