Some common performance problems in SQL Server:
1. When optimizing a query, try to avoid full table scans. First consider creating indexes on the columns used in the WHERE and ORDER BY clauses.
2. Try to avoid using left joins and null values. A left join consumes more resources than an inner join because it also returns rows that have no match (null) on the right side. Often you can rewrite the query so that it no longer uses a left join while still returning the same results.
Assume there are two example tables:
Product (product_id int not null, product_type_id int null, ...) -- in the Product table, product_id is an integer greater than 0; product_type_id references the Product_type table but is nullable because some products have no category.
Product_type (product_type_id int not null, product_type_name null, ...) -- the product category table.
If you want to join the two tables and query the product information, the first idea is usually a left join (so that products without a category are still returned). However, there is a way to avoid the left join:
Add a record to Product_type: (0, '', ...), change product_type_id in Product to not null, and set product_type_id to 0 for products that have no category. The query can then use an inner join.
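The rewrite described above can be sketched as follows; the column lists are illustrative, since the full table definitions are elided in the text:

```sql
-- Sentinel row representing "no category":
INSERT INTO Product_type (product_type_id, product_type_name) VALUES (0, '');

-- Point uncategorized products at the sentinel, then make the column mandatory:
UPDATE Product SET product_type_id = 0 WHERE product_type_id IS NULL;
ALTER TABLE Product ALTER COLUMN product_type_id int NOT NULL;

-- Now an inner join returns every product, including uncategorized ones:
SELECT p.product_id, pt.product_type_name
FROM Product p
INNER JOIN Product_type pt ON pt.product_type_id = p.product_type_id;
```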
3. Try to avoid using the != or <> operator in the WHERE clause; otherwise the engine may abandon the index and perform a full table scan.
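As a sketch, assuming a hypothetical table t with an index on num, the inequality can often be rewritten as two range predicates that remain index-friendly:

```sql
-- This predicate may force a full table scan:
SELECT id FROM t WHERE num <> 100;

-- Equivalent rewrite: each branch can use an index range seek:
SELECT id FROM t WHERE num < 100
UNION ALL
SELECT id FROM t WHERE num > 100;
```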
4. Try to avoid using OR to connect conditions in the WHERE clause; otherwise the engine may abandon the index and perform a full table scan.
For a table T with indexes on columns key1 and key2, a query with OR across both columns can be rewritten as a UNION ALL of two indexed queries:
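A minimal sketch of the rewrite, assuming the table T and indexed columns key1 and key2 mentioned above (the literal values are illustrative):

```sql
-- The OR form may fall back to a full table scan:
SELECT * FROM T WHERE key1 = 10 OR key2 = 20;

-- Rewrite: each branch can use its own index; the extra predicate
-- in the second branch avoids returning rows matching both conditions twice:
SELECT * FROM T WHERE key1 = 10
UNION ALL
SELECT * FROM T WHERE key2 = 20 AND key1 <> 10;
```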
5. Use IN and NOT IN with caution; they can also lead to a full table scan. For example:
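A sketch of the common alternatives, assuming hypothetical tables t and s with a numeric column num:

```sql
-- For a continuous range of values, BETWEEN is preferable to IN:
SELECT id FROM t WHERE num IN (1, 2, 3);
SELECT id FROM t WHERE num BETWEEN 1 AND 3;

-- For subqueries, EXISTS / NOT EXISTS is often a better choice than IN / NOT IN:
SELECT id FROM t WHERE EXISTS (SELECT 1 FROM s WHERE s.num = t.num);
SELECT id FROM t WHERE NOT EXISTS (SELECT 1 FROM s WHERE s.num = t.num);
```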
7. Using a local variable in the WHERE clause can also cause a full table scan. SQL Server resolves local variables only at run time, but the optimizer cannot defer the choice of access plan to run time; it must choose the plan at compile time. At compile time, however, the variable's value is still unknown, so it cannot be used as an input for index selection. The following statement performs a full table scan:
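A sketch of the problem and two possible workarounds, assuming a hypothetical table t with an index on num (the index name ix_t_num is invented for illustration):

```sql
DECLARE @num int = 100;
SELECT id FROM t WHERE num = @num;  -- plan is compiled before @num is known

-- Workaround 1: force the index with a table hint:
SELECT id FROM t WITH (INDEX(ix_t_num)) WHERE num = @num;

-- Workaround 2: recompile at run time so the actual value can drive the plan:
SELECT id FROM t WHERE num = @num OPTION (RECOMPILE);
```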
11. When using an indexed field as a condition, if the index is a composite index, the first (leading) column of the index must appear in the condition; otherwise the index will not be used. The order of fields in the condition should also match the index column order as much as possible.
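A minimal sketch, assuming a hypothetical table t with columns col1 and col2:

```sql
CREATE INDEX ix_t_col1_col2 ON t (col1, col2);

SELECT id FROM t WHERE col1 = 1 AND col2 = 2;  -- leading column present: index seek
SELECT id FROM t WHERE col2 = 2;               -- leading column missing: no seek
```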
12. Do not write meaningless queries. For example, to generate an empty table structure:
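A sketch of the wasteful pattern and its replacement; the table and column definitions are illustrative:

```sql
-- Wasteful: runs a query that can never return rows just to copy a structure:
SELECT col1, col2 INTO #t FROM t WHERE 1 = 0;

-- Better: declare the structure directly:
CREATE TABLE #t (col1 int, col2 varchar(50));
```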
15. More indexes are not always better. Although an index can improve the efficiency of SELECT statements, it also reduces the efficiency of INSERT, UPDATE, and DELETE, because the index may have to be maintained during insert or update operations. Therefore, consider carefully how to create indexes, depending on the actual situation. It is recommended that a table have no more than 6 indexes; if there are more, consider whether the indexes on rarely used columns are really necessary.
16. Avoid updating clustered-index key columns as much as possible, because the order of the clustered-index columns is the physical storage order of the table's rows. Once a key value changes, the affected rows must be physically reordered, which consumes considerable resources. If the application needs to update the clustered-index columns frequently, consider whether the index should be clustered at all.
17. Use numeric fields whenever possible. If a field that contains only numeric information is designed as a character field instead, query and join performance suffers and storage overhead increases, because the engine compares strings character by character during queries and joins, whereas a numeric type needs only a single comparison.
18. Use varchar/nvarchar instead of char/nchar as much as possible. First, variable-length fields use less storage space (a fixed-length field occupies its full length even when the data is null, in version 7.0 and later). Second, searching within a smaller field is obviously more efficient, and more rows fit on each 8 KB page, which also reduces I/O and improves performance.
19. Do not use SELECT * FROM t anywhere. Replace "*" with a specific column list, and do not return any columns that are not used.
20. Try to use table variables instead of temporary tables. Note, however, that if a table variable contains a large amount of data, indexing on it is very limited (only a primary key index).
21. Avoid frequent creation and deletion of temporary tables to reduce the consumption of system table resources.
22. Temporary tables are not unusable; using them appropriately can make some routines more effective, for example when you need to repeatedly reference a large table or a data set from commonly used tables. However, for one-off operations, an export table is a better choice.
23. When creating a temporary table, if a large amount of data is inserted at once, use SELECT INTO instead of CREATE TABLE followed by INSERT, to avoid generating a large volume of log records and to increase speed. If the data volume is small, create the table first and then insert, to ease the load on system table resources.
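A sketch of both patterns; the table and column names are illustrative, and how much logging SELECT INTO avoids depends on the database's recovery model:

```sql
-- Large data set: SELECT INTO creates and fills the table in one step,
-- which can be minimally logged:
SELECT col1, col2 INTO #big FROM source_table;

-- Small data set: create the structure first, then insert:
CREATE TABLE #small (col1 int, col2 varchar(50));
INSERT INTO #small (col1, col2) SELECT col1, col2 FROM source_table;
```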
24. If temporary tables are used, explicitly delete all of them at the end of the stored procedure: TRUNCATE TABLE first and then DROP TABLE, to avoid locking system tables for a long time.
25. Avoid using cursors whenever possible, because cursor efficiency is poor. If a cursor operates on more than 10,000 rows of data, you should consider rewriting it.
26. Before using a cursor-based or temporary-table method, first look for a set-based solution to the problem; set-based methods are generally more effective.
27. Like temporary tables, cursors are not inherently unusable. Using a FAST_FORWARD cursor on a small data set is often better than other row-by-row processing methods, especially when several tables must be referenced to obtain the required data. Routines that compute totals in the result set are usually faster than using a cursor. If development time permits, try both the cursor-based and the set-based method and see which works better.
28. Set NOCOUNT ON at the beginning of all stored procedures and triggers, and set NOCOUNT OFF at the end. This avoids sending a DONE_IN_PROC message to the client after each statement of the stored procedure or trigger.
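A minimal sketch of the recommended procedure template (the procedure name is invented for illustration):

```sql
CREATE PROCEDURE dbo.usp_example
AS
BEGIN
    SET NOCOUNT ON;   -- suppress "N rows affected" (DONE_IN_PROC) messages

    -- ... body of the procedure ...

    SET NOCOUNT OFF;
END;
```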
29. Avoid large transactions as much as possible, to improve the system's concurrency. When both a constraint and a trigger could accomplish the same function, prefer the constraint.
30. Avoid returning large amounts of data to the client as much as possible. If the data volume is too large, consider whether the corresponding requirement is reasonable.