The focus of performance optimization is to locate problems, gain a better understanding of the major performance killers, and be able to pinpoint issues faster by concentrating on the most likely causes.
The SQL Server performance killers are mainly concentrated in the following categories:
1.1 Low-quality indexes
Low-quality indexes are usually the biggest SQL Server performance killers. For a query that lacks a supporting index, SQL Server must perform a large number of reads and computations, which puts significant pressure on disk, memory, and CPU, and significantly increases query execution time.
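As a sketch, assuming a hypothetical dbo.Orders table, a query that filters on CustomerID without a supporting index forces a full scan; a covering nonclustered index lets the optimizer seek instead:

```sql
-- Without an index on CustomerID, this query scans the whole table.
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42;

-- A covering nonclustered index turns the scan into a cheap seek.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate);
```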
1.2 Inaccurate statistical information
Statistics describe the distribution of data in a column referenced by a predicate and are stored in a histogram, a structure that shows how frequently values fall into different ranges. The effectiveness of an index depends entirely on the statistics of the indexed columns: if statistics are missing, the SQL Server query optimizer cannot accurately estimate the number of rows affected by a query, and the plans it produces can be very inefficient.
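The histogram behind an index can be inspected with DBCC SHOW_STATISTICS and refreshed with UPDATE STATISTICS (the table and index names below are placeholders):

```sql
-- Display the header, density vector, and histogram for an index's statistics.
DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_CustomerID');

-- Rebuild the statistics from a full scan of the table.
UPDATE STATISTICS dbo.Orders IX_Orders_CustomerID WITH FULLSCAN;
```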
1.3 Excessive blocking and deadlock
SQL Server fully supports atomicity, consistency, isolation, and durability (ACID), so the database engine ensures that concurrent transactions are properly isolated from one another. By default, one transaction sees data either as it was before another transaction modified it or as it is after the modification; it never sees an intermediate state.
Because of this isolation, blocking occurs when multiple transactions concurrently access a shared resource in incompatible ways. A deadlock occurs when two transactions each hold a lock the other needs, for example while trying to upgrade or extend their locks, and each waits for the other to release it. The database engine picks the transaction that is cheapest to roll back as the deadlock victim; the victim must resubmit its request to complete, which adds further overhead.
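Blocking that is currently in progress can be observed through the dynamic management views; a minimal sketch using sys.dm_exec_requests:

```sql
-- List requests that are currently blocked and the session blocking them.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;
```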
1.4 No DataSet-based operations
T-SQL is a set-based language that manipulates data as sets of rows. This requires thinking about problems in terms of whole columns and sets rather than individual rows. Instead of cursors and loops, use joins and subqueries wherever possible.
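For example, a row-by-row loop that recomputes a total per customer can usually be replaced with a single join-based statement (table and column names here are illustrative):

```sql
-- Set-based: one UPDATE with a join, instead of looping over customers one at a time.
UPDATE c
SET    c.TotalSpent = t.Amount
FROM   dbo.Customers AS c
JOIN  (SELECT CustomerID, SUM(Amount) AS Amount
       FROM dbo.Orders
       GROUP BY CustomerID) AS t
  ON   t.CustomerID = c.CustomerID;
```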
1.5 Low-quality query design
The effectiveness of an index depends on how the SQL query is written: if a query reads too many rows from a table, or its filter returns a much larger result set than needed, the index becomes useless. To make good use of indexes, write high-quality SQL queries and fetch only the data you actually need.
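One common way a query defeats an otherwise good index is wrapping the indexed column in a function, which prevents an index seek; a sketch against a hypothetical Orders table:

```sql
-- Non-SARGable: the function on OrderDate blocks an index seek.
SELECT OrderID FROM dbo.Orders
WHERE YEAR(OrderDate) = 2023;

-- SARGable rewrite: a range predicate on the bare column can use the index.
SELECT OrderID FROM dbo.Orders
WHERE OrderDate >= '20230101' AND OrderDate < '20240101';
```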
1.6 Low-quality database design
The database should be properly normalized to improve retrieval performance and reduce blocking. A poor design can lead to redundant storage of the same data, while an over-normalized database requires a very large number of joins to read it back; a sensibly normalized database is the cornerstone of high-quality queries.
1.7 Excessive fragments
The basic unit of data storage is the page. Frequent page splits leave pages with empty space that cannot hold additional rows; this scattering of data across partially filled pages is called fragmentation, and it increases the number of read operations (pages are read one at a time), which hurts performance.
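Fragmentation can be measured with sys.dm_db_index_physical_stats and addressed with ALTER INDEX; the thresholds below are a common rule of thumb, not a fixed rule, and the index name is a placeholder:

```sql
-- Check average fragmentation per index in the current database.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id;

-- Common guidance: REORGANIZE for light fragmentation, REBUILD for heavy.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;  -- roughly 5-30%
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;     -- roughly > 30%
```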
1.8 Non-reusable execution plan
To execute queries efficiently, SQL Server generates an optimized execution plan at compile time and caches it in memory so it can be reused. However, if a query embeds literal values instead of parameters, resubmitting the same query with different values forces SQL Server to build a new execution plan each time, wasting CPU on compilation. Therefore, submit SQL queries in a form that lets SQL Server cache and reuse execution plans.
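One standard way to get plan reuse for ad hoc queries is sp_executesql with explicit parameters; a sketch against the hypothetical Orders table:

```sql
-- Literal values embedded in the text produce a different plan per value:
-- SELECT OrderID FROM dbo.Orders WHERE CustomerID = 42;  /* recompiled for 43, 44, ... */

-- Parameterized with sp_executesql, one cached plan is reused for every value.
EXEC sys.sp_executesql
     N'SELECT OrderID FROM dbo.Orders WHERE CustomerID = @cid',
     N'@cid INT',
     @cid = 42;
```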
1.9 Low-quality execution plan
A bad execution plan can sometimes be a real killer. Bad plans are often caused by parameter sniffing, the process by which the query optimizer uses the parameter values of the first call, together with the statistics, to choose what it considers the best plan. Understanding statistics is therefore important.
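When a plan compiled for an atypical first parameter value hurts later calls, OPTION (RECOMPILE) is one common, though not free, mitigation; the procedure below is a sketch with hypothetical names:

```sql
-- Without the hint, the plan is compiled for the first @cid the procedure sees.
CREATE OR ALTER PROCEDURE dbo.GetOrders @cid INT
AS
BEGIN
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @cid
    OPTION (RECOMPILE);  -- recompile per call: trades CPU for a value-appropriate plan
END;
```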
1.10 Frequent recompilation plans
Recompiling a stored procedure rebuilds its execution plan, which is expensive, so as a general rule stored procedures should not be designed in ways that force frequent recompilation.
1.11 Incorrect use of cursors
Cursors are row-by-row, non-set-based operations that add significant overhead to SQL Server; use set-based operations whenever possible.
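As a brief illustration with a hypothetical Orders table, the per-row cursor pattern on the left collapses into a single set-based statement:

```sql
-- Cursor version (to be avoided): touches one row per iteration.
DECLARE cur CURSOR FOR
    SELECT OrderID FROM dbo.Orders WHERE Status = 'new';
-- ... OPEN cur / FETCH NEXT / per-row UPDATE / CLOSE cur / DEALLOCATE cur ...

-- Set-based equivalent: one statement, one pass.
UPDATE dbo.Orders SET Status = 'processed' WHERE Status = 'new';
```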
1.12 Incorrect database log configuration
To achieve optimal performance, SQL Server relies heavily on efficient access to the database log, so configuring the log correctly is also important.
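For example, pre-sizing the log and using a fixed autogrowth increment avoids many tiny growth events; the database and logical file names below are placeholders:

```sql
-- Pre-size the log and grow in fixed steps rather than small percentages.
ALTER DATABASE Sales
MODIFY FILE (NAME = Sales_log, SIZE = 4GB, FILEGROWTH = 512MB);
```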
1.13 Excessive use or misconfiguration of tempdb
Each instance of SQL Server has only one tempdb. Operations involving user objects such as temporary tables and table variables, system objects such as cursors and the hash tables used for joins, as well as sorting and row versioning, all use the tempdb database, so tempdb can become a bottleneck. These and other operations can cause space, I/O, and contention problems in tempdb. Therefore, configuring tempdb correctly also has a significant impact on query performance.
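A common mitigation for tempdb allocation contention is several equally sized data files; the file names, path, and sizes below are illustrative assumptions, not a recommendation for any specific workload:

```sql
-- Size the existing tempdb data file and add a second, equally sized one.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1GB, FILEGROWTH = 256MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf',
          SIZE = 1GB, FILEGROWTH = 256MB);
```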
Identify SQL Server Performance killers