SQL optimization process

Source: Internet
Author: User
Tags: server, memory

Database-level optimization:
When an SQL query is slow, the usual database-level optimizations are reducing the number of database accesses, writing efficient SQL, creating indexes, partitioning tables, and designing the tables well.
Case 1: A function executes slowly. Use SQL Server Profiler to capture the SQL statements it issues. If the queried tables are small (fewer than about 50,000 rows):
1. Check whether the query is called inside a loop in the application. If so, rewrite it as a single set-based SQL statement to minimize the number of database accesses (see the sketch below).
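As a minimal sketch of this loop-to-set rewrite, the pattern below collects the keys the application would otherwise loop over and fetches all matching rows in one round trip. The doc table and objid column come from the example later in this article; the @ids table variable, the placeholder id values, and the name column are assumptions for illustration.
    -- Pattern to avoid: the application issues this once per id inside a loop:
    --   SELECT name FROM dbo.doc WHERE objid = @objid;
    -- Set-based alternative: one statement, one round trip.
    DECLARE @ids TABLE (objid VARCHAR(32) PRIMARY KEY);          -- hypothetical key list
    INSERT INTO @ids (objid) VALUES ('id1'), ('id2'), ('id3');   -- ids gathered by the application
    SELECT d.objid, d.name
    FROM dbo.doc AS d
    INNER JOIN @ids AS i ON i.objid = d.objid;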
Case 2: The queried table is large (more than 500,000 rows)
1. Check the SQL statement syntax:
1.1 Avoid leading-wildcard searches such as LIKE '%keyword%', which prevent index seeks.
1.2 Use pagination to reduce the amount of data returned (see the sketch after this list).
1.3 Avoid applying functions to indexed columns. When a built-in function or expression wraps an indexed column, the optimizer cannot use that column's index, so rewrite the expression whenever possible. For example:
1.3.1 Change au_fname + ' ' + au_lname = 'Johnson White' to au_fname = 'Johnson' AND au_lname = 'White'
1.3.2 Change SUBSTRING(au_lname, 1, 2) = 'wh' to au_lname LIKE 'wh%'
1.3.3 Change DATEPART(year, hire_date) = 1990 AND DATEPART(quarter, hire_date) = 1 to hire_date >= '1990-01-01' AND hire_date < '1990-04-01'
1.3.4 Change CONVERT(CHAR(10), hire_date, 101) = '01/01/2017' to hire_date >= '2017-01-01' AND hire_date < '2017-01-02'
1.4 Prefer UNION ALL (or UNION) over the OR keyword where possible. Prefer UNION ALL to UNION, because UNION sorts the combined result set and removes duplicate rows, which can be expensive for large result sets.
1.5 Prefer joins to subqueries where possible.
1.6 Add SET NOCOUNT ON at the beginning of a stored procedure and SET NOCOUNT OFF at the end. If the stored procedure contains statements that return little actual data, or contains a Transact-SQL loop, this greatly reduces network traffic. (SET NOCOUNT suppresses the messages that report the number of rows affected by each Transact-SQL statement or stored procedure; see the sketch after this list.)
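As a sketch of items 1.2 and 1.6, the stored procedure below suppresses row-count messages with SET NOCOUNT ON and pages through the doc table with OFFSET/FETCH (available in SQL Server 2012 and later). The procedure name, column names, and sort key are assumptions for illustration.
    CREATE PROCEDURE dbo.GetDocPage        -- hypothetical procedure name
        @PageNumber INT,
        @PageSize   INT
    AS
    BEGIN
        SET NOCOUNT ON;                    -- item 1.6: suppress "n rows affected" messages

        SELECT objid, title, created_date  -- hypothetical columns; avoid SELECT *
        FROM dbo.doc
        ORDER BY created_date DESC
        OFFSET (@PageNumber - 1) * @PageSize ROWS   -- item 1.2: pagination
        FETCH NEXT @PageSize ROWS ONLY;

        SET NOCOUNT OFF;
    END;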

2. Create indexes on the queried fields
For example, for select * from doc where objid = '4028819e181e984c01181f5874f703f1', if the query is slow, create an index on the objid column of the doc table.
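A minimal sketch of that index; the index name is an assumption. If the query only needs a few columns, adding them as included columns can make the index covering.
    CREATE NONCLUSTERED INDEX IX_doc_objid   -- hypothetical index name
    ON dbo.doc (objid);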
3. Create partitions for large tables
Generally, large tables can be partitioned by status flags such as whether the record is finished (isfinished) or deleted (isdeleted), or by module (doc, cusr, etc.).
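The exact flag-based scheme is not specified here, so the sketch below only shows the general SQL Server pattern of a partition function plus a partition scheme, using a hypothetical created_date column and yearly boundaries; a real design would pick the partition column (for example isfinished or isdeleted) and boundaries to match the module's access pattern.
    -- Hypothetical yearly range partitioning for a large doc table
    CREATE PARTITION FUNCTION pf_doc_by_year (datetime)
        AS RANGE RIGHT FOR VALUES ('2016-01-01', '2017-01-01', '2018-01-01');

    CREATE PARTITION SCHEME ps_doc_by_year
        AS PARTITION pf_doc_by_year ALL TO ([PRIMARY]);   -- map all partitions to one filegroup for simplicity

    -- New tables (or rebuilt indexes) are then created ON ps_doc_by_year(created_date).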

Step 1. Optimize the application workload
The first step in optimizing application performance is to optimize the workload. The optimizations listed in this part of the methodology can solve many common performance and scalability problems. They reduce the impact of bottlenecks caused by poor design or inefficient implementation and ensure that system resources are used fully and effectively. For example, fixing inefficient query plans or inefficient caching lets the SQL Server cache be used more efficiently, which reduces I/O overall.
■ Compile/recompile - database, CPU
Determine whether there is significant CPU contention. If so, look for T-SQL statements that recompile too often and therefore consume a large amount of CPU. If the application's SQL code recompiles frequently, consider the following optimizations (a monitoring sketch follows the list):
● Review the affected statements and separate data modification (DML) code from data definition (DDL) commands.
● Update outdated index statistics.
● Replace temporary tables with table variables or other logic where appropriate. Microsoft's guidance is that frequent compilation/recompilation consumes significant CPU and disk I/O and increases contention across the overall workload.
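To see which cached statements recompile most often, a query along these lines against sys.dm_exec_query_stats can be used (plan_generation_num counts how many times a plan has been recompiled); this is a monitoring sketch, not part of the original text.
    SELECT TOP (20)
           qs.plan_generation_num,          -- number of times this plan has been recompiled
           qs.execution_count,
           st.text AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    WHERE qs.plan_generation_num > 1
    ORDER BY qs.plan_generation_num DESC;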
■ Inefficient query plans - database, CPU
Determine whether there is significant CPU contention. If so, determine how inefficient query plans consume excessive CPU. Check whether the database schema, application requirements, the reporting tools users run, or other conditions make it easy for inefficient queries to reach production, and whether queries rely on hash joins and sort operations that heavily consume CPU and I/O.
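To find the cached plans that consume the most CPU, a query along these lines against sys.dm_exec_query_stats is a common starting point (a sketch, not part of the original article):
    SELECT TOP (20)
           qs.total_worker_time / 1000 AS total_cpu_ms,   -- total_worker_time is in microseconds
           qs.execution_count,
           qs.total_logical_reads,
           st.text AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;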
Step 2. Reduce read/write activity
Once your application code is optimized, the next step is to reduce the amount of read/write activity, or I/O, while the application runs. The most common application code mistake is writing inefficient data queries: a query that returns too much data (too many columns or rows) puts a heavy load on SQL Server. If the application design lets users build their own (usually inefficient) queries, does not limit the result set on each page, or uses nested queries in the back-end code, these queries return large amounts of data (including queries written against views or table-valued functions), and your application as a whole may access far more data than it needs. In some cases, after reviewing your application code, you may find that it returns all the data in the underlying table just to satisfy a single query. Analyze the existing indexes and how they are maintained to decide whether indexes should be added, and analyze database file growth. Doing so can greatly reduce the application's read/write activity and free up valuable disk resources.
■ Inefficient or missing indexes - database I/O
Determine whether there is significant disk I/O contention. If so, analyze how missing or inefficient indexes cause disk I/O bottlenecks. DBAs must evaluate the application's SQL code to ensure that statements execute as efficiently as possible; this usually requires creating indexes so data can be retrieved efficiently. If the application's SQL code changes, accesses different tables, or selects more or different columns from the target tables, the current indexes may no longer help. Analysis may show that the SQL code uses existing indexes inefficiently or that statements fall back to table scans to collect data.
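SQL Server's missing-index DMVs can help with this analysis; the query below is a common sketch that ranks suggestions by estimated impact (treat the output as hints to evaluate, not as indexes to create blindly).
    SELECT TOP (20)
           mid.statement          AS table_name,
           mid.equality_columns,
           mid.inequality_columns,
           mid.included_columns,
           migs.user_seeks,
           migs.avg_user_impact
    FROM sys.dm_db_missing_index_details AS mid
    JOIN sys.dm_db_missing_index_groups AS mig
        ON mig.index_handle = mid.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS migs
        ON migs.group_handle = mig.index_group_handle
    ORDER BY migs.user_seeks * migs.avg_user_impact DESC;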
■ Disk I/O - database file growth - database I/O
Determine whether there is significant disk I/O contention. If so, look at databases whose files grow (allocate new extents) frequently within a given time window. When SQL Server grows a database file, the file tends to become fragmented, and the growth operation itself consumes a lot of CPU and I/O.
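One common mitigation is to switch from percentage-based growth to a fixed increment (and pre-size the files); the database and logical file names below are hypothetical.
    ALTER DATABASE [AppDb]
        MODIFY FILE (NAME = N'AppDb_Data', FILEGROWTH = 256MB);
    ALTER DATABASE [AppDb]
        MODIFY FILE (NAME = N'AppDb_Log',  FILEGROWTH = 128MB);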
■ Disk I/O - database file configuration - database I/O
Determine whether there is significant disk I/O contention. If so, examine how poorly configured database files increase contention on internal database locks, creating a resource bottleneck; fixing the configuration reduces contention between applications. DBAs should check for file configurations that commonly lead to contention, including:
● Data files and log files are configured on the same disk device.
● The number of database files is less than the number of available CPUs, especially for the tempdb database (see the sketch after this list).
● The number of database files is less than the number of available disk I/O devices.
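For tempdb in particular, adding data files (commonly one per logical processor, typically capped at eight) is the usual way to address the last two points; the file path and sizes below are assumptions.
    ALTER DATABASE tempdb
        ADD FILE (NAME = N'tempdev2',
                  FILENAME = N'T:\TempDB\tempdev2.ndf',   -- hypothetical path on a separate device
                  SIZE = 1024MB,
                  FILEGROWTH = 256MB);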
Step 3. Reduce contention
Now that the application's I/O access has been optimized, the next step is to ensure that high concurrency does not increase contention for objects. Even when data access is optimized, the SQL Server engine uses locks and latches to synchronize and protect data access, and under high load this can block access to data. Intelligent transaction control logic, which keeps transactions short and locks only the data they need, is therefore key to achieving high concurrency. Using the appropriate transaction isolation level reduces unnecessary blocking of read operations, and evaluating lock hints helps ensure that locks are not held longer than necessary, both of which can greatly improve application performance. To reduce or eliminate latch contention, make sure the application does not mix DDL and DML operations. Once these problems are solved, analyze how your application accesses data to determine whether data partitioning can improve performance.
■ Blocking locks - object contention - database locks
Determine whether there is significant lock contention. If so, examine the database tables with frequent lock contention to help identify hot spots and missing indexes. Applications tend to access a few specific tables heavily; when the isolation level is set incorrectly or transactions run for a long time, the affected indexes and data become inaccessible and conflicts or blocking occur. Many application administrators are unaware of how much blocking occurs in the database; analysis is needed to uncover the significant contention caused by frequent, short-lived lock accumulation.
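A quick way to see live blocking is to query sys.dm_exec_requests for sessions whose blocking_session_id is set, as in this sketch:
    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,
           r.wait_time  AS wait_time_ms,
           st.text      AS blocked_statement
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
    WHERE r.blocking_session_id <> 0;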
■ Blocking locks - lock types - database locks
Determine whether there is significant lock contention. If so, analyze the lock types per database. Applications access different databases in different ways, perhaps because different developers wrote different code or because requirements keep changing. Analyzing the SQL Server lock types observed in each database, and comparing lock behavior against overall activity over time, helps application developers modify their code correctly.
■ Memory buffer latches - database latches
Determine whether there is significant contention for memory buffer latches. If so, note that heavy buffer latch waits are a sign of I/O bottlenecks and hot pages. Although buffer latch waits are not direct I/O contention, the amount of memory available to SQL Server is a critical factor here (see the wait-statistics sketch after the next item).
■ Internal (non-buffer) cache latch contention - database latches
Determine whether there is significant contention for internal (non-buffer) cache latches. If so, identify where most of the contention occurs. Internal cache latches are used in many different situations; the most common example is contention on internal caches (not buffer pool pages), especially when heaps, LOB/text data, or both are used. Where the contention shows up on log or PAGELATCH_UP waits, data partitioning can effectively relieve internal cache latch contention.
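To separate buffer latch waits (PAGELATCH_*) from non-buffer, internal cache latch waits (LATCH_*), the cumulative wait statistics can be inspected as in the sketch below, which covers both this section and the previous one.
    SELECT wait_type, waiting_tasks_count, wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type LIKE N'PAGELATCH%'   -- buffer (in-memory page) latches
       OR wait_type LIKE N'LATCH%'       -- non-buffer (internal cache) latches
    ORDER BY wait_time_ms DESC;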
Step 4. Resolve resource bottlenecks
So far, you have ensured that your queries use the underlying system resources correctly and access data as efficiently as possible. Now determine whether a resource bottleneck is slowing down your application. You can do a great deal of tuning at the application level, but in some cases external factors remain the final obstacle to performance. This part of the methodology covers specific resource bottlenecks. For example: Does SQL Server have enough memory to perform well? Are external applications stealing memory from SQL Server? Can your disks keep up with your workload? Can your application write its transaction log efficiently, or is log latency growing? Finally, does parallelism help your queries run faster, or does SQL Server spend more time coordinating parallel threads, creating further obstacles to concurrency? Considering these aspects of application performance ensures that the underlying system resources are fully used and helps determine which hardware needs to be resized.
■ Memory pressure - system memory
Determine whether there is significant memory pressure. If so, analyze:
● External memory pressure can affect SQL Server performance. Many DBAs and their managers underestimate the impact of improperly configured antivirus software, or of installing SQL Server alongside an Exchange server.
● SQL Server does not have enough memory to perform well. If SQL Server cannot allocate enough memory to the buffer cache, page life expectancy drops and system memory paging increases.
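Page life expectancy can be checked from the performance counters exposed in sys.dm_os_performance_counters, for example:
    SELECT [object_name], [counter_name], cntr_value AS page_life_expectancy_seconds
    FROM sys.dm_os_performance_counters
    WHERE [counter_name] LIKE N'Page life expectancy%'
      AND [object_name]  LIKE N'%Buffer Manager%';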
■ Log waits
Determine whether there are significant log waits. If so, analyze the factors that slow down SQL Server transaction log writes.
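Cumulative waits on transaction log writes show up as WRITELOG (and LOGBUFFER) waits; a sketch for checking them:
    SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type IN (N'WRITELOG', N'LOGBUFFER');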
Step 5. Baseline deviation analysis
Undoubtedly, the best baseline for an application's performance is its own past performance. Analyze where the application's performance deviates from previously observed behavior. This lets you quickly and easily see how the application is scaling and determine how factors such as application builds and system changes affect performance. Deviations can be analyzed for each major resource type to see which resources are affected; combined with the supporting metrics the analysis provides, this makes it clear how to get the best performance from the application.
■ CPU usage deviation, CPU wait time deviation, I/O wait time deviation, latch wait time deviation, lock wait time deviation, workload deviation
● Determine whether these indicators have changed significantly compared with typical past usage.
● Verify and confirm such changes to ensure they do not turn into problems.
