SQL Server Profiler: Performance Tuning
Performance is a hot topic for good reason. Today's business environment is fiercely competitive, and if users find an application too slow, they will quickly turn to another vendor. To help meet users' expectations, SQL Trace provides event classes that can be used to find and debug performance bottlenecks.
Performance monitoring techniques fall broadly into two categories: techniques used when a fault is already known, and techniques used to discover where a fault lies (or whether one exists at all). Once you have identified a problem area, you can gather more detailed information about it. This article therefore starts with the second kind of technique, which helps pinpoint the problem area, and then discusses how to perform a more detailed analysis.
When starting a new database performance tuning project, the first thing to find out is which queries are the least efficient; in other words, the queries with the worst performance offer the greatest tuning payoff. At this stage, do not trace too much information; usually only two events are needed: Stored Procedures:RPC:Completed and TSQL:SQL:BatchCompleted. These events are selected in the TSQL_Duration template provided by SQL Server Profiler. We recommend starting from that template and adding the Reads, Writes, and CPU columns, which are not selected by default, to get a more complete picture. It is also recommended that you select the TextData column instead of the (default) BinaryData column for the RPC:Completed event, which makes the data easier to work with later. The following figure shows the complete set of selected events.
[Figure: complete event selection (http://s3.51cto.com/wyfs02/M01/56/A6/wKiom1SKYBaC6WPeAAIi0-rTv8o371.jpg)]
After selecting the events, set a filter on the Duration column, in milliseconds, to a fairly short value. Most active OLTP systems execute an enormous number of 0 ms queries, and these are clearly not the biggest performance bottlenecks. A typical approach is to start with the filter set to 100 milliseconds and work upward from there, increasing the signal-to-noise ratio on each iteration by eliminating the smaller queries and keeping only those with higher tuning potential. Depending on the application and server load, you typically run each iteration of the trace for 10-15 minutes, examine the results, and raise the threshold moderately until only a few hundred events are captured during the trace window. For particularly busy applications, even 10-15 minutes may be too long.
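The iterative raise-the-filter process described above can be sketched in Python. This is purely an illustration of the idea; the function name, starting threshold, step factor, and sample workload are all assumptions, not from the original article:

```python
def choose_duration_filter(durations_ms, start_ms=100, target_events=300, step_factor=2):
    """Raise the Duration filter until the trace would capture a
    manageable number of events (at most target_events)."""
    threshold = start_ms
    while sum(1 for d in durations_ms if d >= threshold) > target_events:
        threshold *= step_factor  # each iteration drops the smaller queries
    return threshold

# Hypothetical OLTP workload: a flood of 0 ms queries, a few slow ones.
sample = [0] * 10_000 + [150] * 500 + [400] * 100 + [2_000] * 20
print(choose_duration_filter(sample))  # 200: leaves 120 events, under the target
```

In practice the "raising" is done by hand between trace runs, but the logic is the same: keep tightening the filter until only the highest-potential queries remain.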
Another option is to run only the initial trace and then filter the results afterward. A simple approach is to use the NTILE window function introduced in SQL Server 2005, which divides the input rows into equally sized buckets. For example, to look at only the top 10% of queries by Duration in a trace table, you can use the following query:
SELECT *
FROM
(
    SELECT *,
        NTILE(10) OVER (ORDER BY Duration) AS Bucket
    FROM TraceTable
) x
WHERE Bucket = 10
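For readers unfamiliar with NTILE, the bucketing it performs can be sketched in Python (the function and the sample data are illustrative only; SQL Server's NTILE assigns the extra rows to the earlier buckets when the row count does not divide evenly, which the sketch mimics):

```python
def ntile(values, n):
    """Mimic NTILE(n) OVER (ORDER BY value): sort the rows and split
    them into n buckets; earlier buckets absorb any remainder rows."""
    ordered = sorted(values)
    base, extra = divmod(len(ordered), n)
    buckets, start = {}, 0
    for bucket in range(1, n + 1):
        size = base + (1 if bucket <= extra else 0)
        buckets[bucket] = ordered[start:start + size]
        start += size
    return buckets

durations = list(range(1, 101))        # 100 hypothetical durations
top_decile = ntile(durations, 10)[10]  # bucket 10 = the longest 10%
print(top_decile)                      # 91 through 100
```

Filtering on `Bucket = 10` in the T-SQL query above is exactly this: keep only the rows that fall into the slowest tenth.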
Note: A large number of seemingly trivial (even 0-millisecond) queries issued by an application can also cause performance problems, but this generally calls for a system-wide fix, such as eliminating unnecessary round trips, rather than tuning individual Transact-SQL queries. Without knowledge of how the particular application behaves, this type of problem is also difficult to find through profiling alone, so it is not discussed further here.
If you find it difficult to limit the number of returned events to a manageable level (a common problem on busy systems), you will have to work with the results further to make the output more aggregated. The results obtained from SQL Trace contain the raw text of each query, including all the parameter values actually used. For further analysis, the data should be loaded into a database table and then aggregated, for example to derive the average Duration or number of logical reads per query.
The problem is how to aggregate the raw text data returned in the SQL Trace results. Knowing the actual parameter values is useful for reproducing performance problems, but before trying to determine which query should be tackled first, it is better to aggregate the results by query "form". For example, the following two queries share the same form: they use the same tables and columns and differ only in the parameter used in the WHERE clause, but because their text differs, they cannot be aggregated directly:
SELECT *
FROM SomeTable
WHERE SomeColumn = 1
---
SELECT *
FROM SomeTable
WHERE SomeColumn = 2
To solve this problem and reduce such queries to a common form that can be aggregated, a CLR UDF is available; a slightly revised version (which also handles NULL) follows:
using System.Data.SqlTypes;
using System.Text.RegularExpressions;

[Microsoft.SqlServer.Server.SqlFunction(IsDeterministic = true)]
public static SqlString sqlsig(SqlString querystring)
{
    return (SqlString)Regex.Replace(
        querystring.Value,
        @"([\s,(=<>!](?![^\]]+[\]]))(?:(?:(?:(?# expression coming
         )(?:([N])?(')(?:[^']|'')*('))(?# character
         )|(?:0x[\da-fA-F]*)(?# binary
         )|(?:[-+]?(?:(?:[\d]*\.[\d]*|[\d]+)(?# precise number
         )(?:[eE]?[\d]*)))(?# imprecise number
         )|(?:[~]?[-+]?(?:[\d]+))(?# integer
         )|(?:[nN][uU][lL][lL])(?# null
         ))(?:[\s]?[\+\-\*\/\%\&\|\^][\s]?)?)+(?# operators
         ))",
        @"$1$2$3#$4");
}
The UDF finds most values that look like parameters and replaces them with "#". After processing the two queries above with the UDF, the output is the same for both:
SELECT *
FROM SomeTable
WHERE SomeColumn = #
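As an illustration only, the same normalization idea can be sketched in Python with a much simpler regular expression than the one the CLR UDF uses (this simplified pattern is an assumption for the sketch and does not handle every case the UDF covers, such as operator sequences):

```python
import re

# Replace literal values (strings, binary, numbers, NULL) that follow a
# boundary character with '#', collapsing queries to a common form.
_LITERAL = re.compile(
    r"""(?<=[\s,(=<>!])          # boundary before the literal
        (?:N?'(?:[^']|'')*'      # character string (with '' escapes)
         |0x[0-9A-Fa-f]+         # binary literal
         |[-+]?\d+(?:\.\d*)?     # numeric literal
         |NULL                   # NULL keyword
        )
     """,
    re.IGNORECASE | re.VERBOSE)

def sql_signature(query: str) -> str:
    return _LITERAL.sub("#", query)

q1 = "SELECT * FROM SomeTable WHERE SomeColumn = 1"
q2 = "SELECT * FROM SomeTable WHERE SomeColumn = 2"
print(sql_signature(q1))                       # SELECT * FROM SomeTable WHERE SomeColumn = #
print(sql_signature(q1) == sql_signature(q2))  # True
```

Both example queries now reduce to the same signature, so their trace rows can be grouped and averaged together.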
To use the UDF against a trace table and find the top queries, you can start with something like the following query, which aggregates by common query form and returns the average Duration, Reads, Writes, and CPU:
SELECT
    QueryForm,
    AVG(Duration) AS AvgDuration,
    AVG(Reads) AS AvgReads,
    AVG(Writes) AS AvgWrites,
    AVG(CPU) AS AvgCPU
FROM
(
    SELECT
        dbo.fn_SqlSig(TextData) AS QueryForm,
        Duration,
        Reads,
        Writes,
        CPU
    FROM TraceTable
    WHERE TextData IS NOT NULL
) x
GROUP BY QueryForm
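The aggregation step can likewise be sketched outside the database. This is a hypothetical Python stand-in for the GROUP BY query above (the row shapes and sample values are invented for the illustration):

```python
from collections import defaultdict
from statistics import mean

def aggregate_by_form(rows):
    """rows: (query_form, duration, reads, writes, cpu) tuples, e.g.
    produced by normalizing TextData with a signature function.
    Returns per-form metric averages, worst average duration first."""
    groups = defaultdict(list)
    for form, *metrics in rows:
        groups[form].append(metrics)
    summary = {
        form: tuple(mean(col) for col in zip(*metrics))
        for form, metrics in groups.items()
    }
    return sorted(summary.items(), key=lambda kv: kv[1][0], reverse=True)

trace = [  # hypothetical normalized trace rows
    ("SELECT * FROM SomeTable WHERE SomeColumn = #", 120, 40, 0, 15),
    ("SELECT * FROM SomeTable WHERE SomeColumn = #", 180, 60, 0, 25),
    ("EXEC SomeProc #",                               900, 500, 10, 300),
]
print(aggregate_by_form(trace)[0][0])  # the costliest query form
```

Sorting by average Duration surfaces the query forms most worth tuning first, which is exactly what the T-SQL version is used for.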
From here, you can filter further on the averages to zero in on the queries most worth tuning.
Once you decide to tune one or more queries, SQL Trace can help with further analysis. For example, assume that the following stored procedure, which can be created in the AdventureWorks database, has been isolated as the cause of a problem:
CREATE PROCEDURE GetManagersAndEmployees
    @EmployeeID INT
AS
BEGIN
    SET NOCOUNT ON
    EXEC uspGetEmployeeManagers @EmployeeID
    EXEC uspGetManagerEmployees @EmployeeID
END
To start a session analyzing what the stored procedure is doing, first open a new query window in SQL Server Management Studio and get the session's SPID with the @@SPID function. Next, open SQL Server Profiler, connect to the server, and select the Tuning template.
[Figure: selecting the Tuning template (http://s3.51cto.com/wyfs02/M02/56/A6/wKiom1SKYBeinhkaAAHbTVvTTQc876.jpg)]
The Tuning template adds the SP:StmtCompleted event to the mix, which gives a more complete picture of server activity. This causes each call to return more data, so use the SPID captured earlier to filter the trace. You may also want to add the Showplan XML event, which captures the query plan along with the rest of the information for each query. The following figure shows the complete event selection for this kind of work.
Note: Adding a Showplan XML or Deadlock Graph event causes a new tab, Events Extraction Settings, to appear in the Trace Properties dialog box. It includes options to automatically save any collected query plans or deadlock graph XML to files so that they can be reviewed again later if needed.
[Figure: complete event selection for statement-level tracing (http://s3.51cto.com/wyfs02/M00/56/A6/wKiom1SKYBmxVxakAAKuRSCe5_s460.jpg)]
Next, start the trace in SQL Server Profiler. Although most performance monitoring is usually done with server-side traces, the overhead Profiler adds is small when a single SPID running a single query is being traced, so the UI can be used for this kind of work. The following figures show the Profiler output after starting the trace and executing the procedure with @EmployeeID = 21, with one of the Showplan XML events selected to highlight the feature. Alongside each statement executed by the outermost stored procedure and every stored procedure it calls, you can see a complete graphical query plan in the Profiler UI, which makes it an ideal aid for tuning complex, multi-layered stored procedures.
[Figure: Profiler trace output (http://s3.51cto.com/wyfs02/M01/56/A6/wKiom1SKYBvgjKS3AASfPjI2BvY581.jpg)]
[Figure: graphical query plan shown for a Showplan XML event (http://s3.51cto.com/wyfs02/M02/56/A6/wKiom1SKYBzDvd-6AAYKoEB5l_Q259.jpg)]
SQL Trace does not actually do the tuning, but it helps you find the queries that cause problems and the parts of those queries that need work. And its uses go well beyond performance tuning.
Note: Although SQL Trace itself does not tune anything, SQL Server's Database Engine Tuning Advisor (DTA) tool can take a trace file as input and use it to recommend indexes, statistics, and partitioning changes that can make queries run faster. If you use the DTA tool, make sure you provide a sufficiently representative sample of the queries the system typically processes. If the sample collected is too small or unrepresentative, the results will be biased: DTA is likely to give poor advice, and may even make recommendations that cause performance problems for queries that did not make it into the sample.
This article is from the SQL Server Deep Dives blog; please retain the source: http://ultrasql.blog.51cto.com/9591438/1589152