Optimizing a SQL Server System with Event Profiler Trace Analysis (1)


Overview

When your SQL Server database system runs slowly, you probably know that you can use the SQL Server Profiler (the SQL event profiler) tool for tracing and analysis. Indeed, Profiler can capture all statements sent to SQL Server along with their execution performance data (such as the number of pages read/written, CPU usage, and statement duration) for later analysis. However, this article does not cover how to use the Profiler tool. Instead, it describes how to use Read80Trace (introduced in a later section) together with a custom stored procedure to summarize and analyze the trace files captured by Profiler, finally producing an illuminating analysis report that lets you optimize your SQL Server database system.

This article is particularly useful to readers who need to analyze the performance of large SQL Server database systems, such as DBAs. In a system of significant scale with complex application logic, the files generated by Profiler are often huge; for example, even when Profiler is configured to capture only basic statement events, the trace file captured over two hours of running can reach gigabytes in size. The method described here not only greatly reduces the time and cost of analyzing such traces, but also frees you from wading through the mass of raw data in the trace file. It lets you understand the access pattern of the database system, so that you know which kinds of statements have the greatest impact on performance and which kinds need to be optimized.

Traditional methods of Profiler trace file performance analysis and their limitations

First, let's talk about the access pattern of a database system. Besides using trace files to troubleshoot problems such as deadlocks, blocking, and timeouts, the most common and important use of a trace file is to extract the following information from it:

1. The statements that run most frequently

2. The key statements that affect system performance

3. The proportions of the various statement groups and their related performance statistics

The access pattern referred to in this article means the three kinds of information above. We know that the modules of a database system are basically fixed, and each module accesses SQL Server in essentially fixed ways; the specific menus and buttons rarely change. Therefore, over a long enough period, the set of statements sent to SQL Server and their relative proportions are also essentially fixed. In other words, as long as the Profiler sampling time is long enough (I usually run it for more than two hours), the access pattern of the database system can be derived statistically from the trace file. Every database system has its own unique access pattern. An important goal of analyzing a Profiler trace file is to find this access pattern; once you have it, you can optimize the system with confidence and a clear sense of direction. Unfortunately, until now no tool has been available to obtain this information conveniently.

There are two traditional ways to analyze a trace. One is to use the Profiler tool itself: for example, you can use Profiler's Filter function to filter out statements that run for more than 10 seconds, or sort by CPU to find the statements that consume the most CPU. The other is to import the trace file into a database and then use T-SQL statements for statistical analysis. Both methods work for small trace files, but for large ones (for example, four large trace files) they have serious limitations. The first limitation is that, because of the sheer volume of data, analysis and statistics are not easy, and it is often impossible to get a global picture of the execution performance of all statements; you can easily be distracted by individual statements and spend your energy on them, when in fact they are not the key statements that deserve attention. The second limitation is that you will find many statements whose patterns are very similar (differing only in their execution parameters), yet there is no simple way to classify them together for statistics. In short, you cannot easily obtain the access pattern of the database system, and you cannot work at the architectural level when optimizing. These are the limitations of the traditional analysis methods, and the Read80Trace tool and the custom stored procedure described below can overcome them.
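For reference, here is a minimal sketch of the second traditional method on SQL Server 2000, using the built-in fn_trace_gettable function to import a trace into a table (the file path and the trace_data table name are illustrative, not from the original article):

-- load the trace file (and any rollover files) into a table
select * into trace_data
from ::fn_trace_gettable('c:\traces\mytrace.trc', default)

-- example: find statements that ran longer than 10 seconds (duration is in milliseconds)
select TextData, Duration, CPU, Reads
from trace_data
where Duration > 10000
order by Duration desc

With gigabytes of trace data, queries like this become slow and still treat each parameterized statement as distinct, which is exactly the limitation described above.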

Introduction to the Read80Trace tool and its normalization function

Read80Trace is a command-line tool. Using it can greatly reduce the time needed to analyze trace files, achieving twice the result with half the effort. Read80Trace reads a trace file, normalizes the statements, imports the result into a database, and generates an HTML page of performance statistics. In addition, Read80Trace can generate an RML file, which the OSTRESS tool can then use to replay all the events in the trace file on multiple threads. This makes it possible to replay on another server the statements that Profiler captured. This article does not describe Read80Trace or OSTRESS in detail; if you are interested, please refer to their documentation. The software can be downloaded from the Microsoft website (note: the package is named RML).

What I want to use here is Read80Trace's normalization function. What is normalization? It is the classification together of all statements that share the same statement pattern but differ in their parameters. For example, suppose the trace contains these statements:

select * from authors where au_lname = 'white'
select * from authors where au_lname = 'green'
select * from authors where au_lname = 'carson'

After normalization, they all reduce to the single pattern:

select * from authors where au_lname = {str}
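To see why this matters, here is a minimal sketch of the kind of query that normalization enables, using the tblBatches and tblUniqueBatches tables that Read80Trace generates (they appear again in the stored procedure later in this article). Counting executions per normalized pattern becomes a simple GROUP BY:

-- count how many times each normalized statement pattern was executed
select convert(varchar(2000), u.NormText) as textdata, count(*) as exec_count
from tblBatches b
inner join tblUniqueBatches u on b.hashId = u.hashId
group by convert(varchar(2000), u.NormText)
order by count(*) desc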

With normalized statements, it is no longer difficult to compute the access pattern of the database system. When running Read80Trace, I usually use the following command line:

read80trace -f -dmydb -Imytrace.trc

The -f switch suppresses generation of the RML file, because I do not need the replay function. The generated RML file would be large, so we recommend using the -f switch whenever you do not need replay.

The -d switch tells Read80Trace to save the results of processing the trace file into the mydb database. The stored procedure we create later accesses the tables that Read80Trace generates in mydb to compute the statistics. The -I switch specifies the name of the trace file to be analyzed. Read80Trace is quite clever here: if the directory contains a series of rollover trace files generated by Profiler, such as mytrace.trc, mytrace1.trc, and mytrace2.trc, it reads and processes them all in sequence.

Beyond what is described above, Read80Trace has many other interesting functions. For example, the -I switch lets Read80Trace read trace files directly from zip or CAB files, without you having to unzip them yourself. All the switches are described in detail in Read80Trace.chm. What I like most is the tool's performance: analyzing gigabytes of trace files takes it less than an hour on my computer, an old machine with only a modest amount of memory. I am very satisfied with this performance.
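For instance, assuming the capture was compressed into mytrace.cab (the file name is illustrative), the command line stays the same as before:

read80trace -f -dmydb -Imytrace.cab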

You may want to use Read80Trace to analyze trace files generated by stress tests, but I suggest analyzing trace files captured from the production environment instead. Many stress-testing tools cannot truly simulate the real environment, so the trace files they produce do not reflect the actual situation; some stress tools merely execute hand-written statements in a loop, which cannot reflect the accurate access pattern. We recommend treating traces generated by stress testing as a reference only.

Analyzing the normalized data with a custom stored procedure

With the statements normalized, you can use a stored procedure for statistical analysis. The basic idea is to run GROUP BY statistics over the Reads, CPU, and Duration of all statements that share the same pattern, in order to obtain the access pattern information:

1. The total number of executions of each statement pattern, its average number of pages read (reads), average CPU time, and average execution time.

2. The proportion such statements represent among all statements, in terms of execution count, reads, and CPU.

The definition of the stored procedure, with explanatory comments, is as follows:

create procedure usp_GetAccessPattern
@duration_filter int = -1   -- input parameter: optionally filter the statistics by statement execution time
as
begin

/* First get the overall performance totals of all statements */
declare @sum_total float, @sum_cpu float, @sum_reads float, @sum_duration float, @sum_writes float
select
    @sum_total    = count(*) * 0.01,       -- the total number of all statements
    @sum_cpu      = sum(cpu) * 0.01,       -- the total CPU time consumed by all statements
    @sum_reads    = sum(reads) * 0.01,     -- the total Reads consumed by all statements, in 8 KB pages
    @sum_writes   = sum(writes) * 0.01,    -- the total Writes consumed by all statements, in 8 KB pages
    @sum_duration = sum(duration) * 0.01   -- the total execution time of all statements
from tblBatches                    -- the table generated by Read80Trace; it holds all statements in the trace file
where duration >= @duration_filter -- whether to filter by execution time

/* Then GROUP BY to get the proportion occupied by each statement pattern */
select
    ltrim(str(count(*))) exec_stats,
    str(count(*) / @sum_total, 4, 1) + '%' ExecRatio,
    ltrim(str(sum(cpu))) + ':' + ltrim(str(avg(cpu))) cpu_stats,
    str(sum(cpu) / @sum_cpu, 4, 1) + '%' CpuRatio,
    ltrim(str(sum(reads))) + ':' + ltrim(str(avg(reads))) reads_stats,
    str(sum(reads) / @sum_reads, 4, 1) + '%' ReadsRatio,
    -- ltrim(str(sum(writes))) + ':' + ltrim(str(avg(writes))) writes_stats,
    -- str(sum(writes) / @sum_writes, 4, 1) + '%' WritesRatio,
    ltrim(str(sum(duration))) + ':' + ltrim(str(avg(duration))) duration_stats,
    str(sum(duration) / @sum_duration, 4, 1) + '%' DurRatio,
    textdata,
    count(*) / @sum_total tp,
    sum(cpu) / @sum_cpu cp,
    sum(reads) / @sum_reads rp,
    sum(duration) / @sum_duration dp
into #queries_staticstics
from
    /* the tblUniqueBatches table stores all the normalized statements */
    (select reads, cpu, duration, writes,
            convert(varchar(2000), NormText) textdata
     from tblBatches
     inner join tblUniqueBatches on tblBatches.hashId = tblUniqueBatches.hashId
     where duration > @duration_filter) b
group by textdata  -- this GROUP BY is very important: it classifies and aggregates the statements by pattern

print 'top 10 order by cpu + reads + duration'
select top 10 * from #queries_staticstics order by cp + rp + dp desc

print 'top 10 order by cpu'
select top 10 * from #queries_staticstics order by cp desc

print 'top 10 order by reads'
select top 10 * from #queries_staticstics order by rp desc

print 'top 10 order by duration'
select top 10 * from #queries_staticstics order by dp desc

print 'top 10 order by batches'
select top 10 * from #queries_staticstics order by tp desc

end
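Once the procedure has been created in the database that Read80Trace populated (mydb in the example above), running it is straightforward. A minimal usage sketch follows; note that in a SQL Server 2000 trace the duration column is recorded in milliseconds, and the 10000 threshold here is just an illustration:

use mydb
go
exec usp_GetAccessPattern          -- analyze all statements (the default filter of -1 keeps everything)
exec usp_GetAccessPattern 10000    -- only statements that ran for 10 seconds or longer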

Because the output would otherwise be too long, writes is excluded from the stored procedure; most database systems are read-dominated anyway. You can easily modify the stored procedure to include writes.
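If you do want writes included, here is a sketch of the change, following the column-alias convention already used in the procedure (the wp alias and the extra report at the end are my additions, not part of the original procedure):

-- 1. uncomment the two writes columns in the SELECT list:
ltrim(str(sum(writes))) + ':' + ltrim(str(avg(writes))) writes_stats,
str(sum(writes) / @sum_writes, 4, 1) + '%' WritesRatio,
-- 2. add a ratio column alongside tp/cp/rp/dp:
sum(writes) / @sum_writes wp,
-- 3. append one more report at the end of the procedure:
print 'top 10 order by writes'
select top 10 * from #queries_staticstics order by wp desc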

