SQL Server Profiler Tools


I. Introduction to SQL Profiler Tools

SQL Profiler is a graphical user interface, backed by a set of system stored procedures, that provides the following functions:

    • Graphically monitor SQL Server queries;
    • Collect query information in the background;
    • Analyze performance;
    • Diagnose problems such as deadlocks;
    • Debug T-SQL statements;
    • Replay captured SQL Server activity;

You can also use SQL Profiler to capture the activities performed on an instance of SQL Server. Such a captured record of activity is called a Profiler trace.

  1. Profiler traces

From Start -> All Programs -> Microsoft SQL Server 2008 -> Performance Tools, open the Profiler tool; alternatively, open SQL Server Management Studio and choose Tools -> SQL Server Profiler.

Then select File -> New Trace, which opens a connection window; select the server instance to be traced and connect. The following Trace Properties dialog box opens.

[Figure: Trace Properties dialog box]

If you run many traces, providing a trace name helps you categorize them later. Different trace templates help establish traces for different purposes.

With the Trace Properties window open, click the Events Selection tab to define the trace in more detail.

[Figure: Events Selection tab]

  2. Events

An event represents an activity performed in SQL Server. These activities are grouped into event classes; cursor events, lock events, stored procedure events, and T-SQL events are common event classes.

For performance analysis, you are mainly interested in the resource stress caused by the various activities performed on SQL Server. Resource stress mainly covers the following questions:

    1. How much CPU did the SQL activity use?
    2. How much memory was used?
    3. How many I/O operations were involved?
    4. How long did the SQL activity take to execute?
    5. How often does a particular query execute?
    6. What kinds of errors and warnings do the queries encounter?

The following events are useful for tracing the completion of queries:

Event class          Event                 Description
Stored Procedures    RPC:Completed         RPC completion event
                     SP:Completed          Stored procedure completion event
                     SP:StmtCompleted      Completion of a SQL statement within a stored procedure
T-SQL                SQL:BatchCompleted    T-SQL batch completion event
                     SQL:StmtCompleted     Completion of a T-SQL statement

An RPC event indicates that the stored procedure was executed through the Remote Procedure Call (RPC) mechanism, for example via an OLE DB command object. If a database application executes a stored procedure using a T-SQL EXECUTE statement, it is captured as a SQL batch rather than as an RPC. RPC requests are generally faster than EXECUTE requests because they bypass much of the statement parsing and parameter processing in SQL Server.
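For example, executing a stored procedure with a T-SQL EXECUTE statement from a query window is captured as a SQL:BatchCompleted event rather than RPC:Completed (the procedure name and parameter below are hypothetical):

-- Captured by Profiler as a SQL:BatchCompleted event, not RPC:Completed
-- (dbo.GetPersonById and @PersonId are hypothetical names)
EXECUTE dbo.GetPersonById @PersonId = 42;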

A T-SQL batch consists of one or more T-SQL statements. Statements are likewise discrete units within a stored procedure. Capturing individual statements with the SP:StmtCompleted or SQL:StmtCompleted events can be expensive, depending on the number of individual statements. Suppose every stored procedure in the system contains exactly one T-SQL statement; in that case the set of captured statement events is fairly small. Now suppose the procedures contain multiple statements, and some of them call other procedures. Collecting all this extra data becomes a heavy load on the system. Use these events with caution on production machines.

Now return to the Events Selection panel; only the events already selected are displayed. To display all events available for selection, select the Show All Events check box. To add a trace event, find the event under its event class in the Events column and select the check box to its left; to remove an unwanted event, clear its check box.

There are many event classes:

[Figure: available event classes]

Some other events related to performance diagnostics are given below:

Event class           Event                       Description
Security Audit        Audit Login                 Record connections when a user connects to
                      Audit Logout                or disconnects from SQL Server
Sessions              ExistingConnection          Represents all users already connected to SQL Server when the trace starts
Cursors               CursorImplicitConversion    Indicates that the cursor type created differs from the type requested
Errors and Warnings   Attention                   Indicates a request was interrupted because the client cancelled the query or the database connection broke
                      Exception                   Indicates that an exception occurred in SQL Server
                      Execution Warnings          Indicates that a warning occurred during query or stored procedure execution
                      Hash Warning                Indicates that an error occurred in a hash operation
                      Missing Column Statistics   Indicates that column statistics the optimizer needs to determine a processing strategy are missing
                      Missing Join Predicate      Indicates that a query joining two tables was executed without a join predicate
                      Sort Warnings               Indicates that there was not enough memory to perform a sort operation in a query such as a SELECT
Locks                 Lock:Deadlock               Marks the occurrence of a deadlock
                      Lock:Deadlock Chain         Shows the chain of queries that produced the deadlock
                      Lock:Timeout                Indicates that a lock wait exceeded its timeout, as set by SET LOCK_TIMEOUT timeout_period (ms)
Stored Procedures     SP:Recompile                Indicates that the execution plan for a stored procedure had to be recompiled because the plan did not exist, recompilation was forced, or the existing plan could not be reused
                      SP:Starting                 Represent the start of a stored procedure and of a SQL statement
                      SP:StmtStarting             within a stored procedure, respectively; useful for identifying queries that started but never completed because an operation fired an Attention event
Transactions          SQLTransaction              Provides information about database transactions, including transaction start/end times, transaction duration, and more

  3. Event columns

Events are described by various attributes, called data columns. The data columns show the different characteristics of an event, such as the event class, the SQL statement involved, the resource cost of the event, and the source of the event.

Data column   Description
EventClass    The event type, such as SQL:StmtCompleted
TextData      The SQL statement used by the event, such as SELECT * FROM person
CPU           The CPU cost of the event in milliseconds; for example, CPU = 100 for a SELECT statement means the statement took 100 ms of CPU time
Reads         The number of logical read operations performed for the event; for example, Reads = 800 for a SELECT statement means the statement required 800 logical reads
Writes        The number of logical write operations performed for the event
Duration      The execution time of the event in milliseconds
SPID          The SQL Server process identifier of the session that generated the event
StartTime     The time the event started

These are the commonly used data columns; there are also some less commonly used ones:

    • BinaryData
    • IntegerData
    • EventSubClass
    • DatabaseID
    • ObjectID
    • IndexID
    • TransactionID
    • Error
    • EndTime

The data columns can be rearranged to suit your own style. To control the placement of the columns, click the Organize Columns button, which opens the dialog box below. Click the Up and Down buttons to change a column's position; moving a column into the Groups section means the trace output will be grouped on that column.

  4. Column Filters

In addition to defining events and data columns for a Profiler trace, you can define a variety of filter conditions. These conditions help narrow the trace output, which is usually a good idea. A list of commonly used filters follows.

Event filter      Filter condition example                   Use
ApplicationName   Not Like: SQL Server Profiler              Filters out the events generated by Profiler itself; this is the default behavior
DatabaseID        Equals: <ID of the database to monitor>    Keeps only the events generated by a particular database; the ID can be looked up with DB_ID, as shown below
Duration          Greater than or equal: 2                   For performance analysis, traces are often captured over a large workload, and many event records have a smaller Duration than those of interest; filter those records out, since there is little room to optimize such SQL activities
Reads             Greater than or equal: 2                   Filters out events with few read operations
SPID              Equals: <SPID of the session to monitor>   Locates the queries sent by a specific database user
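For example, the database ID used in the DatabaseID filter can be looked up by name (Northwind is the sample database mentioned in the table above):

-- Returns the database ID to plug into the DatabaseID filter
SELECT DB_ID('Northwind') AS NorthwindDatabaseID;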

Here is how to set a column filter:

[Figure: Edit Filter dialog box]

  5. Trace templates

SQL Server Profiler lets you create a trace template with customized events, data columns, and filters, and then reuse the template when defining a new trace. The process of defining a new trace template is similar to defining a new trace, following these steps:

    1. Create a new trace.
    2. Define events, data columns, and filters as before.
    3. Save the trace definition as a trace template from the File -> Save As menu.

SQL Server Profiler will automatically add the new template to its list of templates.

To create a new template:

[Figure: creating a new template]

To save a template:

[Figure: saving a template]

To view the saved template:

[Figure: viewing the template]

  6. Trace data

After you have defined the trace, clicking the Run button starts capturing events and displaying them on screen. You see a series of scrolling events as the system runs; this display is sometimes called "SQL TV". You can control the trace much like a DVD player: pause, start, and stop it with the toolbar buttons, and you can even pause the trace and modify it before resuming.

Once you have finished capturing SQL Server activity, you can save the trace output as a trace file or a trace table. A trace file is stored in a native format that Profiler can open to analyze the SQL queries. Saving the trace output as a table lets you analyze it with SELECT statements against the trace table, in addition to opening it in Profiler.

The specific action is File -> Save As -> Trace Table. Select the database and table in which to store the output; afterwards, you can run ordinary SQL queries against the trace table like any normal table.
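A minimal sketch of querying a saved trace table follows; the table name dbo.PerfTrace is an assumption standing in for whatever name you chose in Save As -> Trace Table, and the data columns match those captured earlier:

-- Find the ten slowest captured statements in a saved trace table
SELECT TOP (10) TextData, Duration, CPU, Reads, Writes, StartTime
FROM dbo.PerfTrace
ORDER BY Duration DESC;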

II. Trace automation

The Profiler GUI simplifies the collection of traces. Unfortunately, this simplicity has its price. The Profiler tool captures events into in-memory buffers that feed the GUI across the network. This reliance on the network means network traffic can slow the system down and cause the buffers to fill, which affects server performance to some degree. Furthermore, when the buffers fill, the server starts discarding events to avoid impacting server performance severely.

  1. Scripting a trace using the GUI

You can create a scripted trace in two ways: manually, or using the GUI. The easiest way to satisfy all of the script's requirements is to use the Profiler GUI, which takes the following steps:

    1. Define a trace;
    2. Click File -> Export -> Script Trace Definition;
    3. Select the target server type (SQL Server 2005/2008);
    4. Name the file and save it;

These steps generate a script containing all the commands needed to create the trace defined above and write its output to a file.

To manually start a new trace using Management Studio:

    1. Open the script file;
    2. Replace InsertFileNameHere with an appropriate file name and path on the server;
    3. Execute the script; it returns a single-column result set containing the TraceID;

You can automate execution of this script through SQL Server Agent, and you can even run it from the command line using the SQLCMD.exe program. Whichever method you use, the script starts the trace. If no trace stop time is defined, you must stop the trace manually using the TraceID.

  2. Capturing traces using stored procedures

Looking at the script exported in the previous section, you will see a series of commands in a particular order, as in the sketch after this list:

    • sp_trace_create: creates the trace definition;
    • sp_trace_setevent: adds events and event data columns to the trace;
    • sp_trace_setfilter: applies filters to the trace;
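A minimal sketch of such a server-side trace script follows. The output file path is an assumption, and only a few illustrative event/column IDs are set; a script exported by Profiler sets many more:

-- Create the trace definition (the output path is hypothetical;
-- SQL Server appends the .trc extension automatically)
DECLARE @TraceID int;
DECLARE @maxfilesize bigint;
SET @maxfilesize = 5;
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\PerfTrace', @maxfilesize, NULL;

-- Add event 12 (SQL:BatchCompleted) with data columns
-- 1 = TextData, 13 = Duration, 16 = Reads, 17 = Writes, 18 = CPU
DECLARE @on bit;
SET @on = 1;
EXEC sp_trace_setevent @TraceID, 12, 1, @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;
EXEC sp_trace_setevent @TraceID, 12, 16, @on;
EXEC sp_trace_setevent @TraceID, 12, 17, @on;
EXEC sp_trace_setevent @TraceID, 12, 18, @on;

-- Filter out Profiler's own events (column 10 = ApplicationName,
-- logical operator 0 = AND, comparison operator 7 = NOT LIKE)
EXEC sp_trace_setfilter @TraceID, 10, 0, 7, N'SQL Server Profiler%';

-- Start the trace (status 1 = start) and report its ID
EXEC sp_trace_setstatus @TraceID, 1;
SELECT @TraceID AS TraceID;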

Once the SQL trace has been defined and started, it continues to run until it is stopped. Because SQL Trace runs as a back-end server process, the Management Studio session does not need to remain open. You can use the built-in SQL Server function fn_trace_getinfo to determine which traces are running, as follows:

SELECT * FROM ::fn_trace_getinfo(default);

Output diagram:

[Figure: fn_trace_getinfo output]

In the output of the fn_trace_getinfo function, the number of distinct TraceID values represents the number of active traces on SQL Server.

The third column (value) indicates whether the trace is running (value = 1) or stopped (value = 0). You can stop a specific trace, for example TraceID = 1, by executing the stored procedure sp_trace_setstatus, as follows:

EXEC sp_trace_setstatus 1, 0;

After the trace is stopped, its definition must be closed and removed from the server by executing sp_trace_setstatus again, as follows:

EXEC sp_trace_setstatus 1, 2;

To verify that the trace was removed successfully, re-execute the fn_trace_getinfo function and confirm that its output no longer contains that TraceID.

This technique creates a trace file in the same format as the trace files created by Profiler, so it can be analyzed in exactly the same way.

Capturing SQL traces with the stored procedures outlined above avoids the overhead associated with the Profiler GUI. It also provides greater flexibility for scheduling and managing SQL traces than the Profiler tool.

III. Combining trace and Performance Monitor output

If you automate Performance Monitor capture to a file and automate Profiler data capture to a file over the same time period, the two data sets can be used together in the SQL Profiler GUI. Make sure that the trace includes the StartTime and EndTime data columns, then follow these steps:

    • Open the trace file (it must have been saved via Save As -> Trace File);
    • Click File -> Import Performance Data;
    • Select the Performance Monitor file to import;

Doing this opens the dialog box shown below, where you can choose which Performance Monitor counters to include.

Once you have selected the counters to include, click OK; the Profiler and Performance Monitor data open together. You can now work with the trace data and the Performance Monitor data side by side. If you select an event in the top window, a red line in the Performance Monitor pane marks when that event occurred. Conversely, you can click a point in the Performance Monitor data, and the events of that time are selected. These features work well together and are used regularly during tuning to identify bottlenecks and pressure points, and to find the specific queries causing the pressure.

IV. Key points of SQL Profiler usage

The following recommendations apply when using SQL Profiler:

    • Limit the number of events and data columns;
    • Discard startup events for performance analysis;
    • Limit the trace output size;
    • Avoid sorting data columns online;
    • Run Profiler remotely;

  1. Limiting events and data columns

When tracing SQL queries, you can filter events and data columns to determine which SQL activities are captured. Selecting more events adds considerable tracing overhead. Data columns do not add much overhead, because they are merely attributes of an event class. Therefore, know the reason for each event you trace, and select events only as necessary.

Minimizing the number of captured events keeps SQL Server from wasting valuable resource bandwidth generating all of them. Be careful when capturing events such as locks and execution plans, because they can make the trace output very large and slow down SQL Server.

Filtering is divided into two stages: pre-filtering, performed by SQL Server, and post-filtering, performed by the user. Pre-filtering happens online, while SQL Server activity is being captured, and offers several benefits:

    • Reduces the performance impact on SQL Server, because a limited number of events are generated;
    • Reduces the trace output size;
    • Simplifies post-filtering, because there are fewer captured events to sift through;

The only downside of pre-filtering is the possibility of losing important information needed for a thorough analysis.

  2. Discard the startup events used for performance analysis

The information used for performance analysis centers on the resource cost of a query. A startup event such as SP:StmtStarting cannot provide this, because the I/O volume, CPU load, and duration of a query can be calculated only after it has completed. Therefore, you do not need to capture startup events when tracing slow-running queries for performance analysis; the corresponding completion events provide this information.

Under what circumstances is capturing a startup event appropriate? Capture startup events when you expect some SQL queries to abort because of an error, or when you frequently see Attention events. An Attention event generally indicates that the user cancelled the query or that the query timed out, possibly because it ran too long.

  3. Limit the trace output size

In addition to pre-filtering events and data columns, other filter conditions can also limit the size of the trace output. As with event pre-filtering, limiting the output size risks losing events relevant to the overall system state. However, if you are focusing on expensive queries, the filters are helpful.

Through filters you can, for example, keep only queries whose duration is >= 2 or whose logical reads are >= 100, because queries that consume so little are essentially not worth optimizing.

  4. Avoid sorting data columns online

During profiling, you typically sort on different data columns (such as Duration, CPU, or Reads) to find the queries with the largest values. Sorting offline reduces the Profiler activity that must be performed while interacting with SQL Server. To sort captured trace output offline:

    • Capture the trace without any sorting or grouping;
    • Save the trace output to a trace file;
    • Open the trace file and sort or group its output on specific data columns as needed;

  5. Run Profiler remotely

It is generally not a good idea to run test tools directly on a production server. Profiler has a heavy user interface, so it is better to run it on another machine. Like System Monitor, Profiler should also not be run through a Terminal Services session, because the main part of the tool would still run on the server. When trace output is collected directly to a file, save it to a file local to the machine running Profiler. Even then, this is more resource-intensive than running a server-side trace through the system stored procedures; the system stored procedures remain the best option.

  6. Restricting the use of certain events

Some events are more expensive than others. Statement completion events can be very costly, depending on the nature of the queries being captured. Use particular caution with the Showplan XML events, especially on systems already under stress: Performance:Showplan XML, Performance:Showplan XML For Query Compile, and Performance:Showplan XML Statistics Profile. Although these events may be useful, do not use them on production machines.

V. Query performance metrics without Profiler

Creating a trace collects a lot of data for later use, but the collection can be costly, and you must wait for the results.

If you want to capture performance metrics of your system immediately, especially regarding query performance, the dynamic management view sys.dm_exec_query_stats is exactly what you need. If you also need a history of query runs and their individual costs, a trace is still the better tool. However, if you only need to know, say, the longest-running queries or those with the most physical reads, you can get that information from sys.dm_exec_query_stats.

Because sys.dm_exec_query_stats is just a view, you can simply query it to get information about the query plan statistics on the server.
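A minimal example follows, using only columns of the view itself; the TOP count and sort column are arbitrary choices:

-- The ten cached plans that have consumed the most CPU
SELECT TOP (10) plan_handle, execution_count, total_worker_time, total_logical_reads
FROM sys.dm_exec_query_stats
ORDER BY total_worker_time DESC;

The view's commonly used columns are described below.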

Column                 Description
plan_handle            A pointer that references the execution plan
creation_time          The time the plan was created
last_execution_time    The time the plan was last used by a query
execution_count        The number of times the plan has been used
total_worker_time      The total CPU time used by the plan since its creation
total_logical_reads    The total number of logical read operations performed by the plan since its creation
total_logical_writes   The total number of logical write operations performed by the plan since its creation
query_hash             A binary hash that can be used to identify queries with similar logic
query_plan_hash        A binary hash that can be used to identify plans with similar logic

To filter the information returned from sys.dm_exec_query_stats, you need to join it to other dynamic management functions, such as sys.dm_exec_sql_text, which returns the query text associated with a plan, and sys.dm_exec_query_plan, which returns the execution plan used by the query. Once joined to these DMFs, you can limit the results to the database or procedure you want to examine.
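Below is a minimal sketch of such a join, using only documented columns of these DMVs/DMFs; the TOP count and ORDER BY column are arbitrary choices:

-- The top ten queries by total logical reads, with their text and plans
SELECT TOP (10)
    qs.execution_count,
    qs.total_logical_reads,
    qs.total_worker_time,
    st.text AS query_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_logical_reads DESC;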
