Use Cases
I remember once tuning for a company: the person in charge sent me a pile of business T-SQL scripts. I stayed calm in the face of the massive scripts; even though I did not understand the complex internal business, all we had to do was focus on the key problem of "slowness". Filter the queries by how slow they are and optimize them one by one, and won't the problem be solved quickly? Three days later the person in charge was shaking my hand, almost in tears: the hard work had paid off and query response times were much improved.
Trace providers
SQL Server provides two trace providers: a file provider (the trace is written to a physical file on a local path or a UNC network path) and a rowset provider. You are probably more familiar with the latter, since it is what the SQL Server Profiler tool in SSMS uses. For more details on the differences between the rowset and file providers, see the MSDN documentation.
Both providers buffer the current event data, obviously to reduce I/O pressure, so that events can be delivered with as little blocking or loss as possible; when the buffer fills to a certain point, it is flushed to disk (or sent over the network to the client that displays the monitored rowset). An important guarantee of the file provider is that no event is ever dropped, which also means that once I/O slows down, the trace can slow down the execution of the T-SQL being traced.
- SELECT * FROM sys.dm_os_wait_stats
- WHERE wait_type IN ('SQLTRACE_LOCK','IO_COMPLETION');
I use this statement to check the impact of trace locking and I/O completion waits on the current machine; here is the output from one of my customers' servers:
| wait_type | waiting_tasks_count | wait_time_ms | max_wait_time_ms | signal_wait_time_ms |
|---|---|---|---|---|
| IO_COMPLETION | 66030898 | 24377499 | 3634 | 418960 |
| SQLTRACE_LOCK | 12007 | 175943 | 1001 | 1281 |
Because I apply heavy filtering, these values are acceptable and the impact is not significant.
The rowset provider is the one we are most familiar with, because it is what the SQL Server Profiler GUI presents to us directly. However, I do not recommend it. First, when the buffer fills up there is a delay: the provider may clear the buffer and keep accepting new events, while the discarded events are never sent to the client; since they are not written to a physical file either, they are simply lost. For example, running SQL Server Profiler against a heavily loaded DB server just to watch events on screen can cause events to be dropped. Second, the file provider collects a large enough buffer to perform big block writes, so its performance is better than the rowset provider's.
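A quick way to see which provider each active trace is using, and whether it has already dropped events, is to query sys.traces. This is a minimal sketch: is_rowset marks rowset-provider traces (such as Profiler sessions), and dropped_event_count shows how many events were discarded.
- SELECT id, status, path, is_rowset, is_default,
-        buffer_count, buffer_size, event_count, dropped_event_count
- FROM sys.traces;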
Confidentiality principles
SQL Server's security features automatically filter sensitive data, such as passwords, out of trace output. I executed the following statement in SSMS:
- EXEC sp_password 'pp','pp1','sa';
This system stored procedure changes the sa login's password. I opened SQL Server Profiler, selected the T-SQL trace template, and then executed the stored procedure above. The monitoring result was:
Monitoring result: the statement text is masked and shows only a comment such as -- * sp_password ----------------------------, so the password itself never appears in the trace.
SQL Server Profiler
The SQL Server Profiler GUI tool has many advantages. First, it reduces the complexity of setting up monitoring and lets us establish a trace quickly. In the trace properties we can choose one of the templates Microsoft ships, including the common TSQL, TSQL_Duration, and TSQL_Locks templates, which monitor all queries run against the current database, how long each query takes, and locking activity.
In trace properties -> Events Selection we can pick the events we want (all events are documented on MSDN), and by clicking Column Filters we can define custom filters and cut out noise.
Here, for example, I set a Duration filter (greater than or equal to 500).
For the other templates, consult the MSDN documentation and try them yourself (SQL Server 2008 R2 ships with local MSDN help).
Server-side tracing and physical file collection
SQL Server Profiler is only a wrapper over a handful of stored procedures. I prefer to define reusable scripts and save the trace results on the server itself, where a large amount of output can be analyzed and archived.
Server-side tracing involves only four stored procedures. Writing the filter script by hand is tedious, but Profiler's File -> Export can generate the trace script for us, so we do not have to write the whole T-SQL script ourselves. Still, I recommend getting familiar with these stored procedures:
sp_trace_create defines a trace; the trace it creates can be seen in sys.traces.
sp_trace_setevent adds an event and a column to the trace.
sp_trace_setfilter adds a filter on a trace column.
sp_trace_setstatus controls the trace: sp_trace_setstatus @traceid, 1 starts it, sp_trace_setstatus @traceid, 0 stops it, and sp_trace_setstatus @traceid, 2 closes and deletes the definition, after which the trace no longer appears in sys.traces.
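As a minimal sketch of that last point, here is how a running trace can be stopped and removed; the trace id of 2 is a hypothetical value read from sys.traces:
- -- Find the trace first: SELECT id, path, status FROM sys.traces;
- DECLARE @TraceID int;
- SET @TraceID = 2;                       -- hypothetical id taken from sys.traces
- EXEC sp_trace_setstatus @TraceID, 0;    -- 0 = stop the trace
- EXEC sp_trace_setstatus @TraceID, 2;    -- 2 = close it and delete its definition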
The whole tracing setup is actually fairly simple. Here is a reusable script of mine:
It traces batch statements and stored procedure calls in a given database whose duration exceeds a specified number of seconds. When the trace file grows past 5 MB, a rollover occurs and _1, _2, and so on are appended to the .trc file name:
- CREATE PROC [dbo].[sp_trace_sql_duration]
-     @DatabaseName nvarchar(128),
-     @Seconds bigint,
-     @FilePath nvarchar(260)
- AS
- BEGIN
-     DECLARE @rc int, @TraceID int, @MaxFileSize bigint;
-     SET @MaxFileSize = 5;   -- roll over to a new .trc file every 5 MB (_1, _2, ... appended)
-
-     -- Option 2 = TRACE_FILE_ROLLOVER; capture the return code so we can bail out on failure
-     EXEC @rc = sp_trace_create @TraceID OUTPUT, 2, @FilePath, @MaxFileSize, NULL;
-
-     IF @rc != 0
-         RETURN;
-
-     DECLARE @On bit;
-     SET @On = 1;
-
-     -- Event 10 = RPC:Completed, event 41 = SQL:StmtCompleted
-     -- Columns: 35 = DatabaseName, 1 = TextData, 13 = Duration
-     EXEC sp_trace_setevent @TraceID, 10, 35, @On;
-     EXEC sp_trace_setevent @TraceID, 10, 1, @On;
-     EXEC sp_trace_setevent @TraceID, 10, 13, @On;
-     EXEC sp_trace_setevent @TraceID, 41, 35, @On;
-     EXEC sp_trace_setevent @TraceID, 41, 1, @On;
-     EXEC sp_trace_setevent @TraceID, 41, 13, @On;
-
-     -- Duration is reported in microseconds, so convert the seconds parameter
-     SET @Seconds = @Seconds * 1000000;
-
-     -- Filter: Duration (column 13) >= @Seconds (comparison operator 4 = greater than or equal)
-     EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @Seconds;
-
-     IF @DatabaseName IS NOT NULL
-         EXEC sp_trace_setfilter @TraceID, 35, 0, 0, @DatabaseName;  -- DatabaseName = @DatabaseName
-
-     EXEC sp_trace_setstatus @TraceID, 1;  -- start the trace
-     SELECT TraceID = @TraceID;
- END
The parameters are self-explanatory: the database name, the duration threshold in seconds, and the path where the trace file is saved.
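A call might look like the following sketch; the database name and file path are placeholders, and note that sp_trace_create appends the .trc extension to the file name itself:
- -- Hypothetical values: trace statements in AdventureWorks that run for 2 seconds or longer
- EXEC dbo.sp_trace_sql_duration
-     @DatabaseName = N'AdventureWorks',
-     @Seconds = 2,
-     @FilePath = N'D:\Trace\slow_sql';   -- .trc is appended automatically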
Once this trace has been running for a while, a large amount of time-consuming T-SQL can be found quickly. We can run
- SELECT * FROM fn_trace_gettable(N'monitoring file path', 1);
to view the results as a rowset.
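To go straight to the worst offenders, something like the following sketch works; the file path is a placeholder, and since Duration is stored in microseconds it is divided down to seconds here:
- SELECT TOP 20
-        DatabaseName,
-        CAST(TextData AS nvarchar(max)) AS TextData,
-        Duration / 1000000.0 AS DurationSeconds,
-        StartTime
- FROM fn_trace_gettable(N'D:\Trace\slow_sql.trc', DEFAULT)   -- DEFAULT also reads the rollover files
- ORDER BY Duration DESC;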
Similarly, creative readers can build their own traces for locks and deadlocks and save them to files. I do suggest reducing noise as much as possible. Ideally we want what Inside Microsoft SQL Server 2005: T-SQL Programming describes: a regular expression, used from SQLCLR, that normalizes similar statements by replacing literal values with parameter markers so they can be aggregated together. I believe that regular expression has a bug; when I have time I will write it up for you, along with a better way to use it.
Monitoring exceptions
Earlier in this series, specific SQL events could be captured and trigger timely notifications, but the detail in those notifications was limited. By selecting the Exception event together with the T-SQL batch and RPC starting events, we can trace every exception related to batches and stored procedures for later analysis; this trace is very helpful for investigating errors!!! I created a trace script for it, which rolls over at 5 MB just like the duration trace above. This trace should be run on a schedule: I do not recommend leaving it on permanently, but monitoring and handling exceptions regularly pays off the longer the system runs.
- CREATE PROC [dbo].[sp_trace_sql_exception]
-     @FilePath nvarchar(260)
- AS
- BEGIN
-     DECLARE @rc int, @TraceID int, @MaxFileSize bigint;
-     SET @MaxFileSize = 5;   -- roll over to a new .trc file every 5 MB
-
-     -- Option 2 = TRACE_FILE_ROLLOVER
-     EXEC @rc = sp_trace_create @TraceID OUTPUT, 2, @FilePath, @MaxFileSize, NULL;
-     IF @rc != 0
-     BEGIN
-         SELECT ErrorCode = @rc;
-         RETURN;
-     END
-
-     DECLARE @On bit;
-     SET @On = 1;
-
-     -- Event 33 = Exception, 11 = RPC:Starting, 13 = SQL:BatchStarting
-     -- Columns: 1 = TextData, 2 = BinaryData, 12 = SPID, 14 = StartTime, 51 = EventSequence
-     EXEC sp_trace_setevent @TraceID, 33, 1, @On;
-     EXEC sp_trace_setevent @TraceID, 33, 14, @On;
-     EXEC sp_trace_setevent @TraceID, 33, 51, @On;
-     EXEC sp_trace_setevent @TraceID, 33, 12, @On;
-     EXEC sp_trace_setevent @TraceID, 11, 2, @On;
-     EXEC sp_trace_setevent @TraceID, 11, 14, @On;
-     EXEC sp_trace_setevent @TraceID, 11, 51, @On;
-     EXEC sp_trace_setevent @TraceID, 11, 12, @On;
-     EXEC sp_trace_setevent @TraceID, 13, 1, @On;
-     EXEC sp_trace_setevent @TraceID, 13, 14, @On;
-     EXEC sp_trace_setevent @TraceID, 13, 51, @On;
-     EXEC sp_trace_setevent @TraceID, 13, 12, @On;
-
-     EXEC sp_trace_setstatus @TraceID, 1;  -- start the trace
-
-     SELECT TraceID = @TraceID;
- END
Run this on a schedule, comrades, and go looking for exceptions...
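When reviewing the results, it helps to translate the EventClass numbers back into names by joining sys.trace_events. This is a sketch; the trace file path is a placeholder:
- SELECT te.name AS EventName,
-        t.SPID,
-        t.TextData,
-        t.Error,
-        t.StartTime
- FROM fn_trace_gettable(N'D:\Trace\sql_exceptions.trc', DEFAULT) AS t
- JOIN sys.trace_events AS te ON te.trace_event_id = t.EventClass
- ORDER BY t.StartTime DESC;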
The default trace and the black box trace
The trace with id = 1 in sys.traces is SQL Server's default trace. It is lightweight and records, among other things, server starts and stops, object creation and deletion, automatic growth of data and log files, and other database changes. It is ideal for finding out which user dropped a table they should not have (although of course you should not hand out such accounts unless necessary!).
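For example, to see recent object creations and deletions recorded by the default trace, a sketch like this can be used; the current default trace file is read from sys.traces and event names are resolved through sys.trace_events:
- DECLARE @path nvarchar(260);
- SELECT @path = path FROM sys.traces WHERE is_default = 1;
-
- SELECT te.name AS EventName,
-        t.DatabaseName, t.ObjectName, t.LoginName, t.HostName, t.StartTime
- FROM fn_trace_gettable(@path, DEFAULT) AS t
- JOIN sys.trace_events AS te ON te.trace_event_id = t.EventClass
- WHERE te.name IN ('Object:Created', 'Object:Altered', 'Object:Deleted')
- ORDER BY t.StartTime DESC;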
You can use
- EXEC sp_configure 'default trace enabled',0;
- RECONFIGURE WITH OVERRIDE;
to disable the default trace.
The black box trace helps us diagnose database failures; you will also come across it when looking up sp_trace_create on MSDN.
We can then create a simple stored procedure like the one below to quickly set up a black box trace and help diagnose those failures!
- CREATE PROCEDURE sp_trace_blackbox
-     @FilePath nvarchar(260)
- AS
- BEGIN
-     DECLARE @TraceID int, @MaxFileSize bigint;
-     SET @MaxFileSize = 25;
-
-     -- Option 8 = TRACE_PRODUCE_BLACKBOX; @FilePath must be NULL for a black box trace,
-     -- and the file is written as blackbox.trc in the instance's DATA folder
-     EXEC sp_trace_create @TraceID OUTPUT, 8, @FilePath, @MaxFileSize;
-     EXEC sp_trace_setstatus @TraceID, 1;  -- start the trace
- END
Here I pass @FilePath = NULL; the black box trace file is then written to the SQL Server instance's DATA folder by default.
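To read the black box afterwards, point fn_trace_gettable at that file. The path below is only an assumption for a default SQL Server 2008 R2 instance; adjust it to your own instance's DATA folder:
- -- Assumed path for a default SQL Server 2008 R2 instance
- SELECT EventClass, SPID, TextData, StartTime
- FROM fn_trace_gettable(
-     N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\blackbox.trc',
-     DEFAULT)
- ORDER BY StartTime DESC;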
End
This section has walked through the features of SQL Server Trace in some detail. If you are interested, MSDN is worth studying further; I also drew on MSDN and the Microsoft SQL Server 2005 tuning books. Upcoming posts may cover DDL trigger monitoring, C2 auditing, and SQL Server event notifications (which involve Service Broker; I will open a separate series to cover Service Broker in detail), and the series will probably conclude with the SQL Server 2008 data collector.