Introduction
People often ask how the Profiler tool should be used; without it there is no good way to track down a poorly performing SQL statement. Having worked with MySQL myself for almost two years without touching Profiler, I found it had grown rusty for me and I had to get reacquainted with it. This article gives a detailed introduction to the Profiler tool, covering both how to use it and what to use it for. SQL Server Profiler is the performance analysis and monitoring tool that ships with SQL Server, and it can also generate the workload data required by Database Engine Tuning Advisor; for example, using Profiler to collect the SQL executed while debugging a feature is a very good technique. Profiler is mainly used for real-time online monitoring and for collecting data for later analysis; the collected data can be saved to a file or inserted into a table.
Trace properties
I. General
Save to file: saves the trace records to the specified file.
1. Maximum file size
Specifies that a trace with a maximum file size stops saving trace information to the file once that size is reached. Use this option to group events into smaller, easier-to-manage files. Limiting the file size also makes unattended tracing safer, because the trace stops after the maximum file size is reached. You can set the maximum file size for traces created through Transact-SQL stored procedures or through SQL Server Profiler.
The maximum file size option is capped at 1 GB. The default maximum file size is 5 MB.
Note: The maximum file size should not be set too large, especially for files that will be analyzed by Database Engine Tuning Advisor. An oversized trace file takes a long time to analyze, and because Database Engine Tuning Advisor replays the collected workload repeatedly, too large a workload can sometimes cause the analysis to fail. A long-running analysis also keeps pressure on the server and has a larger impact on the business. The default size is fine; enable file rollover and analyze the files in several passes instead.
2. Enable file rollover
If you use the file rollover option, SQL Server closes the current file and creates a new file when the maximum file size is reached. The new file has the same name as the original file, but the file name appends an integer to indicate its sequence. For example, if the original trace file is named Filename_1.trc, the next trace file is Filename_2.trc, and so on. If the name assigned to the new rollover file is already in use by an existing file, the existing file will be overwritten unless the existing file is a read-only file. By default, the file rollover option is enabled when you save trace data to a file.
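For reference, the same maximum-file-size and rollover options can be set when creating a server-side trace in T-SQL with the sp_trace_create system procedure. A minimal sketch; the trace file path is a placeholder:

```sql
-- Sketch: server-side trace with file rollover and a 5 MB cap.
-- The trace file path is a placeholder; .trc is appended automatically.
DECLARE @traceid int;
EXEC sp_trace_create
    @traceid     = @traceid OUTPUT,
    @options     = 2,                  -- 2 = TRACE_FILE_ROLLOVER
    @tracefile   = N'D:\Traces\MyTrace',
    @maxfilesize = 5;                  -- in MB; 5 MB is the default
-- add events with sp_trace_setevent, then start the trace:
EXEC sp_trace_setstatus @traceid, 1;   -- 1 = start
```

The Profiler GUI can also script out a configured trace for you (File > Export > Script Trace Definition), which is a good way to check the exact parameter values it uses.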
3. Server processes trace data
This option ensures that the server records every trace event. If logging events significantly degrades performance, clear 'Server processes trace data' so that the server no longer guarantees that every event is logged.
4. Maximum number of rows
Specifies that traces with the maximum number of rows will stop saving trace information to the table when the maximum number of rows is reached. Each event constitutes a row, so this parameter sets the range of events that are collected. Setting the maximum number of rows makes unattended tracking easier to run. For example, if you need to start a trace that saves trace data to a table, and you want to stop tracing when the table becomes too large, you can make it stop automatically.
If the maximum number of rows has been specified and reached, the trace continues to run while SQL Server Profiler is open, but trace information is no longer recorded. SQL Server Profiler keeps displaying trace results until the trace is stopped.
5. Enable trace stop time
After a trace stop time is enabled, the trace runs until the specified time and then stops automatically. It is recommended that every trace be given a stop time, to prevent a forgotten trace from filling the server's disk; the default stop time is 1 hour.
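For a server-side trace, the stop time corresponds to the @stoptime parameter of sp_trace_create. A minimal sketch, with a placeholder file path:

```sql
-- Sketch: a trace that stops itself automatically one hour from now.
DECLARE @traceid int;
DECLARE @stop datetime;
SET @stop = DATEADD(HOUR, 1, GETDATE());
EXEC sp_trace_create
    @traceid     = @traceid OUTPUT,
    @options     = 0,
    @tracefile   = N'D:\Traces\ShortTrace',  -- placeholder path
    @maxfilesize = 5,
    @stoptime    = @stop;                    -- trace stops at this time
EXEC sp_trace_setstatus @traceid, 1;         -- start the trace
```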
Attention:
- Starting with SQL Server 2005, the server reports the duration of an event in microseconds (one millionth of a second, or 10^-6 seconds) and the CPU time used by the event in milliseconds (one thousandth of a second, or 10^-3 seconds).
- In SQL Server 2000, the server reports the duration and CPU time in milliseconds.
- In SQL Server 2005 and later versions, the SQL Server Profiler graphical user interface displays the Duration column in milliseconds by default, but when the trace is saved to a file or a database table, the Duration value is written in microseconds.
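This unit difference matters when querying a saved trace file. A sketch using sys.fn_trace_gettable; the file path is a placeholder:

```sql
-- Sketch: Duration read from a trace file is in microseconds;
-- divide by 1000 to match the milliseconds shown in the GUI.
SELECT TextData,
       Duration / 1000.0 AS Duration_ms,  -- microseconds -> milliseconds
       CPU               AS CPU_ms        -- CPU is already in milliseconds
FROM sys.fn_trace_gettable(N'D:\Traces\MyTrace.trc', DEFAULT)
WHERE Duration IS NOT NULL
ORDER BY Duration DESC;
```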
II. Event selection
Select different trace events for different tracing purposes. Ticking 'Show all events' reveals every available trace event, 21 event categories in total. The two most used categories are Stored Procedures and TSQL, which record the stored procedures and SQL statements that were executed. Hovering the mouse over a specific event or event column shows its description. Several common events and commonly used event columns are analyzed below.
1. Show all events
When checked, all events will be displayed.
2. Show all columns
When checked, all columns will be displayed.
3. Column filtering
Column filters add conditions to a column, much like the WHERE clause of a TSQL statement. For an integer column you enter a value directly; for a string column the filter behaves like LIKE, using unquoted %-style fuzzy matching. Ticking 'Exclude rows that do not contain values' removes records that do not meet the criteria from the trace results.
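On a server-side trace the same filters are applied with sp_trace_setfilter. A sketch, assuming @traceid was returned by sp_trace_create and using the documented trace column ids (1 = TextData, 13 = Duration):

```sql
-- Sketch: column filters on a server-side trace.
DECLARE @traceid int;
SET @traceid = 1;            -- assumed: id returned by sp_trace_create
-- String column: LIKE-style fuzzy match on TextData
-- (logical operator 0 = AND, comparison operator 6 = LIKE)
EXEC sp_trace_setfilter @traceid, 1, 0, 6, N'%book%';
-- Integer column: Duration >= 1000 microseconds
-- (comparison operator 4 = greater than or equal; value must be bigint)
DECLARE @minduration bigint;
SET @minduration = 1000;
EXEC sp_trace_setfilter @traceid, 13, 0, 4, @minduration;
```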
4. Column Organization
Column organization can be understood as a GROUP BY operation in a TSQL statement: rows with the same values are placed together and de-duplicated.
Events
1. SQL:Stmt* events
[SQL:StmtStarting]: logged when a TSQL statement starts.
[SQL:StmtCompleted]: logged when a TSQL statement completes.
The difference between the two events matches their names: StmtStarting records the moment an event starts and does not care what happens afterwards, while StmtCompleted records the event after it finishes and therefore covers everything that happened between start and end. Columns such as "Duration", "CPU", "Reads", "Writes" and "EndTime" only appear for the StmtCompleted event. So if you do not care about the course of an event and only need counts, such as how many times a statement or a stored procedure executed, the Starting event is sufficient.
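As a sketch of that idea, a server-side trace that only counts executions can subscribe to the Starting event with just a few columns. Event id 40 = SQL:StmtStarting per the trace event documentation; @traceid is assumed to come from sp_trace_create:

```sql
-- Sketch: count-only tracing with SQL:StmtStarting (event 40).
-- Column ids: 1 = TextData, 12 = SPID, 14 = StartTime.
DECLARE @traceid int;
SET @traceid = 1;   -- assumed: id returned by sp_trace_create
DECLARE @on bit;
SET @on = 1;
EXEC sp_trace_setevent @traceid, 40, 1,  @on;  -- TextData
EXEC sp_trace_setevent @traceid, 40, 12, @on;  -- SPID
EXEC sp_trace_setevent @traceid, 40, 14, @on;  -- StartTime
```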
2. SQL:Batch* events
[SQL:BatchStarting]: logged when a TSQL batch starts.
[SQL:BatchCompleted]: logged when a TSQL batch completes.
This time I executed two SELECT statements together as one batch. As the batch events show, the recorded SQL includes related comments, and the two TSQL statements of the batch are logged as a single event, while the Stmt events record each specific TSQL statement, so the two statements appear as two records. You can also see that the sum of the two statements' Duration values is slightly less than the Duration of the whole batch, which is normal: compiling and analyzing the whole batch inevitably takes more time than a single statement, though the difference is very small.
BatchCompleted events are used for Database Engine Tuning Advisor, while StmtCompleted is used to analyze individual TSQL statements. Likewise, the Starting and Completed events in the Stored Procedures category behave the same way as those in TSQL.
Event columns
The frequently used event columns are listed below:
TextData: the text details of the event, such as the full SQL statement that was executed.
ApplicationName: the name of the client application connected to SQL Server.
NTUserName: the Windows user name.
LoginName: the SQL Server login name.
CPU: the CPU time consumed by the event, in milliseconds (one thousandth of a second, or 10^-3 seconds).
Reads: The number of logical reads performed.
Writes: the number of physical disk writes performed.
Duration: the duration of the event, that is, the elapsed time. The graphical interface shows it in milliseconds (10^-3 seconds), but in a text file or database table it is stored in microseconds (10^-6 seconds).
ClientProcessID: the process ID of the application that called SQL Server.
SPID: the server process ID that SQL Server assigned to the connection, that is, the process ID recorded in sys.sysprocesses.
StartTime: The start time of the event.
EndTime: The end time of the event.
DBUserName: the SQL Server database user name of the client.
DatabaseID: the ID of the database named in a USE database statement; if none is specified, it is the default database ID (that is, the database ID of master). The column is therefore not of much use.
Error: the error number of the event, usually an error number stored in sysmessages.
ObjectName: The name of the object being referenced.
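To look up the message text for a number recorded in the Error column, the sys.messages catalog view can be queried. A small sketch (1033 is the English language id):

```sql
-- Sketch: resolving an Error column value to its message text.
SELECT message_id, severity, text
FROM sys.messages
WHERE message_id = 8134      -- example: divide-by-zero error
  AND language_id = 1033;    -- English
```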
III. Built-in trace templates
The tool ships with several useful trace templates that can be used directly, and you can also build custom trace events and trace properties and save them as templates for later use.
SP_Counts: counts how many times stored procedures run, grouped by stored procedure name; useful for analyzing stored procedure behavior over a period of time.
Standard: records the start times of all stored procedures and T-SQL batches; use it to monitor general database server activity. It is the default template when a general-purpose trace is needed.
TSQL: records the content and start time of all T-SQL statements that clients submit to SQL Server; typically used for program debugging.
TSQL_Duration: records all T-SQL batches that clients submit to SQL Server and the time (in milliseconds) taken to execute them, grouped by duration; useful for analyzing slow queries. Trace records from this template can be used for Database Engine Tuning Advisor analysis.
TSQL_Grouped: records all T-SQL batches submitted to SQL Server and their start times, grouped by submitting client and login user; used to analyze queries from a particular client or user.
TSQL_Locks: records all starting and completing stored procedures and T-SQL statements, together with deadlock information; used to track deadlocks.
TSQL_Replay: records detailed information about issued T-SQL statements, capturing everything required to replay the trace; useful for benchmarks and similar optimization work.
TSQL_SPs: records details about all executed stored procedures; useful for analyzing the component steps of a stored procedure. If you suspect procedures are being recompiled, add the SP:Recompile event.
Tuning: records information about stored procedure and T-SQL batch execution and the time (in milliseconds) required to run them. Trace output produced with this template can be used as a Database Engine Tuning Advisor workload for index and performance optimization. This template is similar to TSQL_Duration, except that the latter also groups by duration.
Database Engine Tuning Advisor
1. To analyze trace event records with Database Engine Tuning Advisor, you must capture the following trace events:
RPC:Completed
SQL:BatchCompleted
SP:StmtCompleted
The Starting versions of these trace events can also be used, for example SQL:BatchStarting. Database Engine Tuning Advisor does not tune other types of trace events.
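For a server-side trace, the three required events can be added with sp_trace_setevent. A sketch using the documented event ids (10 = RPC:Completed, 12 = SQL:BatchCompleted, 45 = SP:StmtCompleted) and assuming @traceid comes from sp_trace_create:

```sql
-- Sketch: subscribing the events Database Engine Tuning Advisor needs.
DECLARE @traceid int;
SET @traceid = 1;   -- assumed: id returned by sp_trace_create
DECLARE @on bit;
SET @on = 1;
EXEC sp_trace_setevent @traceid, 10, 1, @on;  -- RPC:Completed, TextData
EXEC sp_trace_setevent @traceid, 12, 1, @on;  -- SQL:BatchCompleted, TextData
EXEC sp_trace_setevent @traceid, 45, 1, @on;  -- SP:StmtCompleted, TextData
```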
2. Include LoginName column
Database Engine Tuning Advisor submits showplan requests during the tuning process. When a trace table or trace file that contains a LoginName data column is used as a workload, Database Engine Tuning Advisor impersonates the user specified in LoginName. If that user has not been granted SHOWPLAN permission (which enables the user to execute and generate showplans for the statements in the trace), Database Engine Tuning Advisor will not tune those statements.
To avoid granting SHOWPLAN permission to every user named in the LoginName column of the trace:
- Create a new workload by deleting the LoginName column from the events that have not yet been tuned, saving only those events to a new trace file or trace table.
- Resubmit the new workload, which no longer has a LoginName column, to Database Engine Tuning Advisor.
Database Engine Tuning Advisor tunes the new workload because no login information is specified in the trace. If a statement has no corresponding LoginName, Database Engine Tuning Advisor tunes it by impersonating the user who started the tuning session (a member of the sysadmin fixed server role or of the db_owner fixed database role).
3. Database Engine Tuning Advisor cannot perform the following actions:
- Recommend indexes for system tables.
- Add or remove unique indexes, or indexes that enforce PRIMARY KEY or UNIQUE constraints.
- Tune single-user databases.
4. Database Engine Tuning Advisor has the following limitations:
- Database Engine Tuning Advisor collects statistics through data sampling. Therefore, running the tool repeatedly on the same workload may produce different results.
- Database Engine Tuning Advisor cannot be used to optimize indexes in a Microsoft SQL Server 7.0 or earlier database.
- If the maximum disk space specified for tuning recommendations exceeds the available free space, Database Engine Tuning Advisor still uses the specified value. However, when you execute the recommendation script to implement it, the script will fail unless more disk space is added first. You can specify the maximum disk space with the -B option of the dta utility, or by entering a value in the Advanced Tuning Options dialog box.
- For security reasons, Database Engine Tuning Advisor cannot optimize the workloads that reside in the tracking tables on the remote server. To remove this restriction, you can select one of the following options:
- Use trace files instead of tracking tables.
- Copy the trace table to the remote server.
- When a constraint is enforced, such as the maximum disk space specified for tuning recommendations (via the -B option or the Advanced Tuning Options dialog box), Database Engine Tuning Advisor may be forced to drop some existing indexes. In that situation the resulting recommendation may show a negative expected improvement.
- When you constrain tuning time (with the -A option of the dta utility or by selecting Limit tuning time on the Tuning Options tab), Database Engine Tuning Advisor may exceed that limit in order to produce accurate expected-improvement values and analysis reports for the portion of the workload already processed.
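The -B and -A constraints above are passed on the dta command line. A sketch in which the server, database, paths, and session name are all placeholders:

```
rem Sketch: dta utility invocation (names and paths are placeholders).
rem -E  use Windows authentication      -if workload trace file
rem -s  tuning session name             -of output recommendation script
rem -B  maximum disk space in MB        -A  maximum tuning time in minutes
dta -S MyServer -E -D MyDatabase -if D:\Traces\MyTrace.trc -s Session1 -of D:\Traces\Recommendations.sql -B 2048 -A 60
```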
5. Database Engine Tuning Advisor may not provide recommendations in the following situations:
- The table being optimized contains less than 10 pages of data.
- The recommended indexes would not improve query performance enough over the current physical database design to be worthwhile.
- The user running Database Engine Tuning Advisor is not a member of the db_owner database role or the sysadmin fixed server role. Queries in the workload are analyzed in the security context of the user running the tool, who must be a member of the db_owner database role.
6. Database Engine Tuning Advisor may not provide partitioning recommendations in the following situations:
- The xp_msver extended stored procedure is not enabled. This extended stored procedure is used to obtain the number of processors and the available memory on the server whose database is being tuned. Note that after you install SQL Server, this extended stored procedure is enabled by default. For more information, see Understanding Surface Area Configuration and xp_msver (Transact-SQL).
7. Performance Considerations
During the parsing process, Database Engine Tuning Advisor can consume a considerable amount of processor and memory resources. To avoid slowing down your production server, use one of the following policies:
- Tune the database when the server is idle. Database Engine Tuning Advisor may affect the performance of maintenance tasks.
- Use the test server/production server feature. For more information, see Mitigating the Production server tuning load.
- Have Database Engine Tuning Advisor analyze only the physical database design structures you care about. The tool provides many options, but specify only those you need.
Note: Because Database Engine Tuning Advisor analyzes execution plans for the statements in the workload as part of tuning, it also puts pressure on the server. Analyzing a large workload in particular may take an hour or more and stress the server continuously, so avoid running Database Engine Tuning Advisor analyses during peak business hours.
Examples
Three cases are shown below: using Database Engine Tuning Advisor to analyze trace records and optimize an index, monitoring deadlocks, and creating a custom trace template.
Case 1: Optimizing the Index
1. Create test data
--Create a test table
CREATE TABLE [dbo].[Book] (
    [id] [int] NOT NULL PRIMARY KEY,
    [name] [varchar](50) NULL  -- column length was lost in the original text; 50 is assumed
);
--Insert 100,000 rows of test data
DECLARE @id int
SET @id = 1
WHILE @id < 100000
BEGIN
    INSERT INTO Book VALUES (@id, CONVERT(varchar(50), @id))
    SET @id = @id + 1
END;
2. Create a Trace
The built-in trace template "Tuning" is used here.
1. Create the trace and click Run; keep the default event selection.
2. Execute SQL
SELECT * FROM Book WHERE name = '10001';
Because the name column is not indexed, analyzing the query's execution plan suggests creating an index on name, which is also what Database Engine Tuning Advisor concludes.
3. Stop tracking
You must stop the trace before Database Engine Tuning Advisor can analyze the workload.
4. Open Database Engine Tuning Advisor
It can be opened directly from the Profiler toolbar. For "Workload file" select the trace file just saved, for "Database for workload analysis" select the database to optimize, and under "Select databases and tables to tune" tick the relevant database and tables. The tuning options can be left at their defaults unless you have specific needs; then click "Start Analysis".
Database Engine Tuning Advisor automatically generates the steps to create the index and also estimates the percentage performance improvement the index can provide. If there are many index suggestions for the tables, you can save them as a SQL file using the "Save Recommendations" button next to "Start Analysis".
Suggestions:
1. The recommendations given by Database Engine Tuning Advisor are not always right. Weigh the execution frequency of the SQL to decide whether an index is really needed; for example, if the statement above runs only once and never again, there is no need to create the index.
2. Rename the index suggested by Database Engine Tuning Advisor. The generated index names are not intuitive, so it is recommended to change them manually, for example to "Ix_book_name", following an "Ix_<table>_<column>" naming rule.
3. The files used for analysis should not be too large, otherwise the analysis may not finish; and do not analyze during business peaks.
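Following suggestion 2, the recommendation can be applied manually with a clearer name. A sketch against the test table above:

```sql
-- Sketch: create the suggested index with an Ix_<table>_<column> name.
CREATE NONCLUSTERED INDEX Ix_book_name
    ON dbo.Book ([name]);
```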
Case 2: Monitoring deadlocks
1. Create a Trace
The template selects the "Tsql_locks" template that comes with it and runs the trace.
2. Execute SQL
Open two session windows and execute the following SQL in each: run Session 1 first, then Session 2 within 10 seconds. Each session takes its own exclusive lock and then requests the other session's exclusive lock, producing a deadlock.
Session 1 execution (the SPID of session 1 here is 62):
BEGIN TRANSACTION
UPDATE Book SET name = 'a' WHERE id = 10
-- delay 10s before the next statement
WAITFOR DELAY '0:0:10'
UPDATE Book SET name = 'a' WHERE id = 100
Session 2 execution:
BEGIN TRANSACTION
UPDATE Book SET name = 'B' WHERE id = 100
-- delay 20s before the next statement
WAITFOR DELAY '0:0:20'
UPDATE Book SET name = 'B' WHERE id = 10
The error message returned by the SSMS client shows that session 62 was chosen as the deadlock victim.
3. Trace Analysis Deadlock
The deadlock trace event presents the two sessions' deadlock graphically; session 62 is marked with an X, indicating that it was chosen as the deadlock victim.
Case three: Create a custom tracking template
The Standard template is a good starting point; for example, to monitor statement execution we can take the Standard template, modify it, and save the result as our own template.
1. Create a TSQL statement trace
2. Create a tracking template
Stop the current TSQL trace, then choose File > Save As > Trace Template to save it as your own trace template.
3. Column filtering
Here the trace is filtered to TSQL statements containing "book"; the column filter works like a WHERE ... LIKE clause.
For integer columns, no fuzzy condition is needed:
Note: To remove a column filter, remember to delete the filter criteria and also clear the "Exclude rows that do not contain values" checkbox, otherwise the filter will continue to restrict the traced rows.
4. Column Organization
Column organization is a GROUP BY-like operation: the trace is displayed grouped by the chosen column, similar to a SELECT ... GROUP BY query. For example, here I group the trace by Duration.
After grouping by Duration, records with the same duration are placed in the same group.
Summary
Space only permits listing some simple, common operations; the other event categories work similarly and are worth exploring if you are interested. Profiler is a very practical monitoring tool with a good interface, a distinctive advantage of SQL Server, and it is worth learning to use it well.