One of the biggest obstacles to getting started with Extended Events used to be the need for XQuery and XML knowledge to analyze the data. Creating and running a session can be done in T-SQL, but regardless of the target used, the data had to be parsed as XML. SQL Server 2012 removed this restriction by introducing the Extended Events user interface, which, as we saw in Part 2, provides the ability to create and manage sessions; it is essentially unchanged in SQL Server 2014. In this article we will explore its abilities for viewing and processing captured data.

View data

Once the session is created and started, we have two options for viewing event data: the Live Data Viewer and the Target Data Viewer.
To follow along with the examples, you can create the XE_SampleCapture session shown in Listing 1. It captures batch, statement, and procedure completion events, along with two related action fields:

query_hash - identifies queries with the same logic but potentially different literal values or query plans

query_plan_hash - identifies query plans with a similar logical tree, even when the query text differs

These two fields are known as the query fingerprint and the plan fingerprint, respectively. We use them primarily to analyze ad hoc workloads, looking for queries that are very similar but slightly different, for example because the literal values embedded in them differ.
CREATE EVENT SESSION [XE_SampleCapture] ON SERVER
ADD EVENT sqlserver.rpc_completed (
    ACTION (sqlserver.database_id, sqlserver.query_hash, sqlserver.query_plan_hash)
    WHERE ([sqlserver].[is_system] = (0))
),
ADD EVENT sqlserver.sp_statement_completed (
    SET collect_statement = (1)
    ACTION (sqlserver.database_id, sqlserver.query_hash, sqlserver.query_plan_hash)
    WHERE ([sqlserver].[is_system] = (0))
),
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (sqlserver.database_id, sqlserver.query_hash, sqlserver.query_plan_hash)
    WHERE ([sqlserver].[is_system] = (0))
),
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.database_id, sqlserver.query_hash, sqlserver.query_plan_hash)
    WHERE ([sqlserver].[is_system] = (0))
)
ADD TARGET package0.event_counter,
ADD TARGET package0.event_file (
    SET filename = N'C:\temp\XE_SampleCapture',
        max_file_size = (512)   -- value garbled in the source; 512 MB shown as an example
),
ADD TARGET package0.histogram (
    SET filtering_event_name = N'sqlserver.sql_statement_completed',
        slots = (16),           -- value garbled in the source; 16 shown as an example
        source = N'sqlserver.database_id'
),
ADD TARGET package0.ring_buffer (
    SET max_events_limit = (10000), max_memory = (4096)
)
WITH (
    MAX_MEMORY = 16384 KB,
    EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
    MAX_DISPATCH_LATENCY = 5 SECONDS,  -- value garbled in the source; 5 shown as an example
    MAX_EVENT_SIZE = 0 KB,
    MEMORY_PARTITION_MODE = NONE,
    TRACK_CAUSALITY = ON,
    STARTUP_STATE = OFF
);
GO
To generate some sample data, I executed the random workload generation script that Jonathan Kehayias created for AdventureWorks (https://www.sqlskills.com/blogs/jonathan/the-adventureworks2008r2-books-online-random-workload-generator/). It opens multiple connections, creates a number of stored procedures, and executes them repeatedly; you can download it at the bottom of that article. Your output will not be exactly the same as mine, but the results should be similar.

Live Data Viewer
The Live Data Viewer is always available: it displays captured events regardless of what targets the session uses, or even if no target is defined. To open the viewer, right-click any running session and select Watch Live Data. You can use the XE_SampleCapture session created above, or the system_health session if you prefer. Figure 1 shows some sample data.
You may not see any data at first, especially if you open the viewer for the system_health session, because how quickly events appear depends primarily on the events and predicates being captured. If the session captures data quickly, events are dispatched to the viewer (and to any other targets defined for the session) as soon as the session's dispatch cycle elapses or its memory buffers fill.
Dispatch cycles and memory buffers

The Extended Events engine first writes the event data stream to the session's intermediate memory buffers. The dispatch latency is a session-level option that defines, in seconds, how often event data is flushed from memory to the targets.
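The dispatch latency can be tuned per session. As a minimal sketch (assuming the XE_SampleCapture session from Listing 1, and noting that session-level options can only be changed while the session is stopped):

```sql
-- Session-level options such as MAX_DISPATCH_LATENCY can only be
-- altered while the session is stopped.
ALTER EVENT SESSION [XE_SampleCapture] ON SERVER STATE = STOP;

-- Flush the session's memory buffers to the targets at least every 5 seconds.
ALTER EVENT SESSION [XE_SampleCapture] ON SERVER
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);

ALTER EVENT SESSION [XE_SampleCapture] ON SERVER STATE = START;
```

A shorter latency makes events appear in the viewer sooner at the cost of more frequent buffer flushes.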
Data in the live viewer is not persisted unless you export it. To export, first stop the viewer's data feed (Extended Events | Stop Data Feed), then, with the Live Data Viewer window selected, go to Extended Events | Export to and choose a destination.
You can stop the data feed or close the viewer at any time. As mentioned previously, the viewer stops automatically if it detects that it is having a negative impact on performance.

Target Data Viewer
Another way to view session data is to right-click a target and select the View Target Data... option. This menu displays a different view depending on the selected target.

View event_file target data
If you select an event_file target, the viewer displays the event data already written to the file on disk. This view is not updated; it shows only the data that existed at the moment the file was opened.
For an event_file target, the benefit of this option compared to the live data viewer is that it does not require SQL Server to stream data to both the target and Management Studio; it only reads data that the target has already consumed.
Data captured by an event_file target can also be viewed in SQL Server Management Studio after the event session is stopped, since it is saved on disk in .xel files. You can drag and drop a .xel file into SSMS, or use the File | Open | File... menu option. The default file path for the system_health session is the default error log folder.
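Besides opening .xel files in SSMS, they can be read in T-SQL with the built-in sys.fn_xe_file_target_read_file function. A sketch, assuming the file path from Listing 1:

```sql
-- Each row returned is one event; the payload is XML in the event_data column.
-- The wildcard picks up all rollover files for the session.
SELECT object_name                AS event_name,
       CAST(event_data AS XML)    AS event_data_xml,
       file_name,
       file_offset
FROM sys.fn_xe_file_target_read_file(
         N'C:\temp\XE_SampleCapture*.xel',
         NULL,   -- metadata file path (not needed in SQL Server 2012+)
         NULL,   -- initial file name
         NULL);  -- initial offset
```

This is useful when you want to process file data programmatically rather than interactively.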
This viewer looks very similar to the live viewer; the difference is that the window title shows the file name (XE_SampleCapture_0_131061823727010000.xel): the live viewer displays live data, while the target viewer displays the contents of the event_file.
Figure 3

View ring_buffer target data
When using the ring_buffer target, the View Target Data option is not very helpful, because the entire contents are displayed as a single link to the XML. After clicking the link, the data is displayed as raw XML, which is not very useful for analysis.
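The raw XML can instead be shredded in T-SQL. A minimal sketch, assuming the XE_SampleCapture session is running with its ring_buffer target:

```sql
-- Pull the ring_buffer target's XML from the DMVs and shred one row per event.
SELECT x.event_node.value('(@name)[1]', 'varchar(100)')   AS event_name,
       x.event_node.value('(@timestamp)[1]', 'datetime2') AS event_time,
       x.event_node.value('(action[@name="database_id"]/value)[1]', 'int')
                                                          AS database_id
FROM (
    SELECT CAST(t.target_data AS XML) AS target_xml
    FROM sys.dm_xe_sessions AS s
    JOIN sys.dm_xe_session_targets AS t
        ON s.address = t.event_session_address
    WHERE s.name = N'XE_SampleCapture'
      AND t.target_name = N'ring_buffer'
) AS rb
CROSS APPLY rb.target_xml.nodes('RingBufferTarget/event') AS x(event_node);
```

Each action or data field collected by the session can be extracted with an additional value() call following the same pattern.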
When using the ring_buffer target, we therefore need XQuery to analyze the event data.

View event_counter, histogram, and pair_matching target data
For the last three targets, event_counter, histogram, and pair_matching, viewing the target data or analyzing it with XQuery are the only two ways to see the aggregated data. Using our XE_SampleCapture session, or another session with a histogram target, right-click the histogram target and select View Target Data to open the output shown in Figure 6.
The histogram target data view may be empty at first, or it may already contain data. This viewer must be refreshed before the latest data is displayed. To refresh manually, right-click in the histogram target window and select Refresh, or choose a Refresh Interval to update the window periodically.
The histogram, event_counter, and pair_matching targets keep their data in memory; once you stop the event session, that memory is released and the data in the view disappears. If you need to keep the data, right-click and choose Copy, or right-click and choose Export to CSV to save it to a .csv file (see Figure 7).
You can still copy or save the data currently displayed in the target data viewer after you stop a session, but for these targets, once you refresh the viewer after stopping the session, the data is gone.
Figure 8

Display options: customizing columns
Regardless of the viewer, you can customize which columns are displayed. By default, only the event name and timestamp appear in the top panel, because these two columns are common to all events. When you select an event in the top panel, the details panel displays all of its captured fields. To show any field in the top panel, right-click the desired field in the details panel and select Show Column in Table. In Figure 9, we added the duration field to the top panel.
We can repeat these steps to add whatever fields we need to the top panel, and the columns can be reordered by dragging the column names left or right. Alternatively, use the Choose Columns... button on the Extended Events toolbar, or the Extended Events | Choose Columns... menu option, to select columns and configure their order. If you don't see the Extended Events toolbar, enable it via the View | Toolbars menu.
One trick you can use in the viewer is merging columns; in other words, you can create a new column from two or more existing columns. This is useful when captured events store the query text in different fields. For example, the sp_statement_completed event stores the query text in the statement field, while the sql_batch_completed event uses the batch_text field. If you add both fields to the top panel, you will find that the statement field is always NULL for batch events, and the batch_text field is always NULL for sp_statement_completed events.
Having the text in different fields hinders analysis, but merging them into one column alleviates the problem. In the lower-right corner of the Choose Columns... window you will find the Merged columns section. Select New, provide a name for the column, and select the columns you want to merge, as shown in Figure 12.
The new merged column appears in the viewer under its name, [QueryText]. Note that the merged columns option is designed for string concatenation, so if you choose non-string data types the data will not all display properly. For example, if you try to concatenate logical_reads and database_id, only logical_reads is displayed.
After you have customized the viewer with the fields you need, you can save the configuration via the Extended Events menu or the Display Settings button.

The configuration is saved as a .viewsetting file, which you can open from the same menu the next time you need to view or analyze an event_file target. The file can be shared, or stored on a network path accessible to multiple users. Finally, once all the columns you need are displayed in the top panel, you can remove the details panel from the viewer by toggling Extended Events | Show Details Pane.

Analyze data
The viewer provides a familiar view of events. But where the Profiler UI limited us to sorting columns and filtering data, the Extended Events viewer provides far better analysis options, so we no longer need to export data to a table or use third-party tools.

Sort
One of the simplest things you can do is sort events: right-click a column name and choose Sort Ascending or Sort Descending (you can also sort by clicking the column name, and reverse the order by clicking it again). Sorting in the UI is the easiest way to quickly find the events with the longest duration or the highest I/O.

Filter
Use the Filters... button on the toolbar (or the Extended Events | Filter... menu) to filter the data in the viewer. We can filter on a time range or on any other field, including the global fields collected for the events. Further, we can build complex conditions using And, Or, and the usual operators (=, <>, Like, Not Like, Null, etc.), just as we do in predicates.
After a filter is applied, the result set can be saved to a new .xel file. If the original .xel file is closed without clearing the filters, be aware that those filters are still in effect when you reopen the file. A good habit is to check the number of events currently displayed to determine whether a filter is affecting your analysis, as shown in Figure 15.
You can remove the filter criteria through the Filters window, or via the Extended Events | Clear All Filters menu.
While we often filter results to analyze data, in some cases we also need to pull data from multiple files for a full analysis. For example, when troubleshooting an availability group across multiple nodes, you might set up a session on each node and then merge the target files from all the nodes for viewing. To do this, in SSMS select File | Open | Merge Extended Event Files, locate the files, add them to the window, and select OK.

The merged files are presented as a single result set, which you can save as a separate .xel file.

Search
Search functionality, which is especially useful for finding characters within query text, is also available in the Profiler UI. The Extended Events viewer provides the same functionality via the Find button on the toolbar (or the Extended Events | Find menu).
In the Find window you can search any field that has been added to the top panel; if you have not added extra fields, only the name and timestamp columns can be searched. You can search only one column at a time, and you can use the standard search options, including match case and match whole word, as well as wildcards or regular expressions.

Bookmark rows
When we find rows of interest, we can mark them with bookmarks in the viewer (Profiler users who used to jot row numbers down on paper can leave that habit in the past). Select the row of interest and click the Toggle Bookmark button on the toolbar (or click Toggle Bookmark in the context menu). A bookmark icon appears at the far left of the row.
Once you have bookmarked all the rows of interest, you can use the forward and backward navigation buttons on the toolbar to jump between them quickly, with no more endless scrolling: these buttons let you move through the entire file, stopping only at the events that interest you. To remove a bookmark, right-click the row and select Toggle Bookmark again; to remove them all, use the Clear All Bookmarks button. Note that all bookmarks are lost when you close the file and open it again.

Group
Merging columns, filtering data, and finding specific values are all interesting and helpful for analyzing event data, but they do not help us find trends or anomalies in the data as a whole. Enter grouping, an analysis option that is one of the reasons the Extended Events viewer goes far beyond Profiler. For any field shown in the top panel of the viewer, you can right-click it and select Group by this Column.
You can also use the Grouping... button, or the Extended Events | Grouping... menu option; their advantage is that you can easily group on multiple columns. In the viewer, after grouping by one column, right-clicking another column to group by it removes the original grouping; when you need to group on multiple columns, the Grouping window lets you select several columns and set their order.
Good grouping fields identify where events came from rather than how expensive they were. Grouping on fields such as duration, logical_reads, or cpu_time does not make sense, because these fields are metrics for the events: you are not looking for events that completed in exactly 20ms or performed a specific number of I/Os. Instead, you are looking for patterns or anomalies on a server, in a particular application, or in a database.
Trying to group on statement (or batch_text) is a common instinct, because we want to find the queries that are executed most frequently. When we captured data with SQL Trace, third-party tools such as ClearTrace and ReadTrace provided this analysis: they normalize the query text and aggregate it so you can see how often a given query or stored procedure was executed.
Unfortunately, Extended Events has no such text-normalization functionality. If you try to group on the statement or batch_text field (or our merged QueryText column), you end up with a separate group for nearly every row.
Instead, group on the query_hash or query_plan_hash fields rather than the query text. These two fields were added to the sys.dm_exec_query_stats DMV in SQL Server 2008 and are often referred to as "query fingerprints".

Queries with the same logical text have the same query_hash even when their literal values differ, so you can group on it to see how often each query shape is executed.
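The same fingerprint idea works outside the viewer as well; for example, grouping cached query statistics by query_hash in T-SQL:

```sql
-- One row per query fingerprint: how many cached plans share the fingerprint
-- and how many times those plans have executed in total.
SELECT query_hash,
       COUNT(*)             AS plan_count,
       SUM(execution_count) AS total_executions
FROM sys.dm_exec_query_stats
GROUP BY query_hash
ORDER BY total_executions DESC;
```

Queries that differ only in their literal values collapse into a single row here, just as they collapse into a single group in the viewer.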
Once the same queries are grouped together, you can calculate the average, maximum, minimum, and so on for each group for further analysis.

Aggregation
The last analysis tool is the aggregation option in the UI. You must group on at least one field before applying an aggregation. Mathematical computations cannot be applied to non-numeric fields (e.g., batch_text, database_id, xml_report); for those, the only available aggregation is COUNT. For numeric fields, the additional options are SUM, MIN, MAX, and AVG. Right-click a field and select Calculate Aggregation... to apply an aggregation, or use the Aggregation... button on the toolbar or the Extended Events | Aggregation... menu.
Figure 24 shows the results after aggregation.
You can apply different aggregations to different fields, such as AVG on duration and MAX on logical_reads, but only one aggregation can be applied to a field at a time: for a given field you can view AVG or MAX, but not both. There is an interesting workaround for applying different aggregations to the same field: create a merged column that duplicates it. For example, to see both the average and the maximum of duration, first apply AVG to the duration column, then create a merged column (with a slightly different name) from duration plus a field that is always zero, such as connection_reset_option. Once the merged column exists, you can apply MAX to it.
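If the UI workaround feels clumsy, the same average-plus-maximum analysis can be done in T-SQL against the event_file target. A sketch, assuming the file path and actions from Listing 1 (duration here is the microsecond value the events report):

```sql
-- Group by query fingerprint and compute several aggregates at once,
-- something the viewer can only approximate via merged columns.
WITH events AS (
    SELECT CAST(event_data AS XML) AS x
    FROM sys.fn_xe_file_target_read_file(
             N'C:\temp\XE_SampleCapture*.xel', NULL, NULL, NULL)
),
parsed AS (
    SELECT x.value('(event/action[@name="query_hash"]/value)[1]', 'varchar(32)') AS query_hash,
           x.value('(event/data[@name="duration"]/value)[1]', 'bigint')          AS duration_us
    FROM events
)
SELECT query_hash,
       COUNT(*)         AS executions,
       AVG(duration_us) AS avg_duration_us,
       MAX(duration_us) AS max_duration_us
FROM parsed
WHERE query_hash IS NOT NULL
GROUP BY query_hash;
```

Unlike the viewer, nothing limits you to one aggregate per field here.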
Merged columns are persisted when you close and reopen a file; groupings and aggregations, however, are not, and they are not saved in the .viewsetting file either.

Summary
For users, the greatest win in SQL Server 2012 is the introduction of the target data viewer, which means we no longer need T-SQL and XQuery to parse event data. With the target data viewer we can see the data for every target except ring_buffer (and etw_classic_sync_target, which can only be viewed with ETW tools).
The Extended Events UI also provides far more data analysis options than the Profiler UI. Beyond searching and filtering, we can now perform more complex analysis such as sorting, grouping, and aggregating, without importing data into tables for SQL analysis or relying on third-party tools such as ReadTrace.
SQL Server Extended Events Advanced 3: Using the Extended events UI