Two methods for importing SQL Server trace (.trc) logs into a database

Reposted from a blog.
First: use the fn_trace_gettable function to import the data into a database.

A. Use fn_trace_gettable to import the rows from a trace file into a table.

The following example shows a SELECT...INTO statement that calls fn_trace_gettable in the FROM clause.

USE AdventureWorks;
GO
SELECT * INTO temp_trc
FROM fn_trace_gettable('C:\temp\my_trace.trc', default);
GO
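Once the rows are in temp_trc, you can analyze them with ordinary T-SQL. As a minimal sketch (assuming the trace captured the TextData and Duration columns), something like this lists the slowest statements:

SELECT TOP 10 TextData, Duration, CPU, Reads, Writes
FROM temp_trc
WHERE TextData IS NOT NULL   -- skip events that carry no statement text
ORDER BY Duration DESC;      -- in SQL Server 2005 traces, Duration is in microseconds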

B. Use fn_trace_gettable to return a table with an IDENTITY column that can be loaded into a SQL Server table.

The following example calls the function in a SELECT...INTO statement and returns the table temp_trc with an IDENTITY column named RowNumber.

USE AdventureWorks;
GO
SELECT IDENTITY(int, 1, 1) AS RowNumber, *
INTO temp_trc
FROM fn_trace_gettable('c:\temp\my_trace.trc', default);
GO
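The RowNumber column makes each event addressable by its position in the trace, which is handy when you want to refer back to specific rows. A small sketch (the column names assume the default trace columns were captured):

SELECT RowNumber, StartTime, TextData
FROM temp_trc
WHERE RowNumber BETWEEN 100 AND 120;  -- inspect a specific slice of the trace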

Second: use the read80trace tool (download: http://www.microsoft.com/downloads/details.aspx?FamilyId=1271ab53-893a-4aaf-b4a6-9a8bb9669a8b&displaylang=EN).

When I tried this method, read80trace rejected my file as "not a valid 8.0 trace file format". I think this is because the trace was captured with SQL Server 2005, while read80trace only understands the SQL Server 2000 (8.0) trace format.

For a trace captured on SQL Server 2000, the following command works:

read80trace -f -Slocalhost -Usa -Ppassword -dtracedb -Ie:\profiler\mytrace.trc

-f suppresses the RML files that read80trace would otherwise generate. These files are also very large, and you don't need them here, so I turned them off. -I specifies the location of the .trc file; if the directory also contains mytrace_1.trc ... mytrace_n.trc, read80trace reads them automatically. -d names the database that the trace log data is loaded into, so the login given with -U must have permission to create that database. -U and -P supply the login name and password. Note the command format: there is no space between a switch and its value (-Slocalhost, not -S localhost).

If the command succeeds, a set of files is generated under the output directory of the installation directory. Install MSXML 4.0 so the report displays properly (I had 6.0 installed and could not open it). The report is the execution result of the following stored procedure:

SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
GO
ALTER PROC [dbo].[usp_GetAccessPattern]
@duration_filter INT = -1 -- input parameter: statistics can be filtered by statement execution time
AS BEGIN
/* First obtain the overall performance totals for all statements */
DECLARE @sum_total FLOAT, @sum_cpu FLOAT, @sum_reads FLOAT, @sum_duration FLOAT, @sum_writes FLOAT
SELECT @sum_total = COUNT(*) * 0.01,       -- total number of statements
@sum_cpu = SUM(cpu) * 0.01,                -- total CPU time consumed by all statements
@sum_reads = SUM(reads) * 0.01,            -- total reads consumed by all statements, in 8 KB units
@sum_writes = SUM(writes) * 0.01,          -- total writes consumed by all statements, in 8 KB units
@sum_duration = SUM(duration) * 0.01       -- total execution time of all statements
FROM tblBatches                            -- table generated by read80trace; holds every statement in the trace file
WHERE duration >= @duration_filter         -- optional filter by execution time

/* Then GROUP BY to obtain the share each normalized statement accounts for */
SELECT LTRIM(STR(COUNT(*))) exec_stats, '' + STR(COUNT(*) / @sum_total, 4, 1) + '%' execratio,
LTRIM(STR(SUM(cpu))) + ':' + LTRIM(STR(AVG(cpu))) cpu_stats, '' + STR(SUM(cpu) / @sum_cpu, 4, 1) + '%' cpuratio,
LTRIM(STR(SUM(reads))) + ':' + LTRIM(STR(AVG(reads))) reads_stats, '' + STR(SUM(reads) / @sum_reads, 4, 1) + '%' readsratio,
-- LTRIM(STR(SUM(writes))) + ':' + LTRIM(STR(AVG(writes))) writes_stats, '' + STR(SUM(writes) / @sum_writes, 4, 1) + '%',
LTRIM(STR(SUM(duration))) + ':' + LTRIM(STR(AVG(duration))) duration_stats, '' + STR(SUM(duration) / @sum_duration, 4, 1) + '%' durratio,
textdata, COUNT(*) / @sum_total tp, SUM(cpu) / @sum_cpu cp, SUM(reads) / @sum_reads rp, SUM(duration) / @sum_duration dp
INTO #queries_staticstics FROM
/* All normalized statements are stored in the tblUniqueBatches table */
(SELECT reads, cpu, duration, writes, CONVERT(varchar(2000), normtext) textdata FROM tblBatches
INNER JOIN tblUniqueBatches ON tblBatches.hashid = tblUniqueBatches.hashid WHERE duration > @duration_filter
) b GROUP BY textdata -- this GROUP BY is the key step: it groups the statements and aggregates their statistics
PRINT 'Top 10 order by cpu+reads+duration'
SELECT TOP 10 * FROM #queries_staticstics ORDER BY cp + rp + dp DESC
PRINT 'Top 10 order by cpu'
SELECT TOP 10 * FROM #queries_staticstics ORDER BY cp DESC
PRINT 'Top 10 order by reads'
SELECT TOP 10 * FROM #queries_staticstics ORDER BY rp DESC
PRINT 'Top 10 order by duration'
SELECT TOP 10 * FROM #queries_staticstics ORDER BY dp DESC
PRINT 'Top 10 order by batches'
SELECT TOP 10 * FROM #queries_staticstics ORDER BY tp DESC
END
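With the trace loaded into tracedb by read80trace, you can also run the report by hand. A usage sketch (the threshold value is only an example; it is compared against the Duration column, which for SQL Server 2000 traces is in milliseconds):

USE tracedb;  -- the database named with -d when running read80trace
GO
EXEC dbo.usp_GetAccessPattern;                          -- report on all statements
EXEC dbo.usp_GetAccessPattern @duration_filter = 1000;  -- only statements whose duration exceeds the filter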
/*************************************************************/
If the import fails, it is usually because the trace is missing required columns. The errors look like this:

Warning: The following trace events were not captured: SQL:BatchStarting, RPC:Starting, SP:StmtStarting, SP:StmtCompleted, Showplan Statistics. Review the Help file to ensure that you have collected the appropriate set of events and columns for your intended analysis.
Error: Event 10 does not contain the following required column(s): 15
Warning: Event 10 does not contain the following recommended column(s): 13, 16, 17, 18
Error: Event 12 does not contain the following required column(s): 15
Warning: Event 12 does not contain the following recommended column(s): 13, 16, 17, 18
Error: Event 14 does not contain the following required column(s): 15
Error: Event 15 does not contain the following required column(s): 15
Warning: Event 15 does not contain the following recommended column(s): 1, 13, 16, 17, 18
Error: Performance analysis has been disabled because one or more events is missing required columns. Consult the Help file for the recommended set of trace events and necessary columns.
Reads completed - global error status 120
Reader: Total read count: 3 Read stall count: 0 accounting for 0 ms in reader thread
Signaling all workers to complete
Handling final IOs and closing open files
Flushing and compressing SPID files
Total events processed: 1

In this case, the trace must monitor the following events and data columns.

Events:
TSQL: Exec Prepared SQL, SQL:BatchCompleted, SQL:BatchStarting
Security Audit: Audit Login, Audit Logout
Stored Procedures: RPC:Completed, RPC:Starting, SP:Completed, SP:StmtCompleted, SP:StmtStarting
Sessions: ExistingConnection
Performance: Show Plan Statistics

Data columns:
DatabaseID, EndTime, ObjectID, NestLevel, IntegerData, EventClass, TextData, ApplicationName, NTUserName, LoginName, CPU, Reads, Writes, Duration, ClientProcessID, SPID, StartTime
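If you define the trace server-side rather than in Profiler, a minimal sketch along the following lines captures one of the events read80trace complains about, together with its required columns. The file path, rollover option, and file size here are assumptions, and the other events from the list above would need the same sp_trace_setevent treatment:

DECLARE @traceid INT, @on BIT, @maxfilesize BIGINT;
SET @on = 1;
SET @maxfilesize = 50;  -- MB per rollover file (assumed value)

-- Option 2 = TRACE_FILE_ROLLOVER; the .trc extension is appended automatically.
EXEC sp_trace_create @traceid OUTPUT, 2, N'e:\profiler\mytrace', @maxfilesize, NULL;

-- Event 12 = SQL:BatchCompleted; one sp_trace_setevent call per (event, column) pair.
EXEC sp_trace_setevent @traceid, 12, 1, @on;   -- TextData
EXEC sp_trace_setevent @traceid, 12, 12, @on;  -- SPID
EXEC sp_trace_setevent @traceid, 12, 13, @on;  -- Duration
EXEC sp_trace_setevent @traceid, 12, 14, @on;  -- StartTime
EXEC sp_trace_setevent @traceid, 12, 15, @on;  -- EndTime (the "required column 15" above)
EXEC sp_trace_setevent @traceid, 12, 16, @on;  -- Reads
EXEC sp_trace_setevent @traceid, 12, 17, @on;  -- Writes
EXEC sp_trace_setevent @traceid, 12, 18, @on;  -- CPU

EXEC sp_trace_setstatus @traceid, 1;  -- start the trace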
