Top (parameter)


Recently, while optimizing high-consumption statements on a database server, I found that a stored procedure that had already been optimized still appeared in the Profiler trace. I took the procedure's execution statement out of the Profiler trace file, opened a query window (spid=144), ran SET STATISTICS IO ON, and filtered the Profiler trace to spid=144 before executing the statement.
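The setup above can be sketched as follows (the procedure name and parameter values are hypothetical placeholders, not taken from the original trace):

```sql
-- In the query window (here spid = 144), enable I/O statistics first:
SET STATISTICS IO ON;
GO
-- Then run the statement captured from the trace file; the procedure
-- name and parameters below are placeholders:
EXEC dbo.usp_SomeProcedure @typeid = 2, @top = 20;
-- In Profiler, filter the trace on SPID = 144 so that only this
-- session's events are compared against the STATISTICS IO output.
```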
The following is the result in the query window, with only 17 logical reads:

Let's look at the results of the Profiler trace window again:

The CPU (1216) and Reads (160206) here are much higher than the logical reads in the query window. Looking at the corresponding section of the stored procedure, the statement is only a two-table join query:

What if the statements in the stored procedure are executed directly? First, declare the necessary parameters and execute the query by passing them in, just as the procedure does:
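A minimal sketch of the parameterized form, mimicking how the procedure receives its arguments (the table, column, and parameter names are assumptions for illustration):

```sql
-- Declare the parameters exactly as the procedure would receive them,
-- then run the query with TOP taking the parameter:
DECLARE @top int = 20, @typeid int = 2;

SELECT TOP (@top) t.*
FROM dbo.gamescore_log AS t            -- assumed table name
    INNER JOIN dbo.U AS u              -- "table U" from the article
        ON u.userid = t.userid
WHERE t.typeid = @typeid
ORDER BY t.logtime DESC;
```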

Next, replace the parameter with a specific value and execute it again:
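The same query with the parameter replaced by a literal, which lets the optimizer see the actual row goal at compile time (names as above are assumptions):

```sql
-- TOP with a constant: the optimizer knows only 20 rows are needed
-- and can choose a cheaper plan than for TOP (@top):
SELECT TOP (20) t.*
FROM dbo.gamescore_log AS t
    INNER JOIN dbo.U AS u
        ON u.userid = t.userid
WHERE t.typeid = 2
ORDER BY t.logtime DESC;
```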

There is no difference between the logical reads in the two runs above. But if we go back to the Profiler trace window and execute the query statements from the stored procedure separately, then when TOP N is passed as a parameter its CPU and Reads are similar to the consumption of executing the stored procedure, and much higher than the CPU and Reads of TOP with a specific value:

At this point we should suspect that their execution plans are different. From the Profiler results we can see that the consumption can be reduced either by using a specific value or by appending OPTION (RECOMPILE) to the end of the statement.
Below we put the @typeid=2 branch into a test stored procedure, then execute the procedure and view the execution plan, comparing the statement with and without OPTION (RECOMPILE) at the end. First, we make no changes to the @typeid=2 statement, to simulate the execution of the original procedure:

We then add OPTION (RECOMPILE) at the end of the query statement so that it is recompiled whenever execution reaches this statement:
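A sketch of the change, with OPTION (RECOMPILE) appended so the plan is rebuilt using the current parameter values on each execution (table and parameter names remain the illustrative assumptions used above):

```sql
SELECT TOP (@top) t.*
FROM dbo.gamescore_log AS t
    INNER JOIN dbo.U AS u
        ON u.userid = t.userid
WHERE t.typeid = @typeid
ORDER BY t.logtime DESC
OPTION (RECOMPILE);   -- plan is compiled with the sniffed @top value
```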

Note that the indexes used are exactly the same, yet their consumption in the Profiler trace differs greatly. The number of records returned by the sort operator in the recompiled version (228) is much smaller than in the original procedure (47331), which directly affects the number of executions of idx_userid on table U. The speculation is that this difference in the number of executions against U causes the consumption gap between the two statements in the Profiler trace.
In fact, there is no logical-read entry for table U in the Messages output of the query window, whether we execute the stored procedure or the statement alone:

According to the execution plan there must be read operations against table U. Why do they not appear under SET STATISTICS IO ON? Under what circumstances can this happen?
Optimization Experience
Using RML to analyze .trc files is really handy, and we can follow the steps below to analyze high-consumption statements on a database server:
1. Open a server-side trace and collect data for a period of time (e.g. 18:00 to 06:00 the next day, with CPU >= 33, adjusted to the situation).
2. Analyze the trace data (.trc) with the RML tool. Many stored procedure calls or dynamic statements are identical overall but differ in their parameters; the tool groups them accordingly.
3. Based on that grouping, prioritize optimizing the statements with the largest total consumption.
4. The RML analysis report can be displayed and exported to Excel, Word, or PDF; alternatively, you can use statements to pull the qualifying queries directly from the database.
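Step 1 can be sketched with the sp_trace_* system procedures; the file path and max file size below are placeholders, and only a minimal set of events and columns is shown:

```sql
-- Create a server-side trace that captures completed batches/RPCs
-- with CPU >= 33 ms (path and size are placeholders):
DECLARE @traceid int, @maxsize bigint = 512, @on bit = 1, @cpu int = 33;

EXEC sp_trace_create @traceid OUTPUT, 2,          -- 2 = file rollover
     N'F:\TroubleShooting\Trace\HighCpu33', @maxsize, NULL;

-- Columns: TextData (1), SPID (12), Duration (13), Reads (16), CPU (18)
-- Events: RPC:Completed (10) and SQL:BatchCompleted (12)
EXEC sp_trace_setevent @traceid, 12, 1,  @on;
EXEC sp_trace_setevent @traceid, 12, 12, @on;
EXEC sp_trace_setevent @traceid, 12, 13, @on;
EXEC sp_trace_setevent @traceid, 12, 16, @on;
EXEC sp_trace_setevent @traceid, 12, 18, @on;
EXEC sp_trace_setevent @traceid, 10, 1,  @on;
EXEC sp_trace_setevent @traceid, 10, 18, @on;

-- Filter: CPU (column 18), AND (0), ">=" (4), value 33:
EXEC sp_trace_setfilter @traceid, 18, 0, 4, @cpu;

EXEC sp_trace_setstatus @traceid, 1;              -- start the trace
```

Stop and close the trace later with sp_trace_setstatus (status 0, then 2) before feeding the .trc files to ReadTrace.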
The following code extracts the statistics of the most expensive statements, grouped by statement type:

-- Import the trace data using ReadTrace at the RML command prompt:
-- ReadTrace -I"F:\TroubleShooting\Trace\HOSTNAME_INSTANCENAME_HIGHCPU33_AFTER.TRC" -o"F:\TroubleShooting\Trace\output" -S"127.0.0.1,7777" -d"PerfAnalysis" -E

USE PerfAnalysis
GO

-- Query the tracking data
SELECT MIN(StartTime), MAX(EndTime), COUNT(*)
FROM fn_trace_gettable(N'F:\TroubleShooting\Trace\HOSTNAME_INSTANCENAME_HIGHCPU33_AFTER.TRC', DEFAULT)
WHERE Cpu IS NOT NULL

SELECT * FROM ReadTrace.tblTimeIntervals
SELECT TOP 10 * FROM ReadTrace.tblBatchPartialAggs
SELECT TOP 10 * FROM ReadTrace.tblUniqueBatches

-- The first query groups by HashID to get the top-N statements by total CPU, Duration, Reads, Writes
-- dbid and LoginNameID are added so averages can be calculated on demand later
SELECT *, ROW_NUMBER() OVER (ORDER BY Sum_cpu_ms DESC) AS QueryNumber
FROM (
    SELECT
        --a.HashID,
        SUM(CompletedEvents) AS Executes,
        SUM(TotalCPU) AS Sum_cpu_ms,
        SUM(TotalCPU)/SUM(CompletedEvents) AS Avg_cpu_ms,                    --add
        SUM(TotalDuration)/1000 AS Sum_duration_ms,
        SUM(TotalDuration)/(SUM(CompletedEvents)*1000) AS Avg_duration_ms,   --add
        SUM(TotalReads) AS Sum_reads,
        SUM(TotalReads)/SUM(CompletedEvents) AS Avg_reads,                   --add
        SUM(TotalWrites) AS Sum_writes,
        SUM(TotalWrites)/SUM(CompletedEvents) AS Avg_writes,                 --add
        --SUM(AttentionEvents) AS Sum_attentions,
        --(SELECT StartTime FROM ReadTrace.tblTimeIntervals i WHERE TimeInterval = @StartTimeInterval) AS [StartTime],
        --(SELECT EndTime FROM ReadTrace.tblTimeIntervals i WHERE TimeInterval = @EndTimeInterval) AS [EndTime],
        --add DatabaseName, LoginName
        (SELECT DISTINCT DatabaseName
         FROM fn_trace_gettable(N'F:\TroubleShooting\Trace\HOSTNAME_INSTANCENAME_HIGHCPU33_AFTER.TRC', DEFAULT)
         WHERE DatabaseID = a.dbid) AS DatabaseName,
        (SELECT LoginName FROM ReadTrace.tblUniqueLoginNames WHERE lnID = a.LoginNameID) AS LoginName,
        (SELECT CAST(NormText AS nvarchar(4000)) FROM ReadTrace.tblUniqueBatches b WHERE b.HashID = a.HashID) AS [NormText],
        ROW_NUMBER() OVER (ORDER BY SUM(TotalCPU) DESC) AS CpuDesc,
        ROW_NUMBER() OVER (ORDER BY SUM(TotalCPU) ASC) AS CpuAsc,
        ROW_NUMBER() OVER (ORDER BY SUM(TotalDuration) DESC) AS DurationDesc,
        ROW_NUMBER() OVER (ORDER BY SUM(TotalDuration) ASC) AS DurationAsc,
        ROW_NUMBER() OVER (ORDER BY SUM(TotalReads) DESC) AS ReadsDesc,
        ROW_NUMBER() OVER (ORDER BY SUM(TotalReads) ASC) AS ReadsAsc,
        ROW_NUMBER() OVER (ORDER BY SUM(TotalWrites) DESC) AS WritesDesc,
        ROW_NUMBER() OVER (ORDER BY SUM(TotalWrites) ASC) AS WritesAsc
    FROM ReadTrace.tblBatchPartialAggs a
    --WHERE TimeInterval BETWEEN @StartTimeInterval AND @EndTimeInterval
    --    AND a.AppNameID = ISNULL(@iAppNameID, a.AppNameID)
    --    AND a.LoginNameID = ISNULL(@iLoginNameID, a.LoginNameID)
    --    AND a.dbid = ISNULL(@iDBID, a.dbid)
    GROUP BY a.HashID, a.dbid, a.LoginNameID
) AS outcome
WHERE (CpuDesc <= 10
        --OR CpuAsc <= @TopN
        OR DurationDesc <= 10
        --OR DurationAsc <= @TopN
        OR ReadsDesc <= 10
        --OR ReadsAsc <= @TopN
        OR WritesDesc <= 10
        --OR WritesAsc <= @TopN
      )
ORDER BY Sum_cpu_ms DESC
OPTION (RECOMPILE)

-- Pull the raw events for one statement of interest from the trace file
SELECT TOP 10 EventClass, TextData, DatabaseName, DatabaseID,
       Duration/1000 AS Duration_ms, CPU AS Cpu_ms, Reads, Writes,
       StartTime, EndTime, HostName, LoginName, ApplicationName
FROM fn_trace_gettable(N'F:\TroubleShooting\Trace\HOSTNAME_INSTANCENAME_HIGHCPU33_AFTER.TRC', DEFAULT)
WHERE TextData LIKE '%gameweb_admin_gamescore_log%'

The code corresponds to the Top Unique Batches page of the RML report, while adding database and application grouping and average-consumption columns. When the components needed to export to Excel, Word, or PDF are unavailable, you can use these statements to extract the consumption list directly from the database.

