Statistics for SQL Server: Reading Statement Runtime Statistics with SET STATISTICS TIME, IO, PROFILE


When a statement runs, there are factors to consider beyond the execution plan itself, such as the statement's compile time, its execution time, the number of disk reads, and so on.

If the DBA can run the problem statement separately, the following three switches can be turned on before running it to collect statistics about the execution.

This information is valuable for analyzing problems.

SET STATISTICS TIME ON
SET STATISTICS IO ON
SET STATISTICS PROFILE ON

SET STATISTICS TIME ON


Let's look at what information SET STATISTICS TIME ON returns. Run the following statements first:

DBCC DROPCLEANBUFFERS   -- clears all cached data pages in the buffer pool
GO
DBCC FREEPROCCACHE      -- clears all cached execution plans
GO
SET STATISTICS TIME ON
GO
USE [AdventureWorks]
GO
SELECT DISTINCT ([ProductID]), [UnitPrice] FROM [dbo].[SalesOrderDetail_test]
WHERE [ProductID] = 777
GO
SET STATISTICS TIME OFF
GO

In addition to the result set, SQL Server returns the following two pieces of information:

SQL Server parse and compile time:
   CPU time = 15 ms, elapsed time = 104 ms.
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.

(4 rows affected)

SQL Server Execution Times:
   CPU time = 171 ms, elapsed time = 1903 ms.
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.

As you know, SQL Server executes a statement in the following phases: parse -> compile -> execute.

It chooses an appropriate execution plan based on the table's statistics, then compiles the statement, and finally executes it.

Here is what the above output means:

1. CPU time: the amount of pure CPU time SQL Server spent on this step, that is, how much CPU resource the statement consumed.

2. Elapsed time: how much wall-clock time this step took, that is, how long the statement ran. Some operations incur I/O waits, or blocking waits; in those cases time passes but no CPU resource is consumed, so it is normal for elapsed time to be longer than CPU time. However, CPU time is the sum of the statement's time across all CPUs. If the statement uses more than one CPU and there is almost no waiting, CPU time exceeding elapsed time is also normal.
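To make the parallel case concrete, here is a small sketch with assumed numbers (4 worker threads, 50 ms of CPU each, over a 60 ms wall-clock window); none of these figures come from the output above.

```python
# Hypothetical parallel query: assumed numbers, for illustration only.
threads = 4                # worker threads the plan uses
cpu_per_thread_ms = 50     # CPU burned by each thread
elapsed_ms = 60            # wall-clock duration of the statement

# STATISTICS TIME sums CPU time across all threads/CPUs:
cpu_time_ms = threads * cpu_per_thread_ms   # 200 ms

# With several CPUs busy and almost no waiting,
# CPU time exceeding elapsed time is normal.
assert cpu_time_ms > elapsed_ms
```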

3. Parse and compile time: the time spent compiling the statement. Because all execution plans were cleared before the run, SQL Server had to compile the statement, so the compile time here is not 0. Compilation is mainly a CPU operation, so CPU time and elapsed time are usually similar; if they differ greatly, it is worth checking whether SQL Server has a bottleneck on some system resource. Here they are 15 milliseconds and 104 milliseconds.

4. SQL Server execution time: the time the statement actually ran. Because this was the statement's first run, SQL Server had to read the data from disk into memory, so the run incurred long I/O waits. That is why there is a big gap between CPU time and elapsed time: 171 milliseconds versus 1903 milliseconds.

In total, this statement took 104 + 1903 = 2007 milliseconds, of which CPU time was 15 + 171 = 186 milliseconds. Most of the statement's time must therefore have been spent on I/O waits.
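The same arithmetic, written out with the timings reported above:

```python
# Timings from the STATISTICS TIME output above, in milliseconds
compile_cpu, compile_elapsed = 15, 104
exec_cpu, exec_elapsed = 171, 1903

total_elapsed = compile_elapsed + exec_elapsed   # 2007 ms overall
total_cpu = compile_cpu + exec_cpu               # 186 ms of CPU work
wait_ms = total_elapsed - total_cpu              # 1821 ms spent waiting, mostly on I/O
```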

Now run the statement again, but without clearing any caches:

SET STATISTICS TIME ON
GO
SELECT DISTINCT ([ProductID]), [UnitPrice] FROM [dbo].[SalesOrderDetail_test]
WHERE [ProductID] = 777
GO
SET STATISTICS TIME OFF
GO

This run is much faster than the last one. The output time statistics are:

SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.

(4 rows affected)

SQL Server Execution Times:
   CPU time = 156 ms, elapsed time = 169 ms.
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.

Because the execution plan is reused, both the CPU time and the elapsed time of "parse and compile time" are 0.

Since the data is already cached in memory and does not need to be read from disk, the execution CPU time is 156 milliseconds, and the elapsed time, 169 milliseconds, is very close to it.

This saves 1903 - 169 = 1734 milliseconds of runtime. You can see again here that caching plays a critical role in statement execution performance.

To avoid affecting other tests, run the following statement to turn SET STATISTICS TIME off:

SET STATISTICS TIME OFF
GO

SET STATISTICS IO ON

This switch outputs the number of physical reads and logical reads the statement performed. It plays an important role in analyzing the cost of a statement.

Take the same query as an example:

DBCC DROPCLEANBUFFERS
GO
SET STATISTICS IO ON
GO
SELECT DISTINCT ([ProductID]), [UnitPrice] FROM [dbo].[SalesOrderDetail_test]
WHERE [ProductID] = 777
GO

It returns:

(4 rows affected)
Table 'SalesOrderDetail_test'. Scan count 5, logical reads 15064, physical reads 0, read-ahead reads 15064, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

The meanings of each output are:

Table: the name of the table. Here it is SalesOrderDetail_test.

Scan count: the number of scans performed, that is, how many times the table was scanned according to the execution plan. Generally speaking, the more times a large table is scanned, the worse. The one exception is when the execution plan chooses to run in parallel: a table is read by multiple threads at the same time, each thread reading a portion of it, but this field shows the count across all threads. As many threads as are working concurrently, that many scans will be counted, so the number is a bit larger.

Logical reads: the number of pages read from the data cache. The larger this number, the more data the query accesses, the more memory it consumes, and the more expensive the query is. You can check whether an index should be adjusted to reduce the number of scans and narrow the scan range.

A side note on how logical reads are counted, and why the value shown is a count rather than KB: when SQL Server performs a read or write, it runs through a specific piece of code, and each call to that code increments the reads/writes counter by 1. So a large value means the statement did a lot of I/O, but the value cannot be used to compute an absolute amount of I/O, and it reflects the logical, not the physical, read/write volume.

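Since every SQL Server data page is 8 KB, the logical-read count can still be turned into a rough data volume. A quick sketch for the 15,064 reads above (note this counts page accesses, so a page touched twice is counted twice):

```python
PAGE_KB = 8                # SQL Server data pages are always 8 KB
logical_reads = 15064      # from the STATISTICS IO output above

# Volume of page accesses served from cache (not distinct pages)
touched_mb = logical_reads * PAGE_KB / 1024   # about 117.7 MB
```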

Physical reads: the number of pages read from disk.

Read-ahead reads: the number of pages read into the cache ahead of time for the query.

Physical reads + read-ahead reads is the number of pages SQL Server had to read from disk to complete this query. If it is not 0, the data was not all cached in memory, and running speed is bound to suffer.

LOB logical reads: the number of text, ntext, image, or large value type (varchar(max), nvarchar(max), varbinary(max)) pages read from the data cache.

LOB physical reads: the number of text, ntext, image, or large value type pages read from disk.

LOB read-ahead reads: the number of text, ntext, image, or large value type pages read into the cache ahead of time for the query.

Now run it again, without emptying the cache:

SET STATISTICS IO ON
GO
SELECT DISTINCT ([ProductID]), [UnitPrice] FROM [dbo].[SalesOrderDetail_test]
WHERE [ProductID] = 777
GO

The statistics returned are:

Table 'SalesOrderDetail_test'. Scan count 5, logical reads 15064, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

This time the logical reads are unchanged, still 15064 pages, but both physical reads and read-ahead reads are 0, which shows the data was already cached in memory. The second run did not need to read from disk again, saving time.

To avoid affecting other tests, run the following statement to turn SET STATISTICS IO off:

SET STATISTICS IO OFF
GO

SET STATISTICS PROFILE ON

This is the most complex of the three settings. It returns the execution plan of the statement, along with the actual number of rows each step returned when the statement ran.

From this result you can not only get the execution plan, understand how the statement runs, and find tuning directions, but also determine whether SQL Server chose the correct execution plan.

SET STATISTICS PROFILE ON
GO
SELECT COUNT(B.[SalesOrderID])
FROM [dbo].[SalesOrderHeader_test] A
INNER JOIN [dbo].[SalesOrderDetail_test] B
ON A.[SalesOrderID] = B.[SalesOrderID]
WHERE A.[SalesOrderID] > 43659 AND A.[SalesOrderID] < 53660
GO

The result set returned is very long, so only the important fields are covered here.

Note: the plan should be read from the bottom up, step by step, until the final result set is produced. The Rows field of line 1 therefore shows the number of rows the query returns as its result set, and the number of lines shows how many steps SQL Server performed: there are 6 lines here, meaning SQL Server performed 6 steps.

Rows: the actual number of rows returned by each step of the execution plan.

Executes: how many times each step of the execution plan was run.

StmtText: the content of the execution plan, displayed as a tree. Each row is a step in the run; each returns a result set and has its own cost.

EstimateRows: the number of rows SQL Server estimates each step will return, based on the statistics on the table. When analyzing an execution plan, we often compare Rows and EstimateRows to see whether SQL Server estimated correctly, and thus whether the statistics need updating.

EstimateIO: the I/O cost SQL Server estimates for each step, based on EstimateRows and the field lengths recorded in the statistics.

EstimateCPU: the CPU cost SQL Server estimates for each step, based on EstimateRows, the field lengths recorded in the statistics, and the complexity of the work to be done.

TotalSubtreeCost: the cost SQL Server calculates for each execution plan subtree from a formula over EstimateIO and EstimateCPU, including the cost of this step and all the steps below it. This is the field referred to as "cost" below.

Warnings: warnings SQL Server encountered at each step, for example that no statistics were available to support a step's cost estimate.

Parallel: whether this step of the execution plan uses parallelism.

From the above results, we can see that the execution plan has 4 steps, the first of which has two sub-steps.

Step A1 (line 5): from the [SalesOrderHeader_test] table, find all values where A.[SalesOrderID] > 43659 and A.[SalesOrderID] < 53660. Because the table has a clustered index on this field, SQL Server can seek on that index directly. SQL Server predicted 10,000 records would be returned, and 10,000 were actually returned, so the prediction was accurate. The cost of this step is 0.202 (TotalSubtreeCost).

Step A2 (line 6): from the [SalesOrderDetail_test] table, find all values where [SalesOrderID] > 43659 and [SalesOrderID] < 53660. Because the table has a nonclustered index on this field, SQL Server can seek on that index directly.

Here you can see how smart SQL Server is. Although the query only defines the filter A.[SalesOrderID] > 43659 AND A.[SalesOrderID] < 53660 on the [SalesOrderHeader_test] table, semantic analysis tells SQL Server that the condition also holds on [SalesOrderDetail_test]. So it chooses to apply the filter first and then join, which can greatly reduce the cost of the join.

For this step SQL Server estimated 50,561 records and actually returned 50,577. The cost is 0.127, which is not high.

Step B (line 4): join the result sets produced by steps A1 and A2. Because SQL Server estimated that both result sets were fairly large, it chose a hash match join directly. SQL Server estimated the join would return 50,313 rows; it actually returned 50,577. Because SQL Server has statistics on [SalesOrderID] for both tables, the estimates here are very accurate. The cost of this step equals its TotalSubtreeCost minus that of its sub-steps: 0.715 - 0.202 - 0.127 = 0.386. Because the estimates are this accurate, you can trust these per-step costs.

Step C (line 3): compute COUNT(*) over the result set returned by the join. This is a simple step; COUNT(*) always returns a single row, so the predicted row count is trivially correct. The estimated cost of this step is based on the size of the result set returned by the join in step B; since step B's estimate was very accurate, this step's cost estimate is not a big problem either. Its subtree cost is 0.745; subtracting its child node's cost gives its own cost: 0.745 - 0.715 = 0.03. A very cheap step.

Step D (line 2): convert the value returned by step C to type int and return it as the result. This step is a continuation of the previous one and is even simpler: converting the data type of a single value costs almost nothing. So its subtree cost equals its child node's, both 0.745; in other words, its own cost is 0.
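The per-step cost arithmetic above can be checked in one place: each operator's own cost is its TotalSubtreeCost minus its children's subtree costs. The dictionary keys below are just shorthand labels for the plan lines discussed above.

```python
# TotalSubtreeCost per plan step, from the STATISTICS PROFILE walkthrough above
subtree = {
    "A1_header_seek": 0.202,   # clustered index seek (line 5)
    "A2_detail_seek": 0.127,   # nonclustered index seek (line 6)
    "B_hash_join": 0.715,      # hash match join (line 4)
    "C_count": 0.745,          # COUNT aggregate (line 3)
    "D_convert": 0.745,        # int conversion (line 2)
}

# Own cost = subtree cost minus the children's subtree costs
join_cost = subtree["B_hash_join"] - subtree["A1_header_seek"] - subtree["A2_detail_seek"]  # 0.386
count_cost = subtree["C_count"] - subtree["B_hash_join"]       # 0.03
convert_cost = subtree["D_convert"] - subtree["C_count"]       # 0.0
```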

In this way, users can understand a statement's execution plan, the accuracy of SQL Server's estimates, and how the cost is distributed among the steps.

One last note: cost values may vary across SQL Server versions (for example SQL Server 2005 versus SQL Server 2008) and across machines.

Transferred from: https://www.cnblogs.com/tohen/p/6183618.html

