Reading SQL Server Query Plans




Contents


    • Introduction
    • How SQL Server finds records
    • SQL Server join methods
    • A closer look at the execution process
    • Index statistics: the basis for choosing a query plan
    • Optimizing view queries
    • Recommended reading: MSDN articles





Query tuning is a common task in SQL Server optimization. Database optimization is a broad topic, so this article only discusses how to read the SQL Server query plan when tuning queries. After all, my knowledge of SQL Server is limited, so if you find any errors, please point them out.



First, open SQL Server Management Studio and enter a query statement to see how SQL Server displays its query plan.
Note: the database used in this article is one I prepared for a demo program; it can be downloaded from this page.


select v.OrderID, v.CustomerID, v.CustomerName, v.OrderDate, v.SumMoney, v.Finished
from   OrdersView as v
where v.OrderDate >= '2010-12-1' and v.OrderDate < '2011-12-1';


Here OrdersView is a view, defined as follows:


SELECT     dbo.Orders.OrderID, dbo.Orders.CustomerID, dbo.Orders.OrderDate, 
            dbo.Orders.SumMoney, dbo.Orders.Finished, 
            ISNULL(dbo.Customers.CustomerName, N'') AS CustomerName
FROM         dbo.Orders LEFT OUTER JOIN
                dbo.Customers ON dbo.Orders.CustomerID = dbo.Customers.CustomerID


For the query above, SQL Server gives the following query plan (click the "Display Estimated Execution Plan" button on the toolbar):


[Figure: graphical estimated execution plan]

From this figure we can get at least three useful pieces of information:
1. Which execution steps cost more. Obviously, the two steps on the right are relatively expensive.
2. Which execution steps produce more data. In a SQL Server execution plan, the amount of data each step produces is represented by the thickness of the connecting line, so it is easy to tell at a glance.
3. What operation each step performs.



For a slow query, we usually need to know which steps are more expensive and then try some ways to improve them. In general, if you cannot solve the problem by improving hardware performance or adjusting the settings of the OS and SQL Server, the remaining options usually are:
1. Add indexes on the fields involved in operations such as "Scan".
2. Sometimes rebuilding the indexes may also help; see the later part of this article for details.
3. Adjust the structure of the statement to guide SQL Server toward another query plan.
4. Adjust the table structure (split the table, or partition it).



Below is some important background knowledge that helps in understanding execution plans.


How SQL Server finds records


Speaking of which, we have to talk about SQL Server indexes. SQL Server has two kinds of index: clustered and nonclustered. The difference is that a clustered index directly determines where a record is stored, so the record can be fetched from the clustered index itself. A nonclustered index holds two pieces of information: 1. the value of the indexed field, and 2. the position of the record in the clustered index (or a pointer to the record row, if the table has no clustered index). Therefore, finding a record through a clustered index is obviously the fastest.

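As a minimal sketch of the two kinds of index (the table and index names here are invented for illustration; they are not part of the demo database):

-- A clustered index determines the physical order of the rows,
-- so a table can have only one.
CREATE TABLE dbo.Demo
(
    DemoID int          NOT NULL,
    Name   nvarchar(50) NOT NULL,
    CONSTRAINT PK_Demo PRIMARY KEY CLUSTERED (DemoID)
);

-- A nonclustered index stores the indexed value plus a locator:
-- the clustered key, or a row pointer if the table is a heap.
CREATE NONCLUSTERED INDEX IX_Demo_Name ON dbo.Demo (Name);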


SQL Server has the following ways of finding the data records it needs (a short demo follows the list):
1. "Table Scan": traverses the entire table to find all matching rows. It checks one row at a time, so of course its efficiency is the worst.
2. "Index Scan": uses an index to filter out a portion of the records first, then finds all matching rows among them. The range to search is obviously smaller than in the first way, so it is faster than a "Table Scan".
3. "Index Seek": uses the index to locate the position of the records, then fetches them, so it is faster than the first two ways.
4. "Clustered Index Scan": the same as a "Table Scan". Note: do not assume it is different just because an index is involved. It means scanning the records row by row through the clustered index, because in a clustered index the records are stored in index order. A "Table Scan" simply means that the table being scanned has no clustered index; the two operations are essentially the same.
5. "Clustered Index Seek": fetches the records directly through the clustered index; this is the fastest!

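To see the difference yourself, compare the plans of two simple queries against the demo database (a sketch; it assumes Orders has a clustered primary key on OrderID and, as shown later in this article, no index on OrderDate):

-- Clustered Index Seek: the row is located directly through the clustered key.
SELECT * FROM dbo.Orders WHERE OrderID = 1;

-- Clustered Index Scan: no index covers OrderDate, so every row is examined.
SELECT * FROM dbo.Orders WHERE OrderDate >= '2010-12-01';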


So, when you find that a query is slow, you can first check which operations cost more, then see whether those operations are "Table Scan" or "Clustered Index Scan" when looking up records; if so, consider adding an index. However, adding an index also affects data modification, because the indexes on the corresponding fields must be updated whenever the table is modified; too many indexes therefore also hurt performance. There is another situation where adding an index does not help: a field whose state is only 0 or 1. For example, if most of the values are 1, an index on that column is practically meaningless. In that case, you can only consider storing the 0 rows and the 1 rows separately; splitting the table or partitioning it is a good choice.

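For example, here is a hypothetical sketch of separating such a status column by partitioning (all names are invented; note that SQL Server requires the partitioning column to be part of the clustered key):

-- Rows with Finished = 0 land in one partition, rows with Finished = 1 in the other.
CREATE PARTITION FUNCTION pf_Finished (int) AS RANGE LEFT FOR VALUES (0);
CREATE PARTITION SCHEME ps_Finished AS PARTITION pf_Finished ALL TO ([PRIMARY]);

CREATE TABLE dbo.Tasks
(
    TaskID   int NOT NULL,
    Finished int NOT NULL,
    CONSTRAINT PK_Tasks PRIMARY KEY CLUSTERED (TaskID, Finished)
) ON ps_Finished (Finished);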


If adding indexes and adjusting tables cannot solve the problem, try adjusting the statement structure to guide SQL Server toward another query plan. This requires that: 1. you clearly understand what the statement is supposed to do, 2. you clearly understand the structure of the tables being queried, 3. you clearly understand the relevant business background. If the problem can be solved this way, that is of course also a good solution. However, sometimes SQL Server is smart enough that even if you adjust the statement structure, its execution plan does not change.



For comparing the performance of two SQL statements with the same function, I suggest two ways:
1. Put the two query statements into SQL Server Management Studio together and look at their execution plans; SQL Server will show the query cost of each as a percentage. This method is simple and usually good enough as a reference; however, it is sometimes inaccurate (for the specific reasons, read on — the index statistics may be stale, for example).
2. Write test code that invokes the statements the way the real program does. This method is more trouble, but it better represents the real call pattern and the results have more reference value, so it is worthwhile.

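If you take the measurement route, SQL Server can also report per-statement I/O and timing on the Messages tab (a small sketch, reusing the query from the beginning of this article):

SET STATISTICS IO ON;    -- logical/physical reads per table
SET STATISTICS TIME ON;  -- parse/compile and execution CPU time

SELECT v.OrderID, v.CustomerName
FROM   OrdersView AS v
WHERE  v.OrderDate >= '2010-12-01' AND v.OrderDate < '2011-12-01';

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;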

SQL Server join methods


In SQL Server, every JOIN command is carried out internally in one of three more specific ways:



1. "Nested Loops join". If one join input is small and the other is large and indexed on its join columns, an indexed nested loops join is the fastest join operation, because it requires the least I/O and the fewest comparisons.



A nested loops join, also called nested iteration, uses one join input as the outer input table (shown as the top input in a graphical execution plan) and the other as the inner (bottom) input table. The outer loop consumes the outer input table row by row. The inner loop, executed once for each outer row, searches the inner input table for matching rows. The following pseudo-code may help:


foreach (row r1 in outer table)
    foreach (row r2 in inner table)
        if (r1 and r2 satisfy the join condition)
            output(r1, r2);


In the simplest case, the search scans an entire table or index; this is called a naive nested loops join. If the search exploits an index, it is called an index nested loops join. If the index is built as part of the query plan (and destroyed as soon as the query completes), it is called a temporary index nested loops join. The query optimizer considers all of these variants.



A nested loops join is particularly effective if the outer input is small and the inner input is large and indexed in advance. In many small transactions, such as those affecting only a small set of rows, an indexed nested loops join is superior to both merge joins and hash joins. In large queries, however, nested loops joins are often not the optimal choice.



2. "Merge Join". If the two join inputs are not small but are sorted on their join columns (for example, because they were obtained by scanning sorted indexes), a merge join is the fastest join operation. If both join inputs are large and roughly the same size, a pre-sorted merge join provides performance similar to a hash join. However, if the two inputs differ greatly in size, a hash join is usually much faster.



A merge join requires both inputs to be sorted on the merge columns, which are defined by the equality (ON) clauses of the join predicate. Typically the query optimizer scans an index, if one exists on the appropriate set of columns, or places a sort operator below the merge join. In rare cases there may be several equality clauses, but only some of them are used as the merge columns.



Because each input is sorted, the Merge Join operator fetches one row from each input and compares them. In an inner join, for example, the rows are returned if they are equal. If they are not equal, the row with the smaller value is discarded and the next row is fetched from that input. This process repeats until all rows have been processed.



A merge join can be a regular (one-to-many) operation or a many-to-many operation. A many-to-many merge join uses a temporary table to store rows (which can hurt efficiency): if there are duplicate values in each input, one of the inputs has to rewind to the start of the duplicates every time a duplicate in the other input is processed. Creating a unique index tells SQL Server there will be no duplicate values.



If a residual predicate is present, all rows that satisfy the merge predicate also evaluate the residual predicate, and only the rows that satisfy it are returned.



The merge join itself is very fast, but choosing it can be expensive if sort operators are required. However, if the data volume is large and pre-sorted data can be obtained from existing B-tree indexes, the merge join is often the fastest available join algorithm.

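A quick way to see the Merge Join operator is to force it with a hint (a sketch; it assumes Orders and [Order Details] both have clustered indexes leading on OrderID, so no extra sorts are needed; normally you would let the optimizer decide):

SELECT o.OrderID, d.ProductID
FROM dbo.Orders AS o
JOIN dbo.[Order Details] AS d ON o.OrderID = d.OrderID
OPTION (MERGE JOIN);  -- forces the merge strategy, for inspection only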


3. "Hash join". Hash joins can efficiently handle large, unsorted, non-indexed inputs. They are useful for the intermediate results of complex queries because: 1. intermediate results are not indexed (unless they have been explicitly saved to disk and then indexed) and are often not sorted suitably for the next operation in the query plan; 2. the query optimizer only estimates the sizes of intermediate results. Since the estimate can be off by a lot for complex queries, the algorithms that process intermediate results must not only be efficient but must also degrade gracefully when an intermediate result turns out to be much larger than expected.



Hash joins also reduce the need to denormalize. Denormalization is usually used to gain better performance by reducing join operations, despite the risk of redundancy (such as inconsistent updates). Because hash joins make joins cheaper, they reduce that need; they also make vertical partitioning (representing groups of columns of a single table in separate files or indexes) a viable option for physical database design.



A hash join has two inputs: the build input and the probe input. The query optimizer assigns these roles so that the smaller of the two inputs is the build input.



Hash joins are used for many kinds of set-matching operations: inner join; left, right, and full outer join; left and right semi-join; intersection; union; and difference. Moreover, a variant of the hash join can perform duplicate removal and grouping, such as SUM(salary) GROUP BY department. Those variants use a single input for both the build and the probe role.

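For example, a grouped aggregation like the following typically shows up in the plan as a Hash Match (Aggregate) operator when no index supplies the grouping order (a sketch against the demo Products table):

-- The single input serves both the build and the probe role.
SELECT CategoryID, SUM(UnitPrice * Quantity) AS Total
FROM dbo.Products
GROUP BY CategoryID;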


Hash joins come in three types: the in-memory hash join, the grace hash join, and the recursive hash join.



In-memory hash join: the hash join first scans or computes the entire build input and builds a hash table in memory. Each row is inserted into a hash bucket according to the hash value computed for its hash key. If the entire build input is smaller than the available memory, all rows fit into the hash table. The build phase is followed by the probe phase: the entire probe input is scanned or computed one row at a time; for each probe row, the hash key's value is computed, the corresponding hash bucket is scanned, and the matches are produced.



Grace hash join: if the build input does not fit in memory, the hash join proceeds in several steps; this is called a grace hash join. Each step has a build phase and a probe phase. First, the entire build and probe inputs are consumed and partitioned (using a hash function on the hash keys) into multiple files. Using the hash function on the hash keys guarantees that any two joining rows must be in the same pair of files. Therefore, joining two large inputs is reduced to multiple smaller instances of the same task. The hash join is then applied to each pair of partitioned files.



Recursive hash join: if the build input is so large that the inputs of a standard external merge would require multiple merge levels, multiple partitioning steps and multiple partitioning levels are required. If only some of the partitions are large, additional partitioning steps are used only for those partitions. To make all partitioning steps as fast as possible, large asynchronous I/O operations are used so that a single thread can keep multiple disk drives busy.



SQL Server cannot always determine during optimization which kind of hash join will be used. Therefore, it starts with an in-memory hash join and gradually changes to a grace hash join and then a recursive hash join, depending on the size of the build input.
If the optimizer wrongly anticipates which of the two inputs is smaller (and therefore which should have been the build input), the build and probe roles are swapped dynamically. A hash join makes sure that it uses the smaller overflow file as the build input. This technique is called role reversal. Role reversal occurs inside a hash join only after at least one file has spilled to disk.



Note: you can also specify the join method explicitly, and SQL Server will try to respect your choice. For example, you can write: INNER LOOP JOIN, LEFT OUTER MERGE JOIN, INNER HASH JOIN.
However, I do not recommend doing this, because the optimizer's choice is basically always correct; try it if you don't believe me.

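Here is what those hints look like in practice, on the demo tables (a sketch, for experimentation only):

-- The same logical query, forcing each physical join type in turn.
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
INNER LOOP JOIN dbo.Customers AS c ON o.CustomerID = c.CustomerID;

SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
LEFT OUTER MERGE JOIN dbo.Customers AS c ON o.CustomerID = c.CustomerID;

SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
INNER HASH JOIN dbo.Customers AS c ON o.CustomerID = c.CustomerID;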


Well, that was a lot of theory; let's move on to a practical example.


A closer look at the execution process


Earlier, I showed a figure that reflects the execution plan SQL Server uses for a query, but the information it shows may be too coarse. Of course, you can hover the mouse pointer over a node, and information like the following appears:


[Figure: tooltip showing the detailed properties of a plan node]

I happen to have the Chinese edition installed, so the tooltip above is in Chinese, and I won't go into it. What I want to show instead is another representation of the execution process, one that carries more information than the graphical plan and reflects what actually happened. (Of course, you can also keep using the graphical method: click the "Include Actual Execution Plan" button on the toolbar before running the query.)



Let's return to SQL Server Management Studio, enter the following statements, and execute them.


set statistics profile on 

select v.OrderID, v.CustomerID, v.CustomerName, v.OrderDate, v.SumMoney, v.Finished
from   OrdersView as v
where v.OrderDate >= '2010-12-1' and v.OrderDate < '2011-12-1';


Note: a new line, "set statistics profile on", has been added. It produces the following result:


[Figure: query results (top) and execution process table (bottom)]

As the picture shows, executing the query now returns two tables. The upper table contains the query results; the lower table shows the execution process of the query. Compared with the first picture in this article, this presentation may feel less intuitive, but it exposes more information, and for complex queries it can actually be easier to read: a complex plan has so many steps that the graphical view grows too large to observe comfortably. Moreover, this execution-process table provides two valuable pieces of real data (its first two columns).



Let's look at this execution-process table. Let me pick out the important columns.
"Rows": the number of rows actually produced by an execution step (real data, not an estimate).
"Executes": the number of times the execution step ran (real data, not an estimate).
"StmtText": the description of the step.
"EstimateRows": the estimated number of rows the step would return.



In this execution-process table, I consider the first three columns the most important for query optimization. The first two I explained above, and their meaning is clear; the numbers in them also roughly reflect the cost of each step, so watch them for slow queries. "StmtText" tells you what each step does. A table of this kind actually represents a tree (each row corresponds to one node in the graphical view), so I suggest reading it from the innermost rows outward. As an example, let me explain the execution process this table expresses.



Row 5: "Clustered Index Seek(OBJECT:([MyNorthwind].[dbo].[Customers].[PK_Customers]), SEEK:([MyNorthwind].[dbo].[Customers].[CustomerID]=[MyNorthwind].[dbo].[Orders].[CustomerID]) ORDERED FORWARD)". It means SQL Server performs a seek on the Customers table using the "Clustered Index Seek" method on the index "PK_Customers"; the seek value comes from [Orders].[CustomerID].



Row 4: "Clustered Index Scan(OBJECT:([MyNorthwind].[dbo].[Orders].[PK_Orders]), WHERE:([MyNorthwind].[dbo].[Orders].[OrderDate]>='2010-12-01 00:00:00.000' AND [MyNorthwind].[dbo].[Orders].[OrderDate]<'2011-12-01 00:00:00.000'))". It means SQL Server scans the Orders table, i.e. the worst way, a "table scan": there is no index on the OrderDate column, so this is the only option.



Row 3: "Nested Loops(Left Outer Join, OUTER REFERENCES:([MyNorthwind].[dbo].[Orders].[CustomerID]))". It means SQL Server joins the data produced by rows 4 and 5 with a "Nested Loops" join, where the outer table is Orders, and the matching operation of the join is the one described in row 5.



Row 2: "Compute Scalar(DEFINE:([Expr1006]=isnull([MyNorthwind].[dbo].[Customers].[CustomerName],N'')))". It means a call to the ISNULL() function is executed; for the reason, refer to the view definition given earlier in this article.



Row 1: "SELECT [v].[OrderID],[v].[CustomerID],[v].[CustomerName],[v].[OrderDate],[v].[SumMoney],[v].[Finished] FROM [OrdersView] [v] WHERE [v].[OrderDate]>=@1 AND [v].[OrderDate]<@2". Row 1 is usually the entire query, representing its return value.


Index statistics: the basis for choosing a query plan


The "execution plan" is exactly that: a plan. It indicates the operations that are decided before the query is executed. So how does SQL Server choose an execution plan? How does SQL Server know when to use an index, and which index to use? For each query, SQL Server first checks whether an execution plan for it already exists in the cache; if not, it generates one. When generating a plan, SQL Server does not just look at which indexes are available and pick one at random; instead, it consults data known as "index statistics". If you look closely at the earlier execution plan or the execution-process table, you will notice that SQL Server can estimate the amount of data produced by each step. It is precisely because SQL Server can estimate these data volumes that it can choose what it considers the most appropriate method to perform the query; the index statistics are what give SQL Server this information. By now you may be a little curious. To give you a feel for what "index statistics" look like, enter the following statement in SQL Server Management Studio and execute it.


DBCC SHOW_STATISTICS (Products, IX_CategoryID)


The results are as follows:


[Figure: DBCC SHOW_STATISTICS output, consisting of three result sets]

First, a word about the command: "DBCC SHOW_STATISTICS" displays the "index statistics" we want to examine. It takes two parameters: 1. the table name, 2. the index name.



Now look at the result of the command, which consists of three tables:
1. The first table lists the main attributes of this statistics object.


Column name: Description
Name: the name of the statistics object.
Updated: the date and time the statistics were last updated.
Rows: the number of rows in the table.
Rows Sampled: the number of rows sampled to build the statistics.
Steps: the number of groups the data is divided into; corresponds to the third table.
Density: the selectivity of the first index column prefix (excluding EQ_ROWS).
Average key length: the average length of all the index columns.
String Index: Yes means the statistics contain a string summary index, used to estimate result-set sizes for LIKE conditions. It applies only to leading columns of type char, varchar, nchar, nvarchar, varchar(max), nvarchar(max), text, and ntext.


2. The second table lists the selectivity of the various column-prefix combinations. The smaller the value, the less duplication, and therefore the higher the selectivity.


Column name: Description
All density: the selectivity of the index column prefix set (including EQ_ROWS). Note: the smaller the value, the higher the selectivity. If the value is below 0.1, the selectivity of the index is relatively high; otherwise, it is not.
Average length: the average length of the index column prefix set.
Columns: the names of the index column prefix for which All density and Average length are shown.


3. The third table is a histogram of the data distribution; SQL Server relies on it to estimate the data volume of each execution step.


Column name: Description
RANGE_HI_KEY: the upper-bound value of each group.
RANGE_ROWS: the estimated number of rows in each group, excluding the upper bound.
EQ_ROWS: the estimated number of rows in each group that are equal to the upper bound.
DISTINCT_RANGE_ROWS: the estimated number of distinct values in each group, excluding the upper bound.
AVG_RANGE_ROWS: the average number of duplicate values in each group, excluding the upper bound (formula: RANGE_ROWS / DISTINCT_RANGE_ROWS for DISTINCT_RANGE_ROWS > 0).


To help you understand the data better, especially the third table, see:


[Figure: the statistics histogram alongside the actual CategoryID distribution]

When I populated the test data, I deliberately distributed CategoryID from 1 to 8 (10 was added later), filling each group with 78 rows. So the data in the third table of the index statistics is correct, and it is on the basis of these statistics that SQL Server can estimate the data volume of each execution step, which in turn influences choices such as the join method. Of course, when choosing a join method, it also consults the column selectivity in the second table. When generating an execution plan for a query, the query optimizer uses these statistics, combined with the available indexes, to evaluate the cost of each alternative and pick the best plan.



One more example to illustrate how important the statistics are to the query plan. First, add some more data:


declare @newCategoryId int;
insert into dbo.Categories (CategoryName) values(N'Test statistics');
set @newCategoryId = scope_identity();

declare @count int;
set @count = 0;

-- Fill the new category with 100,000 product rows.
while( @count < 100000 )
begin
    insert into Products (ProductName, CategoryID, Unit, UnitPrice, Quantity, Remark) 
    values( cast(newid() as nvarchar(50)), @newCategoryId, N'个', 100, @count + 1, N'');

    set @count = @count + 1;
end
go

update statistics Products;
go


Now look at the index statistics again:


[Figure: the index statistics after adding the 100,000 rows]

Now run what is logically the same query twice; because the parameter values differ, SQL Server chooses different execution plans:


select p.ProductId, t.Quantity
from Products as p left outer join [Order Details] as t on p.ProductId = t.ProductId
where p.CategoryId = 26;   -- 26 is the newly generated CategoryId, so this query returns 100,000 records

select p.ProductId, t.Quantity
from Products as p left outer join [Order Details] as t on p.ProductId = t.ProductId
where p.CategoryId = 6;    -- this query returns 95 records


[Figure: the two different execution plans chosen for the two parameter values]

As you can see, because the CategoryID parameter values differ, SQL Server chooses completely different execution plans. The importance of the statistics shows very clearly here.



After statistics are created, the database engine sorts the values of the column on which they are built and creates a "histogram" from those values (up to 200 of them, separated by intervals). The histogram specifies how many rows exactly match each interval value, how many rows fall within each interval, and the density or rate of duplicate values in each interval.



SQL Server 2005 introduced additional information collected for statistics created on char, varchar, varchar(max), nchar, nvarchar, nvarchar(max), text, and ntext columns. This information, called a string summary, helps the query optimizer estimate the selectivity of query predicates over string patterns. When a query has a LIKE condition, the string summary makes the estimate of the result-set size more accurate and continually refines the query plan. Such conditions include ones like WHERE ProductName LIKE '%Bike' and WHERE Name LIKE '[CS]heryl'.



Since the index statistics are so important, when are they generated or updated? In fact, we do not maintain the "index statistics" by hand; SQL Server maintains them automatically. SQL Server also has database options that control this updating:



[Figure: the database options that control automatic statistics maintenance]

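You can inspect and set these options with T-SQL as well (a sketch; the database name MyNorthwind is taken from the plan text shown earlier):

-- Check the current settings.
SELECT is_auto_create_stats_on,
       is_auto_update_stats_on,
       is_auto_update_stats_async_on
FROM sys.databases
WHERE name = N'MyNorthwind';

-- Both of these are ON by default.
ALTER DATABASE MyNorthwind SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE MyNorthwind SET AUTO_UPDATE_STATISTICS ON;
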
How the statistics work automatically



When an index is created, the query optimizer automatically stores statistics about the indexed columns. In addition, when the AUTO_CREATE_STATISTICS database option is set to ON (the default), the database engine automatically creates statistics for non-indexed columns that are used in predicates.



As the data in a column changes, the statistics on indexes and columns can become stale, causing the query optimizer to pick a less-than-optimal way of processing the query. For example, if you create a table with an indexed column and 1,000 rows of data, each row with a unique value in that column, the query optimizer regards the indexed column as a good way to locate query data. If the data is later updated so that the column holds many duplicate values, the column is no longer an ideal candidate for the query. However, the query optimizer, still going by the index's stale distribution statistics (based on the data before the update), will continue to treat it as a good candidate.



When the AUTO_UPDATE_STATISTICS database option is set to ON (the default), the query optimizer automatically updates the statistics periodically as the data in the table changes. A statistics update is initiated whenever statistics used in a query's execution plan fail a test against the current data. Sampling is done at random across data pages and is taken from the table, or from the smallest nonclustered index on the columns the statistics need. After a data page is read from disk, all the rows on it are used to update the statistics. As a general rule, statistics are updated when approximately 20% of the data rows have changed, but the query optimizer always keeps the number of sampled rows as low as possible. For tables smaller than 8 MB, a full scan is always performed to collect the statistics.



Sampling the data, rather than analyzing all of it, minimizes the cost of automatic statistics updates. In some cases, however, sampling cannot capture the exact characteristics of the data in the table. You can use the SAMPLE and FULLSCAN clauses of the UPDATE STATISTICS statement to control how much data is sampled when updating statistics manually, table by table. The FULLSCAN clause scans all the data in the table to gather the statistics, while the SAMPLE clause specifies either a percentage of rows or a number of rows to sample.

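A sketch of the two clauses, using the Products table from the earlier examples:

-- Scan every row: most accurate, most expensive.
UPDATE STATISTICS dbo.Products WITH FULLSCAN;

-- Or sample a percentage (or a fixed number) of rows.
UPDATE STATISTICS dbo.Products WITH SAMPLE 50 PERCENT;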


In SQL Server 2005, the AUTO_UPDATE_STATISTICS_ASYNC database option provides asynchronous statistics updating. When this option is ON, queries do not wait for the statistics to be updated before compiling. The out-of-date statistics are placed on a queue and updated by a worker thread in a background process. The query and any other concurrent queries compile immediately using the existing, out-of-date statistics. Query response time is predictable because there is no wait for updated statistics, but the stale statistics may cause the query optimizer to choose an inefficient plan. Queries that start after the updated statistics are ready will use them; this may cause the cached plan to be recompiled (depending on the older version of the statistics). Asynchronous statistics cannot be updated if any data definition language (DDL) statements (for example, CREATE, ALTER, or DROP) occur in the same explicit user transaction.



The AUTO_UPDATE_STATISTICS_ASYNC option is set at the database level and determines the update method for all statistics in the database. It applies only to statistics updates; it cannot be used to create statistics asynchronously. Setting it to ON has an effect only if AUTO_UPDATE_STATISTICS is also ON. By default, AUTO_UPDATE_STATISTICS_ASYNC is OFF.

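Enabling it is a single database-level statement (a sketch, again assuming the MyNorthwind database name):

-- Only has an effect while AUTO_UPDATE_STATISTICS is also ON.
ALTER DATABASE MyNorthwind SET AUTO_UPDATE_STATISTICS_ASYNC ON;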


From the description above, we can see that for large tables the statistics may not be updated in time, and when that happens, the query optimizer's judgment may be affected.
Some people have this experience: for certain slow queries, they think of rebuilding the indexes to try to fix them. That actually makes sense, because a query suddenly becoming slow may be related to the statistics not being up to date, which affects the query optimizer's judgment. If you rebuild the indexes at that point, the query optimizer gets to know the latest data distribution, and the problem naturally goes away. Remember the execution-process table I showed earlier with "SET STATISTICS PROFILE ON"? It shows both the actual and the estimated number of rows for each step. So before rebuilding indexes, we can use "SET STATISTICS PROFILE ON" to check whether the actual and estimated data volumes differ greatly; if so, we can consider updating the statistics manually and then trying again.

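A sketch of both remedies, with the index name taken from the DBCC SHOW_STATISTICS example above:

-- Rebuilding an index also rebuilds its statistics with a full scan.
ALTER INDEX IX_CategoryID ON dbo.Products REBUILD;

-- Or refresh just the statistics, without rebuilding the index.
UPDATE STATISTICS dbo.Products IX_CategoryID WITH FULLSCAN;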

Optimizing view queries


Finally, let's talk about optimizing view queries. Although a view is also defined by a query statement and is essentially a query, it differs somewhat from an ordinary query statement when it comes to optimization. The main difference is this: a view is defined by one query statement, but analyzing that definition on its own may mean very little, because views are rarely used directly; before use, a WHERE clause is usually added, or the view is placed in the FROM clause of another statement. Here is an example. My demo database contains the view OrdersView, whose definition was given earlier. Let's look at the execution plan when we use the view directly:


[Figure: execution plan when querying the view directly]

As this plan shows, SQL Server performs a full table scan of the Orders table, which should be quite inefficient. Now look at the following query:


[Figure: execution plan when a WHERE condition is applied to the view]

As you can see, this execution plan differs from the previous one. The lookup of the Orders table changed from a "Clustered Index Scan" to a "Clustered Index Seek", and the cost percentages of the two rightmost steps changed as well. This is enough to show that when optimizing a view, it is best to apply the different filter conditions actually required, and only then decide how to optimize.



To finish, let's look at the execution plans of the following three queries against this view.


select * from dbo.OrdersView where OrderId = 1;
select * from dbo.OrdersView where CustomerId = 1;
select * from dbo.OrdersView where OrderDate >= '2010-12-1' and OrderDate < '2011-12-1';


[Figure: execution plans of the three queries]

Obviously, for the same view, the execution plans under different filter conditions differ markedly.


Recommended reading: MSDN articles


Index statistics
http://msdn.microsoft.com/zh-cn/library/ms190397(SQL.90).aspx



Query optimization recommendations
http://msdn.microsoft.com/zh-cn/library/ms188722(SQL.90).aspx



Checklist for analyzing slow-running queries
http://msdn.microsoft.com/zh-cn/library/ms177500(SQL.90).aspx



Logical and physical operators reference
http://msdn.microsoft.com/zh-cn/library/ms191158(SQL.90).aspx









Original article: http://www.cnblogs.com/fish-li/archive/2011/06/06/2073626.html


