SQL Server Performance Optimization (3): Index Maintenance

Objective

An earlier article introduced how indexes can improve query performance in a database, but that is only the beginning. Without proper maintenance, the indexes you built earlier can degrade and even become an accomplice in dragging down database performance.

Finding fragmentation

Eliminating fragmentation is probably the most common index maintenance task. The general guideline is to use REORGANIZE to reorganize an index when its fragmentation level is between 5% and 30%, and REBUILD to rebuild it when fragmentation exceeds 30%. A number of factors influence which method to use and when to run it; the following four are the main ones to consider:

    • Backup plan
    • Server load
    • Remaining disk space
    • Recovery model

PS: Although fragmentation is closely related to performance, it can be ignored in some specific cases. For example, if a table has a clustered index and almost all access to the table fetches single rows by primary key, the impact of fragmentation is negligible.

So how do you determine the fragmentation level of an index? Use the system function sys.dm_db_index_physical_stats together with the system view sys.indexes. Sample scripts follow:

-- Get fragmentation information for all indexes on a specified table (example: OrdDemo)
SELECT
    sysin.name AS IndexName,
    sysin.index_id,
    func.avg_fragmentation_in_percent,
    func.index_type_desc AS IndexType,
    func.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'OrdDemo'), NULL, NULL, NULL) AS func
JOIN sys.indexes AS sysin
    ON func.object_id = sysin.object_id
    AND func.index_id = sysin.index_id

-- The index_id of a clustered index is 1; nonclustered indexes have index_id > 1.
-- The following script uses a WHERE clause to filter out heaps (tables without indexes).
-- It returns every index in the database, so it may take a long time!

SELECT
    sysin.name AS IndexName,
    sysin.index_id,
    func.avg_fragmentation_in_percent,
    func.index_type_desc AS IndexType,
    func.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) AS func
JOIN sys.indexes AS sysin
    ON func.object_id = sysin.object_id
    AND func.index_id = sysin.index_id
WHERE sysin.index_id > 0;

The output looks like the following:

[Screenshot: query output showing avg_fragmentation_in_percent of 0 for every index in the sample database]

The fragmentation in the sample database is 0. Fragmentation is produced by inserts, updates, and deletes, and our demo database has not yet run any such operations.
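
As a rough sketch (not from the original article), the 5%/30% guideline above can be combined with the same DMVs to generate the appropriate maintenance command for each fragmented index:

-- Generate REORGANIZE/REBUILD statements from the fragmentation thresholds
SELECT
    'ALTER INDEX [' + sysin.name + '] ON [' + OBJECT_NAME(func.object_id) + '] '
        + CASE WHEN func.avg_fragmentation_in_percent > 30
               THEN 'REBUILD'        -- more than 30%: rebuild
               ELSE 'REORGANIZE'     -- 5% to 30%: reorganize
          END AS MaintenanceCommand,
    func.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) AS func
JOIN sys.indexes AS sysin
    ON func.object_id = sysin.object_id
    AND func.index_id = sysin.index_id
WHERE sysin.index_id > 0
    AND func.avg_fragmentation_in_percent >= 5;   -- below 5%, leave the index alone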


Fill factor

As mentioned earlier, data is stored in the database in 8KB data pages. If a table has a clustered index, then whenever data is inserted the database locates the insert position (the data page) according to the primary key and writes the row there.

If that data page is full or does not have enough space to hold the new data, the database allocates a new 8KB data page (a page split), and this process causes extra I/O.

The fill factor is used to reduce how often this happens. If you set a fill factor of 10, the data initially uses only 10% of each 8KB page, so when new records are inserted you rarely have to worry about this extra I/O, because 90% of each page is reserved as free space.


The fill factor is also a double-edged sword: it improves write performance but reduces read performance, because the same amount of data is spread over more pages.

The fill factor takes effect only when an index is created or rebuilt (REBUILD). It has no effect on ordinary DML operations (data pages are still filled up to 100%).

The following script helps you check the fill factor of your indexes:

SELECT
    OBJECT_NAME(object_id) AS TableName,
    name AS IndexName,
    type_desc,
    fill_factor
FROM sys.indexes
WHERE type_desc <> 'HEAP'   -- filter to clustered and nonclustered indexes only
You can also view the server-wide default fill factor:

SELECT
    description,
    value_in_use
FROM sys.configurations
WHERE name = 'fill factor (%)'
PS: A value of 0 means no space is reserved at all (equivalent to a fill factor of 100).

Set the fill factor value by using the following script:

-- Set the fill factor when rebuilding an index
ALTER INDEX [Idx_refno] ON [OrdDemo] REBUILD WITH (FILLFACTOR = 90)   -- substitute the fill factor you want
GO
-- If you want to change the server-wide default, use the following script
sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
sp_configure 'fill factor', 90
GO
RECONFIGURE
GO
A higher fill factor (90% or more) is recommended for mostly static tables (only occasional updates). A lower fill factor (70%-80%) is recommended for tables with frequent reads and writes.

In particular, when your clustered index is built on an auto-incrementing (identity) column, it is fine to set the fill factor to 100%. Newly inserted rows always go at the end of the existing data, so rows are never inserted between existing rows.
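
As a minimal illustration (the OrderLog table is hypothetical, not from the article), a clustered index on an identity column can safely be created with a 100% fill factor:

-- New rows always land on the last page, so no free space needs to be reserved
CREATE TABLE dbo.OrderLog
(
    LogID   int IDENTITY(1,1) NOT NULL,      -- ever-increasing clustering key
    LogText nvarchar(200)     NOT NULL,
    CONSTRAINT PK_OrderLog PRIMARY KEY CLUSTERED (LogID) WITH (FILLFACTOR = 100)
);
GO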

Rebuilding (REBUILD) indexes to improve index efficiency

Rebuilding an index does what its name implies. The benefits include eliminating fragmentation, updating statistics, and restoring the physical sort order of the data pages. It also compacts the data pages according to the fill factor, allocating new pages where necessary. The benefits are many, but the operation is very resource-intensive and can take a long time. If you decide to rebuild an index, you should also know that it has two modes of operation:

Offline mode: This is the default rebuild mode. It locks the table until the rebuild completes; if the table is very large, users may be unable to use it for hours. Offline mode runs faster than online mode and uses less space in tempdb.

Online mode: If circumstances do not allow you to lock the table, your only choice is online mode, which costs more time and server resources. Note that if your table contains varchar(max), nvarchar(max), or text columns, you cannot rebuild its indexes in this mode.

"Tip: This mode selection is supported only in the dev/Enterprise Edition. Other version numbers use offline mode by default. 】

Here is a sample script for rebuilding indexes:

-- Rebuild the index IDX_REFNO in online mode
ALTER INDEX [IDX_REFNO] ON [OrdDemo] REBUILD WITH (FILLFACTOR = 80, ONLINE = ON)
GO
-- Rebuild the index IDX_REFNO in offline mode
ALTER INDEX [IDX_REFNO] ON [OrdDemo] REBUILD WITH (FILLFACTOR = 80, ONLINE = OFF)
GO
-- Rebuild all indexes on the OrdDemo table
ALTER INDEX ALL ON [OrdDemo] REBUILD WITH (FILLFACTOR = 80, ONLINE = OFF)
GO
-- Rebuild the index IDX_REFNO by re-creating it with DROP_EXISTING = ON
CREATE CLUSTERED INDEX [IDX_REFNO] ON [OrdDemo] (REFNO)
WITH (DROP_EXISTING = ON, FILLFACTOR = 70, ONLINE = ON)
GO
-- Use DBCC DBREINDEX to rebuild all indexes on the OrdDemo table
DBCC DBREINDEX ('OrdDemo')
GO
-- Use DBCC DBREINDEX to rebuild a single index on the OrdDemo table
DBCC DBREINDEX ('OrdDemo', 'IDX_REFNO')
GO

"DBCC Dbreindex will be discarded on version number"

In the author's experience, it is better to switch to the bulk-logged or simple recovery model before rebuilding indexes on a table with a large volume of data; this keeps the log file from growing too large. Just remember that switching the recovery model interrupts the database's backup chain, so if you were using the full recovery model, remember to switch back after the rebuild.
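
A minimal sketch of that workflow (the database name, backup path, and fill factor are placeholders, not values from the article):

-- Switch to bulk-logged recovery so the rebuild generates minimal log
ALTER DATABASE [MyDatabase] SET RECOVERY BULK_LOGGED;
GO
ALTER INDEX ALL ON [dbo].[OrdDemo] REBUILD WITH (FILLFACTOR = 80);
GO
-- Switch back to full recovery
ALTER DATABASE [MyDatabase] SET RECOVERY FULL;
GO
-- Take a backup afterwards so the backup chain is re-established
BACKUP DATABASE [MyDatabase] TO DISK = N'D:\Backup\MyDatabase.bak';
GO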

You must be patient when rebuilding; a long rebuild may take a day, and rashly interrupting it is risky (the database may end up in recovery mode).

The user who runs the operation must be the owner of the table, a member of the server's sysadmin role, or a member of the database's db_owner or db_ddladmin role.

Reorganizing (REORGANIZE) indexes to improve index efficiency

Reorganizing does not lock any objects. It optimizes the existing B-tree, reorders the data within the pages, and defragments the index.

A sample script for reorganizing indexes follows:

--Restructure "IDX_REFNO" index on "orddemo" table alter INDEX [IDX_REFNO] on [orddemo]reorganizego--Reorganization Orddemo table All indexes alter index all on [ orddemo]reorganizego--re-AdventureWorks2012 all indexes on Orddemo tables in the database DBCC INDEXDEFRAG (' AdventureWorks2012 ', ' Orddemo ') go-- Re-index IDX_REFNODBCC indexdefrag (' AdventureWorks2012 ', ' Orddemo ', ' idx_refno ') on Orddemo table in AdventureWorks2012 database GO

Note: The user who runs the operation must be the owner of the table, a member of the server's sysadmin role, or a member of the database's db_owner or db_ddladmin role.

Finding missing indexes

Now that you've seen the performance gains indexes can provide, keep in mind that it is very difficult to build exactly the right set of indexes from the outset. So how can we tell which tables need indexes and which indexes were built incorrectly?

Typically, SQL Server executes a query using the existing indexes; if it cannot find a suitable one, it records the index it would have wanted in the missing-index DMVs (dynamic management views).

This information is purged whenever the SQL Server service restarts, so it is best to keep the service running while you collect missing-index data, until all of your business logic has run at least once.
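
As a quick sanity check (not in the original article), you can look at the instance start time to see how much activity the DMV contents actually cover:

-- The missing-index DMVs only reflect workload since this point in time
SELECT sqlserver_start_time
FROM sys.dm_os_sys_info;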

You can consult the following DMVs for more information (a small usage sketch follows the list):

    • sys.dm_db_missing_index_details
    • sys.dm_db_missing_index_group_stats
    • sys.dm_db_missing_index_groups
    • sys.dm_db_missing_index_columns(index_handle)
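
For example, a small sketch (using only documented columns of these DMVs) that lists the individual columns suggested for each missing-index entry:

SELECT
    d.[statement] AS TableName,
    c.column_name,
    c.column_usage      -- EQUALITY, INEQUALITY, or INCLUDE
FROM sys.dm_db_missing_index_details AS d
CROSS APPLY sys.dm_db_missing_index_columns(d.index_handle) AS c
ORDER BY d.[statement], c.column_usage;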

Here is a ready-made script that turns these DMVs into CREATE INDEX suggestions:

SELECT
    avg_total_user_cost * avg_user_impact * (user_seeks + user_scans) AS PossibleImprovement,
    last_user_seek,
    last_user_scan,
    [statement] AS [Object],
    'CREATE INDEX [idx_' + CONVERT(varchar, gs.group_handle) + '_' + CONVERT(varchar, d.index_handle) + '_'
        + REPLACE(REPLACE(REPLACE([statement], ']', ''), '[', ''), '.', '_') + ']'
        + ' ON ' + [statement]
        + ' (' + ISNULL(equality_columns, '')
        + CASE WHEN equality_columns IS NOT NULL AND inequality_columns IS NOT NULL THEN ',' ELSE '' END
        + ISNULL(inequality_columns, '') + ')'
        + ISNULL(' INCLUDE (' + included_columns + ')', '') AS Create_Index_Syntax
FROM sys.dm_db_missing_index_groups AS g
INNER JOIN sys.dm_db_missing_index_group_stats AS gs
    ON gs.group_handle = g.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS d
    ON g.index_handle = d.index_handle
ORDER BY PossibleImprovement DESC

PS: What you get back is a list of suggestions; the final decision is yours. Also note that the DMVs hold at most 500 missing-index entries.

Finding unused indexes

We build indexes to improve performance, but if an index is never actually used it just becomes a burden.

For the same reason as in the previous section, keep the SQL Server service running until all of your business logic has executed, then run the following script:

SELECT
    ind.index_id,
    obj.name AS TableName,
    ind.name AS IndexName,
    ind.type_desc,
    indusage.user_seeks,
    indusage.user_scans,
    indusage.user_lookups,
    indusage.user_updates,
    indusage.last_user_seek,
    indusage.last_user_scan,
    'DROP INDEX [' + ind.name + '] ON [' + obj.name + ']' AS DropIndexCommand
FROM sys.indexes AS ind
JOIN sys.objects AS obj
    ON ind.object_id = obj.object_id
LEFT JOIN sys.dm_db_index_usage_stats indusage
    ON ind.object_id = indusage.object_id
    AND ind.index_id = indusage.index_id
WHERE ind.type_desc <> 'HEAP'
    AND obj.type <> 'S'
    AND OBJECTPROPERTY(obj.object_id, 'IsUserTable') = 1
    AND (ISNULL(indusage.user_seeks, 0) = 0
    AND ISNULL(indusage.user_scans, 0) = 0
    AND ISNULL(indusage.user_lookups, 0) = 0)
ORDER BY obj.name, ind.name
GO

Once you have this information, what to do with it is up to you. But before you decide to delete an index, note the following two points:

    • If the index is a primary key or a unique key, it also guarantees data integrity, even when no query uses it (see the check sketched after this list)
    • A unique index, even if it is never used directly, can still give the optimizer information that helps it generate a better execution plan
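
A minimal check along those lines (reusing the OrdDemo/IDX_REFNO names from the earlier samples purely for illustration):

-- Before dropping an "unused" index, confirm it does not enforce a key or unique constraint
SELECT name, is_primary_key, is_unique, is_unique_constraint
FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.OrdDemo')
    AND name = 'IDX_REFNO';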

Building indexed views to improve performance

A view is a stored query that behaves like a table. It has two main advantages:

    • Restrict users to accessing only specific columns and specific rows of particular tables
    • Allow developers to present the raw data as user-oriented logical views, organized the way they define

An indexed view is parsed and optimized when it is created, and its result set is stored physically in the database. Before deciding to use indexed views, consider the following recommendations:

    • The view should not reference other views
    • Reference the base tables directly wherever possible
    • Column names must be specified explicitly, with appropriate aliases defined where needed

It is also not appropriate to use an indexed view when the object is queried far less often than it is updated, or when the base tables are updated frequently.

If you have a query that involves many aggregations and/or joins and the tables involved hold a large volume of data, consider an indexed view. To use indexed views, the following SET options must be configured (NUMERIC_ROUNDABORT must be OFF; all the others must be ON):

    • ARITHABORT
    • CONCAT_NULL_YIELDS_NULL
    • QUOTED_IDENTIFIER
    • ANSI_WARNINGS
    • ANSI_NULLS
    • ANSI_PADDING
    • NUMERIC_ROUNDABORT

Sample script:

CREATE VIEW PoView
WITH SCHEMABINDING
AS
SELECT
    POH.PurchaseOrderID,
    POH.OrderDate,
    EMP.LoginID,
    V.Name AS VendorName,
    SUM(POD.OrderQty) AS OrderQty,
    SUM(POD.OrderQty * POD.UnitPrice) AS Amount,
    COUNT_BIG(*) AS [Count]
FROM [Purchasing].[PurchaseOrderHeader] AS POH
JOIN [Purchasing].[PurchaseOrderDetail] AS POD
    ON POH.PurchaseOrderID = POD.PurchaseOrderID
JOIN [HumanResources].[Employee] AS EMP
    ON POH.EmployeeID = EMP.BusinessEntityID
JOIN [Purchasing].[Vendor] AS V
    ON POH.VendorID = V.BusinessEntityID
GROUP BY
    POH.PurchaseOrderID,
    POH.OrderDate,
    EMP.LoginID,
    V.Name
GO
-- Build a unique clustered index on the view so that it becomes an indexed view
CREATE UNIQUE CLUSTERED INDEX IndexPoView ON PoView (PurchaseOrderID)
GO

If you compare the execution plan of the original query with the plan of the same query against the indexed view, you will see that the indexed view provides better query performance.


SQL Server's query optimizer always tries to find the best execution plan, and sometimes, even though you have an indexed view, the optimizer still uses the indexes on the base tables. You can use the WITH (NOEXPAND) hint to force it to use the index on the indexed view instead.
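
For example, a short sketch against the PoView indexed view created above (the WHERE clause is purely illustrative):

SELECT PurchaseOrderID, VendorName, Amount
FROM dbo.PoView WITH (NOEXPAND)   -- use the view's own index rather than expanding to the base tables
WHERE PurchaseOrderID = 1;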

Indexed views can be created in every edition of SQL Server 2012, but only the query processor in the Developer and Enterprise editions automatically considers indexed views when optimizing matching queries.

The indexed view must be created WITH SCHEMABINDING, which ensures that the columns it uses cannot be altered.

If the indexed view includes a GROUP BY clause, the SELECT list must include COUNT_BIG(*), and you cannot specify HAVING, CUBE, or ROLLUP.

Using indexes on computed columns to improve performance

First, a brief introduction to computed columns: a computed column is defined by an expression that references other columns of the same table, and its value is the result of evaluating that expression.

The value of the column is recomputed every time it is referenced, unless you create it with the PERSISTED keyword.
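
For illustration (the OrderLineDemo table is hypothetical, not from the article), the PERSISTED keyword stores the computed value with the row so it is not re-evaluated on every read:

CREATE TABLE dbo.OrderLineDemo
(
    OrderQty  int            NOT NULL,
    UnitPrice numeric(10,3)  NOT NULL,
    LineTotal AS (OrderQty * UnitPrice) PERSISTED   -- computed when the row is written, stored on the page
);
GO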

Before deciding whether to build an index on a computed column, you need to consider several points:

    • If the computed column is of type image, text, or ntext, it can only be used as a non-key (included) column of a nonclustered index
    • The computed column expression cannot be of type REAL or FLOAT
    • The computed column must be precise
    • The computed column must be deterministic (the same inputs always produce the same result)
    • If the computed column uses a function, whether user-defined or built-in, the owner of the table and the owner of the function must be the same
    • Functions that operate over multiple rows (for example SUM or AVG) cannot be used in a computed column
    • Inserts, updates, and deletes change the value of the index on the computed column, so the following SET options must be configured:

SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET ARITHABORT ON
SET CONCAT_NULL_YIELDS_NULL ON
SET QUOTED_IDENTIFIER ON
SET NUMERIC_ROUNDABORT OFF

Let's look at a complete example:

1. Set the session options and create our test table

SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET ARITHABORT ON
SET CONCAT_NULL_YIELDS_NULL ON
SET QUOTED_IDENTIFIER ON
SET NUMERIC_ROUNDABORT OFF

SELECT
    [SalesOrderID],
    [SalesOrderDetailID],
    [CarrierTrackingNumber],
    [OrderQty],
    [ProductID],
    [SpecialOfferID],
    [UnitPrice]
INTO SalesOrderDetailDemo
FROM [AdventureWorks2012].[Sales].[SalesOrderDetail]
GO

2. Create a user-defined function, then add a computed column that uses it

CREATE FUNCTION [dbo].[udfTotalAmount]
(
    @TotalPrice numeric(10,3),
    @Freight    tinyint
)
RETURNS numeric(10,3)
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @NetPrice numeric(10,3)
    SET @NetPrice = @TotalPrice + (@TotalPrice * @Freight / 100)
    RETURN @NetPrice
END
GO
-- Add the computed column to the SalesOrderDetailDemo table
ALTER TABLE SalesOrderDetailDemo
ADD [NetPrice] AS [dbo].[udfTotalAmount](OrderQty * UnitPrice, 5)
GO

3. Create a clustered index, turn on the performance statistics options, and run a query (note that at this point we have not yet built an index on the computed column):

CREATE CLUSTERED INDEX idx_SalesOrderID_SalesOrderDetailID_SalesOrderDetailDemo
ON SalesOrderDetailDemo (SalesOrderID, SalesOrderDetailID)
GO
-- Turn on the statistics options to measure performance
SET STATISTICS IO ON
SET STATISTICS TIME ON
GO
-- Run the SELECT statement without an index on the computed column
SELECT * FROM SalesOrderDetailDemo WHERE NetPrice > 5000
GO

The performance statistics output is as follows:

SQL Server parse and compile time:
CPU time = 650 ms, Elapsed time = 650 Ms.
SQL Server parse and compile time:
CPU time = 0 ms, Elapsed time = 0 Ms.

(3864 row (s) affected)

Table ' Salesorderdetaildemo '. Scan count 1, logical reads 757,
Physical reads 0, Read-ahead reads 0, LOB logical reads 0, LOB
Physical reads 0, lob read-ahead reads 0.

SQL Server Execution times:
CPU time = 562 ms, Elapsed time = 678 Ms.

4. Before building an index on the computed column, the following script can be used to confirm that the requirements mentioned earlier are met (return value: 0 = not satisfied, 1 = satisfied):

SELECT
    COLUMNPROPERTY(OBJECT_ID('SalesOrderDetailDemo'), 'NetPrice', 'IsIndexable')     AS 'Indexable?',
    COLUMNPROPERTY(OBJECT_ID('SalesOrderDetailDemo'), 'NetPrice', 'IsDeterministic') AS 'Deterministic?',
    OBJECTPROPERTY(OBJECT_ID('udfTotalAmount'), 'IsDeterministic')                   AS 'UDFDeterministic?',
    COLUMNPROPERTY(OBJECT_ID('SalesOrderDetailDemo'), 'NetPrice', 'IsPrecise')       AS 'Precise?'


5. Build the index once the requirements are met, and run the previous query again:

CREATE INDEX idx_SalesOrderDetailDemo_NetPrice
ON SalesOrderDetailDemo (NetPrice)
GO
SELECT * FROM SalesOrderDetailDemo WHERE NetPrice > 5000
GO
The performance results this time are as follows:

SQL Server parse and compile time:
CPU time = 0 ms, Elapsed time = 0 Ms.
SQL Server parse and compile time:
CPU time = 0 ms, Elapsed time = 0 Ms.

(3864 row (s) affected)

Table ' Salesorderdetaildemo '. Scan count 1, logical reads 757,
Physical reads 0, Read-ahead reads 0, LOB logical reads 0, LOB
Physical reads 0, lob read-ahead reads 0.

SQL Server Execution times:
CPU time = 546 ms, Elapsed time = 622 Ms.

Checking the disk space occupied by indexes

SELECT
    CASE index_id
        WHEN 0 THEN 'HEAP'
        WHEN 1 THEN 'Clustered index'
        ELSE 'Non-clustered index'
    END AS Index_Type,
    SUM(CASE
            WHEN FilledPage > PageToDeduct THEN (FilledPage - PageToDeduct)
            ELSE 0
        END) * 8 AS Index_Size
FROM (
    SELECT
        partition_id,
        index_id,
        SUM(used_page_count) AS FilledPage,
        SUM(CASE
                WHEN (index_id < 2) THEN (in_row_data_page_count + lob_used_page_count + row_overflow_used_page_count)
                ELSE lob_used_page_count + row_overflow_used_page_count
            END) AS PageToDeduct
    FROM sys.dm_db_partition_stats
    GROUP BY partition_id, index_id
) AS InnerTable
GROUP BY index_id
GO

PS: The output is in KB.

