Starting from CREATE
An index can be created in two ways:
- By using the explicit CREATE INDEX command
- As an implicit object created along with a constraint
Implicit indexes created with constraints
An implicit index is created when you add either of the following two constraints to a table (a short sketch follows this list):
- PRIMARY KEY constraint (a clustered index, by default)
- UNIQUE constraint (a unique nonclustered index, by default)
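A minimal sketch of both constraint-created indexes; the table and constraint names are hypothetical:
CREATE TABLE dbo.Customers
(
    CustomerID int           NOT NULL,
    Email      nvarchar(100) NOT NULL,
    -- The PRIMARY KEY constraint implicitly creates a clustered index
    CONSTRAINT PK_Customers PRIMARY KEY CLUSTERED (CustomerID),
    -- The UNIQUE constraint implicitly creates a unique nonclustered index
    CONSTRAINT UQ_Customers_Email UNIQUE (Email)
);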
First, the CREATE INDEX syntax
The CREATE INDEX statement does exactly what it sounds like: it creates an index on the declared column(s) of a specified table or view:
CREATE [UNIQUE] [CLUSTERED | NONCLUSTERED] INDEX <index name>
    ON <table or view name> (<column name> [ASC | DESC] [, ...n])
    [INCLUDE (<column name> [, ...n])]
    [WITH
        [PAD_INDEX = { ON | OFF }]
        [[,] FILLFACTOR = <fillfactor>]
        [[,] IGNORE_DUP_KEY = { ON | OFF }]
        [[,] DROP_EXISTING = { ON | OFF }]
        [[,] STATISTICS_NORECOMPUTE = { ON | OFF }]
        [[,] SORT_IN_TEMPDB = { ON | OFF }]
        [[,] ONLINE = { ON | OFF }]
        [[,] ALLOW_ROW_LOCKS = { ON | OFF }]
        [[,] ALLOW_PAGE_LOCKS = { ON | OFF }]
        [[,] MAXDOP = <maximum degree of parallelism>]
    ]
    [ON { <filegroup> | <partition scheme name> | DEFAULT }]
In a CREATE INDEX statement you must name the index and, with the ON clause, declare the table or view (and the columns) on which it is built. The effects of the individual options are explained below.
1. ASC/DESC
These two options let you choose whether the index is sorted in ascending or descending order. The default is ASC (ascending).
Why are both ascending and descending options needed? Isn't it enough to read the index in reverse? That works for a single column, but what if one column should be sorted ascending while another must be descending? Because the indexed columns are stored together, reading the index in reverse for one column also reverses the order of every other column. If one column is declared ascending and another descending, the second column is stored in reverse order in the physical data of the index, so there is no need to change the way the data is accessed.
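A minimal sketch of a mixed-order index; the table, index, and column names are hypothetical:
-- OrderDate stored ascending, OrderID stored descending within the same index
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate_OrderID
    ON dbo.Orders (OrderDate ASC, OrderID DESC);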
2. INCLUDE
This option is supported by SQL Server 2005 and later versions. It is designed to provide better support for covered queries.
When you INCLUDE columns rather than placing them in the ON list, SQL Server adds them only at the leaf level of the index. Because each row at the leaf level corresponds to a single data row, you are essentially storing extra raw data at the leaf level of the index. The benefit is that SQL Server can stop as soon as it has what it needs: while traversing the index it no longer has to follow the pointer down to the actual data row. By including a specific column, the query is "covered" at the leaf level of that index, saving the I/O that would otherwise be spent reaching the data pages. For example, if you create an index on a date column but INCLUDE an order ID column, a query that looks up the order ID for a date never has to touch the data row, because everything it needs is already in the index. Be careful not to overuse this option, however: included columns make the leaf-level index pages larger, which means fewer rows per page and therefore more I/O to read the same number of rows. Speeding up one query may slow down others, so consider the impact on the system as a whole rather than on the specific query in front of you.
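A sketch of the date/order ID example above, with hypothetical names:
-- OrderID is carried only at the leaf level of the index
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate)
    INCLUDE (OrderID);

-- This query can be answered entirely from the index (a covered query),
-- so the data pages are never touched
SELECT OrderID
FROM dbo.Orders
WHERE OrderDate = '20240115';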
3. WITH
WITH is very simple: it just tells SQL Server that one or more of the following options will be supplied.
4. PAD_INDEX
This option determines how full the non-leaf-level pages of the index are (expressed as a percentage) when the index is first created. You do not declare a percentage on PAD_INDEX itself; it uses the percentage given by the FILLFACTOR option described next. Setting PAD_INDEX = ON without a FILLFACTOR is therefore meaningless.
5. FILLFACTOR
When SQL Server first creates an index, it will, by default, fill each page as full as possible, leaving room for only two records. You can set FILLFACTOR to any value between 0 and 100; once the index build is complete, this number represents how full, as a percentage, the pages are. As page splits occur, the data still gets redistributed across two pages, so short of periodically rebuilding the index you cannot keep the fill percentage constant.
When page density needs to be adjusted, choose the FILLFACTOR along the following lines (a sketch follows this list):
- For an OLTP system (rows frequently added and deleted), a lower FILLFACTOR is appropriate.
- For OLAP or other very stable systems (almost no additions or deletions), use the highest FILLFACTOR possible.
- If the transaction volume is moderate and many report-style queries run against the data, a mid-range FILLFACTOR (neither too low nor too high) is probably best.
- If no value is provided, SQL Server fills the pages to within two rows of full and guarantees at least one row per page. (If a single row is 8,000 characters wide, only one row fits per page, so "two rows short of full" cannot be reached.)
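A minimal sketch of FILLFACTOR together with PAD_INDEX; the names are hypothetical and the 70 percent figure is only illustrative:
-- Leave leaf-level pages about 30% empty for an insert-heavy table,
-- and apply the same fill level to the non-leaf pages via PAD_INDEX
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    WITH (FILLFACTOR = 70, PAD_INDEX = ON);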
6. IGNORE_DUP_KEY
The IGNORE_DUP_KEY option is almost a way to sidestep the system. In short, it makes a unique constraint behave somewhat differently from how it normally operates.
Normally a unique constraint (or unique index) does not allow duplicates of any kind: if a transaction attempts to create a duplicate value in a column defined as unique, the transaction is rolled back and rejected. Once the IGNORE_DUP_KEY option is set, however, you get mixed behavior: you still receive an error message, but the error is only a warning, and the duplicate record is still not inserted.
With IGNORE_DUP_KEY, the transaction is not rolled back (the error is just a warning, not a critical error), but the duplicate row is still rejected.
In short, its attitude is: it's fine to submit duplicate rows, but only one row per value will exist. During an insert, duplicate rows are simply ignored rather than causing the whole statement to be rejected.
When you create a unique index you can specify the IGNORE_DUP_KEY option, so the unique-index form of the CREATE INDEX statement at the start of this article can be written as:
CREATE UNIQUE NONCLUSTERED INDEX <index name>
    ON <table name> (<column name>)
    WITH (IGNORE_DUP_KEY = OFF);
The name IGNORE_DUP_KEY is easy to misunderstand. Duplicate values are never "ignored" in the sense of being stored: duplicate keys are never allowed in a unique index. The option only matters when multiple rows are inserted at once.
For example, suppose you have two tables, TableA and TableB, with exactly the same structure, and you submit the following statement to SQL Server:
INSERT INTO TableA
SELECT * FROM TableB;
SQL Server attempts to insert all of the data from TableB into TableA. But what if a unique index on TableA rejects some rows because TableB contains data that TableA already has? Should only the duplicate rows fail to insert, or should the entire INSERT statement fail?
That depends on the IGNORE_DUP_KEY setting you chose when creating the unique index; its two values are interpreted as follows:
IGNORE_DUP_KEY = OFF
The entire INSERT statement fails and an error is raised. This is the default setting.
IGNORE_DUP_KEY = ON
Only the rows with duplicate keys fail; all other rows are inserted, and a warning message is raised.
The IGNORE_DUP_KEY option affects only INSERT statements; UPDATE, CREATE INDEX, and ALTER INDEX are not affected. The option can also be set when defining a primary key or a unique constraint.
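A minimal sketch of the behavior described above, using hypothetical table and index names:
CREATE TABLE dbo.TableA (KeyCol int NOT NULL);

CREATE UNIQUE NONCLUSTERED INDEX UQ_TableA_KeyCol
    ON dbo.TableA (KeyCol)
    WITH (IGNORE_DUP_KEY = ON);

INSERT INTO dbo.TableA (KeyCol) SELECT 1;

-- A multi-row insert containing one duplicate (1) and one new value (2):
-- the duplicate is skipped with a "Duplicate key was ignored." warning,
-- and the row with 2 is still inserted
INSERT INTO dbo.TableA (KeyCol)
SELECT 1 UNION ALL SELECT 2;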
7. DROP_EXISTING
If you specify the DROP_EXISTING option, any existing index with the same name is dropped before the new index is built. When used with a clustered index, this option is more efficient than simply dropping and re-creating the index. If you re-create an index that exactly matches the existing one, SQL Server knows it does not need to touch the nonclustered indexes at all, whereas an explicit drop and create would rebuild every nonclustered index twice in order to accommodate the changed row locations. If you use DROP_EXISTING to change the structure of the index, the nonclustered indexes are rebuilt only once instead of twice.
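A sketch with hypothetical names, assuming the existing clustered index was created with CREATE INDEX rather than by a PRIMARY KEY constraint:
-- Re-create the clustered index in place with a new fill factor;
-- the nonclustered indexes are not rebuilt twice
CREATE CLUSTERED INDEX CIX_Orders_OrderID
    ON dbo.Orders (OrderID)
    WITH (DROP_EXISTING = ON, FILLFACTOR = 80);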
8. STATISTICS_NORECOMPUTE
By default, SQL Server automates the process of updating statistics on tables and indexes. By selecting this option, you take on the job of updating the statistics manually. To turn the option back off, you run the UPDATE STATISTICS command without the NORECOMPUTE option.
It is strongly recommended that you do not use this option, because the query optimizer uses the statistics on an index to judge how useful that index is for a given query. As rows are added to or removed from the table, and as the values in its columns change, the statistics on the index keep changing. It follows that if the statistics are not kept up to date, the query optimizer plans queries from stale information. Leaving automatic statistics on means they are refreshed periodically (how often depends on the nature and frequency of updates to the table); turning automatic updates off means the information goes stale unless you schedule UPDATE STATISTICS runs yourself.
9. SORT_IN_TEMPDB
This option only makes sense if tempdb is stored on a drive that is physically separate from the one holding the database in which the new index is being created. Why?
When SQL Server builds an index, it has to perform several operations to work through the construction steps:
1. Read through all of the data, constructing a leaf row for each row of actual data. Like the final index, this intermediate content is written out to pages for temporary storage; these are not the final index pages, just the place the output is parked each time the sort buffer fills up.
2. Make a separate pass through those intermediate pages to merge them into the final leaf pages.
3. As the leaf-level pages are filled, build the non-leaf-level pages.
If you do not use the SORT_IN_TEMPDB option, the intermediate pages are written to the same physical file(s) in which the database is stored. That means the reads of the actual data compete with the writes of the build process, and both keep forcing the drive head to a different position. The result is a head that shuttles back and forth constantly, which costs time.
If you use SORT_IN_TEMPDB, on the other hand, the intermediate pages are written to tempdb rather than to the database's own files. If the two are on separate physical drives, there is no contention between the reads and writes of the index build. Keep in mind, though, that this only helps when tempdb really is on a physical drive separate from the database file; otherwise the change is purely nominal and the I/O contention remains.
If you want to use SORT_IN_TEMPDB, make sure tempdb has enough space for the intermediate work.
10. ONLINE
If this option is set to ON, the table is forced to remain available for general access, and no locks are taken that would block users from the index and/or the table. By default, a full index operation takes whatever locks it needs (ultimately a table lock) so that it can work on the table as efficiently as possible; the side effect is that users are blocked in the meantime (which is slightly contradictory: the index is probably being built to make the database more usable, yet while it is being built the table is unusable).
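A minimal sketch with hypothetical names; note that ONLINE = ON is only available in editions that support online index operations:
-- Build the index while the table stays available for reads and writes
CREATE NONCLUSTERED INDEX IX_Orders_Status
    ON dbo.Orders (Status)
    WITH (ONLINE = ON);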
11. ALLOW_ROW_LOCKS / ALLOW_PAGE_LOCKS
These settings determine whether row locks and page locks are allowed on the index.
12. MAXDOP
This option overrides the system-wide maximum degree of parallelism, but only for the build of this index. The degree of parallelism is how many processors are used for a database operation. There is a server-level setting, max degree of parallelism, that limits the number of processors used per operation; the MAXDOP option on CREATE INDEX lets the index build run with a degree of parallelism above or below that base setting, whichever suits it.
13. ON
SQL Server lets you store data and indexes separately by using the ON option, which places the index on a specific filegroup or partition scheme. This has the following advantages (a sketch follows the list):
- The space required for the index can be dispersed across other drives.
- I/O for index operations does not add to the load of physical data retrieval.
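A minimal sketch combining MAXDOP and the ON clause; the filegroup INDEXES_FG is hypothetical and must already exist in the database:
-- Cap the index build at 4 processors and place the index on its own filegroup
CREATE NONCLUSTERED INDEX IX_Orders_ProductID
    ON dbo.Orders (ProductID)
    WITH (MAXDOP = 4)
    ON INDEXES_FG;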
What follows is a brief supplement on the concept of XML indexes.
XML indexes are a feature introduced in SQL Server 2005.
With the exception of IGNORE_DUP_KEY and ONLINE, the XML index creation syntax supports all of the options seen in the CREATE statement above.
In SQL Server you can create an index on a column of type XML. The main requirements are as follows:
- The table containing the XML column you want to index must have a clustered index.
- Before you can create any secondary XML index, you must first create a "primary" XML index on the XML column.
- XML indexes can be created only on columns of the XML type (and an XML index is the only kind of index that can be created on such a column).
- The XML column must be part of a base table; the index cannot be created on a view.
1. Primary XML index
The first index created on an XML column must be declared as the "primary" XML index. When the primary index is created, SQL Server builds a new clustered index that combines the clustering key of the base table with the data from the specified XML nodes.
2. Secondary XML index
In much the same way that a nonclustered index carries the cluster key pointing back to the clustered index, a secondary XML index points to the primary XML index. Once the primary XML index exists, you can create up to 248 secondary XML indexes on the XML column.
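A hedged sketch of a primary and a secondary XML index, with hypothetical table and index names:
-- The base table needs a clustered primary key
CREATE TABLE dbo.Documents
(
    DocID int NOT NULL CONSTRAINT PK_Documents PRIMARY KEY CLUSTERED,
    Body  xml NOT NULL
);

-- The primary XML index must be created first
CREATE PRIMARY XML INDEX PXML_Documents_Body
    ON dbo.Documents (Body);

-- A secondary XML index layered on top of the primary one
CREATE XML INDEX SXML_Documents_Body_Path
    ON dbo.Documents (Body)
    USING XML INDEX PXML_Documents_Body FOR PATH;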
Second, modifying the index
The ALTER INDEX command is somewhat misleading about what it is used for. Until now, ALTER commands have always been about modifying the definition of an object: ALTER TABLE, for example, adds or disables constraints and columns. ALTER INDEX is different: it is about maintenance and has nothing to do with structure. If you need to change the composition of an index, you either drop and re-create it, or create it again with the DROP_EXISTING = ON option.
The syntax for ALTER INDEX looks like this:
ALTER INDEX { <index name> | ALL }
    ON <table or view name>
    {
        REBUILD
            [ WITH (
                [PAD_INDEX = { ON | OFF }]
                [[,] FILLFACTOR = <fillfactor>]
                [[,] SORT_IN_TEMPDB = { ON | OFF }]
                [[,] IGNORE_DUP_KEY = { ON | OFF }]
                [[,] STATISTICS_NORECOMPUTE = { ON | OFF }]
                [[,] ONLINE = { ON | OFF }]
                [[,] ALLOW_ROW_LOCKS = { ON | OFF }]
                [[,] ALLOW_PAGE_LOCKS = { ON | OFF }]
                [[,] MAXDOP = <maximum degree of parallelism>]
            ) ]
            | [ PARTITION = <partition number>
                [ WITH (<partition rebuild index option> [, ...n]) ] ]
        | DISABLE
        | REORGANIZE
            [ PARTITION = <partition number> ]
            [ WITH (LOB_COMPACTION = { ON | OFF }) ]
        | SET (
            [ALLOW_ROW_LOCKS = { ON | OFF }]
            [[,] ALLOW_PAGE_LOCKS = { ON | OFF }]
            [[,] IGNORE_DUP_KEY = { ON | OFF }]
            [[,] STATISTICS_NORECOMPUTE = { ON | OFF }]
          )
    }
[;]
Some of these options are the same as in the CREATE INDEX command, so their definitions are not repeated here. Beyond those, a fair number of ALTER-specific options deal with things like handling fragmentation. The parameters below are explained in turn.
1. Index name
If you want to maintain one specific index, name it here; or use ALL to indicate that every index associated with the specified table should be maintained.
2. Table name or view name
The name of the particular object (table or view) to be maintained. Note that it must be a single, specific table; you cannot hand it a list and say "please handle all of these!".
3. REBUILD
If you run ALTER INDEX with this option, the old index is completely discarded and built again from scratch. The result is a truly optimized index in which every leaf-level and non-leaf-level page has been reconstructed as defined. If it is a clustered index, the physical data is also reorganized.
By default, pages are rebuilt to be almost completely full (two rows short). As with the CREATE INDEX syntax, you can set FILLFACTOR to any value between 0 and 100; that value is how full, in percent, the pages are once the reorganization is done. As page splits occur, though, the data again ends up spread across two pages, so short of periodically rebuilding the index you cannot keep the fill percentage constant. Be careful with this option: from the moment a REBUILD starts until it completes, the index you were using is effectively gone, and any query that depends on it may become painfully slow. Test on an offline system first to see how long the whole process takes, then schedule the rebuild for off-peak hours.
4. DISABLE
This option does what it says, but in a fairly drastic way. It would be a fine choice if all it did was take the index offline until you decided what to do next, but it actually marks the index as unusable; once an index is disabled, it must be rebuilt (not reorganized, rebuilt) before it becomes active again.
If the clustered index of a table is disabled, the table itself is effectively disabled. The data is still there, but until the clustered index is rebuilt it cannot be reached through any of the indexes (because they all depend on the clustered index).
5. REORGANIZE
Reorganizing an index gets you a little less than a full rebuild, but it can be done online (users can still use the index while it runs).
What does "a little less" mean? REORGANIZE works only on the leaf level of the index and never touches the non-leaf levels, so full optimization is not achieved. For most indexes, though, the leaf level is where the real fragmentation lives anyway.
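A minimal sketch of the three maintenance forms, with hypothetical names (ONLINE = ON again requires an edition that supports online index operations):
-- Full rebuild: every leaf and non-leaf page is reconstructed
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 80, ONLINE = ON);

-- Lighter-weight defragmentation of the leaf level only; always online
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
    REORGANIZE;

-- Disable the index, then rebuild it to bring it back
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders DISABLE;
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD;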
Third, deleting the index
If you are continually re-analyzing the situation and adding indexes, don't forget to drop the ones you no longer need; remember the overhead every index adds to inserts.
The syntax for dropping an index is as follows:
DROP INDEX <table or view name>.<index name>
This allows you to delete the index.
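A minimal sketch with hypothetical names; the ON form is the more current spelling, while the two-part form matches the syntax above:
-- Older two-part form
DROP INDEX Orders.IX_Orders_OrderDate;

-- Equivalent, more current form
DROP INDEX IX_Orders_OrderDate ON dbo.Orders;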