SQL Server Query Optimization Method Reference

Source: Internet
Author: User
Tags: joins, sql server, query, mssql server, memory

I came across this article on a friend's blog today and found it useful, so I am reposting it here in the hope that it helps you. For more articles, visit: http://blog.haoitsoft.com

1. Missing or unused indexes (the most common cause of slow queries, and a program design defect)
2. Low I/O throughput, creating a bottleneck
3. No computed columns created, so queries are not optimized
4. Insufficient memory
5. Slow network speed
6. The query returns too much data (split it into multiple queries or otherwise reduce the data volume)
7. Locks or deadlocks (also a very common cause of slow queries, and a program design defect)
8. Contention for read/write resources (inspect user activity with sp_lock and sp_who)
9. Unnecessary rows and columns returned
10. Poorly written, unoptimized query statements

You can improve query performance using the following methods:

1. Put data, log, and index files on separate I/O devices to increase read speed. It used to be possible to place tempdb on RAID 0, but SQL 2000 no longer supports this. The larger the data volume, the more important increased I/O becomes.
2. Partition tables vertically and horizontally to reduce table size (see sp_spaceused).
3. Upgrading hardware
4. Build indexes according to the query criteria, optimize the indexes and access patterns, and limit the size of the result set. Make sure the fill factor is appropriate (preferably the default value of 0). Keep indexes as small as possible, preferring composite indexes on columns with few bytes (see the documentation on creating indexes), and do not build an index on a column with only a few distinct values, such as a gender field.
5. Improve network speed.
6. Expand the server's memory. Windows 2000 and SQL Server 2000 can support 4-8 GB of memory. Configure virtual memory: the virtual memory size should be based on the services running concurrently on the computer. When running Microsoft SQL Server 2000, consider setting virtual memory to 1.5 times the physical memory installed on the computer. If you have also installed the full-text search feature and plan to run the Microsoft Search service for full-text indexing and queries, consider configuring virtual memory to at least 3 times the installed physical memory, and setting the SQL Server max server memory configuration option to 1.5 times physical memory (half the virtual memory setting).
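As a rough sketch, the max server memory option described above can be set with sp_configure. The 6144 MB value here is a hypothetical example; size it for your own machine per the 1.5x guidance, which is itself workload-dependent.

```sql
-- Sketch: cap SQL Server's memory use via sp_configure.
-- 6144 is a hypothetical value in MB; choose it per the guidance above.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory', 6144;
RECONFIGURE;
```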
7. Increase the number of server CPUs; but understand that parallel processing needs more resources, such as memory, than serial processing. Whether to use a parallel or a serial plan is evaluated automatically by MSSQL. A single task is decomposed into multiple tasks that can run on different processors; for example, sorts, joins, scans, and GROUP BY operations can be performed simultaneously. SQL Server determines the optimal degree of parallelism based on system load, and complex queries that consume large amounts of CPU are best suited to parallel processing. However, UPDATE, INSERT, and DELETE operations cannot be processed in parallel.
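The degree of parallelism the engine may use can also be capped, server-wide or per query. A minimal sketch (the PersonMember table is hypothetical):

```sql
-- Server-wide cap on parallelism (0 = let SQL Server decide).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;

-- Per-query override for a CPU-heavy aggregate query.
SELECT Title, COUNT(*) AS Cnt
FROM PersonMember          -- hypothetical table
GROUP BY Title
OPTION (MAXDOP 2);
```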
8. If you query with LIKE, a simple index will not always help; a full-text index may be needed, at the cost of extra space. LIKE 'a%' uses the index; LIKE '%a' does not; and when querying with LIKE '%a%', the query time is proportional to the total length of the field values. So do not use the char type for such fields, use varchar, and build a full-text index on fields with long values.
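The LIKE behavior above can be illustrated with a small sketch; the PersonMember table and Title column are hypothetical, and an index on Title is assumed:

```sql
-- Leading literal: the optimizer can seek on an index over Title.
SELECT * FROM PersonMember WHERE Title LIKE 'a%';

-- Leading wildcard: every Title value must be examined, so the index
-- cannot be used for a seek; expect a scan.
SELECT * FROM PersonMember WHERE Title LIKE '%a%';
```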
9. Separate the DB server from the application server; separate OLTP from OLAP.
10. A distributed partitioned view can be used to implement a federation of database servers. A federation is a set of separately managed servers that cooperate to share the system's processing load. This mechanism of partitioning the data to form a federation of database servers lets a set of servers scale out to support the processing needs of large, multi-tiered Web sites. For more information, see Designing Federated Database Servers (and 'partitioned view' in the SQL help files).

a. Before implementing a partitioned view, you must first horizontally partition the table.
b. After creating the member tables, define a distributed partitioned view on each member server, each view with the same name. Queries that reference the distributed partitioned view name can then run on any member server. The system behaves as if each member server had a copy of the original table, although each server has only one member table and one distributed partitioned view. The location of the data is transparent to the application.
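The two steps above can be sketched as follows; the server names, database, and Customers table are all hypothetical, and the CHECK constraint on the partitioning column is what lets the optimizer route queries to the right member:

```sql
-- On Server1: one horizontal slice of the data, bounded by a CHECK.
CREATE TABLE Customers_33
( CustomerID  INT PRIMARY KEY
              CHECK (CustomerID BETWEEN 1 AND 32999),
  CompanyName NVARCHAR(40) );

-- On each member server, a view with the same name combines the local
-- slice with the remote members via four-part names.
CREATE VIEW Customers AS
  SELECT * FROM CompanyDatabase.dbo.Customers_33
  UNION ALL
  SELECT * FROM Server2.CompanyDatabase.dbo.Customers_66
  UNION ALL
  SELECT * FROM Server3.CompanyDatabase.dbo.Customers_99;
```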

11. Rebuild indexes with DBCC DBREINDEX or DBCC INDEXDEFRAG; shrink data and log files with DBCC SHRINKDATABASE and DBCC SHRINKFILE; set the log to auto-shrink. For large databases, do not enable database autogrow, as it degrades server performance. There is a lot to say about how T-SQL is written; here are the common points. First, this is how the DBMS processes a query plan:

1. Lexical and syntactic checking of the query statement
2. The statement is submitted to the DBMS's query optimizer
3. The optimizer performs algebraic optimization and access-path optimization
4. A precompiled module generates the query plan
5. The plan is submitted to the system for execution at the appropriate time
6. Finally, the execution result is returned to the user

Next, look at SQL Server's data storage structure: a page is 8 KB (8060 usable bytes), 8 pages make one extent, and storage is organized as a B-tree.

12. The difference between COMMIT and ROLLBACK. ROLLBACK: rolls back the current transaction. COMMIT: commits the current transaction. There is no need to write transactions inside dynamic SQL; if you need one, put it outside, for example: BEGIN TRAN EXEC (@s) COMMIT TRAN, or write the dynamic SQL as a function or stored procedure.
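The BEGIN TRAN EXEC (@s) COMMIT TRAN pattern mentioned above looks like this as a runnable sketch; the UPDATE statement and table are hypothetical:

```sql
-- Keep the transaction outside the dynamic string.
DECLARE @s NVARCHAR(200);
SET @s = N'UPDATE PersonMember SET Title = ''Manager'' WHERE MemberID = 1';  -- hypothetical statement

BEGIN TRAN;
EXEC (@s);       -- the dynamic batch runs inside the outer transaction
COMMIT TRAN;
```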

13. Use a WHERE clause in SELECT statements to limit the number of rows returned and avoid table scans. Returning unnecessary data wastes the server's I/O resources, burdens the network, and reduces performance. If the table is large, a table scan locks the table and blocks other connections from accessing it, with serious consequences.

14. SQL comment statements have no effect on execution.

15. Avoid cursors wherever possible; they hold a large amount of resources. If row-by-row processing is required, try non-cursor techniques instead: loop on the client, or use temporary tables, table variables, subqueries, or CASE statements. Cursors can be classified by the fetch options they support: forward-only cursors must fetch rows in order from first to last, FETCH NEXT being the only allowed (and default) fetch operation; scrollable cursors can fetch any row at any position. Cursor technology became quite powerful under SQL 2000, and its purpose is to support loops.
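As a sketch of the advice above, a row-by-row cursor loop can often be collapsed into one set-based statement; the PersonMember table, Salary column, and 10% raise are all hypothetical:

```sql
-- Row-by-row version (avoid): DECLARE c CURSOR FOR SELECT ...;
-- FETCH NEXT ...; UPDATE ... WHERE CURRENT OF c; repeat.

-- Set-based equivalent: one statement does the whole loop's work.
UPDATE p
SET    p.Salary = p.Salary * 1.10
FROM   PersonMember AS p        -- hypothetical table
WHERE  p.Title = 'Engineer';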

There are four concurrency options

READ_ONLY: positioned updates through the cursor are not allowed, and no locks are held on the rows that make up the result set.

OPTIMISTIC WITH VALUES: optimistic concurrency control is a standard part of transaction control theory. It is used when there is only a small chance that a second user will update a row in the interval between opening the cursor and updating the row. When a cursor is opened with this option, no locks are held on its rows, which helps maximize throughput. If the user attempts to modify a row, the row's current values are compared with the values obtained when the row was last fetched. If any value has changed, the server knows that someone else has updated the row and returns an error; if the values are the same, the server performs the modification.

OPTIMISTIC WITH ROW VERSIONING: this optimistic concurrency control option is based on row versioning. With row versioning, the table must have some version identifier the server can use to determine whether the row has changed since it was read into the cursor.
In SQL Server this capability is provided by the timestamp data type, a binary number that represents the relative order of changes in the database. Each database has a global current timestamp value, @@DBTS. Each time a row with a timestamp column is changed in any way, SQL Server stores the current @@DBTS value in the timestamp column and then increments @@DBTS. If a table has a timestamp column, timestamps are recorded at the row level. The server can then compare a row's current timestamp with the timestamp stored at the last fetch to determine whether the row has been updated, without comparing the values of all columns; it only compares the timestamp column. If an application wants row-versioning-based optimistic concurrency on a table that has no timestamp column, the cursor defaults to values-based optimistic concurrency control.
SCROLL LOCKS: this option implements pessimistic concurrency control, in which the application attempts to lock database rows as they are read into the cursor result set. With server cursors, an update lock is placed on each row as it is read into the cursor. If the cursor is opened within a transaction, the transaction update lock is held until the transaction is committed or rolled back, while the scroll lock moves on when the next row is fetched. If the cursor is opened outside a transaction, the lock is dropped when the next row is fetched. Therefore, whenever full pessimistic concurrency control is needed, the cursor should be opened within a transaction. An update lock prevents any other task from acquiring an update or exclusive lock, which prevents other tasks from updating the row.
However, an update lock does not block a shared lock, so it does not prevent other tasks from reading the row unless the second task also requests a read with an update lock. These cursor concurrency options can generate scroll locks according to the lock hints specified in the cursor's defining SELECT statement. A scroll lock is acquired on each row at fetch time and held until the next fetch or until the cursor closes, whichever comes first. At the next fetch, the server acquires scroll locks on the newly fetched rows and releases the scroll locks on the previously fetched rows. Scroll locks are independent of transaction locks and can persist across a commit or rollback. If the option to close cursors at commit is off, a COMMIT statement does not close open cursors, and the scroll locks persist past the commit to maintain isolation of the fetched data. The type of scroll lock acquired depends on the cursor's concurrency option and the lock hint in the cursor's SELECT statement.
Lock hint     Read-only   Optimistic with values   Optimistic row versioning   Scroll locks
No hint       Unlocked    Unlocked                 Unlocked                    Update
NOLOCK        Unlocked    Unlocked                 Unlocked                    Unlocked
HOLDLOCK      Shared      Shared                   Shared                      Update
UPDLOCK       Error       Update                   Update                      Update
TABLOCKX      Error       Unlocked                 Unlocked                    Update
Other         Unlocked    Unlocked                 Unlocked                    Update

* Specifying the NOLOCK hint makes the table to which it applies read-only within the cursor.

16. Use Profiler to trace queries, obtain the time they require, and locate the problem SQL; use the Index Tuning Wizard to optimize indexes.

17. Note the difference between UNION and UNION ALL; UNION ALL is faster, because it does not remove duplicates.

18. Be careful with DISTINCT; do not use it when unnecessary, since like UNION it makes the query slower. Duplicate records are usually not a problem in queries.

19. Do not return rows or columns that are not needed by the query.

20. Use sp_configure 'query governor cost limit' or SET QUERY_GOVERNOR_COST_LIMIT to limit the resources a query may consume. When the estimated cost of a query exceeds the limit, the server automatically cancels it, killing it before it runs. SET LOCK_TIMEOUT sets the lock wait time.
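A minimal session-level sketch of both settings; the cost limit of 300 and the 3-second timeout are arbitrary example values:

```sql
-- Abort queries whose estimated cost exceeds 300 cost units (this session).
SET QUERY_GOVERNOR_COST_LIMIT 300;

-- Give up on lock waits after 3 seconds instead of blocking indefinitely.
SET LOCK_TIMEOUT 3000;   -- milliseconds
```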

21. Use SELECT TOP 100 / TOP 10 PERCENT to limit the number of rows returned to the user, or SET ROWCOUNT to limit the rows operated on.

22. Before SQL 2000, it was generally advised to avoid the following, because they do not use the index and cause a full table scan: IS NULL, <>, !=, !>, !<, NOT, NOT EXISTS, NOT IN, NOT LIKE, and LIKE '%500'. Also do not apply functions such as CONVERT or SUBSTRING to columns in the WHERE clause; if a function is unavoidable, create a computed column and index that instead. You can often work around it: use WHERE FirstName LIKE 'M%' (an index scan) instead of WHERE SUBSTRING(FirstName, 1, 1) = 'M'; always keep the function away from the column name. And do not build too many or too-large indexes. NOT IN scans the table multiple times; replace it with EXISTS, NOT EXISTS, IN, or LEFT OUTER JOIN (especially in place of a right join), and note that EXISTS is faster than IN; the slowest is the NOT operation. If a column's value can be NULL, its index previously did not work, but the SQL 2000 optimizer can now handle it: IS NULL, NOT, NOT EXISTS, and NOT IN can be optimized, while <> still cannot be optimized and will not use the index.
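The two rewrites above can be sketched as follows; the PersonMember and BlackList tables and their columns are hypothetical:

```sql
-- Non-sargable: the function hides the column from the index.
SELECT * FROM PersonMember WHERE SUBSTRING(FirstName, 1, 1) = 'M';

-- Sargable: a leading-literal LIKE can use an index on FirstName.
SELECT * FROM PersonMember WHERE FirstName LIKE 'M%';

-- NOT IN (repeated scans) rewritten with NOT EXISTS:
SELECT m.*
FROM   PersonMember AS m
WHERE  NOT EXISTS (SELECT 1 FROM BlackList AS b
                   WHERE b.MemberID = m.MemberID);
```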

23. Use Query Analyzer to view the query plan of your SQL statements and evaluate whether they are well optimized. Typically 20% of the code consumes 80% of the resources, and these slow spots are where optimization should focus.

24. If you find that a query using IN, OR, and so on does not use the index, specify the index with an explicit declaration: SELECT * FROM PersonMember (INDEX = IX_Title) WHERE ProcessID IN ('Male', 'Female')

25. Pre-compute results that will be queried and store them in the table, then SELECT them at query time. This was the most important technique before SQL 7.0, for example in hospital billing calculations.

26. MIN() and MAX() can use an appropriate index.

27. One database principle is that code should be as close to the data as possible, so prefer, in order: Default, then Rules and Triggers, then Constraints (constraints such as foreign keys, primary keys, CHECK, UNIQUE, and data-type maximum lengths), then Procedures. This not only keeps maintenance work small, it also produces higher-quality programs that execute faster.

28. If you want to insert a large binary value into an image column, use a stored procedure; never use an inline INSERT (whether this applies to Java I do not know). With an inline INSERT, the application first converts the binary value to a string (doubling its size), and the server, after receiving the characters, converts it back to a binary value. A stored procedure has none of these steps. Method: CREATE PROCEDURE p_insert AS INSERT INTO table (fimage) VALUES (@image); call this stored procedure from the front end and pass the binary parameter, and processing speed improves noticeably.
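The procedure sketched inline above is missing its parameter declaration; a complete version might look like this (the PictureTable name is hypothetical):

```sql
-- Sketch of the stored-procedure approach for inserting binary data.
CREATE PROCEDURE p_insert
    @image IMAGE                        -- binary parameter from the client
AS
    INSERT INTO PictureTable (fimage)   -- hypothetical target table
    VALUES (@image);
GO
```

The client then calls p_insert with a binary parameter, avoiding the binary-to-string-to-binary round trip described above.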

29. BETWEEN is sometimes faster than IN, since BETWEEN can locate a range quickly using the index. The difference is visible in the query optimizer. SELECT * FROM ChineseResume WHERE title IN ('Male', 'Female') and SELECT * FROM ChineseResume WHERE title BETWEEN 'Male' AND 'Female' produce the same result, but because IN compares several times, it is sometimes slower.

30. Create indexes on global or local temporary tables when necessary; this can sometimes improve speed, but not always, because indexes also consume large amounts of resources. They are created the same way as on real tables.

31. Do not open transactions where they serve no purpose, such as when generating reports; it wastes resources. Use transactions only when they are actually needed.

32. A statement with OR can be decomposed into multiple queries connected with UNION. Their speed depends only on whether an index is used; if the query needs a composite index, UNION ALL executes more efficiently. A statement with multiple ORs does not use the index; rewrite it in UNION form and then try to match it to the index. The key question is whether indexes are used.

33. Minimize the use of views; they are relatively inefficient. Operating on a view is slower than operating directly on the tables, and a stored procedure can replace it. In particular, avoid nested views, which make it harder to locate the original data. Consider what a view really is: a pre-optimized piece of SQL whose query plan has already been generated on the server. When retrieving data from a single table, do not use a view that joins multiple tables; read directly from the table, or through a view that contains only that table, otherwise you add unnecessary overhead and interfere with the query. To speed up view queries, MSSQL added indexed views.

34. Do not use DISTINCT and ORDER BY when unnecessary; these operations add extra overhead and can instead be performed on the client. The same applies to UNION versus UNION ALL. For example:

SELECT TOP ... ad.companyname, comid, position, ad.referenceid, worklocation,
       CONVERT(varchar(10), ad.postdate, 120) AS postdate1, workyear, degreedescription
FROM jobcn_query.dbo.COMPANYAD_query ad
WHERE referenceid IN ('jcnad00329667', 'jcnad132168', 'jcnad00337748', 'jcnad00338345',
    'jcnad00333138', 'jcnad00303570', 'jcnad00303569', 'JCNAD00303568', 'jcnad00306698',
    'jcnad00231935', 'jcnad00231933', 'jcnad00254567', 'jcnad00254585', 'jcnad00254608',
    'JCNAD00254607', 'jcnad00258524', 'jcnad00332133', 'jcnad00268618', 'jcnad00279196',
    'jcnad00268613')
ORDER BY postdate DESC

35. In a list of values, put the most frequently occurring values first and the least frequent last, to reduce the number of comparisons.

36. SELECT INTO locks system tables (sysobjects, sysindexes, etc.), blocking access from other connections. When creating a temporary table, use an explicit declaration statement rather than SELECT INTO. For example, run DROP TABLE t_lxh BEGIN TRAN SELECT * INTO t_lxh FROM ChineseResume WHERE name = 'XYZ' --commit, and in another connection run SELECT * FROM sysobjects: you can see that SELECT INTO locks system tables, and CREATE TABLE also locks system tables (whether for temporary or permanent tables). So never do this inside a transaction! If it is a temporary table you use frequently, use a real table or a table variable instead.
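The explicit-declaration alternative suggested above might look like this; the column list is hypothetical, since only the Name filter appears in the original example:

```sql
-- Instead of SELECT INTO (which holds locks on system tables while it
-- creates the target), declare the table first, then fill it.
CREATE TABLE #t_lxh
( Name  NVARCHAR(50),   -- hypothetical columns
  Title NVARCHAR(50) );

INSERT INTO #t_lxh (Name, Title)
SELECT Name, Title
FROM   ChineseResume
WHERE  Name = 'XYZ';
```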

37. Redundant rows can generally be eliminated before GROUP BY and HAVING are applied, so try not to rely on them for that work. The optimal order of execution is: the SELECT's WHERE clause selects all the applicable rows, GROUP BY groups the rows for statistics, and HAVING removes the redundant groups. That way GROUP BY and HAVING cost little and the query is fast. Grouping large row sets and applying HAVING is very resource-intensive. If the purpose of the GROUP BY does not include computation, just grouping, then DISTINCT is faster.

38. Updating multiple records with one statement is faster than updating them one at a time; in other words, batch processing is better.

39. Use temporary tables sparingly; where possible, replace them with result sets and table variables. Table variables are better than temporary tables.

40. Under SQL 2000, computed columns can be indexed if the following conditions are met:

a. The computed column's expression is deterministic.
b. The Text, ntext, and image data types are not used.
c. The following options must be set: ANSI_NULLS = ON, ANSI_PADDING = ON, ....
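A minimal sketch of an indexed computed column satisfying the conditions above; the Orders table and its columns are hypothetical, and the required SET options are assumed to be ON for the session:

```sql
-- Computed column with a deterministic expression, then an index on it.
CREATE TABLE Orders
( OrderID INT PRIMARY KEY,
  Price   MONEY,
  Qty     INT,
  Total AS Price * Qty );   -- deterministic, precise expression

CREATE INDEX IX_Orders_Total ON Orders (Total);
```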

41. Try to put data processing work on the server to reduce network overhead, for example by using stored procedures. Stored procedures are SQL statements that have been compiled, optimized, organized into an execution plan, and stored in the database, and they are a body of control-flow language, so of course they run fast. For dynamic SQL that is executed repeatedly, you can use temporary stored procedures, which (like temporary tables) are placed in tempdb. In the past, because SQL Server did not support complex math, such work was forced into other tiers, increasing network overhead. SQL 2000 supports UDFs and now handles complex mathematical calculations, but a function's return value should not be too large, as that is expensive. A user-defined function executed like a cursor consumes a large amount of resources; if a large result must be returned, use a stored procedure.

42. Do not call the same function repeatedly in one statement; it wastes resources. Putting the result in a variable and then referencing it is faster.

43. SELECT COUNT(*) is relatively inefficient; try to adapt how you write it, and note that EXISTS is fast. Also note the difference: the return values of SELECT COUNT(nullable_field) FROM Table and SELECT COUNT(non_null_field) FROM Table are different.

44. When server memory is plentiful, set the number of threads = maximum number of connections + 5 for best efficiency; otherwise, use number of threads < maximum number of connections, enabling SQL Server's thread pooling, because if the number = maximum connections + 5, server performance is severely damaged.

45. Access your tables in a fixed order. If you lock table A and then table B, lock them in that order in all stored procedures. If a stored procedure (inadvertently) locks table B first and then table A, this can cause a deadlock. Deadlocks are hard to find if the locking order is not designed in advance.

46. Monitor the load on your hardware through SQL Server Performance Monitor. Memory: the Page Faults/sec counter. If this value is occasionally high, threads were competing for memory at that moment; if it stays high, memory may be the bottleneck. Processor:

1. % DPC Time is the percentage of processor time spent receiving and servicing deferred procedure calls (DPCs) during the sample interval. (DPCs run at a lower priority than standard interrupts.) Because DPCs execute in privileged mode, the DPC time percentage is part of the privileged time percentage. These times are counted separately and are not part of the interrupt-count total. This total shows average busy time as a percentage of instance time.
2. % Processor Time counter: if this value continuously exceeds 95%, the bottleneck is the CPU. Consider adding a processor or replacing it with a faster one.
3. % Privileged Time is the percentage of non-idle processor time spent in privileged mode. (Privileged mode is a processing mode designed for operating system components and hardware drivers; it allows direct access to hardware and all memory. The other mode is user mode, a restricted processing mode designed for applications, environment subsystems, and integral subsystems. The operating system switches application threads into privileged mode to access operating system services.) % Privileged Time includes time spent servicing interrupts and DPCs. A high privileged time ratio may be caused by a failing device generating a large number of interrupts. This counter displays average busy time as a fraction of sample time.
4. % User Time represents CPU-intensive database operations such as sorting and executing aggregate functions. If this value is high, consider adding indexes, using simpler table joins, or horizontally splitting large tables to lower it. Physical Disk: the Current Disk Queue Length counter; this value should not exceed 1.5 to 2 times the number of disks. To improve performance, add disks. SQLServer: the Cache Hit Ratio counter; the higher the better. If it stays below 80%, consider adding memory. Note that this value accumulates from the moment SQL Server starts, so after some time it no longer reflects the current state of the system.

47. Analyze SELECT emp_name FROM employee WHERE salary > 3000. If salary is of type float in this statement, the optimizer turns the condition into CONVERT(float, 3000); since 3000 is an integer, we should write 3000.0 in our code rather than leave the conversion to the DBMS at run time. The same applies to conversions between character and integer data.

I. Operator optimization
1. The IN operator
SQL written with IN has the advantages of being easy to write and easy to understand, which suits the modern software development style. But SQL using IN always performs worse; when parsing SQL with IN, Oracle takes the following extra step compared to SQL without IN:
Oracle attempts to convert the IN into a join against multiple tables. If the conversion fails, it executes the inner subquery first and then queries the outer table; if the conversion succeeds, it directly uses a multi-table join. This shows that SQL with IN goes through at least one extra conversion step. Most SQL converts successfully, but SQL containing grouped statistics and the like cannot be converted.
Recommendation: in business-intensive SQL, try not to use the IN operator; use the EXISTS scheme instead.
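The IN-to-EXISTS rewrite recommended above can be sketched with the tables used in the examples below; the assumption that hbs_bh is the joining column is mine:

```sql
-- IN form (may force Oracle through the extra conversion step):
SELECT * FROM zl_yhjbqk
WHERE  hbs_bh IN (SELECT hbs_bh FROM ls_jg_dfys);

-- Equivalent EXISTS form, as recommended:
SELECT * FROM zl_yhjbqk a
WHERE  EXISTS (SELECT 1 FROM ls_jg_dfys b
               WHERE  b.hbs_bh = a.hbs_bh);
```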
2. The NOT IN operator
This operation is strongly discouraged, because it cannot use the table's indexes.
Recommendation: replace it with NOT EXISTS.
3. IS NULL and IS NOT NULL (testing whether a field is empty)
Testing whether a field is null generally does not use an index, because indexes do not index NULL values.

Recommendation: replace it with another operation of the same meaning, for example changing A IS NOT NULL to A > 0 or A > '', etc. Do not allow the field to be null; use a default value in place of NULL. For instance, if the status field of an application form must not be empty, default it to 'applied'.
4. The > and < operators (greater than and less than)
Greater-than and less-than operators generally need no adjustment, since when an index exists they will use it. In some cases, though, they can be optimized. For example, suppose a table has 1,000,000 rows with a numeric field A: 300,000 rows have A=0, 300,000 have A=1, 390,000 have A=2, and 10,000 have A=3. Then A>2 and A>=3 perform very differently: for A>2, Oracle first finds the index entries for 2 and then compares, while for A>=3 Oracle seeks directly to the index entries for 3.
5. The LIKE operator
The LIKE operator supports wildcard queries, and wildcard combinations can express almost any query, but used poorly they cause performance problems: LIKE '%5400%' cannot use the index, while LIKE 'X5400%' can use a range index.
A practical example: querying the business number in the YW_YHJBQK table by the user identification number embedded in it, YY_BH LIKE '%5400%', causes a full table scan. If it is changed to YY_BH LIKE 'X5400%' OR YY_BH LIKE 'B5400%', the two range queries use the YY_BH index and performance certainly improves greatly.
6. The UNION operator
UNION filters out duplicate records after joining the result sets: it sorts the combined results after the tables are connected, deletes the duplicate records, and returns the result. Most real applications produce no duplicates, the most common case being the union of a current table and a history table. For example:
SELECT * FROM ***fys
UNION
SELECT * FROM ls_jg_dfys
At run time, this SQL fetches the results of both tables, sorts them in sort space to remove the duplicate records, and finally returns the result set; if the tables are large, the sort may spill to disk.
Recommendation: use the UNION ALL operator instead of UNION, because UNION ALL simply merges the two results and returns them:
SELECT * FROM ***fys
UNION ALL
SELECT * FROM ls_jg_dfys
II. The impact of how the SQL is written
1. Different ways of writing SQL with the same function and the same performance.
For example, programmer A writes SELECT * FROM zl_yhjbqk
Programmer B writes SELECT * FROM dlyx.zl_yhjbqk (prefixed with the table owner)
Programmer C writes SELECT * FROM DLYX.ZLYHJBQK (uppercase table name)
Programmer D writes SELECT * FROM DLYX.ZLYHJBQK (with extra spaces in the middle)
The results and execution time of the four SQL statements above are the same after Oracle analyzes them, but seen from Oracle's shared memory (the SGA), Oracle analyzes each of them separately and consumes shared memory for each. If the SQL strings and formatting are written exactly the same, Oracle analyzes only once and keeps only one analysis result in shared memory. This not only reduces the time spent analyzing SQL, it also reduces duplicated information in shared memory, and it lets Oracle accurately count how often the SQL is executed.
2. The effect of the order of conditions after WHERE
The order of the conditions after the WHERE clause has a direct effect on queries over large tables. For example:
Select * from zl_yhjbqk where dy_dj = ' 1KV or less ' and xh_bz=1
Select * from Zl_yhjbqk where xh_bz=1 and dy_dj = ' 1KV or less '
In both SQL statements above, the DY_DJ (voltage level) and XH_BZ (account-closure flag) fields are not indexed, so both execute a full table scan. In the first SQL, the condition dy_dj = '1KV or less' matches 99% of the record set, while xh_bz=1 matches only 0.5%. The first SQL compares both dy_dj and xh_bz on 99% of the records, whereas the second compares both on only 0.5% of the records, so the second SQL's CPU utilization is clearly lower than the first's.
3. The effect of the order of queried tables
The order of the tables listed after FROM affects SQL performance. When there are no indexes and Oracle has not gathered statistics on the tables, Oracle joins them in the order in which they appear, so a poor table order produces SQL that wastes server resources. (Note: if the tables have been analyzed, Oracle automatically joins the small table first and then the large table.)
III. Index usage in SQL statements
1. Operator optimization (see above)
2. Some optimizations of condition fields
Fields processed by a function cannot use the index. For example:
substr(hbs_bh,1,4) = '5400'; optimized: hbs_bh like '5400%'
trunc(sk_rq) = trunc(sysdate); optimized: sk_rq >= trunc(sysdate) and sk_rq < trunc(sysdate+1)
Fields subjected to explicit or implicit operations cannot use the index. For example:
ss_df + 20 > 50; optimized: ss_df > 30
'X' || hbs_bh > 'X5400021452'; optimized: hbs_bh > '5400021452'
sk_rq + 5 = sysdate; optimized: sk_rq = sysdate - 5
hbs_bh = 5401002554; optimized: hbs_bh = '5401002554'. Note: this condition performs an implicit to_number conversion on hbs_bh, because the hbs_bh field is of character type.
Conditions that combine fields from multiple tables cannot use the index. For example: ys_df > cx_df cannot be optimized.
qc_bh || kh_bh = '5400250000'; optimized: qc_bh = '5400' and kh_bh = '250000'
IV. Other
Oracle's hints are a relatively powerful and complex feature, but hints are only a suggestion to Oracle, and for cost reasons Oracle may not follow them. Based on practical experience, developers are generally advised not to use Oracle hints, because database and server performance differ from site to site: a hint may improve performance in one place and degrade it in another. Oracle's analysis of SQL execution is already quite mature; if a statement's execution path is not what you expect, first analyze the database structure (mainly the indexes), the server's current state (shared memory, disk file fragmentation), and whether the statistics on the database objects (tables, indexes) are correct.
