MS SQL Server Query Optimization (Part 1). There are many reasons for slow queries; the most common are:




1. No index, or the index is not used (this is the most common cause of slow queries and a defect of program design).
2. Low I/O throughput, creating a bottleneck.
3. No computed columns were created, so the query could not be optimized.
4. Insufficient memory.
5. Slow network speed.
6. The amount of data returned by the query is too large (use multiple queries or other methods to reduce it).
7. Locking or deadlock (this is also a very common cause of slow queries and a defect of program design).
8. Readers and writers competing for resources (use sp_lock and sp_who to view active users and locks).
9. Unnecessary rows and columns are returned.
10. The query statement is poorly written and not optimized.

You can improve query performance in the following ways:


1. Place data, log, and index files on different I/O devices to increase read speed (in the past tempdb should be placed on RAID 0, which SQL 2000 no longer supports). The larger the amount of data, the more important I/O becomes.
2. Partition tables vertically and horizontally to reduce table size (see sp_spaceused).
3. Upgrade hardware.
4. Build indexes according to the query conditions, optimize the indexes and access paths, and limit the size of the result set. Note that the fill factor should be appropriate (the default value of 0 is usually best). Indexes should be as small as possible: a column with few bytes makes a better index key (see the guidelines for creating indexes), and do not build a single index on a column with few distinct values, such as a gender field.
5. Increase network speed.
6. Expand the server's memory; Windows 2000 and SQL Server 2000 can support 4 GB to 8 GB of memory.

Configure virtual memory: the virtual memory size should be set according to the services running concurrently on the computer. When running Microsoft SQL Server 2000, consider setting virtual memory to 1.5 times the physical memory installed on the computer. If you have installed the Full-Text Search feature and intend to run the Microsoft Search service for full-text indexing and querying, consider setting virtual memory to at least 3 times physical memory, and configure the SQL Server max server memory option to 1.5 times physical memory (half of the virtual-memory setting).
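The memory setting above can be applied with sp_configure. A minimal sketch, assuming a machine with 8 GB of physical RAM, so that the text's 1.5x guideline gives 12288 MB (the value is illustrative only):

```sql
-- 'max server memory' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap SQL Server's memory at 1.5x physical memory per the guideline (value in MB).
EXEC sp_configure 'max server memory', 12288;
RECONFIGURE;
```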
7. Increase the number of server CPUs; but understand that parallel processing requires more resources, such as memory, than serial processing. Whether to use a parallel or a serial plan is evaluated automatically by MS SQL, which parallelizes a single task by breaking it into multiple subtasks that can run on different processors. For operations such as sorting, joining, scanning, and GROUP BY, SQL Server determines the optimal degree of parallelism based on the system load; complex queries that consume a large amount of CPU are best suited to parallel processing. However, UPDATE, INSERT, and DELETE operations cannot be processed in parallel.
8. If you query with LIKE, a plain index may not help; full-text indexing does, at the cost of space. LIKE 'a%' uses the index; LIKE '%a' does not; and for LIKE '%a%' the query time is proportional to the total length of the field values, so do not use the char type, use varchar instead. Build a full-text index for fields with long values.
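The leading-wildcard rule can be seen directly in the query plans. A minimal sketch, assuming a hypothetical Members table with an index on LastName:

```sql
-- Prefix pattern: the optimizer can seek the index on LastName.
SELECT MemberID FROM Members WHERE LastName LIKE 'Sm%';

-- Leading wildcard: every value must be examined, forcing a scan.
SELECT MemberID FROM Members WHERE LastName LIKE '%son';
```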
9. Separate the database server from the application server; separate OLTP from OLAP.
10. Use distributed partitioned views to implement a federation of database servers.

A federation is a group of separately managed servers that cooperate to share the system's processing load. This mechanism of forming a federation of database servers through partitioned data can scale a group of servers to support the processing needs of a large, multi-tier Web site. For more information, see "Designing Federated Database Servers" (see the SQL Server Help topic "partitioned view").
a. Before implementing a partitioned view, you must first partition the table horizontally.
b. After the member tables are created, define a distributed partitioned view on each member server, giving every view the same name. Queries that reference the view name can then run on any member server. The system behaves as if each member server held a copy of the original table, but in fact each server has only one member table and one distributed partitioned view. The location of the data is transparent to the application.
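The two steps above can be sketched as follows, with hypothetical server, database, and table names. Each member table carries a CHECK constraint on the partitioning key so the optimizer can route queries to the right server:

```sql
-- On Server1: member table for the first key range.
CREATE TABLE Customers_33 (
    CustomerID   INT PRIMARY KEY
                 CHECK (CustomerID BETWEEN 1 AND 32999),
    CustomerName VARCHAR(50) NOT NULL
);
-- Server2 holds Customers_66 (33000-65999), Server3 holds Customers_99, etc.

-- On every member server, define the same view over linked-server names:
CREATE VIEW Customers AS
    SELECT * FROM Server1.SalesDB.dbo.Customers_33
    UNION ALL
    SELECT * FROM Server2.SalesDB.dbo.Customers_66
    UNION ALL
    SELECT * FROM Server3.SalesDB.dbo.Customers_99;
```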


11. Rebuild indexes with DBCC DBREINDEX or DBCC INDEXDEFRAG; shrink the data and log files with DBCC SHRINKDATABASE and DBCC SHRINKFILE. Set the log to shrink automatically. For large databases, do not enable automatic database growth, as it can degrade server performance.
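A minimal maintenance sketch for the commands just listed (database, table, and file names are hypothetical):

```sql
DBCC DBREINDEX ('dbo.Orders');               -- rebuild all indexes on a table
DBCC INDEXDEFRAG (SalesDB, 'dbo.Orders', 1); -- online defrag of index id 1
DBCC SHRINKDATABASE (SalesDB, 10);           -- shrink, leaving 10% free space
DBCC SHRINKFILE (SalesDB_Log, 100);          -- shrink the log file to 100 MB
```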

There are many points to pay attention to in T-SQL; here is a list of common ones. First, the DBMS processes a query plan as follows:
1. Lexical and syntactic checking of the query statement
2. Submitting the statement to the DBMS's query optimizer
3. Algebraic optimization and access-path optimization
4. Generating the query plan from the precompiled module
5. Submitting it to the system for execution at the appropriate time
6. Finally, returning the execution result to the user

Second, look at the structure of SQL Server data storage: a page is 8 KB (8,060 bytes usable), eight pages make up an extent, and data is stored as B-trees.


12. The difference between COMMIT and ROLLBACK: ROLLBACK rolls back the whole transaction; COMMIT commits the current transaction. There is no need to write transactions inside dynamic SQL; if you must, put them on the outside, for example BEGIN TRAN EXEC (@s) COMMIT TRAN, or write the dynamic SQL as a function or stored procedure.
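The "transaction on the outside" pattern can be sketched like this (table and column names hypothetical):

```sql
DECLARE @s NVARCHAR(4000);
SET @s = N'UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1';

-- The transaction wraps the EXEC; nothing transactional inside the dynamic string.
BEGIN TRAN;
EXEC (@s);
COMMIT TRAN;
```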
13. In a SELECT statement, use a WHERE clause to limit the number of rows returned and avoid a table scan. Returning unnecessary data wastes the server's I/O resources and increases network load, reducing performance. If the table is large, a table scan locks the table and blocks other connections from accessing it, with serious consequences.
14. SQL comments have no effect on execution.
15. Avoid cursors wherever possible; they consume a great deal of resources. If row-by-row processing is necessary, prefer non-cursor techniques: loop on the client, use temporary tables, table variables, subqueries, CASE expressions, and so on. Cursors can be categorized by the fetch options they support:

Forward-only: rows must be fetched in order from the first row to the last. FETCH NEXT is the only fetch operation allowed, and is the default.
Scrollable: any row can be fetched at any position in the cursor.

Cursor technology became quite powerful under SQL 2000; its purpose is to support loops. There are four concurrency options:

READ_ONLY: positioned updates through the cursor are not allowed, and no locks are held on the rows that make up the result set.

OPTIMISTIC WITH VALUES: optimistic concurrency control is a standard part of transaction-control theory. It is used when there is only a small chance that a second user will update a row in the interval between opening the cursor and updating the row. When a cursor is opened with this option, no locks control its rows, which helps maximize throughput. If the user attempts to modify a row, the row's current values are compared with the values obtained when the row was last fetched. If any value has changed, the server knows that someone else has updated the row and returns an error; if the values are the same, the server performs the modification.

OPTIMISTIC WITH ROW VERSIONING: this optimistic concurrency option is based on row versioning. With row versioning, the table must have a version identifier that the server can use to determine whether the row has changed since it was read into the cursor. In SQL Server this capability is provided by the timestamp data type, a binary number that indicates the relative order of changes in the database. Each database has a global current timestamp value, @@DBTS. Each time a row with a timestamp column is changed in any way, SQL Server stores the current @@DBTS value in the timestamp column and then increments @@DBTS. If a table has a timestamp column, timestamps are recorded at the row level, so the server can compare a row's current timestamp value with the timestamp value stored at the last fetch to determine whether the row has been updated. The server does not have to compare the values of all columns, only the timestamp column. If an application requires optimistic concurrency based on row versioning for a table that has no timestamp column, the cursor falls back to optimistic concurrency control based on values.

SCROLL_LOCKS: this option implements pessimistic concurrency control, in which the application attempts to lock database rows as they are read into the cursor result set. With a server cursor, an update lock is placed on each row as it is read into the cursor. If the cursor is opened within a transaction, the transaction's update lock is held until the transaction is committed or rolled back, and the cursor lock is dropped when the next row is fetched; if the cursor is opened outside a transaction, the lock is dropped when the next row is fetched. Therefore, whenever full pessimistic concurrency control is needed, the cursor should be opened within a transaction. An update lock prevents any other task from acquiring an update or exclusive lock, which prevents other tasks from updating the row; it does not block shared locks, however, so it does not prevent other tasks from reading the row unless the second task also requests a read with an update lock.

Scroll locks: these cursors acquire scroll locks based on the lock hints specified in the SELECT statement that defines the cursor. A scroll lock is acquired on each row during a fetch and held until the next fetch or until the cursor is closed, whichever comes first. At the next fetch, the server acquires scroll locks for the newly fetched rows and releases the scroll locks on the rows from the previous fetch. Scroll locks are independent of transaction locks and can persist across a commit or rollback operation. If the option to close cursors at commit is off, a COMMIT statement does not close open cursors, and scroll locks persist past the commit to maintain the isolation of the fetched data. The type of scroll lock acquired depends on the cursor's concurrency option and the lock hint in the cursor's SELECT statement:

Lock hint   Read-only   Optimistic with values   Optimistic with row versioning   Scroll locks
(none)      Unlocked    Unlocked                 Unlocked                         Update
NOLOCK      Unlocked    Unlocked                 Unlocked                         Unlocked
HOLDLOCK    Shared      Shared                   Shared                           Update
UPDLOCK     Error       Update                   Update                           Update
TABLOCKX    Error       Unlocked                 Unlocked                         Update
Other       Unlocked    Unlocked                 Unlocked                         Update

*Specifying the NOLOCK hint makes the table it is specified on read-only within the cursor.
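As an example of replacing a cursor with a non-cursor technique, a row-by-row status update can usually be collapsed into a single set-based UPDATE with a CASE expression (table and column names hypothetical):

```sql
-- Instead of opening a cursor over dbo.Orders and updating each row in turn:
UPDATE dbo.Orders
SET    Status = CASE
                    WHEN ShippedDate IS NULL THEN 'Pending'
                    ELSE 'Shipped'
                END;
```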


16. Use Profiler to trace queries and obtain the time they take, to find problem SQL; use the Index Tuning Wizard to optimize indexes.
17. Note the difference between UNION and UNION ALL. UNION ALL is faster because it does not remove duplicates.
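The difference is easy to see side by side (table names hypothetical): UNION must sort or hash the combined result to remove duplicates, while UNION ALL simply concatenates the two result sets:

```sql
-- Removes duplicate cities: requires an extra duplicate-elimination step.
SELECT City FROM dbo.Customers
UNION
SELECT City FROM dbo.Suppliers;

-- Keeps duplicates: no extra step, so it is faster when duplicates are acceptable.
SELECT City FROM dbo.Customers
UNION ALL
SELECT City FROM dbo.Suppliers;
```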
18. Be careful with DISTINCT. Do not use it when it is not needed; like UNION, it slows the query down. If duplicate records are not a problem for the query, leave it out.
19. Do not return rows or columns the query does not need.
20. Use sp_configure 'query governor cost limit' or SET QUERY_GOVERNOR_COST_LIMIT to limit the resources a query may consume. When the estimated cost of a query exceeds the limit, the server cancels the query before it runs. SET LOCK_TIMEOUT sets how long to wait for a lock.
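A minimal sketch of both session-level settings (the limit values are illustrative only):

```sql
-- Refuse to run any query whose estimated cost exceeds 30 cost units.
SET QUERY_GOVERNOR_COST_LIMIT 30;

-- Give up waiting for a lock after 5 seconds (value in milliseconds).
SET LOCK_TIMEOUT 5000;
```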
21. Use SELECT TOP 100 / TOP 10 PERCENT to limit the number of rows returned to the user, or SET ROWCOUNT to limit the number of rows an operation affects.
22. Before SQL 2000, avoid the following in WHERE clauses: "IS NULL", "<>", "!=", "!>", "!<", "NOT", "NOT EXISTS", "NOT IN", "NOT LIKE", and "LIKE '%500'", because they cannot use an index and force a full table scan. Also, do not apply a function to a column name in the WHERE clause, such as CONVERT or SUBSTRING; if a function is required, create a computed column and index it instead. You can sometimes work around the problem: WHERE SUBSTRING(firstname,1,1) = 'm' can be changed to WHERE firstname LIKE 'm%' (which can use the index); always keep the function away from the column name. Also, do not build too many or too large indexes. NOT IN scans the table multiple times; replace it with EXISTS, NOT EXISTS, IN, or LEFT OUTER JOIN (especially the left join), and EXISTS is faster than IN; slowest of all is the NOT operation. If a column's value can be NULL, its index previously did not work, but the SQL 2000 optimizer can now handle this: IS NULL, NOT, NOT EXISTS, and NOT IN can now be optimized, while "<>" still cannot use an index.
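Two of the rewrites above, sketched with hypothetical table and column names:

```sql
-- 1. Keep functions off the indexed column.
--    Not sargable (the function on the column forces a scan):
SELECT * FROM dbo.People WHERE SUBSTRING(FirstName, 1, 1) = 'm';
--    Sargable equivalent (the prefix LIKE can use the index):
SELECT * FROM dbo.People WHERE FirstName LIKE 'm%';

-- 2. Replace NOT IN with NOT EXISTS:
SELECT c.CustomerID
FROM   dbo.Customers c
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Orders o
                   WHERE  o.CustomerID = c.CustomerID);
```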
23. Use Query Analyzer to view the query plan of a SQL statement and evaluate whether it is well optimized. On average, 20% of the code consumes 80% of the resources, and these slow spots are the focus of optimization.
24. If you use IN or OR and find that the query does not use an index, specify the index explicitly: SELECT * FROM PersonMember (INDEX = IX_Title) WHERE ProcessID IN ('male', 'female').
25. Pre-compute results that will be queried and store them in a table, then SELECT from it at query time. This was the most important technique before SQL 7.0, for example for calculating hospital inpatient charges.
26. MIN() and MAX() can use an appropriate index.
27. A database principle is that the closer the code is to the data, the better. Therefore prefer DEFAULT first, then RULES, TRIGGERS, and CONSTRAINT (constraints such as primary keys, foreign keys, CHECK, UNIQUE, and the maximum length of a data type are all constraints), and finally PROCEDURE. This not only minimizes maintenance work and produces higher-quality programs, it also executes faster.
28. If you need to insert a large binary value into an image column, use a stored procedure; do not use an inline INSERT (it is unclear whether this applies to Java). With an inline INSERT, the application first converts the binary value to a character string (doubling its size), and the server then converts the string back to binary. A stored procedure avoids these conversions. Method: CREATE PROCEDURE p_insert AS INSERT INTO table (fimage) VALUES (@image); call this stored procedure from the front end and pass the binary parameter, which significantly improves processing speed.
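The stored-procedure method in point 28 can be sketched as follows (table, column, and procedure names are hypothetical):

```sql
CREATE PROCEDURE p_insert_image
    @image IMAGE
AS
    -- The binary parameter arrives as-is; no string conversion round-trip.
    INSERT INTO dbo.Pictures (fimage) VALUES (@image);
GO
```

The client then calls p_insert_image with a binary parameter instead of building an inline INSERT string.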


