There are many reasons for slow query speed. The following are the most common causes:
1. No index, or the index is not used (this is the most common cause of slow queries and is a programming defect).
2. Low I/O throughput, creating a bottleneck.
3. The query cannot be optimized because no computed column was created.
4. Insufficient memory.
5. Slow network speed.
6. The query returns too much data (use multiple queries or other methods to reduce the data volume).
7. Locks or deadlocks (this is also a very common cause of slow queries and a programming defect).
8. Contention for read/write resources; use sp_lock and sp_who to view the active users involved.
9. Unnecessary rows and columns are returned.
10. The query statement itself is poorly written and not optimized.

You can optimize queries as follows:
1. Place data, logs, and indexes on different I/O devices to increase read speed. (Previously, tempdb could be placed on RAID 0; SQL 2000 no longer supports this.) The larger the data volume, the more important it is to increase I/O capacity.
2. Split tables vertically and horizontally to reduce table size (see sp_spaceused).
3. Upgrade hardware.
4. Build indexes based on the query conditions, optimize the indexes and access methods, and limit the size of the result set. Note that the fill factor should be appropriate (the default value of 0 is usually best) and the index should be as small as possible: use columns with a small number of bytes as index keys (see the guidelines on creating indexes). Do not build a single-column index on a field with only a few distinct values, such as a gender field.
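As an illustration of point 4, here is a minimal sketch that builds a narrow composite index; the table and column names (Orders, CustomerID, OrderDate) are assumptions for the example:

    -- Narrow composite index matching a typical predicate such as
    -- WHERE CustomerID = @id AND OrderDate >= @since
    CREATE INDEX IX_Orders_Customer_Date
        ON Orders (CustomerID, OrderDate)
        WITH FILLFACTOR = 90;   -- explicit fill factor shown for illustration only;
                                -- the default (0) is usually fine, per the note above

A column with only two or three distinct values (such as gender) is a poor candidate for its own index, because the optimizer will usually choose a table scan anyway.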
5. Improve network speed;
6. Expand the server memory. Windows 2000 and SQL Server 2000 support 4-8 GB of memory. Configure virtual memory: the virtual memory size should be based on the services running concurrently on the computer. When running Microsoft SQL Server 2000, consider setting virtual memory to 1.5 times the physical memory installed on the computer. If you have installed the full-text search feature and intend to run the Microsoft Search service for full-text indexing and queries, consider setting virtual memory to at least 3 times the installed physical memory. Set the SQL Server max server memory configuration option to 1.5 times physical memory (half of the virtual memory setting).
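A minimal sketch of the max server memory setting described above, assuming a machine with 4 GB of physical RAM (so the cap works out to 6144 MB under the 1.5x guideline):

    -- Make the advanced option visible, then cap SQL Server's memory use
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 6144;   -- 1.5 x 4096 MB physical RAM
    RECONFIGURE;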
7. Increase the number of server CPUs. However, understand that parallel processing requires more resources, such as memory, than serial processing. Whether to use a parallel or a serial plan is evaluated and chosen automatically by MSSQL. A single task can be split into multiple subtasks that run on different processors; for example, the sort, join, scan, and GROUP BY phases of a query can be executed simultaneously. SQL SERVER determines the optimal degree of parallelism based on the system load; complex queries that consume a large amount of CPU benefit most from parallel processing. Note, however, that UPDATE, INSERT, and DELETE operations cannot be processed in parallel.
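Although SQL Server chooses the degree of parallelism automatically, it can also be capped for a single query with the MAXDOP hint; a small, hedged illustration (the SalesDetail table is an assumed example):

    -- Limit this (assumed) CPU-heavy aggregate to at most 4 parallel workers
    SELECT ProductID, SUM(Quantity) AS TotalQty
    FROM SalesDetail
    GROUP BY ProductID
    OPTION (MAXDOP 4);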
8. If you use LIKE in a query, an ordinary index cannot always be used, and a full-text index consumes space. LIKE 'a%' uses the index; LIKE '%a' does not; and with LIKE '%a%' the query time is proportional to the total length of the field values, so use VARCHAR rather than CHAR for such columns. Create a full-text index for fields with long values.
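A small sketch contrasting the two patterns (Customers.LastName is an assumed example column with an index on it):

    -- Can use an index on LastName: the leading characters are fixed
    SELECT CustomerID FROM Customers WHERE LastName LIKE 'Sm%';

    -- Forces a scan: the leading wildcard defeats the index
    SELECT CustomerID FROM Customers WHERE LastName LIKE '%son';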
9. Separate the DB server from the application server; separate OLTP from OLAP.
10. Distributed partitioned views can be used to implement a federation of database servers. A federation is a group of separately managed servers that cooperate to share the processing load of the system. This mechanism of forming a database server federation through partitioned data can scale out a group of servers to support the processing needs of large, multi-tier Web sites. For more information, see Designing Federated Database Servers. (See 'partitioned view' in the SQL Server help file.)
A. Before implementing a partitioned view, you must first partition the table horizontally.
B. After creating the member tables, define a distributed partitioned view on each member server, with every view having the same name. Queries that reference the view name can then run on any member server. The system behaves as if each member server held a copy of the original table, but in fact each server has only one member table and one distributed partitioned view. The location of the data is transparent to the application.
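A minimal local sketch of the idea, using an assumed Customers table partitioned on ranges of CustomerID (a true distributed partitioned view would instead reference member tables on the linked member servers):

    -- Member tables whose CHECK constraints define the partition ranges
    CREATE TABLE Customers_1
        (CustomerID INT PRIMARY KEY CHECK (CustomerID BETWEEN 1 AND 99999),
         Name VARCHAR(100));
    CREATE TABLE Customers_2
        (CustomerID INT PRIMARY KEY CHECK (CustomerID BETWEEN 100000 AND 199999),
         Name VARCHAR(100));

    -- The partitioned view unions the member tables; the optimizer uses the
    -- CHECK constraints to touch only the member table that can hold the rows
    CREATE VIEW Customers_All AS
        SELECT CustomerID, Name FROM Customers_1
        UNION ALL
        SELECT CustomerID, Name FROM Customers_2;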
11. Rebuild indexes with DBCC DBREINDEX or DBCC INDEXDEFRAG; shrink the data and log files with DBCC SHRINKDATABASE and DBCC SHRINKFILE; set automatic log shrinking (a sketch of these commands follows the query-plan steps below). For large databases, do not enable automatic database growth, as it reduces server performance. How T-SQL is written matters a great deal. The following lists the common points. First, the process by which the DBMS handles a query plan is as follows:
1. Lexical and syntax checking of the query statement.
2. The statement is submitted to the DBMS query optimizer.
3. The optimizer performs algebraic optimization and access-path optimization.
4. The precompilation module generates a query plan.
5. The plan is submitted to the system and executed at an appropriate time.
6. Finally, the execution result is returned to the user.
Next, look at the data storage structure of SQL SERVER: a page is 8 KB (8060 bytes usable), eight pages form an extent, and data is stored in B-tree form.
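As promised under point 11, a minimal sketch of the maintenance commands; MyDB, MyDB_Log, Orders, and the index name are assumed examples:

    -- Rebuild all indexes on a table with a 90 percent fill factor
    DBCC DBREINDEX ('Orders', '', 90);
    -- Defragment one index in place (lighter-weight than a full rebuild)
    DBCC INDEXDEFRAG (MyDB, Orders, IX_Orders_Customer_Date);
    -- Shrink the database to leave 10 percent free space, or shrink one file to 100 MB
    DBCC SHRINKDATABASE (MyDB, 10);
    DBCC SHRINKFILE (MyDB_Log, 100);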
12. The difference between COMMIT and ROLLBACK: ROLLBACK rolls back the entire transaction; COMMIT commits the current transaction. There is no need to write transactions inside dynamic SQL; if a transaction is required, write it outside the dynamic SQL, for example BEGIN TRAN EXEC(@s) COMMIT TRAN, or move the dynamic SQL into a function or stored procedure.
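A minimal sketch of wrapping dynamic SQL in an outer transaction; the statement placed in @s is an assumed example:

    DECLARE @s NVARCHAR(400);
    SET @s = N'UPDATE Orders SET Status = 1 WHERE OrderID = 42';  -- assumed example statement
    BEGIN TRAN;
    EXEC (@s);        -- the dynamic SQL runs inside the outer transaction
    COMMIT TRAN;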
13. Use a WHERE clause in SELECT statements to limit the number of rows returned and avoid table scans. Returning unnecessary data wastes the server's I/O resources, increases the network load, and reduces performance. If the table is large, it is locked during the table scan and other connections are blocked from accessing it, which has serious consequences.
14. SQL statement comments have no impact on execution
15. Try not to use cursors; they consume large amounts of resources. If row-by-row processing is genuinely needed, try non-cursor techniques instead: loops on the client, temporary tables, table variables, subqueries, and CASE expressions. Cursors can be classified by the fetch options they support:
Forward-only: rows must be fetched in order from the first row to the last. FETCH NEXT is the only allowed fetch operation and is the default.
Scrollable: rows can be fetched at random from anywhere in the cursor.
Cursor technology became quite powerful in SQL 2000, and its purpose is to support loops. There are four concurrency options:
READ_ONLY: positioned updates through the cursor are not allowed, and no locks are held on the rows that make up the result set.
OPTIMISTIC WITH VALUES: optimistic concurrency control is a standard part of transaction-control theory. It is used when there is only a small chance that a second user will update a row in the interval between opening the cursor and updating the row. When a cursor is opened with this option, no locks are held on the underlying rows, which helps maximize throughput. If the user tries to modify a row, the current values of the row are compared with the values fetched when the row was last read. If any value has changed, the server knows that someone else has already updated the row and returns an error. If the values are the same, the server performs the modification.
OPTIMISTIC WITH ROW VERSIONING: this optimistic concurrency option is based on row versioning. With row versioning, the table must have a version identifier that the server can use to determine whether the row has changed since it was read into the cursor. In SQL Server this capability is provided by the timestamp data type, a binary number that indicates the relative sequence of modifications in the database. Each database has a global current timestamp value, @@DBTS. Each time a row with a timestamp column is changed in any way, SQL Server stores the current @@DBTS value in the timestamp column and then increments @@DBTS. If a table has a timestamp column, the timestamps are recorded at the row level. The server can then compare a row's current timestamp value with the timestamp value stored when the row was last fetched to determine whether the row has been updated; it does not need to compare the values of all columns, only the timestamp column. If an application requests row-versioning-based optimistic concurrency on a table that has no timestamp column, the cursor defaults to value-based optimistic concurrency.
SCROLL LOCKS: this option implements pessimistic concurrency control, in which the application attempts to lock the underlying database rows as they are read into the cursor result set. With a server cursor, an update lock is placed on each row as it is read into the cursor. If the cursor is opened within a transaction, the transaction's update locks are held until the transaction is committed or rolled back, and the scroll lock is released when the next row is fetched. If the cursor is opened outside a transaction, the lock is dropped when the next row is fetched. Therefore, whenever full pessimistic concurrency control is needed, the cursor should be opened inside a transaction.
The update lock prevents any other task from acquiring an update or exclusive lock, which prevents other tasks from updating the row. The update lock does not, however, block shared locks, so it does not prevent other tasks from reading the row, unless the second task also requests a read with an update lock. Depending on the lock hints specified in the SELECT statement that defines the cursor, these cursor concurrency options can generate scroll locks. A scroll lock is acquired on each row as it is fetched and is held until the next fetch or until the cursor is closed, whichever comes first. At the next fetch, the server acquires scroll locks on the newly fetched rows and releases the scroll locks on the previously fetched rows. Scroll locks are independent of transaction locks and can persist across a commit or rollback. If the option to close cursors on commit is OFF, a COMMIT statement does not close any open cursors, and scroll locks are carried past the commit to maintain the isolation of the fetched data. The type of scroll lock acquired depends on the cursor concurrency option and the lock hint in the cursor's SELECT statement:

    Lock hint    Read-only   Optimistic with values   Optimistic with row versioning   Scroll locks
    No hint      Unlocked    Unlocked                 Unlocked                         Update
    NOLOCK       Unlocked    Unlocked                 Unlocked                         Unlocked
    HOLDLOCK     Shared      Shared                   Shared                           Update
    UPDLOCK      Error       Update                   Update                           Update
    TABLOCKX     Error       Unlocked                 Unlocked                         Update
    Other hints  Unlocked    Unlocked                 Unlocked                         Update

* Specifying the NOLOCK hint makes the table to which it applies read-only within the cursor.
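As a hedged sketch of the "non-cursor technology" suggestion above, a row-by-row cursor update can often be replaced by a single set-based UPDATE with a CASE expression (Orders, Amount, and Status are assumed example names):

    -- Classify every row in one statement instead of looping with a cursor
    UPDATE Orders
    SET Status = CASE
                     WHEN Amount >= 1000 THEN 'large'
                     WHEN Amount >= 100  THEN 'medium'
                     ELSE 'small'
                 END;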
16. Use Profiler to trace queries, obtain the time a query takes, and locate problem SQL; use the Index Tuning Wizard to optimize indexes.
17. Pay attention to the difference between UNION and UNION ALL. UNION ALL is better, because it does not sort and remove duplicates.
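A small illustration (the table names are assumed):

    -- UNION must sort and de-duplicate the combined result; UNION ALL simply appends
    SELECT CustomerID FROM OrdersArchive
    UNION ALL
    SELECT CustomerID FROM OrdersCurrent;   -- cheaper when duplicates are acceptable or impossible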
18. Do not use DISTINCT unless necessary. Like UNION, it slows the query down; duplicate records are usually not a problem in a query.
19. Do not return unwanted rows or columns during query.
20. Use sp_configure 'query governor cost limit' or SET QUERY_GOVERNOR_COST_LIMIT to limit the resources a query consumes. When the estimated cost of a query exceeds the limit, the server automatically cancels it, killing it before it runs. SET LOCK_TIMEOUT sets the lock wait time.
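A minimal sketch of these settings (the numeric values are arbitrary assumptions):

    -- Server-wide: refuse queries whose estimated cost exceeds 300 cost units
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'query governor cost limit', 300;
    RECONFIGURE;
    -- Per connection: the same idea, scoped to this session only
    SET QUERY_GOVERNOR_COST_LIMIT 300;
    -- Per connection: give up waiting for a lock after 3000 milliseconds
    SET LOCK_TIMEOUT 3000;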
21. Use SELECT TOP 100 / TOP 10 PERCENT to limit the number of rows returned to the user, or SET ROWCOUNT to limit the number of rows a statement operates on.
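A short sketch of both techniques (the Orders and OrderLog tables are assumed):

    -- Return only the most recent 10 percent of rows
    SELECT TOP 10 PERCENT OrderID, OrderDate FROM Orders ORDER BY OrderDate DESC;

    -- Limit the following statement in this session to at most 100 rows
    SET ROWCOUNT 100;
    DELETE FROM OrderLog WHERE LogDate < '20040101';   -- deletes at most 100 rows
    SET ROWCOUNT 0;   -- turn the limit back off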
22. Before SQL2000, try not to use the following in WHERE clauses: IS NULL, <>, !=, !>, !<, NOT, NOT EXISTS, NOT IN, NOT LIKE, and LIKE '%100', because they cannot use an index and always result in table scans. Do not apply functions such as CONVERT or SUBSTRING to column names in the WHERE clause; if a function is required, create a computed column and then build an index on it. Alternatively, change the syntax: WHERE SUBSTRING(firstname, 1, 1) = 'm' can be rewritten as WHERE firstname LIKE 'm%' (which can use the index). Always keep functions off the column side of the comparison.
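A minimal sketch of the computed-column approach and the sargable rewrite (the Employees table and firstname column are assumed):

    -- Computed column exposing the first letter, then an index on it
    -- (in SQL 2000, indexing a computed column requires certain SET options, e.g. ARITHABORT ON)
    ALTER TABLE Employees ADD firstname_initial AS SUBSTRING(firstname, 1, 1);
    CREATE INDEX IX_Employees_Initial ON Employees (firstname_initial);

    -- Sargable rewrite: keep the function off the column so an index on firstname can be used
    SELECT EmployeeID FROM Employees WHERE firstname LIKE 'm%';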