The following is not my own work, but it looked very useful, so I am sharing it here.
1. In the application, implement each function with as few database accesses as possible; use search arguments to minimize the number of rows read from each table and keep result sets small, which reduces the network load; improve the response time of each operation where you can. When using SQL in a DataWindow, try to put the indexed column first in the selection and keep the logic as simple as possible. When querying, do not overuse wildcards, as in SELECT * FROM t1; select only the columns you need, for example SELECT col1, col2 FROM t1, and limit the size of the result set where possible, for example SELECT TOP 300 col1, col2, col3 FROM t1, because in some cases the user does not need that much data. Do not use database cursors unless you have to: they are a useful tool in applications, but they carry more overhead than ordinary set-oriented SQL statements and force the data to be fetched in a specific order.
2. Avoid mixing incompatible data types, for example float and int, char and varchar, or binary and varbinary. Incompatible data types may prevent the optimizer from performing some optimizations. For example, in SELECT name FROM employee WHERE salary > 60000, if the salary column is of type money, the optimizer has difficulty optimizing the predicate because 60000 is an integer. We should convert the integer to the money type when writing the program rather than leaving the conversion to run time.
3. Try to avoid applying functions or expression operations to a column in the WHERE clause; they force the engine to abandon the index and perform a full table scan. For example:
SELECT * FROM t1 WHERE f1/2 = 100
should be written as:
SELECT * FROM t1 WHERE f1 = 100*2
SELECT * FROM record WHERE SUBSTRING(card_no, 1, 4) = '5378'
should be written as:
SELECT * FROM record WHERE card_no LIKE '5378%'
SELECT member_number, first_name, last_name FROM members WHERE DATEDIFF(yy, dateofbirth, GETDATE()) > 21
should be written as:
SELECT member_number, first_name, last_name FROM members WHERE dateofbirth < DATEADD(yy, -21, GETDATE())
In other words, any operation on a column causes a table scan, including database functions, calculation expressions, and so on; when querying, move the operation to the right-hand side of the comparison whenever possible.
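The effect of this rule is visible in any engine's query plan. Below is a minimal sketch using SQLite through Python's sqlite3 module rather than SQL Server (the table and index names are invented for illustration): the plan reports a full scan when the column sits inside an expression, and an index search once the arithmetic is moved to the constant side.

```python
import sqlite3

# Hypothetical table and index, not from the article's schema.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (f1 INTEGER)")
con.execute("CREATE INDEX ix_f1 ON t1 (f1)")
con.executemany("INSERT INTO t1 VALUES (?)", [(i,) for i in range(1000)])

def plan(sql):
    # The plan detail says SCAN (full table) or SEARCH (index lookup).
    return con.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[-1]

# Expression applied to the column: the index on f1 cannot be used.
print(plan("SELECT * FROM t1 WHERE f1/2 = 100"))
# Operation moved to the constant side: the index is usable.
print(plan("SELECT * FROM t1 WHERE f1 = 100*2"))
```

The same distinction shows up in SQL Server's execution plans as a clustered index scan versus an index seek.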
4. Avoid operators such as != (or <>), IS NULL, IS NOT NULL, IN, NOT IN, and the like, because they prevent the system from using an index and force it to search the table data directly. For example, with SELECT id FROM employee WHERE id != 'B%', the optimizer cannot determine through the index which rows will be hit, so it has to search every row of the table.
5. Use numeric fields where you can. Some developers and database administrators like to design fields that contain only numeric values as character types, which lowers the performance of queries and joins and increases storage overhead. This is because the engine processes queries and joins by comparing every character of a string one by one, whereas for a numeric type a single comparison is enough.
6. Use the EXISTS and NOT EXISTS clauses sensibly, as shown below:
1. SELECT SUM(t1.c1) FROM t1 WHERE (SELECT COUNT(*) FROM t2 WHERE t2.c2 = t1.c2) > 0
2. SELECT SUM(t1.c1) FROM t1 WHERE EXISTS (SELECT * FROM t2 WHERE t2.c2 = t1.c2)
Both produce the same result, but the latter is clearly more efficient, because it does not generate a large number of locked table scans or index scans. If you only want to verify that a record exists in a table, do not use COUNT(*); it is inefficient and wastes server resources. Use EXISTS instead. For example:
IF (SELECT COUNT(*) FROM table_name WHERE column_name = 'xxx')
can be written as:
IF EXISTS (SELECT * FROM table_name WHERE column_name = 'xxx')
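To confirm that the two forms really return the same result, here is a small sketch against SQLite via Python's sqlite3 module (table names t1/t2 as in the article; the data is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (c1 INTEGER, c2 INTEGER)")
con.execute("CREATE TABLE t2 (c2 INTEGER)")
con.executemany("INSERT INTO t1 VALUES (?, ?)", [(10, 1), (20, 2), (30, 3)])
con.executemany("INSERT INTO t2 VALUES (?)", [(1,), (3,), (3,)])

# Form 1: correlated COUNT(*) subquery compared against zero.
count_form = con.execute(
    "SELECT SUM(t1.c1) FROM t1 "
    "WHERE (SELECT COUNT(*) FROM t2 WHERE t2.c2 = t1.c2) > 0").fetchone()[0]
# Form 2: EXISTS, which can stop at the first matching row.
exists_form = con.execute(
    "SELECT SUM(t1.c1) FROM t1 "
    "WHERE EXISTS (SELECT * FROM t2 WHERE t2.c2 = t1.c2)").fetchone()[0]

print(count_form, exists_form)  # both sum the rows whose c2 appears in t2
```

Besides avoiding heavy scans, EXISTS can short-circuit as soon as one matching row is found, whereas COUNT(*) must tally every match.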
You often need to write a T-SQL statement that compares a parent result set with a child result set to find parent records that have no corresponding child records, for example:
1. SELECT a.hdr_key FROM hdr_tbl a WHERE NOT EXISTS (SELECT * FROM dtl_tbl b WHERE a.hdr_key = b.hdr_key) -- "hdr_tbl a" gives the table the alias a
2. SELECT a.hdr_key FROM hdr_tbl a LEFT JOIN dtl_tbl b ON a.hdr_key = b.hdr_key WHERE b.hdr_key IS NULL
3. SELECT hdr_key FROM hdr_tbl WHERE hdr_key NOT IN (SELECT hdr_key FROM dtl_tbl)
All three forms give the same correct result, but their efficiency decreases from the first to the third.
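The three forms can be checked side by side; a sketch using SQLite via Python's sqlite3 module (the data is invented). One caveat worth knowing beyond what is said above: if the child column can contain NULL, the NOT IN form returns no rows at all, while the other two still behave correctly.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE hdr_tbl (hdr_key INTEGER)")
con.execute("CREATE TABLE dtl_tbl (hdr_key INTEGER)")
con.executemany("INSERT INTO hdr_tbl VALUES (?)", [(1,), (2,), (3,), (4,)])
con.executemany("INSERT INTO dtl_tbl VALUES (?)", [(2,), (4,)])

q1 = con.execute("SELECT a.hdr_key FROM hdr_tbl a WHERE NOT EXISTS "
                 "(SELECT * FROM dtl_tbl b WHERE a.hdr_key = b.hdr_key)").fetchall()
q2 = con.execute("SELECT a.hdr_key FROM hdr_tbl a LEFT JOIN dtl_tbl b "
                 "ON a.hdr_key = b.hdr_key WHERE b.hdr_key IS NULL").fetchall()
q3 = con.execute("SELECT hdr_key FROM hdr_tbl WHERE hdr_key NOT IN "
                 "(SELECT hdr_key FROM dtl_tbl)").fetchall()
print(sorted(q1), sorted(q2), sorted(q3))  # same rows: parents with no detail rows

# The NULL caveat: one NULL in the subquery makes NOT IN return nothing,
# because hdr_key <> NULL evaluates to unknown for every row.
con.execute("INSERT INTO dtl_tbl VALUES (NULL)")
q3_with_null = con.execute("SELECT hdr_key FROM hdr_tbl WHERE hdr_key NOT IN "
                           "(SELECT hdr_key FROM dtl_tbl)").fetchall()
print(q3_with_null)
```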
7. When searching indexed character data, try to avoid matching on anything other than the leading characters; otherwise the engine again cannot use the index. See the following examples:
SELECT * FROM t1 WHERE name LIKE '%L%'
SELECT * FROM t1 WHERE SUBSTRING(name, 2, 1) = 'L'
SELECT * FROM t1 WHERE name LIKE 'L%'
Even if the name column is indexed, the first two queries cannot use the index to speed up the operation; the engine has to process every row in the table. The third query can use the index to accelerate the search.
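The difference shows up directly in the query plan. A sketch using SQLite via Python's sqlite3 module (table and data invented; the PRAGMA is an SQLite-specific switch its LIKE optimization needs, and has no SQL Server counterpart): the leading-wildcard pattern scans, the prefix pattern searches the index.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# SQLite only applies its LIKE-to-index optimization with case-sensitive LIKE.
con.execute("PRAGMA case_sensitive_like = ON")
con.execute("CREATE TABLE t1 (name TEXT)")
con.execute("CREATE INDEX ix_name ON t1 (name)")
con.executemany("INSERT INTO t1 VALUES (?)", [("linda",), ("alan",), ("carl",)])

def plan(sql):
    # SCAN means a full pass over the rows; SEARCH means an index range lookup.
    return con.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[-1]

print(plan("SELECT * FROM t1 WHERE name LIKE '%l%'"))  # cannot use ix_name
print(plan("SELECT * FROM t1 WHERE name LIKE 'l%'"))   # prefix search on ix_name
```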
8. Make full use of join conditions. In some cases there is more than one possible join condition between two tables, and writing all of them into the WHERE clause can greatly improve query speed. Example:
SELECT SUM(a.amount) FROM account a, card b WHERE a.card_no = b.card_no
SELECT SUM(a.amount) FROM account a, card b WHERE a.card_no = b.card_no AND a.account_no = b.account_no
The second statement will run much faster than the first.
9. Eliminate sequential access to rows of large tables. Even when every column being checked is indexed, some forms of WHERE clause force the optimizer into sequential access. For example:
SELECT * FROM orders WHERE (customer_num = 104 AND order_num > 1001) OR order_num = 1008
The workaround is to use a union to avoid the sequential access:
SELECT * FROM orders WHERE customer_num = 104 AND order_num > 1001
UNION
SELECT * FROM orders WHERE order_num = 1008
This lets the query be processed along an index path.
10. Avoid difficult regular expressions. The LIKE keyword supports wildcard matching, which is technically a form of regular expression, but this kind of matching can be particularly time-consuming. Example:
SELECT * FROM customer WHERE zipcode LIKE '98_ _ _'
Even if an index exists on the zipcode column, a sequential scan is used in this case. If the statement is changed to
SELECT * FROM customer WHERE zipcode > '98000'
the query will use the index instead, which obviously speeds it up considerably.
11. Use views to speed up queries. Sorting a subset of a table and creating a view can sometimes speed up queries. It helps avoid repeated sort operations and in other ways simplifies the optimizer's work. For example:
SELECT cust.name, rcvbles.balance, ......other columns
FROM cust, rcvbles
WHERE cust.customer_id = rcvbles.customer_id AND rcvbles.balance > 0 AND cust.postcode > '98000'
ORDER BY cust.name
If this query is to be executed more than once, you can instead collect all customers with unpaid balances in a view, sorted by customer name:
CREATE VIEW dbo.v_cust_rcvbles AS
SELECT cust.name, rcvbles.balance, ......other columns
FROM cust, rcvbles
WHERE cust.customer_id = rcvbles.customer_id AND rcvbles.balance > 0
ORDER BY cust.name
Then query against the view:
SELECT * FROM v_cust_rcvbles WHERE postcode > '98000'
There are fewer rows in the view than in the base table, and its physical order is the required order, so disk I/O is reduced and the query workload drops significantly.
12. Use BETWEEN rather than IN where you can:
SELECT * FROM t1 WHERE id IN (10, 11, 12, 13, 14)
becomes:
SELECT * FROM t1 WHERE id BETWEEN 10 AND 14
because IN causes the system to forgo the index and search the table data directly.
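The rewrite only applies when the IN list is a continuous range of values; a quick equivalence check using SQLite via Python's sqlite3 module (data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (id INTEGER)")
con.executemany("INSERT INTO t1 VALUES (?)", [(i,) for i in range(1, 21)])

in_rows = con.execute(
    "SELECT id FROM t1 WHERE id IN (10, 11, 12, 13, 14)").fetchall()
between_rows = con.execute(
    "SELECT id FROM t1 WHERE id BETWEEN 10 AND 14").fetchall()
# Same five rows either way; BETWEEN expresses the range as a single interval,
# which an index can satisfy with one range seek.
print(sorted(in_rows) == sorted(between_rows))
```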
13. Use DISTINCT where GROUP BY is not needed:
SELECT orderid FROM details WHERE unitprice > 10 GROUP BY orderid
can be changed to:
SELECT DISTINCT orderid FROM details WHERE unitprice > 10
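When no aggregate is computed, the two forms return the same distinct keys; a sketch using SQLite via Python's sqlite3 module (data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE details (orderid INTEGER, unitprice REAL)")
con.executemany("INSERT INTO details VALUES (?, ?)",
                [(1, 5.0), (1, 12.0), (2, 15.0), (2, 20.0), (3, 8.0)])

grouped = con.execute("SELECT orderid FROM details WHERE unitprice > 10 "
                      "GROUP BY orderid").fetchall()
distinct = con.execute("SELECT DISTINCT orderid FROM details "
                       "WHERE unitprice > 10").fetchall()
# Identical key sets: GROUP BY without aggregates is just deduplication,
# so DISTINCT states the intent directly.
print(sorted(grouped) == sorted(distinct))
```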
14. Partial use of an index.
1. SELECT employeeid, firstname, lastname FROM names WHERE dept = 'Prod' OR city = 'Orlando' OR division = 'Food'
2. SELECT employeeid, firstname, lastname FROM names WHERE dept = 'Prod'
UNION ALL
SELECT employeeid, firstname, lastname FROM names WHERE city = 'Orlando'
UNION ALL
SELECT employeeid, firstname, lastname FROM names WHERE division = 'Food'
If the dept column has an index, query 2 can partially take advantage of it, while query 1 cannot.
15. Use UNION ALL rather than UNION where you can. UNION ALL skips the implicit SELECT DISTINCT step, which saves a great deal of unnecessary work.
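The difference between the two is easy to observe; a sketch using SQLite via Python's sqlite3 module (tables and data invented): UNION pays for deduplication, UNION ALL simply concatenates the result sets, so it is the right choice when duplicates are impossible or acceptable.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (v INTEGER)")
con.execute("CREATE TABLE b (v INTEGER)")
con.executemany("INSERT INTO a VALUES (?)", [(1,), (2,)])
con.executemany("INSERT INTO b VALUES (?)", [(2,), (3,)])

# UNION removes the duplicate value 2 (an implicit DISTINCT pass).
union_rows = con.execute("SELECT v FROM a UNION SELECT v FROM b").fetchall()
# UNION ALL keeps both copies of 2 and does no deduplication work.
union_all_rows = con.execute("SELECT v FROM a UNION ALL SELECT v FROM b").fetchall()

print(len(union_rows), len(union_all_rows))
```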
16. Do not write queries that accomplish nothing, such as:
SELECT col1 FROM t1 WHERE 1 = 0
SELECT col1 FROM t1 WHERE col1 = 1 AND col1 = 2
This kind of dead code never returns a result set, but it still consumes system resources to evaluate.
17. Try not to use SELECT INTO statements. A SELECT INTO statement locks the table and prevents other users from accessing it.
18. Force the query optimizer to use an index when necessary:
SELECT * FROM t1 WHERE nextprocess = 1 AND processid IN (8, 32, 45)
becomes:
SELECT * FROM t1 (INDEX = ix_processid) WHERE nextprocess = 1 AND processid IN (8, 32, 45)
The query optimizer is then forced to use the index ix_processid to execute the query.
19. Although UPDATE and DELETE statements are mostly fixed in form, some guidelines still apply to UPDATE statements:
a) Try not to modify primary key fields.
b) When modifying a varchar field, try to replace the content with a value of the same length.
c) Minimize UPDATE operations on tables that have UPDATE triggers.
d) Avoid updating columns that are replicated to other databases.
e) Avoid updating columns that have many indexes.
f) Avoid updating columns that are referenced in the WHERE clause condition.
What we have covered above are some basic considerations for improving query speed, but in many cases you need to experiment and compare different statements to find the best solution. The best way, of course, is to test: among several SQL statements that implement the same function, see which has the shortest execution time. If the database holds very little data, however, the differences will not show; in that case you can inspect the execution plan instead: copy the alternative statements into Query Analyzer and press Ctrl+L to see which indexes are used and how many table scans occur (these two have the greatest impact on performance), and look at the overall query cost percentage. Simple stored procedures can be generated automatically with a wizard: click the Run Wizard icon in the Enterprise Manager toolbar, then Database, then Create Stored Procedure Wizard. To debug complex stored procedures: in the Object Browser on the left side of Query Analyzer (press F8 if it is not visible), select the stored procedure to debug, right-click, choose Debug, enter the parameters, and run; a floating toolbar appears with single-step execution, breakpoint settings, and so on.