SQL statement optimization principles:
◆ 1. Use indexes to traverse tables faster
The index created by default is a non-clustered index, but it is not always optimal: in a non-clustered index, data is stored on the data pages in no particular physical order. Reasonable index design should be based on analysis and prediction of the actual queries. Generally speaking: ① a clustered index suits columns with many duplicate values that are frequently used in range queries (BETWEEN, >, >=, <, <=), ORDER BY, and GROUP BY; ② when multiple columns are frequently accessed together and each contains duplicate values, consider a composite index. A composite index should try to cover the key queries, and its leading column must be the most frequently used one. Although indexes improve performance, more indexes are not always better; too many indexes lower system efficiency, because every change to the table must also update each index.
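For illustration, a minimal sketch of both recommendations; the orders table and its columns are assumptions, not from the original text:

-- Clustered index on a column used in range queries and ORDER BY
CREATE CLUSTERED INDEX ix_orders_order_date ON orders (order_date);
-- Composite index: the leading column is the most frequently used condition;
-- the index can then cover queries filtering on customer_id and status
CREATE INDEX ix_orders_customer_status ON orders (customer_id, status);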
◆ 2. Is null and is not null
NULL cannot be used by an index: any column containing NULL values will not be included in the index. Even in a multi-column index, as long as one of the columns contains NULL, that column is excluded from the index. In other words, if a column contains NULL values, indexing the column will not improve performance, and the optimizer cannot use an index for any statement with IS NULL or IS NOT NULL in its WHERE clause.
◆ 3. In and exists
EXISTS is often far more efficient than IN; the difference comes down to full table scans versus range scans. Almost all IN subqueries can be rewritten as subqueries using EXISTS.
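A hedged sketch of the rewrite, using the orders and customer tables that appear in section 3.1 below:

-- IN form: the subquery may force a full scan
SELECT * FROM orders o
WHERE o.customer_name IN (SELECT c.customer_name FROM customer c);
-- EXISTS form: the correlated subquery can use an index on customer.customer_name
SELECT * FROM orders o
WHERE EXISTS (SELECT 1 FROM customer c WHERE c.customer_name = o.customer_name);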
◆ 4. Use as few format conversions as possible in massive queries.
◆ 5. In SQL Server 2000, if a stored procedure has only one parameter and it is of the OUTPUT type, you must give the parameter an initial value when calling the stored procedure; otherwise a call error occurs.
◆ 6. ORDER BY and GROUP BY
An index on the columns used in ORDER BY and GROUP BY clauses can improve SELECT performance. Note: if the indexed column contains NULL values, the optimizer cannot use it.
◆ 7. Any operation on a column causes a table scan, including database functions and calculation expressions. When querying, move the operation to the right of the equal sign whenever possible.
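A minimal sketch, assuming a hypothetical emp table with a numeric sal column:

-- Bad: the expression on the column defeats any index on sal
SELECT * FROM emp WHERE sal * 12 > 25000;
-- Better: the operation is moved to the right of the comparison
SELECT * FROM emp WHERE sal > 25000.0 / 12;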
◆ 8. IN and OR clauses often create worktables and invalidate indexes. If they would not produce large numbers of duplicate values, consider splitting the clause into separate queries; each split clause should be able to use an index (see the UNION rewrite in item 32 below).
◆ 9. Use SET SHOWPLAN_ALL ON to examine query plans, and use DBCC to check database data integrity.
DBCC (Database Consistency Checker) is a set of programs used to verify the integrity of the SQL Server database.
◆ 10. Use cursors with caution
In some cases where a cursor must be used, you can consider transferring qualified data rows to a temporary table and then defining the cursor on the temporary table, which can significantly improve the performance.
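As a hedged sketch of that technique (big_table and its columns are made-up names):

-- Copy only the qualifying rows into a temporary table first
SELECT id, amount INTO #work FROM big_table WHERE status = 'open';
-- Define the cursor over the much smaller temporary table
DECLARE cur CURSOR FOR SELECT id, amount FROM #work;
OPEN cur;
DECLARE @id int, @amount money;
FETCH NEXT FROM cur INTO @id, @amount;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- row-by-row processing goes here
    FETCH NEXT FROM cur INTO @id, @amount;
END
CLOSE cur;
DEALLOCATE cur;
DROP TABLE #work;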
Database Optimization
1. Index problems
During performance tracking and analysis, we often find that many back-end program performance problems are caused by the lack of appropriate indexes; some tables do not have even one index. This usually happens because indexes are not defined when the tables are designed: in the early stage of development, with few records in each table, performance is about the same with or without an index, so developers pay little attention. However, once the program is released to the production environment and table records accumulate over time, the lack of indexes greatly hurts performance.
Responsibility for this issue is shared by database designers and developers.
Rule: do not perform the following operations on indexed columns (index-friendly rewrites are sketched below the list):
◆ Avoid calculations on indexed fields
◆ Avoid using NOT, <>, or != on indexed fields
◆ Avoid using IS NULL and IS NOT NULL on indexed columns
◆ Avoid data type conversions on indexed columns
◆ Avoid using functions on indexed fields
◆ Avoid NULL values in indexed columns.
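A few index-friendly rewrites, as promised above; the emp table's hire_date and deptno columns are assumptions for illustration:

-- Function on the indexed column: index unusable
SELECT * FROM emp WHERE DATEPART(yy, hire_date) = 2014;
-- Rewritten as a range predicate: index usable
SELECT * FROM emp WHERE hire_date >= '20140101' AND hire_date < '20150101';
-- Implicit conversion on the indexed column (deptno is int): index at risk
SELECT * FROM emp WHERE deptno = '10';
-- Compare with the matching type instead
SELECT * FROM emp WHERE deptno = 10;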
2. Use UNION ALL where possible.
Because UNION compares and de-duplicates the records of the query subsets, it is usually much slower than UNION ALL. In general, if UNION ALL meets the requirements, use UNION ALL. There is another case that is easy to overlook: even when duplicate records would normally have to be filtered out, the particular subsets may be incapable of producing duplicates, and in that case UNION ALL should also be used. (For example, a query in one module once had this situation: because of the special nature of the statements, the records of the subsets could not be duplicated, so UNION ALL could be used instead.)
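A minimal sketch; the orders table is reused from section 3.1 below and region is a made-up column:

-- UNION sorts and removes duplicates even when none can exist
SELECT order_id FROM orders WHERE region = 'north'
UNION
SELECT order_id FROM orders WHERE region = 'south';
-- The two subsets are disjoint by construction, so UNION ALL is safe and faster
SELECT order_id FROM orders WHERE region = 'north'
UNION ALL
SELECT order_id FROM orders WHERE region = 'south';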
3. Rules for WHERE clauses
3.1 Avoid using IN, NOT IN, or HAVING in the WHERE clause.
Use EXISTS and NOT EXISTS to replace IN and NOT IN.
Use table joins to replace EXISTS where possible. HAVING can often be replaced by WHERE; if it cannot, handle the query in two steps.
Example
Select * from orders where customer_name not in
(Select customer_name from customer)
Optimization
Select * from orders o where not exists
(Select 1 from customer c where c.customer_name = o.customer_name)
3.2 Do not write numbers in character format, and do not write character values in numeric format (the same applies to dates); otherwise the index becomes invalid and a full table scan is generated.
Example:
Select EMP.ename, EMP.job from EMP where EMP.empno = 7369;
Do not use: Select EMP.ename, EMP.job from EMP where EMP.empno = '7369'
4. Rules for SELECT statements
Restrict the use of select * from table in applications, packages, and procedures. See the following example.
Use select empno, ename, category from EMP where empno = '7369'
Instead of select * from EMP where empno = '7369'
5. Sorting
Avoid resource-consuming operations. SQL statements containing DISTINCT, UNION, MINUS, INTERSECT, and ORDER BY make the SQL engine perform resource-consuming sorting (SORT). DISTINCT requires one sorting operation, while the others require at least two.
How to optimize the SQL Server database:
There are many reasons for slow query speed. The following are common causes:
1. No index exists, or the index is not used (this is the most common cause of slow queries and is a programming defect)
2. Low I/O throughput, producing a bottleneck.
3. The query is not optimized because no computed column was created.
4. Insufficient memory.
5. Slow network speed.
6. The queried data volume is too large (use multiple queries or other methods to reduce it).
7. Locks or deadlocks (also a very common cause of slow queries and a programming defect).
8. sp_lock and sp_who show the active users; the cause is reads and writes competing for resources.
9. Unnecessary rows and columns are returned.
10. The query statement is poorly written and not optimized.
You can optimize the query by using the following methods:
1. Place data, logs, and indexes on different I/O devices to increase the reading speed. Tempdb used to be placed on RAID 0, but SQL 2000 no longer supports this. The larger the data volume, the more important it is to improve I/O.
2. Split tables vertically and horizontally to reduce the table size (see sp_spaceused).
3. Upgrade hardware.
4. Create indexes based on the query conditions, optimize the indexes and access modes, and limit the size of the result set. Note that the fill factor should be appropriate (the default value of 0 is usually best) and the index should be as small as possible: use columns with few bytes to build the index (see the notes on index creation), and do not build a single index on fields with few distinct values, such as a gender field.
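A minimal sketch, with illustrative names (the orders table and customer_id column are assumptions):

-- Narrow index on a selective column; omitting FILLFACTOR keeps the default (0)
CREATE INDEX ix_orders_customer ON orders (customer_id);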
5. Improve network speed;
6. Expand the server memory. Windows 2000 and SQL Server 2000 support 4-8 GB of memory. Configure virtual memory: the virtual memory size should be based on the services running concurrently on the computer. When running Microsoft SQL Server 2000, consider setting virtual memory to 1.5 times the physical memory installed on the computer. If you have also installed the full-text search feature and intend to run the Microsoft Search service for full-text indexing and queries, consider setting virtual memory to at least 3 times the physical memory. Configure the SQL Server max server memory option to 1.5 times physical memory (half the virtual memory size).
7. Increase the number of server CPUs. However, understand that parallel processing requires more resources, such as memory, than serial processing. MSSQL automatically evaluates and chooses between parallel and serial execution. A single task can be split into multiple subtasks that run on different processors; for example, the sort, join, scan, and GROUP BY steps of a query can execute simultaneously. SQL Server determines the optimal degree of parallelism based on the system load; complex queries that consume a large amount of CPU benefit most from parallel processing. However, UPDATE, INSERT, and DELETE operations cannot be processed in parallel.
8. If you use LIKE in a query, a plain index may not help, and a full-text index consumes space. LIKE 'a%' uses the index; LIKE '%a' does not; and for LIKE '%a%', the query time is proportional to the total length of the field values, so avoid the char type in favor of varchar, and create a full-text index for long field values.
9. Separate DB server and application server; Separate OLTP and OLAP
10. Distributed partitioned views can be used to implement a federation of database servers. A federation is a group of separately managed servers that collaborate to share the processing load of the system. This mechanism of partitioning data to form a federation can scale out a group of servers to support the processing needs of a large multi-tier Web site. For more information, see Designing Federated Database Servers. (Refer to 'partitioned view' in the SQL Help File.)
A. Before implementing the partitioned view, you must first horizontally partition the table.
B. After creating the member tables, define a distributed partitioned view on each member server, each view with the same name. Queries that reference the view name can then run on any member server. The system behaves as if each member server had a copy of the original table, although in fact each server has only one member table and one distributed partitioned view. The data location is transparent to the application.
11. Rebuild indexes with DBCC DBREINDEX and DBCC INDEXDEFRAG; shrink data and log files with DBCC SHRINKDATABASE and DBCC SHRINKFILE. Enable automatic log shrinking. For large databases, do not enable automatic database growth, as it reduces server performance. How T-SQL is written matters a great deal; the following lists common points. First, the DBMS processes a query plan like this:
1. Lexical and syntax check of the query statement
2. The statement is submitted to the DBMS query optimizer
3. The optimizer performs algebraic optimization and access-path optimization
4. The precompilation module generates a query plan
5. The plan is submitted to the system and executed at an appropriate time
6. Finally, the execution result is returned to the user. Next, look at the data storage structure of SQL Server: a page is 8 KB (8060 bytes usable), eight pages form an extent, and data is stored as B-trees.
12. The difference between COMMIT and ROLLBACK: ROLLBACK rolls back the current transaction; COMMIT commits it. There is no need to open transactions inside dynamic SQL; if transactions are needed, write them outside, for example begin tran exec(@s) commit tran, or move the dynamic SQL into a function or stored procedure.
13. Use a WHERE clause in SELECT statements to limit the number of returned rows and avoid table scans. Returning unnecessary data wastes the server's I/O resources, increases the burden on the network, and lowers performance. If the table is large, a table scan locks the table and blocks other connections from accessing it; the consequences are serious.
14. SQL statement comments have no impact on execution
15. Try not to use cursors; they occupy a large amount of resources. If row-by-row processing is needed, try non-cursor techniques first, such as client-side loops, temporary tables, table variables, subqueries, and CASE statements. Cursors can be classified by the fetch options they support: forward-only cursors must fetch rows in order from the first row to the last, and FETCH NEXT is the only allowed fetch operation (this is the default); scrollable cursors can fetch arbitrary rows anywhere in the cursor. Cursor technology became very powerful in SQL 2000, and its purpose is to support loops. There are four concurrency options:
READ_ONLY: positioned updates through the cursor are not allowed, and no locks are held on the rows that make up the result set.
OPTIMISTIC WITH VALUES: optimistic concurrency control is a standard part of transaction control theory, used when there is only a small chance that a second user will update a row in the interval between opening the cursor and updating the row. When a cursor is opened with this option, no locks control the rows, which helps maximize processing capacity. If the user tries to modify a row, the row's current values are compared with the values obtained when the row was last fetched: if any value has changed, the server knows that someone else has updated the row and returns an error; if the values are the same, the server performs the modification.
OPTIMISTIC WITH ROW VERSIONING: this optimistic concurrency control option is based on row versioning. The table must have a version identifier that the server can use to determine whether the row changed after it was read into the cursor. In SQL Server this capability is provided by the timestamp data type, a binary number that indicates the relative sequence of changes in the database. Each database has a global current timestamp value, @@DBTS. Each time a row with a timestamp column is changed in any way, SQL Server first stores the current @@DBTS value in the timestamp column and then increments @@DBTS. The server can then compare a row's current timestamp value with the timestamp value stored at the last fetch to determine whether the row has been updated, without comparing the values of all columns; it only needs to compare the timestamp column. If an application asks for row-versioned optimistic concurrency on a table that has no timestamp column, the cursor defaults to value-based optimistic concurrency.
SCROLL LOCKS: this option implements pessimistic concurrency control, in which the application attempts to lock the database rows as they are read into the cursor result set. With a server cursor, an update lock is placed on each row as it is read into the cursor. If the cursor is opened within a transaction, the transaction holds the update locks until it is committed or rolled back, and the cursor lock is dropped when the next row is fetched. If the cursor is opened outside a transaction, the lock is dropped when the next row is fetched. Therefore, whenever full pessimistic concurrency control is needed, the cursor should be opened inside a transaction.
An update lock prevents any other task from acquiring an update or exclusive lock, and so prevents other tasks from updating the row. It does not block shared locks, however, so it does not prevent other tasks from reading the row unless the second task also requests a read with an update lock. Depending on the lock hints specified in the cursor's SELECT statement, these cursor concurrency options can generate scroll locks. A scroll lock is acquired on each row during a fetch and is held until the next fetch or until the cursor is closed, whichever comes first. At the next fetch, the server acquires scroll locks on the newly fetched rows and releases the scroll locks of the previously fetched rows. Scroll locks are independent of transaction locks and can be kept after a commit or rollback. If the option to close cursors on commit is off, a COMMIT does not close any open cursor, and the scroll locks are kept past the commit to maintain isolation of the fetched data. The type of scroll lock acquired depends on the cursor concurrency option and the lock hint in the cursor's SELECT statement:
Lock hint | Read-only | Optimistic with values | Optimistic with row versioning | Scroll locks
No hint | Unlocked | Unlocked | Unlocked | Update
NOLOCK | Unlocked | Unlocked | Unlocked | Unlocked
HOLDLOCK | Shared | Shared | Shared | Update
UPDLOCK | Error | Update | Update | Update
TABLOCKX | Error | Unlocked | Unlocked | Update
All others | Unlocked | Unlocked | Unlocked | Update
* Specifying the NOLOCK hint makes the table on which it is specified read-only within the cursor.
16. Use Profiler to trace queries, obtain the time a query needs, and locate problem SQL; use the Index Tuning Wizard to optimize indexes.
17. Pay attention to the difference between UNION and UNION ALL; UNION ALL is usually the better choice.
18. Do not use DISTINCT unless necessary. Like UNION, it slows down the query; skip it when duplicate records are not a problem for the query.
19. Do not return unwanted rows or columns during query.
20. Use sp_configure 'query governor cost limit' or SET QUERY_GOVERNOR_COST_LIMIT to limit the resources a query may consume. When the estimated resources exceed the limit, the server automatically cancels the query, killing it before it runs. SET LOCK_TIMEOUT sets the lock wait time.
21. Use SELECT TOP 100 / TOP 10 PERCENT to limit the number of rows returned to the user, or SET ROWCOUNT to limit the rows an operation touches.
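A minimal sketch, reusing the chineseresume table that appears in the later examples:

-- Return only the 20 most recently updated rows
SELECT TOP 20 referenceid FROM chineseresume ORDER BY updatedate DESC
-- Cap the next statement at 100 rows, then reset the limit
SET ROWCOUNT 100
DELETE FROM chineseresume WHERE name = 'xyz'
SET ROWCOUNT 0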
22. Before SQL 2000, try not to use the following: "is null", "<>", "!=", "!>", "!<", "not", "not exists", "not in", "not like", and "like '%100'" (a leading wildcard), because they cannot use the index and all cause table scans. Do not apply a function to a column name in the WHERE clause, such as convert or substring; if a function is required, create a computed column and then index it. The syntax can also be changed: where substring(firstname, 1, 1) = 'm' becomes where firstname like 'm%' (an index scan); the function must be kept off the column name. Also, do not make indexes too numerous or too wide. NOT IN scans the table multiple times; replace it with EXISTS, NOT EXISTS, IN, or LEFT OUTER JOIN, especially the left join. EXISTS is faster than IN, and the slowest are the NOT operations. If a column's value is NULL, its index used to be useless, but the SQL 2000 optimizer can now handle it: "is null", "not", "not exists", and "not in" can be optimized, while "<>" still cannot and does not use an index.
23. Use Query Analyzer to check the SQL statement's query plan and evaluate whether the statement is optimized. Generally, 20% of the code occupies 80% of the resources; our optimization should focus on these slow spots.
24. When an IN or OR query does not use the index, explicitly specify the index: Select * From personmember (Index = ix_title) Where processid in ('male', 'female')
25. Pre-calculate the results to be queried and store them in a table, then SELECT from it when querying. This was the most important technique before SQL 7.0, for example for calculating hospitalization fees.
26. MIN() and MAX() can use an appropriate index.
27. There is a principle in databases: the closer the code is to the data, the better. So prefer, in order, Default, then Rules, Triggers, Constraints (such as foreign key, CHECK, and UNIQUE constraints; the maximum length of a data type is also a constraint), then Procedures. This keeps maintenance work low, programming quality high, and execution fast.
28. If you insert a large binary value into an image column, use a stored procedure; never insert the value with an embedded INSERT statement (whether or not you use Java). With an inline statement, the application first converts the binary value to a string (doubling its size), and the server converts it back to binary after receiving the characters; a stored procedure has none of these steps. Method: create procedure p_insert @image image as insert into table (fimage) values (@image); call this stored procedure from the front end and pass in the binary parameter, which significantly improves processing speed.
29. BETWEEN is faster than IN in some cases, because BETWEEN can locate the range by index more quickly. Use the query optimizer to see the difference: Select * From chineseresume where title in ('male', 'female') and Select * From chineseresume where title between 'male' and 'female' return the same result, but because IN may probe more than once, it is sometimes slower.
30. If necessary, create an index on a global or local temporary table; it may increase speed, but not always, because the index also consumes a lot of resources, and creating it works the same as for a real table.
31. Do not create transactions that serve no purpose, for example when generating reports; it wastes resources. Open transactions only when necessary.
32. An OR clause can be split into multiple queries connected by UNION. Their speed depends only on whether indexes are used; if the query needs a union of indexes, UNION ALL executes more efficiently. When multiple OR clauses do not use the index, rewrite them in UNION form and try to match the index. Whether indexes are used is the key issue.
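A hedged sketch of the split, reusing the chineseresume columns from the examples below:

-- OR form: may scan the whole table once
SELECT referenceid FROM chineseresume WHERE title = 'male' OR name = 'xyz'
-- Split form: each branch can use its own index
-- (if a row could satisfy both branches, use UNION instead to de-duplicate)
SELECT referenceid FROM chineseresume WHERE title = 'male'
UNION ALL
SELECT referenceid FROM chineseresume WHERE name = 'xyz'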
33. Use views as little as possible; their efficiency is low. Operations on a view are slower than operations on a base table; use stored procedures instead. In particular, do not nest views: nesting makes it harder to find the original data. Look at the essence of a view: it is an optimized SQL statement stored on the server that has already produced a query plan. When retrieving data from a single table, do not use a view that points to multiple tables; read directly from the table, or from a view that contains only that table. Otherwise you add unnecessary overhead and disturb the query. To speed up view queries, MSSQL added the indexed view feature.
34. Do not use DISTINCT or ORDER BY unless necessary; these actions can be executed on the client instead. They add extra overhead, the same story as UNION versus UNION ALL.
Select top 20 ad.companyname, comid, position, ad.referenceid, worklocation, convert(varchar(10), ad.postdate, 120) as postdate1, workyear, degreedescription from ad where referenceid in ('jcnad00329667', 'jcnad132168', 'jcnad00337748', 'jcnad00338345',
'jcnad00333138', 'jcnad00303570', 'jcnad00303569',
'jcnad00303568', 'jcnad00306698', 'jcnad00231935', 'jcnad00231933',
'jcnad00254567', 'jcnad00254585', 'jcnad00254608',
'jcnad00254607', 'jcnad00258524', 'jcnad00332379', 'jcnad00268618',
'jcnad00279196', 'jcnad00268613') order by postdate desc
35. In the value list after IN, place the most frequently matched values first and the least frequent last, to reduce the number of comparisons.
36. SELECT INTO locks system tables (sysobjects, sysindexes, etc.), blocking access from other connections. When creating a temporary table, use an explicit declaration statement rather than SELECT INTO: drop table t_lxh begin tran select * into t_lxh from chineseresume where name = 'xyz' --commit. In another connection, SELECT * FROM sysobjects shows that SELECT INTO locks the system tables; CREATE TABLE also locks the system tables (whether the table is temporary or not). So never use SELECT INTO inside transactions!!! For temporary tables that are used frequently, use real tables or table variables instead.
37. Redundant rows can usually be removed before the GROUP BY and HAVING clauses run, so try not to rely on them for filtering. Optimally the execution order is: the WHERE clause selects all the appropriate rows, GROUP BY groups them for statistics, and HAVING removes redundant groups. Then GROUP BY and HAVING have little left to do and the query is fast; grouping large row sets and applying HAVING to them consumes many resources. If the purpose of the GROUP BY is grouping rather than computing aggregates, DISTINCT is faster.
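A minimal sketch of the recommended shape, using the hypothetical orders table from earlier sketches:

-- Filter rows in WHERE first, so GROUP BY and HAVING see less data
SELECT customer_id, COUNT(*) AS n
FROM orders
WHERE order_date >= '20140101'   -- row-level filter belongs here, not in HAVING
GROUP BY customer_id
HAVING COUNT(*) > 10             -- only group-level conditions belong here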
38. Updating multiple records in one statement is faster than updating one record at a time in multiple statements: batch processing is good.
39. Use fewer temporary tables; replace them with result sets and table variables. Table variables are better than temporary tables.
40. In SQL 2000, computed fields can be indexed if the following conditions are met (see the sketch below):
A. The expression of the computed field is deterministic.
B. The text, ntext, and image data types are not used.
C. The following options are set: ANSI_NULLS ON, ANSI_PADDING ON, and so on.
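A hedged sketch of such an index, with made-up column names (unit_price, quantity):

SET ANSI_NULLS ON
SET ANSI_PADDING ON
-- Deterministic expression: the same inputs always give the same output
ALTER TABLE orders ADD total_price AS (unit_price * quantity)
CREATE INDEX ix_orders_total ON orders (total_price)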
41. Try to do data processing on the server to reduce network overhead, for example by using stored procedures. Stored procedures are SQL statements that have been compiled, optimized, organized into an execution plan, and stored in the database; they are a collection of control-flow language and are fast. A temporary stored procedure can be used to repeatedly execute dynamic SQL (the procedure is stored in tempdb). In the past, because SQL Server did not support complex mathematical calculations, that work had to be pushed to another layer, increasing network overhead. SQL 2000 supports UDFs and now handles complex calculations, but a function's return value should not be too large, as that is costly. User-defined functions consume resources like cursors do; if a large result must be returned, use a stored procedure.
42. Do not call the same function repeatedly in one statement; it wastes resources. Put the result in a variable first and then reference it, which is faster.
43. SELECT COUNT(*) is inefficient; try to adapt the statement, since EXISTS is fast. Also note the difference: SELECT COUNT(nullable_field) FROM table and SELECT COUNT(non_null_field) FROM table return different values!!!
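A minimal illustration of both points, using the hypothetical orders table from earlier sketches (ship_date is a made-up nullable column):

-- Existence check: EXISTS stops at the first matching row
IF EXISTS (SELECT 1 FROM orders WHERE customer_id = 7369)
    PRINT 'has orders'
-- COUNT over a nullable column skips NULLs, so these two can return different values
SELECT COUNT(ship_date) FROM orders  -- counts rows where ship_date is not null
SELECT COUNT(*) FROM orders          -- counts all rows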
44. When the server has enough memory, set the number of prepared threads = maximum number of connections + 5 for maximum efficiency; otherwise set the number of prepared threads below the maximum number of connections to enable the SQL Server thread pool. If the number still equals the maximum number of connections + 5, server performance is seriously damaged.
45. Access your tables in a fixed order. If a stored procedure locks table A first and then table B, lock them in that order in all stored procedures. If one procedure locks B first and then A, this may lead to a deadlock, and deadlocks are hard to find if the lock sequence is not designed in detail in advance.
46. Monitor the load on the corresponding hardware through SQL Server Performance Monitor. Memory: Page Faults/sec counter: if this value rises occasionally, threads were competing for memory at the time; if it stays high, memory may be the bottleneck.
Processor:
1. % DPC Time indicates the percentage of the sample interval the processor spent receiving and servicing deferred procedure calls (DPCs). (DPCs run at a lower priority than standard interrupts.) Because DPCs execute in privileged mode, the DPC time percentage is part of the privileged time percentage. These time values are calculated separately and are not part of the interrupt total. This total shows average busy time as a percentage of instance time.
2. If the % Processor Time counter exceeds 95%, the bottleneck is the CPU. Consider adding a processor or changing to a faster one.
3. % Privileged Time indicates the percentage of non-idle processor time used in privileged mode. (Privileged mode is a processing mode designed for operating system components and hardware drivers; it allows direct access to hardware and all memory. The other mode is user mode, a restricted mode designed for applications, environment subsystems, and integral subsystems. The operating system switches application threads to privileged mode to access operating system services.) % Privileged Time includes time spent servicing interrupts and DPCs. A high privileged time ratio may be caused by a large number of interrupts from failing devices. This counter displays the average busy time as a percentage of the sample time.
4. % User Time indicates CPU-consuming database operations, such as sorting and executing aggregate functions. If this value is very high, consider adding indexes and reducing it through simpler table joins and horizontal table splits. Physical Disk: Current Disk Queue Length counter: this value should not exceed 1.5 to 2 times the number of disks; to improve performance, add disks. SQL Server: Buffer Cache Hit Ratio counter: the higher the better; if it stays below 80%, consider adding memory. Note that this value accumulates from SQL Server startup, so after running for a period of time it no longer reflects the system's current state.
47. Analyze select emp_name from employee where salary > 3000. If salary is of type float, the optimizer converts the condition to convert(float, 3000), because 3000 is an integer. We should write 3000.0 when programming instead of letting the DBMS convert at runtime. The same applies to conversions between character and integer data.
48. Query joins and the order in which conditions are written
Select A.personmemberid, * From chineseresume A, personmember B where A.personmemberid = B.referenceid and A.personmemberid = 'jcnprh1_1' (A = B, B = 'number')
Select A.personmemberid, * From chineseresume A, personmember B where A.personmemberid = B.referenceid and A.personmemberid = 'jcnprh1_1' and B.referenceid = 'cnprh00001' (A = B, B = 'number', A = 'number')
Select A.personmemberid, * From chineseresume A, personmember B where B.referenceid = 'jcnprh1_1' and A.personmemberid = 'jcnprh1_1' (B = 'number', A = 'number')
49. In-program query logic:
(1) If no owner code is entered, then set code1 = 0 and code2 = 9999, else set code1 = code2 = owner code. The SQL executed is: Select owner name from p2000 where owner code >= :code1 and owner code <= :code2
(2) If no owner code is entered, execute: Select owner name from p2000; else set code = owner code and execute: Select owner name from p2000 where owner code = :code
The first method uses only one SQL statement; the second uses two. When no owner code is entered, the second is obviously more efficient than the first because it has no restriction condition. When an owner code is entered, the second is still more efficient: not only does it drop one condition, but the equality comparison is also the fastest kind of query. Do not be afraid of writing a bit more program code.
50. A new method for paged queries in jobcn: use the performance optimizer to analyze the performance bottleneck. If the bottleneck is I/O or network speed, the following method is effective; if it is CPU or memory, the existing method is better. Compare the following methods; they show that the smaller the index, the better.
Begin
Declare @local_variable table (FID int identity(1,1), referenceid varchar(20))
Insert into @local_variable (referenceid)
Select top 100000 referenceid from chineseresume order by referenceid
Select * From @local_variable where FID > 40 and FID <= 60
End
and
Begin
Declare @local_variable table (FID int identity(1,1), referenceid varchar(20))
Insert into @local_variable (referenceid)
Select top 100000 referenceid from chineseresume order by updatedate
Select * From @local_variable where FID > 40 and FID <= 60
End
Begin
Create Table #temp (FID int identity(1,1), referenceid varchar(20))
Insert into #temp (referenceid)
Select top 100000 referenceid from chineseresume order by updatedate
Select * from #temp where FID > 40 and FID <= 60
Drop table #temp
End
Stored procedure writing experience and optimization measures
1) Intended readers: database developers whose databases hold large amounts of data and who are interested in SP (stored procedure) optimization.
2) Introduction: complex business logic and database operations often arise during database development, and SPs are used to encapsulate the database operations. If a project has many SPs and no standard way of writing them, system maintenance becomes difficult and big SP logic becomes hard to understand later. Moreover, if the database has a large amount of data or the project demands high SP performance, you will run into optimization problems; otherwise speed may suffer. Hands-on experience shows that an optimized SP can be hundreds of times more efficient than a poorly written one.
3) Content:
1. If developers use tables or views of another database, they should create a view in the current database for the cross-database access. It is best not to use "database.dbo.table_name" directly, because sp_depends cannot show the cross-database tables or views used by the SP, which makes verification inconvenient.
2. Before submitting an SP, the developer must have used SET SHOWPLAN ON to analyze the query plan and performed his own query optimization check.
3. For high program running efficiency, pay attention to the following points when writing an SP:
A) SQL usage specifications:
I. Avoid large transactions as much as possible, and use the HOLDLOCK clause with caution, to improve the system's concurrency.
II. Try to avoid repeated accesses to the same table or a few tables, especially tables with large data volumes; consider extracting the data into a temporary table based on the conditions first and then joining to it (see the sketch after this list).
III. Avoid cursors whenever possible, because they are inefficient; a cursor operation over more than 10,000 rows of data should be rewritten. If a cursor must be used, avoid table join operations inside the cursor loop.
IV. When writing WHERE clauses, take the order of the condition clauses into account: it should be determined by the index order and the range sizes, with the field order matching the index order as far as possible and the ranges going from large to small.
V. Do not perform functions, arithmetic, or other expression operations on the left side of "=" in the WHERE clause, or the system may not use the index correctly.
VI. Use EXISTS instead of SELECT COUNT(1) to check whether a record exists; use the COUNT function only to count all the rows of a table, and COUNT(1) is more efficient than COUNT(*).
VII. Use ">=" instead of ">" where possible.
VIII. Note the substitution between OR clauses and UNION clauses.
IX. Pay attention to the data types in table joins and avoid joining columns of different types.
X. Pay attention to the relationship between parameters and their data types in stored procedures.
XI. Pay attention to the data volume of INSERT and UPDATE operations to prevent conflicts with other applications. If the data volume exceeds 200 data pages (400 KB), the lock escalates from page lock to table lock.
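A minimal sketch of rule II, with made-up table names (big_detail, header):

-- Extract the qualifying rows once, based on the conditions...
SELECT id, amount INTO #subset FROM big_detail WHERE status = 'open'
-- ...then join the small temporary table instead of re-reading the big table
SELECT h.id, h.total, s.amount
FROM header h JOIN #subset s ON s.id = h.id
DROP TABLE #subset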
B) Specification for indexing:
I. Create indexes in combination with the application; we recommend that a large OLTP table have no more than six indexes.
II. Use indexed fields as query conditions as much as possible, especially the clustered index; if necessary, force a specific index with index=index_name.
III. Avoid table scans when querying large tables; create indexes if necessary.
IV. When using an indexed field as a condition, if the index is a composite index, the first field in the index must be used as the condition to guarantee that the system uses the index; otherwise the index will not be used.
V. Pay attention to index maintenance: periodically rebuild indexes and recompile stored procedures.
C) Use of tempdb:
I. Avoid DISTINCT, ORDER BY, GROUP BY, HAVING, JOIN, and COMPUTE as much as possible, because these statements increase the burden on tempdb.
II. Avoid frequently creating and deleting temporary tables, to reduce the consumption of system table resources.
III. When creating a temporary table that receives a large amount of data at once, use SELECT INTO instead of CREATE TABLE, to avoid logging and increase speed; if the data volume is small, create the table first and then INSERT, to ease the pressure on the system tables.
IV. If a temporary table has a large amount of data and needs an index, put the creation of the temporary table and of its index into a separate sub-stored-procedure, so that the system can use the temporary table's index.
V. If temporary tables are used, explicitly delete them all at the end of the stored procedure: TRUNCATE TABLE first, then DROP TABLE, to avoid locking the system tables for a long time.
VI. Be cautious when joining large temporary tables with other large tables for queries and modifications; doing so burdens the system tables, because such a statement uses the tempdb system tables multiple times.
D) Reasonable algorithm usage:
Based on the SQL optimization techniques above and the SQL optimization content in the ASE tuning manual, combined with practical application, compare multiple algorithms to find the method with the least resource consumption and the highest efficiency. The specific ASE tuning commands are: set statistics io on, set statistics time on, set showplan on, and so on.