13 Rules for writing high-performance SQL statements


1. First, understand what an execution plan is

An execution plan is the query strategy the database works out for a SQL statement, based on statistics about the statement and the tables involved. It is generated automatically by the query optimizer. For example, for a SQL statement that fetches one record from a table of 100,000 rows, the query optimizer will choose an index seek. If the table is later archived and only 5,000 records remain, the query optimizer will change the strategy to a full table scan.

Obviously, the execution plan is not fixed; it is "personalized". Two things matter for generating a correct execution plan (a small sketch for checking them follows the list):

(1) Does the SQL statement clearly tell the query optimizer what it wants?

(2) Are the database statistics available to the query optimizer up to date and accurate?
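Both points can be exercised directly in SQL Server. A minimal sketch, using the illustrative orderheader table that appears later in this article: refresh the statistics the optimizer depends on, then display the text plan it actually chooses.

    -- refresh all statistics on the table so row counts are current
    update statistics orderheader
    go

    -- show the chosen plan as text instead of executing the query
    set showplan_text on
    go
    select * from orderheader where changetime > '2010-10-20 00:00:01'
    go
    set showplan_text off
    go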

2. Write SQL statements consistently

The programmer considers the following two SQL statements identical; the database query optimizer considers them different.

 
 
    select * from dual
    select * From dual

In fact, the query analyzer treats them as two different SQL statements that must be parsed separately, generating two execution plans. So as a programmer, make sure the same query statement is written identically everywhere; even one extra space is too many!

3. Do not write overly complex SQL statements

I often see a single SQL statement captured from the database print out onto two sheets of A4 paper. Generally, such complex statements are trouble. I once took a two-page SQL statement back to its original author, and he said too much time had passed and he could not make sense of it right away. Imagine that: if even the original author is confused by the SQL statement, the database will be too.

Using the result of a SELECT as a subset and then querying from that subset is a common and reasonable form of nesting. However, experience shows that beyond three levels of nesting, the query optimizer easily produces a wrong execution plan. It gets dizzy. Something resembling artificial intelligence is, after all, inferior to human judgment; if a person gets dizzy reading the statement, I can assure you the database gets dizzy too.

In addition, execution plans can be reused, and the simpler the SQL statement, the more likely its plan is to be reused. A complex SQL statement must be re-parsed whenever even one character in it changes, and the resulting pile of plans is garbage filling up memory. You can imagine how low database efficiency becomes.

4. Use "temporary table" to save intermediate results

An important way to simplify SQL statements is to use temporary tables to hold intermediate results. But the benefits of temporary tables go far beyond that. Once intermediate results are saved in a temporary table, subsequent queries run against tempdb, which avoids scanning the main table multiple times within the program. It also greatly reduces "update lock" blocking during program execution, cutting contention and improving concurrency.
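A minimal sketch of the idea (table and column names follow the examples used elsewhere in this article): stage the rows of interest once, then run the follow-up queries against the temporary table instead of the main table.

    -- stage the intermediate result once (the # table lives in tempdb)
    select orderid, contactid, changetime
    into #recent_orders
    from orderheader
    where changetime > '2010-10-20 00:00:01'

    -- all further work reads the temp table, not the main table
    select contactid, count(*) as order_count
    from #recent_orders
    group by contactid

    drop table #recent_orders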

5. SQL statements in an OLTP system must use bind variables

 
 
    select * from orderheader where changetime > '2010-10-20 00:00:01'
    select * from orderheader where changetime > '2010-09-22 00:00:01'

The query optimizer treats the preceding two statements as different SQL statements and parses each of them. With a bind variable instead:

 
 
    select * from orderheader where changetime > @chgtime

the @chgtime variable can take any value, so a large number of similar queries reuse the same execution plan, which greatly reduces the burden of SQL parsing on the database. Parse once, reuse many times: that is the principle for improving database efficiency.
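In SQL Server, this is what parameterized execution through sp_executesql provides. A minimal sketch (the date value is illustrative):

    declare @chgtime datetime
    set @chgtime = '2010-10-20 00:00:01'

    -- the plan is cached for the parameterized text and reused
    -- regardless of the value @chgtime carries on later calls
    exec sp_executesql
        N'select * from orderheader where changetime > @chgtime',
        N'@chgtime datetime',
        @chgtime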

6. Beware of bind variable peeking

Everything has two sides. Bind variables suit most OLTP processing, but there are exceptions, for example when a field in the WHERE condition is a "skewed field".

"Skewed field" indicates that the vast majority of values in this column are the same. For example, in a population survey table, more than 90% of the values in the "nationality" column are Han nationality. If an SQL statement is used to query the population of the 30-Year-Old Han nationality, the "nationality" column must be placed in the where condition. At this time, if you bind the variable @ nation, there will be a big problem.

Imagine that the first value passed in for @nation is "Han": the execution plan will inevitably choose a table scan. Then the second value passed in is "Buyi". Given that the Buyi proportion may be only one in a thousand, an index seek should be used; but because the plan parsed for the first value, "Han", is reused, the second query also runs as a table scan. This is the famous "bind variable peeking" problem (known in SQL Server as parameter sniffing). The recommendation is not to bind variables on "skewed fields".
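One hedged sketch of how to follow that advice, reusing the article's census example (the population table and column names are hypothetical). Either inline the skewed value so each nationality gets its own plan, or, on SQL Server 2005 and later, ask for re-optimization on every execution; the OPTION (RECOMPILE) hint is a standard alternative, not something this article itself prescribes.

    -- keep the bind variable for the selective column only,
    -- and inline the skewed nationality value
    select * from population where age = @age and nation = N'Buyi'

    -- or re-optimize for the actual parameter value each time (2005+)
    select * from population where age = @age and nation = @nation
    option (recompile)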

7. Use begin tran only when necessary

By default, each SQL statement in SQL Server is its own transaction and is committed automatically when the statement finishes. In effect this is a minimized form of begin tran, as if a begin tran were hidden at the start of every statement and a commit hidden at its end.

In some cases we need to declare begin tran explicitly, for example when an "insert, delete, or modify" operation must change several tables at once: either every table is modified successfully or the whole operation fails. begin tran serves exactly this role: it lets a group of SQL statements execute together and commit together at the end. The benefit is guaranteed data consistency, but nothing is perfect. The price of begin tran is that none of the resources locked by its SQL statements are released until the commit completes.

It follows that if a begin tran covers too many SQL statements, database performance suffers: until the big transaction commits, it blocks other statements, producing a lot of blocking.

The principle for using begin tran is: while still guaranteeing data consistency, the fewer SQL statements a begin tran covers, the better! In some cases a trigger can synchronize the data and begin tran can be avoided entirely.
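A minimal sketch of a tight transaction, using the illustrative orderheader table plus a hypothetical orderdetail table: only the statements that must succeed or fail together sit inside the transaction, and the commit comes immediately after.

    begin tran
        update orderheader set changetime = getdate()
        where orderid = 1001

        delete from orderdetail
        where orderid = 1001
    commit tran   -- locks taken by both statements are released only here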

8. Add nolock to some SQL query statements.

Adding nolock to SQL queries is an important means of improving SQL Server's concurrency. It is not needed in Oracle, because Oracle's architecture is more sensible: its undo tablespace holds a "data shadow", so if a modification has not yet been committed, readers see the pre-modification copy kept in the undo tablespace. Reads and writes in Oracle therefore do not affect each other, which is widely praised. In SQL Server, reads and writes block each other; to improve concurrency, nolock can be added to some queries so that reads allow concurrent writes, at the cost of possibly reading uncommitted dirty data. There are three principles for using nolock; a sketch follows them.

(1) If the query result will be used for "insert, delete, or modify" operations, nolock must not be added!

(2) If the queried table is prone to frequent page splits, use nolock with caution!

(3) A temporary table can likewise hold a "data shadow", serving much the same function as Oracle's undo tablespace. When a temporary table can be used to improve concurrency, prefer it; do not use nolock.
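A minimal sketch of a legitimate nolock use on the illustrative orderheader table: a read-only query whose result feeds no insert, delete, or modify, per principle (1).

    -- read uncommitted: does not block writers, but may return dirty
    -- rows, and on page splits repeated or skipped rows (see rule 10)
    select orderid, changetime
    from orderheader with (nolock)
    where changetime > '2010-10-20 00:00:01'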

9. If the clustered index is not built on the table's sequential field, the table is prone to page splits

For example, an order table has an order number orderid and a customer number contactid. Which field should get the clustered index? For this table, order numbers are added sequentially; if the clustered index is on orderid, new rows are all appended at the end, and page splits are unlikely. However, since most queries are by customer number, it would seem to make sense to put the clustered index on contactid. But contactid is not a sequential field for the order table.

For example, if "Michael"'s "contactid" is 001, the order information of "Michael" must be placed on the first data page of this table, if you place a new order in "Michael" today, the order information cannot be placed on the last page of the table, but on the first page! What if the first page is full? Sorry, all the data in this table must be moved back to the place where this record is located.

SQL Server indexes differ from Oracle's: a SQL Server clustered index physically sorts the table by the clustered index key, which is equivalent to Oracle's index-organized table. The clustered index is thus a form of organization for the table itself, which is why it is so efficient. Also for this reason, a record is not inserted wherever there happens to be room; it must be placed in key order on the proper data page, and if that page has no free space, a page split occurs. So the conclusion is clear: if the clustered index is not built on the table's sequential field, the table is prone to page splits.
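A sketch of the resulting trade-off on the article's order table (index names are illustrative, and it assumes no conflicting clustered primary key already exists): cluster on the sequential orderid so inserts append at the end, and give customer lookups an ordinary nonclustered index.

    -- new orders append at the end of the table: few page splits
    create clustered index ix_orderheader_orderid
        on orderheader (orderid)

    -- queries by customer number still get an index seek
    create nonclustered index ix_orderheader_contactid
        on orderheader (contactid)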

I once ran into a case where a colleague's insert performance dropped sharply after he rebuilt a table's indexes. It was probably this scenario: the table's clustered index was not on a sequential field, and the table was archived frequently, so its data sat in a sparse state. For example, Michael had placed 20 orders but only 5 in the last three months; with an archiving policy of keeping three months of data, Michael's 15 older orders had been archived, leaving 15 gaps that could be reused as inserts occurred. In that state no page splits happen, because free space is available; but query performance is relatively low, because queries also have to scan the empty space that holds no data.

After the clustered index was rebuilt, the situation changed: rebuilding a clustered index re-sorts the data in the table, so the old gaps were gone and the page fill rate was very high. Inserts then constantly caused page splits, and performance dropped sharply.

When the clustered index is not built on a sequential field, should you accept a relatively low page fill rate? Should you avoid rebuilding the clustered index? Both are questions worth considering!
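On SQL Server 2005 and later, the first option can be sketched as follows (the index name and the 70 percent figure are illustrative choices, not the article's prescription): rebuild with a lower fill factor so every page keeps free space to absorb out-of-order inserts.

    -- leave roughly 30% free space per page to reduce page splits
    alter index ix_orderheader_contactid on orderheader
        rebuild with (fillfactor = 70)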

10. With nolock, querying a table that splits pages frequently easily produces skipped or duplicate reads

With nolock, queries can run while "insert, delete, and modify" operations are in progress. But because those operations happen concurrently, once a data page is full, a page split is unavoidable, and a nolock query may be reading at that very moment. For example, a record already read on page 100 may be split onto page 101; when the nolock query reaches page 101 it reads that record again, producing a "duplicate read". Likewise, a not-yet-read record on page 100 may be split back onto page 99, so the nolock query misses it entirely, producing a "skipped read".

As mentioned above, some operations will raise errors once nolock is added. Most likely a nolock query produced a duplicate read, two identical records were inserted into another table, and naturally a primary-key conflict followed.

11. Pay attention when using like for fuzzy search

Sometimes you need to run a fuzzy query, such as:

 
 
    select * from contact where username like '%yue%'

Because the keyword '%yue%' has a % in front of "yue", the query has to do a full table scan. Unless it is truly necessary, do not put % in front of the keyword.
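A sketch of the contrast on the article's contact table: a trailing-only wildcard can seek on an index over username, while a leading wildcard cannot.

    -- can use an index seek on username
    select * from contact where username like 'yue%'

    -- leading % defeats the index: full table scan
    select * from contact where username like '%yue%'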

12. Impact of implicit conversion of data types on query efficiency

In a SQL Server 2000 database, if the program does not submit a field's value with a strongly typed parameter, SQL Server converts the data type automatically; when the input parameter's type does not match the primary-key field's type, SQL Server 2000 may fall back to a full table scan. The problem has not been observed on SQL Server 2005, but it still deserves attention.
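A minimal sketch of the trap, assuming (consistent with the contactid = 001 example in rule 9) that contactid is an indexed character column: comparing it to an integer forces the column itself to be converted, which is exactly the mismatch that can push SQL Server 2000 into a full table scan.

    -- int literal: the character column is implicitly converted,
    -- so the index on contactid may be unusable (table scan)
    select * from orderheader where contactid = 1

    -- matching type: the index on contactid is usable as intended
    select * from orderheader where contactid = '001'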

13. SQL Server table join methods

(1) Merge Join

(2) Nested Loop Join

(3) Hash Join

SQL Server 2000 has only one join method, Nested Loop Join. If result set A is the smaller one, it is used as the outer table by default; every record in A is then looked up in B, so the number of rows actually scanned is on the order of rows(A) x rows(B). If both result sets are large, the join result is terrible.

SQL Server 2005 adds Merge Join. If the join fields of tables A and B happen to be the fields their clustered indexes are on, the rows are already in order, and the two sides only need to be merged together; the cost of this join is on the order of rows(A) + rows(B). One is addition, the other is multiplication: Merge Join performs far better than Nested Loop Join.

If the joined fields have no usable index, SQL Server 2000's efficiency is quite low, whereas SQL Server 2005 provides Hash Join, which in effect builds a temporary index on the result sets of tables A and B. For this reason SQL Server 2005 is far more efficient than SQL Server 2000; I think this is one important reason.

To sum up, pay attention to the following points when joining tables (a sketch follows the list):

(1) Choose join fields that the clustered indexes are on whenever possible.

(2) Carefully consider the WHERE conditions to minimize the result sets of tables A and B.

(3) If many join fields are missing indexes and you are still using SQL Server 2000, upgrade quickly.
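A minimal sketch of points (1) and (2), using the illustrative orderheader table and a hypothetical orderdetail table: join on the clustered-key field and filter early so both inputs stay small.

    select h.orderid, d.productid
    from orderheader h
    join orderdetail d
        on d.orderid = h.orderid                 -- clustered-key join field
    where h.changetime > '2010-10-20 00:00:01'   -- shrinks input A before the join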
