SQL Query Optimization Method

Source: Internet
Author: User

Everyone is discussing database optimization these days, and I have just taken part in developing a data warehouse project. Below are some notes on database optimization plus a bit of practical experience, shared with everyone. Criticism and corrections are welcome!
SQL statements:
are the only way to operate on the database's data;
consume 70%-90% of database resources;
are independent of program design logic, so compared with optimizing program source code, optimizing SQL statements carries a low cost in time and risk;
can be written in many different ways: easy to learn, difficult to master.
SQL optimization:
Develop fixed SQL writing habits. Identical queries should be written identically wherever possible (the database caches parsed statements by their exact text, so textually identical statements can reuse a cached execution plan), and stored procedures are more efficient still.
Statements should be written in a consistent format, including capitalization, punctuation marks, and line breaks.
The Oracle optimizer:
It evaluates expressions whenever possible and converts specific syntactic constructs into equivalent ones, when either:
the result expression can be evaluated faster than the source expression, or
the source expression is merely a semantic equivalent of the result expression.
Different SQL constructs sometimes perform the same operation (for example, = any (subquery) and in (subquery)); Oracle maps them onto a single semantic structure.
1 Constant optimization:
Constant calculation is completed once, when the statement is optimized, rather than at each execution. The following are three expressions for retrieving a monthly salary greater than 2000:
SAL > 24000/12
SAL > 2000
SAL * 12 > 24000
If a SQL statement contains the first form, the optimizer simply converts it into the second.
The optimizer does not, however, simplify expressions that cross the comparison operator, as in the third form. Therefore, write expressions that compare a bare column with a constant rather than embedding the column inside an expression; otherwise there is no way to optimize them. For example, if there is an index on SAL, the first and second forms can use it, while the third cannot.
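The effect described above can be sketched with SQLite through Python's sqlite3 (an assumption for illustration only; the article's subject is Oracle, but the principle is the same). `EXPLAIN QUERY PLAN` reports SEARCH when an index can be seeked and SCAN when the whole table must be read; the table and index names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (ename TEXT, sal INTEGER)")
conn.execute("CREATE INDEX idx_sal ON emp (sal)")

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the plan description in column 3
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Constant expression on the right: folded to a constant, index usable.
print(plan("SELECT * FROM emp WHERE sal > 24000/12"))
# Column buried inside an expression: the index cannot be used.
print(plan("SELECT * FROM emp WHERE sal * 12 > 24000"))
```

The first plan reports a SEARCH using the index, the second a full SCAN, matching the article's advice to keep the column bare on one side of the comparison.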
2 Operator optimization:
The optimizer converts a search expression consisting of the like operator and a pattern without wildcards into an "=" expression.
For example, the optimizer converts the expression ename like 'Smith' into ename = 'Smith'.
The optimizer can only convert expressions involving variable-length data types. In the example above, if the ename field were of type char(10), the optimizer would not perform the conversion.
In general, like is difficult to optimize.
Operators in the WHERE clause:
~~ In operator optimization:
The optimizer replaces a search expression using the in comparison operator with an equivalent expression using "=" and "or".
For example, the optimizer replaces the expression ename in ('Smith', 'King', 'Jones') with:
ename = 'Smith' or ename = 'King' or ename = 'Jones'
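A quick check of this equivalence (a sketch in SQLite via Python's sqlite3; the table and names are taken from the example above) shows both forms return the same rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (ename TEXT)")
conn.executemany("INSERT INTO emp VALUES (?)",
                 [("Smith",), ("King",), ("Jones",), ("Adams",)])

# IN form versus the expanded OR form the optimizer rewrites it into
in_rows = conn.execute(
    "SELECT ename FROM emp WHERE ename IN ('Smith','King','Jones')").fetchall()
or_rows = conn.execute(
    "SELECT ename FROM emp WHERE ename='Smith' OR ename='King' OR ename='Jones'"
).fetchall()
print(in_rows == or_rows)  # True: the two forms are equivalent
```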
~~ Any and some operator optimization:
The optimizer replaces any and some search conditions that follow a value list with an equivalent expression built from comparison operators and "or".
For example, the optimizer replaces the first statement below with the second:
SAL > any (:first_sal, :second_sal)
SAL > :first_sal or SAL > :second_sal
The optimizer converts any and some search conditions that follow a subquery into a search expression consisting of "exists" and a corresponding subquery.
For example, the optimizer replaces the first statement below with the second:
x > any (select sal from emp where job = 'analyst')
exists (select sal from emp where job = 'analyst' and x > sal)
~~ All operator optimization:
The optimizer replaces the all operator following a value list with an equivalent expression built from the same comparison operator and "and". For example:
SAL > all (:first_sal, :second_sal) is replaced with:
SAL > :first_sal and SAL > :second_sal
For an all expression that follows a subquery, the optimizer replaces it with an expression consisting of "not", "any", and the complementary comparison operator. For example:
x > all (select sal from emp where deptno = 10) becomes:
not (x <= any (select sal from emp where deptno = 10))
Next, the optimizer applies the any conversion rule described above to the second expression, yielding:
not exists (select sal from emp where deptno = 10 and x <= sal)
~~ Between operator optimization:
The optimizer always replaces the between operator with the comparison operators ">=" and "<=".
For example, the optimizer replaces the expression sal between 2000 and 3000 with sal >= 2000 and sal <= 3000.
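This rewrite, including the fact that between is inclusive of both endpoints, can be verified with a small SQLite sketch (hypothetical data, via Python's sqlite3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (sal INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?)",
                 [(1999,), (2000,), (2500,), (3000,), (3001,)])

# BETWEEN versus the explicit ">= AND <=" form
between = conn.execute(
    "SELECT sal FROM emp WHERE sal BETWEEN 2000 AND 3000").fetchall()
explicit = conn.execute(
    "SELECT sal FROM emp WHERE sal >= 2000 AND sal <= 3000").fetchall()
print(between == explicit)        # True
print([s for (s,) in between])    # [2000, 2500, 3000]: endpoints included
```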
~~ Not operator optimization:
The optimizer always tries to simplify search conditions to eliminate the "not" logical operator, which involves removing "not" and generating the corresponding comparison operator.
For example, the optimizer replaces the first statement below with the second:
not deptno = (select deptno from emp where ename = 'taylor')
deptno <> (select deptno from emp where ename = 'taylor')
Often there are many different ways to write a statement containing the not operator. The optimizer's conversion principle is to make the clause following "not" as simple as possible, even if the resulting expression may contain more "not" operators.
For example, the optimizer replaces the first statement below with the second:
not (sal < 1000 or comm is null)
sal >= 1000 and comm is not null
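The two forms agree even under three-valued logic with NULLs, which a short SQLite sketch (hypothetical rows, via Python's sqlite3) can confirm:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (sal INTEGER, comm INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(500, 10), (1500, None), (1500, 10), (None, 10), (1000, 0)])

# NOT (sal < 1000 OR comm IS NULL) versus the simplified form
negated = conn.execute(
    "SELECT sal, comm FROM emp WHERE NOT (sal < 1000 OR comm IS NULL)"
).fetchall()
simplified = conn.execute(
    "SELECT sal, comm FROM emp WHERE sal >= 1000 AND comm IS NOT NULL"
).fetchall()
print(negated == simplified)  # True: rows with NULL sal are excluded by both
```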
How to write efficient SQL statements:
Beyond the constant and operator optimizations above, you also need:
1. Reasonable index design:
For example, take a table record with 620,000 rows, and examine how the following SQL statements run under different indexes:
Statement A
select count(*) from record
where date > '19991201' and date < '19991214' and amount > 2000
Statement B
select count(*) from record
where date > '19990901' and place in ('BJ', 'SH')
Statement C
select date, sum(amount) from record
group by date
1. With a non-clustered index on date:
A: (25 seconds)
B: (27 seconds)
C: (55 seconds)
Analysis:
There are many duplicate values on date. Under a non-clustered index, data is physically stored on the data pages at random, so a range search must perform a table scan to find all rows in the range.
2. With a clustered index on date:
A: (14 seconds)
B: (14 seconds)
C: (28 seconds)
Analysis:
Under the clustered index, data is physically stored on the data pages in order and duplicate values are grouped together, so a range search can first locate the start and end points of the range and scan only the data pages within it. This avoids a large-scale scan and improves query speed.
3. With a composite index on (place, date, amount):
A: (26 seconds)
B: (<1 second)
C: (27 seconds)
Analysis:
This is an unreasonable composite index: its leading column is place, and statements A and C do not reference place, so they cannot use the index; statement B does reference place, and all of its referenced columns are contained in the composite index, which forms index coverage, so it is very fast.
4. With a composite index on (date, place, amount):
A: (<1 second)
B: (<1 second)
C: (11 seconds)
Analysis:
This is a reasonable composite index. It uses date as the leading column, so every statement can use the index, and statements A and C additionally achieve index coverage, so performance is optimal.
Conclusion 1
By default an index is non-clustered, but that is not always best. Reasonable index design should be based on analysis and prediction of the actual queries. Generally speaking:
Columns with many duplicate values that are frequently range-searched (between, >, <, >=, <=) or used in order by and group by are candidates for a clustered index;
For multiple columns that are frequently accessed together, each containing duplicate values, consider creating a composite index;
Create indexes on columns with many distinct values that are frequently used in search conditions; do not create indexes on columns with few distinct values. For example, the "gender" column of an employee table has only two distinct values, "male" and "female", so there is no need to index it: an index would not improve query efficiency but would greatly reduce update speed.
Composite indexes should try to cover the key queries, and the leading column must be the most frequently used column.
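The leading-column rule can be seen directly in a query plan. Below is a sketch in SQLite via Python's sqlite3 (index mechanics are analogous; table, index names, and the date literal are taken from the examples above and are illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE record (date TEXT, place TEXT, amount INTEGER)")

def plan(sql):
    # EXPLAIN QUERY PLAN reports SEARCH (index seek) or SCAN (full scan)
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Leading column place: a query filtering only on date cannot seek the index.
conn.execute("CREATE INDEX idx ON record (place, date, amount)")
plan_place_first = plan("SELECT count(*) FROM record WHERE date > '19991201'")
conn.execute("DROP INDEX idx")

# Leading column date: the same query can now seek (SEARCH) the index.
conn.execute("CREATE INDEX idx ON record (date, place, amount)")
plan_date_first = plan("SELECT count(*) FROM record WHERE date > '19991201'")

print("SEARCH" in plan_place_first)  # False: index not entered via date
print("SEARCH" in plan_date_first)   # True: leading column matches the filter
```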
2. Avoid incompatible data types:
For example, float and int, char and varchar, binary and varbinary are incompatible. Data type incompatibility may prevent the optimizer from performing optimizations it could otherwise carry out. For example:
select name from employee where salary > 60000
If the salary field is of the money type, it is difficult for the optimizer to optimize this statement, because 60000 is an integer. We should convert the integer to the money type when writing the program, rather than leaving the conversion to run time.
3. is null and is not null:
NULL values cannot be served from an index. In Oracle, a B-tree index contains no entry for a row whose indexed columns are all NULL, so a query looking for those rows cannot be answered from the index. That is, if a column contains NULL values, queries testing for them will not be improved even if the column is indexed: a statement that uses is null (and generally is not null) in the WHERE clause cannot use the index on that column.
4. in and exists:
exists is often far more efficient than in; the difference comes down to a full table scan versus a range scan. Almost every subquery written with the in operator can be rewritten as a subquery using exists.
Example:
Statement 1
select dname, deptno from dept
where deptno not in
(select deptno from emp);
Statement 2
select dname, deptno from dept
where not exists
(select deptno from emp
where dept.deptno = emp.deptno);
Obviously, statement 2 performs much better than statement 1. Statement 1 performs a full table scan on emp, which wastes time, and it cannot use emp's index because its subquery has no WHERE clause; the subquery in statement 2 performs a range scan on emp.
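The two statements can be compared on sample data with a SQLite sketch via Python's sqlite3 (hypothetical rows; note, as an added caution, that if emp.deptno could contain NULL, the not in form would return no rows at all, while not exists would still behave as expected):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dept (deptno INTEGER, dname TEXT)")
conn.execute("CREATE TABLE emp (deptno INTEGER)")
conn.executemany("INSERT INTO dept VALUES (?, ?)",
                 [(10, "ACCOUNTING"), (20, "RESEARCH"), (30, "SALES")])
conn.executemany("INSERT INTO emp VALUES (?)", [(10,), (20,)])

# Statement 1: NOT IN subquery
not_in = conn.execute(
    "SELECT dname, deptno FROM dept WHERE deptno NOT IN "
    "(SELECT deptno FROM emp)").fetchall()
# Statement 2: correlated NOT EXISTS subquery
not_exists = conn.execute(
    "SELECT dname, deptno FROM dept WHERE NOT EXISTS "
    "(SELECT 1 FROM emp WHERE dept.deptno = emp.deptno)").fetchall()

print(not_in)                # [('SALES', 30)]
print(not_in == not_exists)  # True: same departments either way
```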
5. in and or clauses often use worktables, invalidating indexes:
If they do not produce a large number of duplicate values, consider splitting the clause into separate statements; the split clauses should contain indexable conditions.
6. Avoid or simplify sorting:
Repeated sorting of large tables should be simplified or avoided. When an index can be used to produce output in the required order automatically, the optimizer skips the sorting step. The optimizer cannot avoid the sort when:
the index does not include one or more of the columns to be sorted;
the order of columns in the group by or order by clause differs from that of the index;
the sorted columns come from different tables.
To avoid unnecessary sorting, we need to add indexes correctly and merge database tables sensibly (although this may sometimes affect table normalization, the efficiency gain is worth it). If sorting is unavoidable, try to simplify it, for example by narrowing the range of sorted columns.
7. Eliminate sequential access to large-table rows:
In nested queries, sequential access to a table can be fatal to query efficiency. For example, a sequential-access strategy over a query nested three levels deep, with 1000 rows scanned at each level, scans one billion rows in total. The primary way to avoid this is to index the joined columns. For example, given a student table (student ID, name, age, ...) and a course-selection table (student ID, course number, score), to join the two tables you should create an index on the join field, "student ID".
Union can also be used to avoid sequential access. Although indexes may exist on all the referenced columns, some forms of the WHERE clause still force the optimizer into sequential access. The following query forces a sequential scan of the orders table:
Select * from orders where (customer_num = 104 and order_num > 1001) or order_num = 1008
Even though indexes exist on customer_num and order_num, the optimizer still scans the whole table by a sequential access path for the statement above. Because this statement retrieves a disjoint set of rows, it should be changed to the following:
Select * from orders where customer_num = 104 and order_num> 1001
Union
Select * from orders where order_num = 1008
In this way, you can use the index path to process queries.
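That the UNION rewrite returns the same rows as the OR form can be checked with a SQLite sketch via Python's sqlite3 (hypothetical orders data; UNION also removes any duplicates across the two branches):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_num INTEGER, order_num INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(104, 1001), (104, 1002), (104, 1008),
                  (101, 1008), (101, 1003)])

# Original OR form
with_or = conn.execute(
    "SELECT * FROM orders WHERE (customer_num = 104 AND order_num > 1001) "
    "OR order_num = 1008 ORDER BY customer_num, order_num").fetchall()
# UNION rewrite: one indexable condition per branch
with_union = conn.execute(
    "SELECT * FROM orders WHERE customer_num = 104 AND order_num > 1001 "
    "UNION SELECT * FROM orders WHERE order_num = 1008 "
    "ORDER BY customer_num, order_num").fetchall()

print(with_or == with_union)  # True: same result set
```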
8. Avoid subqueries:
If a column appears both in the outer query and in the WHERE clause of the subquery, then whenever the column's value changes in the outer query, the subquery must be executed again. The more levels of nesting, the lower the efficiency. Therefore, avoid subqueries as much as possible. If a subquery is unavoidable, filter out as many rows as possible inside it.
9. Avoid difficult regular expressions:
The matches and like keywords support wildcard matching, technically called regular expressions, but this kind of matching is especially time-consuming. Example: select * from customer where zipcode like "98___"
Even if an index is created on the zipcode field, sequential scanning is used in this case. If the statement is changed to select * from customer where zipcode > "98000", the query will be executed using the index, which obviously increases the speed.
In addition, avoid non-leading substrings. For example, select * from customer where zipcode[2,3] > "80" uses a non-leading substring in the WHERE clause, so this statement will not use the index.
10. Incomplete join conditions:
For example, table card has 7,896 rows with a non-clustered index on card_no, and table account has 191,122 rows with a non-clustered index on account_no. Consider the execution of two SQL statements under different join conditions:
select sum(a.amount) from account a, card b where a.card_no = b.card_no
(20 seconds)
Changed SQL:
select sum(a.amount) from account a, card b where a.card_no = b.card_no and a.account_no = b.account_no
(<1 second)
Analysis:
Under the first join condition, the best query plan uses account as the outer table and card as the inner table, using the index on card. The number of I/Os can be estimated as:
22,541 pages of the outer table account + (191,122 rows of the outer table account * 3 pages looked up on the inner table card for each outer row) = 595,907 I/Os
Under the second join condition, the best query plan uses card as the outer table and account as the inner table, using the index on account. The number of I/Os can be estimated as:
1,944 pages of the outer table card + (7,896 rows of the outer table card * 4 pages looked up on the inner table account for each outer row) = 33,528 I/Os
As you can see, only with complete join conditions will the truly best plan be executed.
Before executing a multi-table operation, the query optimizer lists several possible join plans based on the join conditions and chooses the one with minimum system overhead. The join conditions should fully involve the tables with indexes and the tables with many rows; the choice of inner and outer table can be determined by the formula: number of matching rows in the outer table * number of lookups into the inner table per outer row; the minimum product gives the best plan.
Unoptimizable WHERE clauses:
Example 1
The columns in the following SQL statements all have appropriate indexes, but the execution speed is very slow:
select * from record where substring(card_no, 1, 4) = '5378'
(13 seconds)
select * from record where amount / 30 < 1000
(11 seconds)
select * from record where convert(char(10), date, 112) = '19991201'
(10 seconds)
Analysis:
Any operation applied to a column in the WHERE clause must be computed row by row at execution time, forcing a table scan instead of using the index on that column. If the expression can instead be evaluated when the query is compiled, the SQL optimizer can use the index and avoid the table scan. Therefore, rewrite the SQL statements as follows:
select * from record where card_no like '5378%'
(<1 second)
select * from record where amount < 1000 * 30
(<1 second)
select * from record where date = '1999/12/01'
(<1 second)
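The arithmetic rewrite above can be demonstrated with a SQLite sketch via Python's sqlite3 (hypothetical record table; `EXPLAIN QUERY PLAN` shows SEARCH when the index is used and SCAN otherwise):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE record (card_no TEXT, amount INTEGER)")
conn.execute("CREATE INDEX idx_amount ON record (amount)")

def plan(sql):
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Arithmetic on the column side: index unusable, full SCAN.
print(plan("SELECT * FROM record WHERE amount / 30 < 1000"))
# Arithmetic moved to the constant side: index SEARCH.
print(plan("SELECT * FROM record WHERE amount < 1000 * 30"))
```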
11. Use temporary tables to optimize queries:
Example:
1. Read data from the parven table in vendor_num order:
select part_num, vendor_num, price from parven order by vendor_num
into temp pv_by_vn
This statement reads parven sequentially (50 pages), writes a temporary table (50 pages), and sorts it. Assuming the sorting overhead is 200 pages, the total is 300 pages.
2. Join the temporary table with the vendor table, output the result to another temporary table, and sort by part_num:
select pv_by_vn.*, vendor.vendor_num from pv_by_vn, vendor
where pv_by_vn.vendor_num = vendor.vendor_num
order by pv_by_vn.part_num
into temp pvvn_by_pn
drop table pv_by_vn
This query reads pv_by_vn (50 pages) and accesses the vendor table 15,000 times through its index; but because the rows arrive in vendor_num order, the vendor table is actually read in index order (40 + 2 = 42 pages). The output table holds about 95 rows per page, 160 pages in total; writing and accessing these pages triggers 5 * 160 = 800 reads and writes, so this step reads and writes 892 pages in all.
3. Join the output with the part table to get the final result:
select pvvn_by_pn.*, part.part_desc from pvvn_by_pn, part
where pvvn_by_pn.part_num = part.part_num
drop table pvvn_by_pn
This query reads pvvn_by_pn sequentially (160 pages) and reads the part table 15,000 times through its index. Thanks to the indexes, only 1,772 disk reads and writes are actually performed, a large saving over the unoptimized approach.
That's all. In fact, these SQL optimization principles carry over between the various databases.
