In the "field plus & Appointment queuing" project, "Dealer Troubleshooting" is not a problem when testing online, but after the line, due to the large amount of data on the line, resulting in the execution of the query system crashes; Later, after searching, found that the SQL is not reasonable, detected a lot of data, modified SQL , so that the "dealer troubleshooting Tasks" can be used normally. From this, we find some ways to improve the efficiency of SQL query, I hope we can help.
1. Proper use of indexes
An index is an important data structure in a database, and its fundamental purpose is to improve query efficiency. Most database products today use the ISAM index structure first proposed by IBM. Indexes should be used judiciously, following these principles:
Create indexes on columns that are frequently used in joins but are not specified as foreign keys; columns that are rarely joined are left to the optimizer, which generates indexes for them automatically.
Create indexes on columns that are frequently sorted or grouped (that is, used in GROUP BY or ORDER BY operations).
Create indexes on columns that are often used in conditional expressions and have many distinct values; do not index columns with few distinct values. For example, in an employee table the "Gender" column has only the two values "male" and "female", so there is no point indexing it: the index would not improve query efficiency and could seriously slow down updates.
If multiple columns are to be sorted together, a composite (compound) index can be created on those columns, as sketched after this list.
Use system tools. For example, the Informix database provides a tbcheck tool that can check suspicious indexes. On some database servers an index can become corrupted or slow to read because of frequent operations; if a query that uses an index gradually slows down, try the tbcheck tool to verify the integrity of the index and repair it if necessary. In addition, when a table receives a large volume of updates, dropping and rebuilding its indexes can improve query speed.
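For example, applying these principles to the Orders and rcvbles tables used elsewhere in this article (the index names are made up purely for illustration), the statements might look like this:
- CREATE INDEX ix_orders_customer ON Orders (customer_num)      -- column frequently used in joins
- CREATE INDEX ix_rcvbles_bal_cust ON rcvbles (balance, customer_id)      -- composite index for columns sorted together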
For more on indexes, see:
Http://database.51cto.com/art/201107/275006_all.htm
2. Avoid or simplify sorting
You should simplify or avoid repeatedly sorting large tables. The optimizer skips the sorting step when it can use an index to produce the output in the required order automatically. The following factors prevent this:
One or more of the columns to be sorted are not included in the index;
The order of the columns in the GROUP BY or ORDER BY clause differs from the order of the columns in the index;
The sorted columns come from different tables.
To avoid unnecessary sorting, build the right indexes and merge database tables sensibly (this may sometimes hurt table normalization, but the efficiency gain is worth it). If sorting is unavoidable, try to simplify it, for example by narrowing the range of columns being sorted.
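As a rough sketch (the index is an assumption, not from the original text): if a query always ends with ORDER BY postcode, name, a composite index whose column order matches lets the optimizer return rows already sorted and skip the sort step entirely:
- CREATE INDEX ix_cust_post_name ON cust (postcode, name)
- SELECT name, postcode FROM cust WHERE postcode > "98000" ORDER BY postcode, name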
3. Eliminate sequential access to large table row data
In nested queries, sequential access to a table can be fatal for query efficiency. For example, with a sequential-access strategy and a query nested three levels deep, if each level scans 1,000 rows the query examines one billion rows in total. The main way to avoid this is to index the join columns. For example, given two tables, student (student ID, name, age, ...) and course selection (student ID, course ID, grade), if the two tables are to be joined, an index should be created on the join column, the student ID.
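A minimal sketch of the index described above, assuming the tables are named student and course_selection and share a student_id column (these names are illustrative only):
- CREATE INDEX ix_course_student ON course_selection (student_id)
- SELECT student.name, course_selection.course_id, course_selection.grade
- FROM student, course_selection
- WHERE student.student_id = course_selection.student_id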
You can also use a UNION to avoid sequential access. Even when indexes exist on all the columns being checked, some forms of WHERE clause force the optimizer into sequential access. The following query forces a sequential scan of the Orders table:
- SELECT * FROM Orders WHERE (customer_num=104 and order_num>1001) OR order_num=1008
Although indexes exist on customer_num and order_num, the optimizer uses a sequential access path and scans the entire table for the statement above. Because the statement retrieves a disjoint set of rows, it should be rewritten as follows:
- SELECT * FROM Orders WHERE customer_num=104 and order_num>1001
- UNION
- SELECT * FROM Orders WHERE order_num=1008
This enables the query to be processed using the index path.
4. Avoid correlated subqueries
When a column appears both in the outer query and in the WHERE clause of a subquery, the subquery will most likely have to be re-evaluated every time the column's value changes in the outer query. The more deeply queries are nested, the lower the efficiency, so subqueries should be avoided as much as possible. If a subquery is unavoidable, filter out as many rows as possible inside it.
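As an illustration only (reusing the cust and rcvbles tables from the temporary-table example later in this article), a correlated subquery can often be rewritten as a join so that it is evaluated once as a set operation instead of once per outer row:
- -- correlated form: the subquery is re-evaluated for every row of cust
- SELECT name FROM cust
- WHERE EXISTS (SELECT 1 FROM rcvbles WHERE rcvbles.customer_id = cust.customer_id AND rcvbles.balance > 0)
- -- join form: processed as a single set operation
- SELECT DISTINCT cust.name FROM cust, rcvbles
- WHERE rcvbles.customer_id = cust.customer_id AND rcvbles.balance > 0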
5. Avoid difficult regular expressions
The MATCHES and LIKE keywords support wildcard matching, technically known as regular expressions. But this kind of matching is particularly time-consuming. For example:
- SELECT * FROM customer WHERE zipcode LIKE "98___"
Even if an index has been created on the zipcode field, a sequential scan is still used in this case. If the statement is changed to SELECT * FROM customer WHERE zipcode > "98000", the index is used to execute the query, which obviously speeds it up considerably.
Also avoid non-leading substrings. For example, the statement SELECT * FROM customer WHERE zipcode[2,3] > "80" uses a non-leading substring in the WHERE clause, so it cannot use the index.
6. Accelerating queries with temporary tables
Sorting a subset of a table into a temporary table can sometimes speed up queries. It helps avoid repeated sorting operations and also simplifies the optimizer's work in other ways. For example:
- SELECT cust.name, rcvbles.balance, ......other columns
- FROM cust, rcvbles
- WHERE cust.customer_id = rcvbles.customer_id
- AND rcvbles.balance > 0
- AND cust.postcode > "98000"
- ORDER BY cust.name
If this query is to be executed more than once, all customers with outstanding balances can instead be found once, written to a temporary table, and sorted by customer name:
- SELECT cust.name, rcvbles.balance, ......other columns
- FROM cust, rcvbles
- WHERE cust.customer_id = rcvbles.customer_id
- AND rcvbles.balance > 0
- ORDER BY cust.name
- INTO TEMP cust_with_balance
Then query the temporary table as follows:
- SELECT * FROM cust_with_balance
- WHERE postcode > "98000"
The temporary table has fewer rows than the main table, and its physical order matches the required order, so disk I/O is reduced and the query workload can be cut substantially.
Note: once the temporary table has been created, changes to the main table are no longer reflected in it. When data in the main table is modified frequently, be careful not to lose updates.
7. Replacing non-sequential access with sorting
Non-sequential disk access is the slowest operation, showing up as back-and-forth movement of the disk arm. SQL hides this from the programmer, making it easy to write queries that require a large number of non-sequential page accesses. In some cases, using the database's sorting ability instead of non-sequential access can improve the query.
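One way to apply this idea, sketched here under the assumption that the cust table is stored roughly in customer_id order (this example is not from the original text): collect the keys of interest into a temporary table, sort them, and then join, so the large table is read in key order rather than probed at random:
- SELECT customer_id FROM rcvbles WHERE balance > 0
- ORDER BY customer_id
- INTO TEMP paying_customers
- SELECT cust.* FROM cust, paying_customers
- WHERE cust.customer_id = paying_customers.customer_id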