The figure above shows the execution process of an SQL command. To execute each statement efficiently, we should try to complete as much of the work as possible at the link layer. For example:
```sql
SELECT TOP 10
       P.Name, P.Color, PSC.Name AS SubcategoryName, PC.Name AS CategoryName,
       D.DocumentSummary, PP.LargePhoto, SUM(OD.LineTotal) AS LineTotal
FROM Production.Product P
JOIN Production.ProductSubcategory PSC ON P.ProductSubcategoryID = PSC.ProductSubcategoryID
JOIN Production.ProductCategory PC ON PSC.ProductCategoryID = PC.ProductCategoryID
JOIN Production.ProductProductPhoto PPP ON P.ProductID = PPP.ProductID
JOIN Production.ProductPhoto PP ON PP.ProductPhotoID = PPP.ProductPhotoID
JOIN Sales.SalesOrderDetail OD ON OD.ProductID = P.ProductID
LEFT JOIN Production.ProductDocument PD ON PD.ProductID = P.ProductID
LEFT JOIN Production.Document D ON D.DocumentID = PD.DocumentID
GROUP BY P.Name, P.Color, PSC.Name, PC.Name, D.DocumentSummary, PP.LargePhoto
ORDER BY LineTotal DESC
```
This statement violates that objective. It joins all the tables, sorts the entire result set by LineTotal, and only then takes the first 10 rows; although only 10 rows are returned, the set being sorted is the whole query result, so the sort consumes a large amount of memory. We should reduce the size of the result set being sorted. The following query first finds the top 10 products in a subquery and only then joins the other tables, shrinking the memory needed by the sort; the ratio of query overhead between the two versions is 99:1. When writing paging logic, then, we should first filter out the key values of the join fields based on the sort field, and only afterwards join the related tables.
```sql
SELECT P.Name, P.Color, PSC.Name AS SubcategoryName, PC.Name AS CategoryName,
       D.DocumentSummary, PP.LargePhoto, OD.LineTotal
FROM Production.Product P
JOIN Production.ProductSubcategory PSC ON P.ProductSubcategoryID = PSC.ProductSubcategoryID
JOIN Production.ProductCategory PC ON PSC.ProductCategoryID = PC.ProductCategoryID
JOIN Production.ProductProductPhoto PPP ON P.ProductID = PPP.ProductID
JOIN Production.ProductPhoto PP ON PP.ProductPhotoID = PPP.ProductPhotoID
JOIN (
    SELECT TOP 10 ProductID, SUM(LineTotal) AS LineTotal
    FROM Sales.SalesOrderDetail
    GROUP BY ProductID
    ORDER BY LineTotal DESC
) OD ON OD.ProductID = P.ProductID
LEFT JOIN Production.ProductDocument PD ON PD.ProductID = P.ProductID
LEFT JOIN Production.Document D ON D.DocumentID = PD.DocumentID
```
In addition, paging usually needs the total number of records. One common approach runs two queries: the first returns the record count, and the second, using the same filtering conditions, returns the rows of the requested page. If the filtering conditions are not selective enough, each of the two queries spends a lot of time scanning the table. It is therefore better to run a single query that puts the result into a table variable containing an auto-increment field, filter the rows of the current page by that field, and use @@ROWCOUNT to obtain the total number of records.
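A minimal sketch of this single-query paging pattern, using the SalesOrderDetail aggregation from the earlier example (the @Page/@PageSize variables and the table-variable layout are illustrative, not from the original):

```sql
DECLARE @Page INT = 3, @PageSize INT = 10, @Total INT;

DECLARE @Rows TABLE (
    RowNum INT IDENTITY(1, 1),   -- auto-increment field used to slice pages
    ProductID INT,
    LineTotal MONEY
);

-- Single pass over the base table: materialize the ordered result once.
INSERT INTO @Rows (ProductID, LineTotal)
SELECT ProductID, SUM(LineTotal)
FROM Sales.SalesOrderDetail
GROUP BY ProductID
ORDER BY SUM(LineTotal) DESC;

SET @Total = @@ROWCOUNT;         -- total record count, no second scan needed

-- Return only the rows of the current page.
SELECT ProductID, LineTotal
FROM @Rows
WHERE RowNum BETWEEN (@Page - 1) * @PageSize + 1 AND @Page * @PageSize;
```

The base table is scanned once, and both the page rows and the total count come out of that single pass.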
Another use of @@ROWCOUNT: for a table where a record should be updated if it exists and inserted if it does not, the pattern IF EXISTS (SELECT * FROM table WHERE key = 'A') ... UPDATE ... ELSE ... INSERT is often used. That form performs one extra operation, the existence check. It is cheaper to run the UPDATE directly and, if @@ROWCOUNT is 0 afterwards, perform the INSERT, avoiding the existence query entirely: UPDATE table ...; IF @@ROWCOUNT = 0 INSERT ...
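Sketched concretely (the Counters table and its columns are hypothetical, chosen only to illustrate the pattern):

```sql
-- Upsert without a separate existence check:
-- try the UPDATE first, and INSERT only when no row was updated.
UPDATE Counters
SET HitCount = HitCount + 1
WHERE CounterKey = 'A';

IF @@ROWCOUNT = 0
    INSERT INTO Counters (CounterKey, HitCount)
    VALUES ('A', 1);
```

Under concurrent writes the two statements should run inside a transaction (possibly with locking hints) so that another session cannot insert the same key between the UPDATE and the INSERT.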
- Obtain remote data
When operating on a linked (remote) database, we should minimize the number of operations, just as when sending commands from a program to the database. When you manipulate a remote machine's table directly from the local machine, the filtering conditions are often not applied on the remote machine; instead, the remote table's rows are returned to the local machine and filtered there. Therefore, to execute the related commands efficiently, call a stored procedure on the remote machine instead.
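For illustration, the contrast looks like this (RemoteServer, SalesDB, and usp_GetOrdersSince are made-up names for a linked server, a remote database, and a remote procedure):

```sql
-- Risky: the remote table's rows may be shipped to the local server
-- before the WHERE clause is applied.
SELECT *
FROM RemoteServer.SalesDB.dbo.Orders
WHERE OrderDate >= '20240101';

-- Efficient: the filtering logic runs entirely on the remote machine,
-- and only the final rows cross the wire.
EXEC RemoteServer.SalesDB.dbo.usp_GetOrdersSince @Since = '20240101';
```

OPENQUERY offers a similar pass-through path for ad hoc statements, since its query text is executed as a whole on the remote server.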
- Multiple Data Files
I found that almost all projects use a single primary filegroup with one data file and one log file. Because the log is written sequentially, adding log files does not improve performance; the exception is databases serving mirroring or log shipping, where extra log files may slightly improve read performance. Data files, however, are not written sequentially, so tables with heavy read/write activity should be placed in files on different physical disks. This may not gain much when the disks are already striped, but when restoring the database, placing different files on different disks also solves the problem of a single disk having insufficient storage space. SQL Server 2005 adds a data partitioning feature, which may tempt us to create many files; however, partitioning is a dangerous operation, especially in systems with frequent writes, where it can easily cause resource contention.
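Placing a hot table on its own disk might look like the following sketch (SalesDB, the HotTables filegroup, the file path, and the OrderAudit table are all illustrative):

```sql
-- Add a filegroup backed by a file on a second physical disk.
ALTER DATABASE SalesDB ADD FILEGROUP HotTables;

ALTER DATABASE SalesDB
ADD FILE (
    NAME = 'SalesDB_Hot1',
    FILENAME = 'E:\Data\SalesDB_Hot1.ndf',   -- separate spindle from the primary file
    SIZE = 512MB,
    FILEGROWTH = 128MB
) TO FILEGROUP HotTables;

-- Create the heavily written table on the new filegroup.
CREATE TABLE dbo.OrderAudit (
    OrderAuditID INT IDENTITY PRIMARY KEY,
    OrderID INT NOT NULL,
    AuditDate DATETIME NOT NULL
) ON HotTables;
```

Writes to OrderAudit then land on the second disk instead of competing with the primary data file.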
- Use trigger with caution
We know that an index not only occupies disk space but also requires extra maintenance whenever the indexed columns are modified; each index added can more than double the cost of inserting a record. A trigger is even more expensive, with more than twice the overhead of an index. Therefore, do not define triggers on a table unless there is no alternative.
- When a procedure returns a single record
Many procedures return just one record, which is passed back to the program as a result set. Whether the program uses a DataReader or a DataAdapter, it is operating on a result set, and a result set carries metadata that adds extra bytes to the transmission. Using output parameters in such procedures therefore reduces the number of bytes transmitted.
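A sketch of the technique against the Product table used earlier (the procedure name is made up; Name and Color are real Product columns):

```sql
-- Instead of SELECTing a one-row result set, return the scalar values
-- through OUTPUT parameters, avoiding the result-set metadata overhead.
CREATE PROCEDURE dbo.usp_GetProductInfo
    @ProductID INT,
    @Name  NVARCHAR(50)  OUTPUT,
    @Color NVARCHAR(15)  OUTPUT
AS
BEGIN
    SELECT @Name = Name, @Color = Color
    FROM Production.Product
    WHERE ProductID = @ProductID;
END;
GO

-- Caller: values arrive in the parameters, no result set is built.
DECLARE @Name NVARCHAR(50), @Color NVARCHAR(15);
EXEC dbo.usp_GetProductInfo @ProductID = 707,
     @Name = @Name OUTPUT, @Color = @Color OUTPUT;
```

On the client side these map to ParameterDirection.Output parameters on the command object rather than a DataReader loop.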