SQL Server optimization
In actual work, especially in production environments, optimizing SQL statements is very important and plays a significant role in improving database performance. We always complain about machine performance and about problems with concurrent access, but optimizing individual SQL statements, while it cannot be said to solve every problem, can at least solve most of them.
1. The Top-N sorting problem.
We often want to sort a table on some column and then take the first N rows, say the top 100 by score, so we write the following SQL statement:
SELECT TOP 100 * FROM table
ORDER BY Score DESC
If the table is very large, such an operation is very resource intensive, because SQL Server sorts the entire table and then takes the top N records. The sort is done in tempdb, so in extreme cases a "log full" error is reported. To avoid a full-table sort, all we have to do is create an index on Score: because the leaf level of the Score index is already ordered, SQL Server only needs to read the first 100 entries by Score and then find the actual records through bookmark lookups, so database performance improves greatly.
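To make this bookmark-lookup plan possible, the Score column needs an index. A minimal sketch, using a hypothetical table name MyScores in place of the placeholder table above:
-- Hypothetical table name; Score is the column being sorted on.
CREATE INDEX Idx_Score ON MyScores (Score DESC)
-- SQL Server can now read the first 100 index entries in order and fetch
-- the matching rows via bookmark lookups, instead of sorting the whole
-- table in tempdb.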
2. The same-day query problem.
We often have to look up the records that fall on the same day as a given date, so we write the following SQL statement:
DECLARE @DateTime datetime
SET @DateTime = GETDATE()
SELECT * FROM table
WHERE CONVERT(char(10), F_time, 120) = CONVERT(char(10), @DateTime, 120)
The problem with this statement is that the index on F_time cannot be used, because the column is wrapped in a function. To make use of the index on F_time, we can replace the statement above with a time-range query:
DECLARE @start datetime
DECLARE @end datetime
DECLARE @datetime datetime
SET @datetime = GETDATE()
SET @start = CONVERT(char(10), @datetime, 120)      -- the beginning of the day
SET @end = DATEADD(ss, -1, DATEADD(d, 1, @start))   -- the last second of the day
SELECT *
FROM table
WHERE F_time BETWEEN @start AND @end
With this rewrite the index on F_time can be used, which solves the problem.
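Note that the rewrite only helps if F_time actually has an index to seek on. A minimal sketch, again using a hypothetical table name MyTable in place of the placeholder above:
-- Hypothetical table name; F_time is the datetime column queried above.
CREATE INDEX Idx_F_time ON MyTable (F_time)
-- The BETWEEN predicate can now be answered with an index seek on F_time,
-- instead of converting every row's F_time to a string and scanning.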
3. Using an index for grouping operations.
We often want to group by one column and then aggregate another. If we index the grouping column appropriately, we can speed up the grouping. Below is an example using the Orders table in the Northwind database:
-- There is already an index on the EmployeeID column of the Orders table.
SELECT EmployeeID, COUNT(*)
FROM Orders
GROUP BY EmployeeID
-- Viewing the execution plan shows that the index on EmployeeID is used. Now change to the following query:
SELECT EmployeeID, SUM(Freight)
FROM Orders
GROUP BY EmployeeID
-- Viewing the execution plan shows that the index on EmployeeID is no longer used; a full table scan is performed instead. Why? Because Freight is not part of the EmployeeID index, the result cannot be obtained from the index alone, and the cost of doing a bookmark lookup for every row would be too high, so SQL Server chooses a full table scan. What if we create a composite index on EmployeeID and Freight?
CREATE INDEX Idx_EmployeeID ON Orders (EmployeeID, Freight)
-- Run the second query again and view the execution plan: SQL Server now uses the index we just built. The query can be answered from the index alone (a covering index), which greatly improves query speed.
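On SQL Server 2005 and later, a similar covering effect can be obtained by keeping Freight out of the index key and adding it as an included column instead. This is only an alternative sketch, not what the example above uses:
-- Alternative covering index: Freight stored at the leaf level only.
CREATE INDEX Idx_EmployeeID_Freight ON Orders (EmployeeID) INCLUDE (Freight)
-- The GROUP BY still reads the index in EmployeeID order, and SUM(Freight)
-- is computed from the included column without touching the base table.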