Transferred from: http://www.cnblogs.com/knowledgesea/p/3686105.html

Straight to the point. When a SQL statement's performance falls short of your requirements and its execution efficiency is intolerable, the cause is usually one of the following:
- The network speed is poor or unstable.
- The server does not have enough memory, or not enough memory is allocated to SQL Server.
- The SQL statement is poorly designed.
- There is no suitable index, or the index is poorly designed.
- There are no effective indexed views.
- The table holds too much data and has no effective partitioning design.
- The database design is poor, with a lot of redundant data.
- Statistics on the indexed columns are missing or out of date.
- ....
So how do we find out what causes slow performance?
- First, determine whether the problem is related to the SQL statement at all. Rule out the machine itself: the server being down, poor server hardware, no network connection; without those, nothing else matters.
- Then use the "Detective Conan" of SQL performance tools I mentioned in my last article, SQL Server Profiler, to find the slow statements: the ones whose execution time is too long or that consume too much system resources and CPU.
- Then apply what this article covers: SQL optimization methods and techniques, avoiding poorly written SQL statements and playing to SQL's strengths.
- Then check whether reasonable statistics are in use. SQL Server can automatically collect statistics on data distribution and update them periodically as the data changes; this is necessary.
- Confirm that the tables use reasonable indexes. I mentioned this in a previous post, and I will also write a dedicated article about indexes after this one.
- If a table holds too much data, partition it to narrow the search scope (see the sketch below).
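Partitioning is a whole topic of its own, but a minimal sketch of the last point looks like this. The table name and boundary values here are hypothetical, for illustration only:

```sql
-- A minimal sketch, assuming an integer id column.
CREATE PARTITION FUNCTION pf_ProductId (int)
    AS RANGE LEFT FOR VALUES (100000, 200000);   -- splits rows into 3 partitions

CREATE PARTITION SCHEME ps_ProductId
    AS PARTITION pf_ProductId ALL TO ([PRIMARY]);

CREATE TABLE dbo.BigProduct (
    id   int          NOT NULL,
    name nvarchar(50) NULL
) ON ps_ProductId (id);  -- queries filtered on id touch only the relevant partition
```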
Analyzing and comparing execution time, execution plans, and reads
```sql
SELECT * FROM dbo.Product
```
Executing the statement above normally gives you only the result set and the number of rows returned. So how do you analyze it, and how do you know the difference between the optimized and unoptimized versions?
Here are a few ways.
1. View execution time and CPU time
```sql
SET STATISTICS TIME ON
SELECT * FROM dbo.Product
SET STATISTICS TIME OFF
```
After running the query, look at the Messages tab to see the timings.
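With the option on, the Messages tab shows output in the following shape; the numbers here are purely illustrative:

```
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 1 ms.

SQL Server Execution Times:
   CPU time = 16 ms,  elapsed time = 123 ms.
```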
2. View the query's I/O activity
```sql
SET STATISTICS IO ON
SELECT * FROM dbo.Product
SET STATISTICS IO OFF
```
After execution, the Messages tab shows counters like these:
Scan count: the number of index or table scans performed
Logical reads: the number of pages read from the data cache
Physical reads: the number of pages read from disk
Read-ahead reads: the number of pages placed into the cache from disk during the query
LOB logical reads: the number of image, text, ntext, or other large-object pages read from the data cache
LOB physical reads: the number of image, text, ntext, or other large-object pages read from disk
LOB read-ahead reads: the number of image, text, ntext, or other large-object pages placed into the cache from disk during the query
If physical reads and read-ahead reads are high, an index can often be used to optimize the query, as in the sketch below.
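For example, a nonclustered index on the filtered column lets repeated queries read far fewer pages from disk. A sketch; the ProductName column is an assumption for illustration:

```sql
-- hypothetical: assuming queries often filter dbo.Product by ProductName
CREATE NONCLUSTERED INDEX IX_Product_ProductName
    ON dbo.Product (ProductName);
```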
If you don't want to use SQL commands to view this information, there is a simpler way. Let me show you:
Query -> Query Options -> Advanced
Check the two boxes circled in red in the screenshot (the SET STATISTICS TIME and SET STATISTICS IO options), remove the SET STATISTICS IO/TIME ON/OFF statements from your SQL, and try again. There, you've got it.
3. View the execution plan
Select the query statement, click the execution plan button, and a diagram like the one below appears.
First of all, my example statement is very simple; yours will be more complex, so bear with me.
Analysis: hover the mouse over each icon to see the details of that execution step; each operator shows a cost percentage. Find the steps that account for the largest share, then optimize them by redesigning the data structure or rewriting the SQL. If you see a table scan or a clustered index scan, your indexes are not appropriate for the current query and are not being used, so you need to adjust and improve them. For how to do that, see the Database Engine Tuning Advisor covered in my earlier article on SQL optimization tools, and use it to analyze and optimize the indexes.
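If you prefer T-SQL over the toolbar button, here is a sketch using SET SHOWPLAN_ALL. Note that this SET statement must be the only statement in its batch, hence the GO separators:

```sql
SET SHOWPLAN_ALL ON
GO
SELECT * FROM dbo.Product   -- not executed; rows describing the plan are returned instead
GO
SET SHOWPLAN_ALL OFF
GO
```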
The art of the SELECT query
1. Make sure you do not query extra columns or rows.
- Avoid SELECT *; name the specific columns you need instead of * to avoid unnecessary columns.
- Use WHERE to qualify exactly the data you need and avoid unnecessary rows.
- Use the TOP and DISTINCT keywords to reduce redundant rows (see the sketch after this list).
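Putting those points together, a minimal sketch; the ProductName and Price columns are assumptions for illustration:

```sql
-- specific columns instead of *, WHERE to limit rows, TOP to cap the result
SELECT TOP (100) ProductName, Price
FROM dbo.Product
WHERE Price > 100
```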
2. Use the DISTINCT keyword with caution
DISTINCT is useful when you query one field or a few fields: it removes duplicate data, which benefits the query.
However, applied across a large number of query fields, it can greatly reduce query efficiency.
Comparing the two runs in the figure:
It is obvious that the statement with DISTINCT consumes more CPU time and elapsed time than the statement without it. The reason is that when many fields are queried with DISTINCT, the database engine must compare the rows and filter out the duplicates, and that comparison and filtering consumes system resources and CPU time.
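You can reproduce the comparison yourself with the timing switch from earlier; a sketch:

```sql
SET STATISTICS TIME ON
SELECT DISTINCT * FROM dbo.Product  -- engine compares every column to de-duplicate
SELECT * FROM dbo.Product           -- no comparison step
SET STATISTICS TIME OFF
```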
3. Use the UNION keyword with caution
UNION combines the result sets of several SELECT statements into a single result set. Usage:

```sql
<select statement 1>
UNION
<select statement 2>
UNION
<select statement 3>
```

Statements combined with UNION must satisfy two conditions: 1. they return the same number of columns; 2. the data types of corresponding columns must be compatible.
Execution process:
Execute each SELECT statement -> merge the result sets -> sort the merged result and filter out duplicate records.
```sql
SELECT * FROM ((Orde o LEFT JOIN orderproduct op ON o.ordernum = op.ordernum)
               INNER JOIN product p ON op.pronum = p.productnum)
WHERE p.id < 10000
UNION
SELECT * FROM ((Orde o LEFT JOIN orderproduct op ON o.ordernum = op.ordernum)
               INNER JOIN product p ON op.pronum = p.productnum)
WHERE p.id >= 10000 AND p.id < 20000
UNION
SELECT * FROM ((Orde o LEFT JOIN orderproduct op ON o.ordernum = op.ordernum)
               INNER JOIN product p ON op.pronum = p.productnum)
WHERE p.id > 20000
-- The first WHERE could even be written p.id > 100 with the same final result,
-- because UNION filters out the duplicate rows.

-- Compare the statement above with the plain join below:
SELECT * FROM ((Orde o LEFT JOIN orderproduct op ON o.ordernum = op.ordernum)
               INNER JOIN product p ON op.pronum = p.productnum)
```
This shows that UNION is indeed inefficient, so avoid it where it isn't necessary. The culprit is that third step, sorting the result set and filtering out duplicate records; that step is what makes UNION expensive. A merge that skips the sort-and-filter step is obviously more efficient, so which keyword does that? Yes: UNION ALL. Where duplicates are acceptable or impossible, use UNION ALL to optimize UNION.
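A sketch of the rewrite, simplified here to the product table alone; since the id ranges cannot overlap, dropping the de-duplication step changes nothing in the result:

```sql
SELECT * FROM product WHERE id < 10000
UNION ALL   -- no sort, no duplicate filtering
SELECT * FROM product WHERE id >= 10000 AND id < 20000
UNION ALL
SELECT * FROM product WHERE id > 20000
```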
4. Checking whether a table contains data
```sql
SELECT COUNT(*) FROM product
SELECT TOP (1) id FROM product
```
Obviously, the latter wins: TOP (1) can stop as soon as it finds a single row, while COUNT(*) has to touch every row in the table.
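A related idiom worth knowing (my addition, not from the original article): EXISTS also stops at the first matching row and makes the intent explicit:

```sql
IF EXISTS (SELECT 1 FROM product)
    PRINT 'table has data'
ELSE
    PRINT 'table is empty'
```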
5. Optimizing join queries
First understand what shape you want the resulting data to take, then decide which join to use.
The result-set sizes of the various joins are:
- An INNER JOIN's result set depends on how many rows of the left and right tables satisfy the join condition.
- A LEFT JOIN's result set depends on the size of the left table; a RIGHT JOIN is the opposite.
- FULL JOINs and CROSS JOINs depend on the amount of data in both tables.
```sql
SELECT * FROM ((SELECT * FROM Orde WHERE orderid > 10000) o
               LEFT JOIN orderproduct op ON o.ordernum = op.ordernum)

SELECT * FROM (Orde o LEFT JOIN orderproduct op ON o.ordernum = op.ordernum)
WHERE o.orderid > 10000
This shows that filtering a table's data down before joining it can improve efficiency.
Optimizing INSERT statements
```sql
-- create a temporary table
CREATE TABLE #tb1 (id int, name nvarchar(30), createtime datetime)
DECLARE @i int
DECLARE @sql varchar(1000)
SET @i = 0
WHILE (@i < 100000)  -- loop-insert 100,000 rows, one statement at a time
BEGIN
    SET @i = @i + 1
    SET @sql = 'INSERT INTO #tb1 VALUES (' + CONVERT(varchar(10), @i)
             + ', ''erzi' + CONVERT(nvarchar(30), @i)
             + ''', ''' + CONVERT(nvarchar(30), GETDATE()) + ''')'
    EXEC (@sql)
END
```
On my machine this ran for about 51 seconds.
```sql
-- create a temporary table
CREATE TABLE #tb2 (id int, name nvarchar(30), createtime datetime)
DECLARE @i int
DECLARE @j int
DECLARE @sql varchar(8000)
SET @i = 0
WHILE (@i < 10000)  -- 10,000 iterations x 10 rows each = 100,000 rows
BEGIN
    SET @j = 0
    SET @sql = 'INSERT INTO #tb2 SELECT ' + CONVERT(varchar(10), @i * 10 + @j)
             + ', ''erzi' + CONVERT(nvarchar(30), @i * 10 + @j)
             + ''', ''' + CONVERT(varchar(50), GETDATE()) + ''''
    SET @j = @j + 1
    WHILE (@j < 10)  -- batch 10 rows into one INSERT with UNION ALL
    BEGIN
        SET @sql = @sql + ' UNION ALL SELECT ' + CONVERT(varchar(10), @i * 10 + @j)
                 + ', ''erzi' + CONVERT(nvarchar(30), @i * 10 + @j)
                 + ''', ''' + CONVERT(varchar(50), GETDATE()) + ''''
        SET @j = @j + 1
    END
    EXEC (@sql)
    SET @i = @i + 1
END
SELECT COUNT(1) FROM #tb2
DROP TABLE #tb2
```
On my machine this ran for about 20 seconds.
Analysis: batching the inserts with INSERT INTO ... SELECT ... UNION ALL significantly improves efficiency, so avoid row-by-row loop inserts from now on.
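As an aside (my addition, assuming SQL Server 2008 or later): a table value constructor batches multiple rows into one INSERT without the UNION ALL scaffolding:

```sql
INSERT INTO #tb1 (id, name, createtime)
VALUES (1, N'erzi1', GETDATE()),
       (2, N'erzi2', GETDATE()),
       (3, N'erzi3', GETDATE());
```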
Optimizing UPDATE and DELETE statements
Modifying or deleting too much data in a single statement drives CPU utilization up and degrades other users' access to the database.
But deleting or modifying the rows one at a time in a loop is inefficient in the other direction: the operation takes a very long time.
So what are you going to do about it?
The tradeoff is to operate on the data in batches.
```sql
DELETE product WHERE id < 1000
DELETE product WHERE id >= 1000 AND id < 2000
DELETE product WHERE id >= 2000 AND id < 3000
...
```
Of course, batching is not necessarily the best choice; in fact all three approaches can be appropriate, depending on how heavily your system is being accessed. The key is to understand what effect each kind of statement has.
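If you don't want to hard-code the id ranges, a batching loop is a common alternative. A minimal sketch, with the batch size and the WHERE condition as assumptions:

```sql
-- delete 1000 rows at a time until nothing matches, so each batch holds
-- its locks only briefly and other sessions can get in between batches
WHILE (1 = 1)
BEGIN
    DELETE TOP (1000) FROM product WHERE id < 3000
    IF @@ROWCOUNT = 0 BREAK
END
```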
Summary: the most important thing in optimization is the habits and approach you bring to writing statements and designing databases day to day. If you don't pay attention as you go and leave everything to be optimized in one big pass, you'll need patient analysis, and that analysis depends on your perception, your requirements, and your level of knowledge.