SQL Server SQL Statement Optimization Methods: A Summary

Source: Internet
Author: User
1. SQL Server 2005 ships with two performance tools: SQL Server Profiler and Database Engine Tuning Advisor. Both are excellent and worth mastering.

2. When tuning a query, enable "Display Estimated Execution Plan" and analyze the cost of each step.

3. The basic approach: when CPU usage is high, run SQL Server Profiler and save the trace to a file, then open Database Engine Tuning Advisor, load that file, and let it analyze the workload. SQL Server will propose index changes; adopt its index-tuning recommendations.

4. However, the above often fails to produce what you need. In a recent tuning session CPU usage was very high, but the advisor simply would not give me the recommendations I needed, particularly for statements inside stored procedures and multi-table joins. At that point you need a more hands-on approach to find the CPU-intensive statements.

5. Run SQL Server Profiler again, but this time save the trace to a new table in some database (Profiler names the table itself). Let it run for a while, and then you can use

SELECT TOP 100 * FROM test WHERE TextData IS NOT NULL ORDER BY Duration DESC

to pick out the longest-running statements. The Duration in the ORDER BY can be replaced with CPU or Reads to select the statements that take the most CPU time or read too much data.
Once you have located the offending statements, analyze them one by one. For some of them the problem is immediately visible in the execution plan.
The most common causes are a missing index or a badly designed one; the plan then shows a Table Scan or Index Scan. Whenever you see a Scan, SQL Server is reading the entire table or the entire index, which inevitably means far too many reads. What you want to see instead is a Seek or Key Lookup.
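The scan-to-seek difference can be sketched as follows; the table, columns, and index name here are hypothetical, not from the original article:

```sql
-- Without an index on OrderDate, this query produces a Table Scan
-- (or Clustered Index Scan): every row is read.
SELECT OrderID, CustomerID
FROM dbo.Orders
WHERE OrderDate = '2023-05-01';

-- An index on the filtered column lets the optimizer use an Index Seek instead.
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate)
    INCLUDE (OrderID, CustomerID);  -- covering columns also avoid a Key Lookup
```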

6. Reading an execution plan takes some care. Beginners fixate on the cost percentages displayed in the plan, but these can be misleading. In one real tuning session I found an Index Scan whose cost share was only 25% while a Key Lookup accounted for 50%, yet the Key Lookup needed no optimization: its seek predicate was id=xxx, a lookup on the primary key. Closer inspection showed the lookup's estimated CPU cost was 0.00015 and its I/O cost 0.0013, while the scan's CPU cost was 1.4xxxx and its I/O cost far larger still. The scan, not the lookup, is where the optimization effort belongs.

7. How to optimize a single step: for a complex SQL statement, SQL Server cleverly reorganizes the WHERE clause as it tries to match indexes. Select the step to optimize, open its Properties pane, find the Predicate entry, and copy it out: this is the decomposed WHERE clause. Then, in a query window, run a SELECT against the same table with the copied predicate as its WHERE clause. That is the part that needs optimizing, and having come this far, most people can build the index by hand, because this WHERE clause is much simpler than the original. (In my project the original SELECT's WHERE clause had 10 conditions over 6 fields; the extracted part to optimize had 4 conditions over 3 fields.) After the new index was created, CPU usage dropped sharply, and because the indexed fields are rarely updated, the index does not hurt update efficiency even under frequent reads and writes.
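As a sketch of this workflow with hypothetical names (the article does not give its actual predicate): suppose the Predicate copied from the plan were `Status = 1 AND RegionID = 5 AND CreatedAt >= '2023-01-01'`. You would first reproduce it in isolation, then index the columns it touches:

```sql
-- Reproduce the decomposed predicate against the base table:
SELECT *
FROM dbo.Orders
WHERE Status = 1
  AND RegionID = 5
  AND CreatedAt >= '2023-01-01';

-- Build a composite index matching it: equality columns first, range column last.
CREATE NONCLUSTERED INDEX IX_Orders_Status_Region_Created
    ON dbo.Orders (Status, RegionID, CreatedAt);
```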

8. The points above describe the tuning process; what follows are issues to watch for during optimization and system design.
A. Avoid fuzzy queries of the form SELECT * FROM xxx WHERE abc LIKE '%xxx', because a leading % makes the index unusable and inevitably forces a full scan. Find a workaround, or add filtering conditions before the LIKE to reduce the number of rows it must examine.
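A small illustration of the difference, using a hypothetical table:

```sql
-- Leading wildcard: an index on Name cannot be used; expect a full scan.
SELECT * FROM dbo.Customers WHERE Name LIKE '%smith';

-- Trailing wildcard only: an index on Name supports a range seek.
SELECT * FROM dbo.Customers WHERE Name LIKE 'smith%';

-- Workaround: narrow the row set with an indexable predicate first.
SELECT * FROM dbo.Customers
WHERE RegionID = 5            -- indexed condition cuts the rows first
  AND Name LIKE '%smith';     -- fuzzy match applied only to the survivors
```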
B. Avoid picking random rows from a large table with SELECT TOP n * FROM xxx WHERE xxxx ORDER BY NEWID(). The NEWID() approach reads the full data set and then sorts it, consuming a great deal of CPU and reads. Consider the RAND() function instead; I am still experimenting with it, e.g. id >= (SELECT MAX(id) FROM table) * RAND() for whole-table operations. For a random record over a small, local subset of the data, though, the NEWID() sort is still worth considering.
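A sketch of the two approaches, assuming a hypothetical table with a dense integer primary key `id`:

```sql
-- Expensive: reads every row, generates a GUID per row, then sorts them all.
SELECT TOP 1 * FROM dbo.BigTable ORDER BY NEWID();

-- Cheaper alternative along the lines the article suggests:
-- seek to a random point in the key range.
-- Caveat: biased if ids have gaps; RAND() is evaluated once per query.
SELECT TOP 1 *
FROM dbo.BigTable
WHERE id >= (SELECT MAX(id) FROM dbo.BigTable) * RAND()
ORDER BY id;
```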
C. In SQL Server Profiler traces you will see Audit Logout events that appear to consume large amounts of CPU and reads/writes. Some sources say these figures are the accumulated totals of all statements executed during the connection, and are nothing to worry about. On inspection this does seem to be the case: the CPU and I/O of many Audit Logout events roughly match those of the statements optimized earlier. That is why the query in point 5 filters them out with the TextData IS NOT NULL condition (Audit Logout rows have NULL TextData).
D. An OR across two different fields can cause a full table scan, e.g. WHERE m=1 OR n=1. A single composite index on (m, n) will still produce a scan; the solution is to index m and n separately. In a test on a table of 120,000 rows, the I/O cost with the wrong index was around 10.xxx; with separate indexes it dropped to 0.003, a very large difference. Separate indexes do add cost to INSERT operations, but most bottlenecks are in SELECT reads.
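A sketch of the fix on a hypothetical table:

```sql
-- A single composite index on (m, n) does not help 'm = 1 OR n = 1':
-- only the leading column m is seekable, so the plan falls back to a scan.
-- One index per column lets the optimizer seek each side separately:
CREATE INDEX IX_T_m ON dbo.T (m);
CREATE INDEX IX_T_n ON dbo.T (n);

-- With both indexes in place, the two seeks can be merged
-- (often shown as an index union / concatenation in the plan):
SELECT * FROM dbo.T WHERE m = 1 OR n = 1;
```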
E. Index Seek versus Index Scan: we want the former. A common cause of the latter is an index containing a redundant field: for example, the index is built on two fields a and b, but the query filters on only one of them; if that field is not the leading column of the index, the result is an Index Scan. The fix is a separate single-column index on the field you actually filter by, which restores the Index Seek.
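Sketched on a hypothetical table, with the query filtering on the non-leading column of a composite index:

```sql
-- Index on (a, b): a query filtering only on b cannot seek on the
-- leading column a, so the plan shows an Index Scan.
CREATE INDEX IX_T_a_b ON dbo.T (a, b);
SELECT * FROM dbo.T WHERE b = 42;   -- Index Scan

-- A dedicated single-column index restores the seek:
CREATE INDEX IX_T_b ON dbo.T (b);
SELECT * FROM dbo.T WHERE b = 42;   -- Index Seek
```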
F. For small tables, indexing is not recommended, especially tables with only a few hundred rows; an index only begins to pay off once the data reaches the thousands-of-rows scale.

Database optimization is a deep subject, and it deserves attention at database-design time. Points A and B above in particular are best avoided in the initial design.