SQL Server optimization methods

Source: Internet
Author: User

In the end, the database server's CPU usage came down from over 50% (with five program threads running) to a far lower level even with ten program threads running. Along the way I picked up some tips, which I summarize here.

1. SQL Server 2005's performance tools include SQL Server Profiler and the Database Engine Tuning Advisor. Both are excellent tools, and it pays to learn them well.
2. When running SQL statements in the query window, enable "Display Estimated Execution Plan" to analyze the cost of each step.
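As a minimal sketch, the same estimated plan can also be requested in T-SQL with SET SHOWPLAN_ALL, which returns the plan rows instead of executing the statement (the table and column names here are only examples):

SET SHOWPLAN_ALL ON
GO
-- not executed; the plan steps are returned as rows instead
SELECT * FROM orders WHERE customer_id = 42
GO
SET SHOWPLAN_ALL OFF
GO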
3. In early rounds of tuning, when CPU usage is high, run SQL Server Profiler, save the trace to a file, and then open the Database Engine Tuning Advisor and feed it that file for analysis. SQL Server will produce index optimization suggestions; adopt the index recommendations.
4. However, the above approach often does not produce what you need. In a recent optimization pass, CPU usage was extremely high, yet the advisor could not give me useful suggestions, particularly for statements that touch several tables. In such cases you need a more hands-on method to locate the statements that consume the most CPU.
5. Run SQL Server Profiler and save the trace results to a new table in a database (the system will name the table itself). Let it run for a while, then use
Select top 100 * from test where textdata is not null order by duration desc
to pick out the statements that run the longest. Replacing duration in the ORDER BY with CPU or Reads instead selects the statements that consume the most CPU time or read the most data.
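For example, assuming the same trace table name as above, the CPU-heavy and read-heavy statements can be ranked like this:

Select top 100 * from test where textdata is not null order by cpu desc
Select top 100 * from test where textdata is not null order by reads desc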
After locating the problem statements, you can analyze them in detail. Some problems show up plainly in the execution plan: if an index is missing or badly designed, you will see a Table Scan or an Index Scan. Any SCAN means the whole table or the whole index is read, which is exactly what drives the read count up. What we want to see instead is a Seek or a Key Lookup.
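As a hypothetical illustration (the table and column names are invented), a missing index shows up as a scan, and creating one turns the same query into a seek:

-- With no index on order_date, the plan shows a Table Scan:
SELECT order_id FROM orders WHERE order_date = '20050601'
-- Creating an index lets the same query run as an Index Seek:
CREATE INDEX ix_orders_order_date ON orders (order_date)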
6. Reading an execution plan takes care. Beginners pay too much attention to the cost percentages it displays, which can be misleading. In one actual optimization pass I found an Index Scan whose cost was only 25% of the plan, while a Key Lookup accounted for 50%; yet the Key Lookup could not be optimized at all, since its seek predicate was a lookup on the primary key (ID = XXX). Careful analysis showed that the Key Lookup had a CPU cost of 0.00015 and an I/O cost of 0.0013, while the Index Scan had a CPU cost of 1.4xxxx and an I/O cost far greater than the lookup's. The optimization effort therefore belonged on the scan.
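As a sketch, the plan's estimated percentages can be cross-checked against real numbers: SET STATISTICS TIME and SET STATISTICS IO report the actual CPU time and logical reads of each statement you run.

SET STATISTICS TIME ON
SET STATISTICS IO ON
-- run the statement under analysis here; actual CPU time and
-- logical reads appear in the Messages pane
SET STATISTICS TIME OFF
SET STATISTICS IO OFF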
7. How do you optimize one part of a complex SQL statement? SQL Server cleverly reorganizes the WHERE clause and tries to match it against indexes. Pick the plan step you want to optimize, open its properties, and copy the contents of its "Predicate" entry; this is the decomposed portion of the WHERE clause. Then, in the query window, run select * from table where <the predicate you just copied>. That is the part to optimize, and a sketch of the idea follows below. Once you reach this step, most people can create the right index by hand, because this WHERE clause is much simpler than the original one. (In my project, the WHERE clause of the original SELECT combined ten conditions over six fields; the extracted part had four conditions over three fields. After the new index was created, CPU usage dropped at once, and since the fields in the new index are not updated frequently, the extra index does not hurt UPDATE efficiency.)
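A hypothetical sketch of the technique (all names invented): suppose the Predicate property of the expensive step reads status = 1 AND type_id = 3 AND region = 'East'. Run that part alone, and if it scans, index exactly those fields:

-- the decomposed WHERE clause, run on its own:
SELECT * FROM orders WHERE status = 1 AND type_id = 3 AND region = 'East'
-- if it scans, a composite index on just these fields turns it into a seek:
CREATE INDEX ix_orders_status_type_region ON orders (status, type_id, region)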
8. So much for the optimization approach. Finally, some issues to watch for during optimization or system design.
A. Avoid fuzzy queries of the form select * from xxx where abc like '%XXX' whenever possible. Because the pattern starts with %, no index can be used and a full scan is inevitable. Find an alternative approach, or add a preceding condition that cuts the row count down before the LIKE is applied; a small sketch of the difference follows.
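A sketch with invented table and column names:

SELECT * FROM customers WHERE name LIKE '%son'    -- leading %: forces a full scan
SELECT * FROM customers WHERE name LIKE 'John%'   -- can seek, if name is indexed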
B. Avoid running select top n * from xxx where xxxx order by newid() on large tables. The newid() approach reads the full data set and then sorts it, costing a great deal of CPU and reads. The rand() function can be used instead. I am still studying this, but for a whole-table random pick something like id >= (select max(id) from table) * rand() works, though you still have to think about how to get random records from a subset of the data.
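A minimal sketch of the rand() alternative the author mentions, assuming a table with a reasonably dense integer primary key named id (gaps in the ids will skew the distribution):

-- rand() is evaluated once per query, so this picks one random starting point:
SELECT TOP 1 * FROM big_table WHERE id >= (SELECT MAX(id) FROM big_table) * RAND() ORDER BY id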
C. In SQL Server Profiler traces, Audit Logout appears to consume a great deal of CPU and reads. From what I have read, Audit Logout reports the accumulated totals of all statements executed over the life of a connection, so there is no need to worry about it; that is why the CPU and I/O of many Audit Logout rows look about the same as those of the statements optimized earlier. It is also why the query above uses the textdata is not null condition: it filters the Audit Logout rows out.
D. An OR across two different fields can cause a full table scan, for example where m = 1 or n = 1. Even a composite index on (m, n) still triggers a scan. The solution is to create a separate index on m and another on n (see the sketch below). On a table with 120,000 rows, the I/O cost with the wrong index was over 10.xxx; after the separate indexes were created, the total cost dropped to 0.003. The contrast is enormous. The extra indexes may cost something on INSERT, but most bottlenecks are in SELECT reads anyway.
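A hedged sketch with invented names: with only a composite index on (m, n) the OR query still scans, while two single-column indexes let the optimizer seek each side and merge the results:

CREATE INDEX ix_t_m ON t (m)
CREATE INDEX ix_t_n ON t (n)
-- with both indexes in place, each side of the OR can be an Index Seek:
SELECT * FROM t WHERE m = 1 OR n = 1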
E. Index Seek versus Index Scan: we want the former. A scan is usually caused by filtering on a field the index cannot seek on. For example, if a composite index covers fields A and B but the query filters only on the field that is not the leading column, an Index Scan occurs; creating a separate single-column index on that field produces an Index Seek, as sketched below.
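A hypothetical illustration (names invented):

CREATE INDEX ix_t2_a_b ON t2 (a, b)
SELECT * FROM t2 WHERE b = 5   -- Index Scan: b is not the leading column
CREATE INDEX ix_t2_b ON t2 (b)
SELECT * FROM t2 WHERE b = 5   -- now eligible for an Index Seek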
F. Creating indexes on small tables, especially ones with only a few hundred rows, is not worthwhile. Indexing starts to pay off at around the tens-of-thousands-of-rows level.
Database optimization is a deep subject. Pay attention to it during database design, especially points A and B above, so that these problems are avoided at the design stage.
