Getting Started with SQL Server Performance Tuning (illustrated walkthrough)
Step 1: capture sample data during a business peak (about two hours). The tool is SQL Server Profiler (sometimes translated as the "event probe"), which ships with SQL Server. After launching it, click the leftmost toolbar button to create a new trace. The login needs DBO permission, so connect as sa or use Windows integrated authentication (if that is how the server is normally accessed).

There are four tabs to configure:

1. General: the default trace name is fine. There are two ways to save the output: to a file with the .trc extension (convenient when a customer needs to send you the trace), or to a table in a database. For ease of analysis we save to a table; Profiler prompts you to log on again, and we save the table to master. Suppose the table is named JQ (if a table with that name already exists, the system asks whether to overwrite it). Confirm and return to the first tab.
2. Events: event classes are listed on the left, and the selected classes appear on the right. For performance tuning we do not need Security Audit or Sessions, so select them and click Remove.
3. Data Columns: the defaults are sufficient. If the extra columns bother you, you can remove ApplicationName, NTUserName, and so on.
4. Filters: block the event IDs generated by the system itself by checking "Exclude system IDs".

With everything configured, click Run and let the trace collect for about two hours during peak business hours, so that it reflects the customer's typical recent workload. That completes the preparation. Afterwards, we examine the JQ table that was saved to master.
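If the trace was saved to a .trc file instead of a table (the first save option above), it can still be loaded into a table afterwards with the built-in trace function. A minimal sketch, assuming SQL Server 2000 syntax; the file path is hypothetical, and the table name JQ comes from the article:

```sql
-- Load a Profiler trace file into a table for analysis.
-- The path is illustrative; 'default' reads all rollover files.
SELECT *
INTO master.dbo.JQ
FROM ::fn_trace_gettable('C:\traces\peak.trc', default);
```

This gives the same JQ table as saving directly to a table from Profiler, so the rest of the walkthrough is unchanged.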
Step 2: find the statements that hurt performance. Open Query Analyzer, connect to master, and read SQL statements from the JQ table in descending order of I/O. Out of habit, I usually look at the top 1000 records. Why sort by I/O rather than time? The reason is simple: for SQL execution, I/O is the stable measure, while duration fluctuates with load. The purpose of SQL optimization is to reduce I/O cost and thereby improve efficiency (in general, when I/O goes down, duration naturally follows).

After the query completes, look closely at the output. First, the stored procedure xl_tally_proc04 has a maximum of nearly 1,000,000 reads, and its duration reaches more than 25 seconds. Second, the I/O of the SP erp_im_gmbill_getbill is not large, only about 70,000 reads, with an average duration just over one second; however, this SP is executed many times. After asking the customer, it turns out that xl_tally_proc04 runs only once or twice a day, while erp_im_gmbill_getbill runs roughly every 5 minutes, so its cumulative I/O is very large. The SP to focus on is therefore erp_im_gmbill_getbill, not the first one! This leads to the following principle:
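The ranking query described above is a one-liner against the trace table. A minimal sketch, assuming the trace was saved with Profiler's default data columns (TextData, Reads, Duration, CPU, SPID); the table name JQ comes from the article:

```sql
-- Top statements by logical reads from the saved trace table.
SELECT TOP 1000 TextData, Reads, Duration, CPU, SPID
FROM master.dbo.JQ
ORDER BY Reads DESC;
```

Sorting by Reads rather than Duration is exactly the point made above: reads are repeatable across runs, while duration varies with concurrent load.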
Tuning should focus on what matters most to the customer: the statements that combine high execution frequency with large I/O. The statement with the single largest I/O is not necessarily the one to prioritize.

Step 3: analyze the statement just found. Since we want to analyze I/O, we turn on I/O statistics so that every adjustment to the SQL immediately shows its I/O impact. This statement is very useful: SET STATISTICS IO ON. But watching only the I/O numbers would leave us dizzy, because we would not know how any given change affects them. So we also need to see how the SQL execution plan is carried out: in Query Analyzer, press Ctrl+K, or enable "Show Execution Plan" from the Query menu.

With the preparations done, the work begins. First, let's look at optimizing a plain SQL statement. Suppose the following SQL statement performs poorly: it reads a total of 6636 rows, with 1126 logical reads. Is that I/O reasonable? Large or small? Is there room for improvement? Look at the execution plan: there are four operators in total. Index Seek costs 2%, Index Scan 47%, Hash Match 51%, and SELECT 0%. Keep the second principle in mind: index access should use Index Seek whenever possible. Now look at the index information on billsoflading. Why does the current index go through a scan? I will not cover that here; if you are interested, find a book on database indexing. Following my earlier blog post, we know a composite index (also called a covering index) should be created: boldate + companyid + bolcode. Re-executing the SQL statement shows the I/O change: very nice! Logical reads drop to 50. Why? Look at the execution plan again: the Index Scan has turned into an Index Seek, which naturally improves efficiency. This is essentially how SQL statements are optimized through indexing.
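The two steps above can be sketched together. The table and column names (billsoflading, boldate, companyid, bolcode) come from the article; the index name is hypothetical:

```sql
-- Show logical/physical reads for every statement in this session.
SET STATISTICS IO ON;

-- Composite (covering) index on the query's search columns.
-- The index name is illustrative.
CREATE INDEX IX_billsoflading_boldate_companyid_bolcode
ON billsoflading (boldate, companyid, bolcode);

-- Re-run the problem query and compare "logical reads"
-- in the Messages pane before and after.
```

With STATISTICS IO on, the Messages pane reports logical reads per table, which is the number the article tracks dropping from 1126 to 50.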
Let's continue with stored procedures. SP optimization differs from plain SQL optimization in one respect: the internal logic of an SP can be very complicated, and from Query Analyzer alone we cannot tell which small SQL statement inside it produces the largest I/O; we only see an overall figure. So we need visibility into the SP's internals.

First, find the current SPID. One way is SELECT @@SPID; another is to read the status bar at the bottom of the analyzer window. Mine is 101. Then reopen Profiler and create a new trace, this time modifying the second (Events) tab: add SP:StmtCompleted from the "Stored Procedures" group on the left. On the fourth (Filters) tab, enter the SPID just found: SPID = 101. Click Run; this way Profiler captures only the SQL sent from the Query Analyzer window with SPID 101. Switch back to Query Analyzer and execute the problematic SP. When it finishes, return to Profiler and click Stop. Every SQL statement executed inside the SP is now separated out! The result is stored in a table, JQ2.

All the small statements are listed. The first row is the overall result of the SP execution, with an I/O value of 62328: that is the SP as a whole. The second is an insert into a temporary table, with an I/O value of 61514. It is easy to see that this one statement accounts for more than 95% of the whole SP's cost. If we fix this INSERT INTO #temptable statement, the cost of the entire SP naturally comes down. So we need to examine that statement. Not so fast, though! By default, Query Analyzer displays only a limited number of characters per column, so the SQL text in the second row is truncated, and we need to change a setting.
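The session-scoped capture above can be sketched as follows; the SPID value (101) and the JQ2 table name come from the article:

```sql
-- In Query Analyzer: find the current session id (101 in the article).
SELECT @@SPID;

-- In Profiler: trace the SP:StmtCompleted event with filter SPID = 101,
-- saving the output to master.dbo.JQ2. Afterwards, rank the SP's
-- internal statements by reads:
SELECT TextData, Reads, Duration
FROM master.dbo.JQ2
ORDER BY Reads DESC;
```

Filtering on a single SPID is what keeps the trace clean: only the statements issued from your own analyzer window are captured, not the rest of the server's workload.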
In the analyzer's Tools > Options menu, switch to the Results tab, change the maximum number of characters per column to 8192 (the maximum allowed), and click OK; then read the information from JQ2 again. You may ask: what if a SQL statement is even longer than that? In practice it is easy to write the statement to a log from your application code, or to modify the SP and PRINT it out.

OK. With the INSERT statement captured, paste it into Query Analyzer. Because the temporary table's structure does not exist there, comment out the INSERT part and run just the SELECT. After execution, the cost on the goodsmovement table is 57834. In the usual way, look at the execution plan: at this point we are back at the plain SQL optimization step, so I will not spell out the rest!

This step looks very simple. I hope it leaves you with a concrete picture of SQL (index) optimization: what to do first, and what to do next. In other words, index optimization is basically the simplest kind, but simple things deserve the most attention. If you trace a real application, all sorts of inefficient indexes will make you really #¥*#(**......¥
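The isolation trick above can be sketched like this. The article does not show the full statement, only that it inserts into a temp table and reads goodsmovement, so the column list and WHERE clause here are entirely hypothetical:

```sql
-- Shape of the captured statement (sketch):
--   INSERT INTO #temptable (...) SELECT ... FROM goodsmovement ...
-- The temp table does not exist in a fresh session, so comment out
-- the INSERT line and analyze the SELECT on its own:

-- INSERT INTO #temptable (goodsid, qty)
SELECT goodsid, qty            -- hypothetical columns
FROM goodsmovement
WHERE movedate >= '20070101';  -- hypothetical filter
```

Running just the SELECT reproduces the expensive part of the SP in isolation, which puts you right back at the SET STATISTICS IO ON plus execution-plan workflow from Step 3.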
From: http://blog.joycode.com/juqiang/archive/2007/01/19/91848.aspx