SQL Server Performance Tuning Primer (illustrated version)

Step 1: capture sample data during a business peak (about 2 hours). The tool used is the Profiler that ships with SQL Server, also known as the event probe:

After it opens, click the button on the left to create a new trace:

DBO permission is required to log on, so you can log on as sa or use Windows integrated authentication (if the account you are currently using can log on to SQL Server).

Create the trace. There are four tab pages to configure in total. Start with the first one. You do not need to change the trace name; the default is fine. There are two ways to save the results: one is to a file with the .TRC extension (convenient when a customer needs to send the trace results to you), and the other is to a table in a database.

For ease of analysis, we save it as a table. At this point SQL Server prompts you to log on again. Here we save the table to the master database.

Suppose the table is named JQ (if a table with the same name already exists, the system asks whether to overwrite it).

After confirmation, return to the first tab page:

Switch to the second tab:

The available event classes are listed on the left, and the currently selected ones are shown on the right. For performance tuning we do not need the Security Audit or Sessions events, so select them and click the remove button:

Switch to the third tab page; the default data columns are sufficient here. Of course, if some of them bother you, you can remove ApplicationName, NTUserName, and so on.

On the last tab page, we need to filter out the events generated by the system itself:

Check the "Exclude system IDs" option, for example:

Once everything is configured, click Run. Let the trace run for about two hours (a business peak period reflects the customer's recent usage pattern well).

The preparation for the first step is now complete. After the trace has run for a while, we start examining the JQ table that was automatically saved to the master database.

Step 2: find the statements that affect performance.

Open Query Analyzer, connect to the master database, and read the SQL statements from the JQ table in descending order of I/O. Out of habit, I usually read the top 1000 records. Why I/O instead of time? The reason is simple: for SQL execution, I/O is the "stable" metric, while duration is an unstable factor. The purpose of SQL optimization is to reduce I/O cost and thereby improve efficiency (in general, when I/O goes down, duration comes down with it). For details, refer to my earlier post: http://blog.joycode.com/juqiang
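As a minimal sketch (assuming the trace was saved to master.dbo.JQ with the default data columns such as TextData, Reads, Duration, and SPID), the query could look like this:

    -- Read the 1000 most expensive captured statements, ordered by logical reads.
    SELECT TOP 1000
           TextData,   -- the captured SQL text or SP call
           Reads,      -- logical reads: the "stable" cost we sort by
           Duration,   -- elapsed time, shown only for reference
           SPID
    FROM   master.dbo.JQ
    ORDER BY Reads DESC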

After the execution is complete, let's take a closer look at the output below.

1. The SP xl_tally_proc04 has the largest number of reads, nearly 1,000,000, and its duration reaches more than 25 seconds.

2. The I/O of the SP erp_im_gmbill_getbill is not large, only about 70,000, and its average duration is a bit over 1 second. However, this SP is executed many times.

After asking the customer, we learn that xl_tally_proc04 runs only once or twice a day, while erp_im_gmbill_getbill runs roughly every 5 minutes, so its cumulative I/O is very large.

So here we should focus on analyzing the SP erp_im_gmbill_getbill, rather than the first one!

This gives us the first principle: tuning should focus on what matters most to the customer, namely the statements with the highest execution frequency combined with a large I/O cost. The statement with the single largest I/O is not necessarily the one to prioritize.
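As a sketch of how this principle can be applied to the trace data (again assuming the trace table master.dbo.JQ; TextData is an ntext column in a trace table, so it is cast, and thereby truncated, purely for grouping):

    -- Weigh each captured statement by how often it runs as well as by its I/O,
    -- so frequently executed statements with moderate reads still surface.
    SELECT CAST(TextData AS nvarchar(2000)) AS stmt,
           COUNT(*)   AS executions,
           SUM(Reads) AS total_reads,
           AVG(Reads) AS avg_reads
    FROM   master.dbo.JQ
    GROUP BY CAST(TextData AS nvarchar(2000))
    ORDER BY total_reads DESC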

Step 3: analyze the statement we just found. Since we want to analyze I/O, we need to turn on I/O statistics so that every time we adjust the SQL we can see how the I/O changes. This statement is very useful: SET STATISTICS IO ON
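For example, a minimal snippet to run before the statement being tuned:

    -- Turn on per-statement I/O reporting for the current session; after each
    -- statement, the Messages tab shows scan count and logical/physical reads
    -- for every table touched.
    SET STATISTICS IO ON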

Looking only at the I/O numbers, we would still be lost, because they do not tell us why a change affects I/O. For that we also need to see how the SQL execution plan is carried out. In Query Analyzer, press Ctrl+K, or choose Show Execution Plan from the Query menu.

Well, the preparations are all done; now the actual work begins.

First, let's look at optimizing a single SQL statement. Suppose the following SQL statement performs poorly:

The preceding SQL statement reads a total of 6636 rows, with 1126 logical reads. Is this I/O reasonable? Is it large or small? Is there room for improvement? Let's look at the execution plan:

There are four operators in total: the index seek costs 2%, the index scan 47%, the hash match 51%, and the select 0%. Keep the second principle in mind: index access should go through index seeks rather than scans whenever possible.

Let's take a look at the index information on the billsoflading table:
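Since the original screenshot is not reproduced here, one way to list the existing indexes is the standard system procedure sp_helpindex:

    -- Show the indexes currently defined on the table.
    EXEC sp_helpindex 'billsoflading'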

Why does the current plan use a scan on the index? If you are interested, you can find a book on database indexing. Based on my earlier blog post, we know that a composite (covering) index should be created: boldate + companyid + bolcode
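A sketch of that index (the index name below is my own choice, not from the original article):

    -- Composite index on boldate + companyid + bolcode, as described above.
    CREATE INDEX IX_billsoflading_boldate_companyid_bolcode
        ON billsoflading (boldate, companyid, bolcode)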

Then run the SQL statement again to check the I/O changes:

Oh, very cool! Logical reads are reduced to 50. Why? Let's take a look at the execution plan:

It turns out that the index scan has changed to an index seek, which naturally improves efficiency!

This is basically how index-oriented optimization works for individual SQL statements. Let's move on to stored procedures (SPs).

 

SP optimization differs from plain SQL optimization in one respect: the internal logic of an SP can be very complicated. From Query Analyzer alone, we cannot tell which of the small SQL statements inside it generates the most I/O; we only see an overall figure. Therefore, we need to look inside the SP.

First, find out the SPID of the current connection. One way is SELECT @@SPID; the other is to look at the status bar at the bottom of Query Analyzer.
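For example:

    -- Show the server process ID (SPID) of the current Query Analyzer connection.
    SELECT @@SPID AS current_spid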

My SPID is 101 (it is also shown in the status bar at the bottom).

Then I reopen Profiler (the event probe) and create a new trace. This time I modify the second tab page and add SP:StmtCompleted from the "Stored Procedures" event class on the left.

After adding it, the configuration looks like this:

Then switch to the fourth tab page and add a filter on the SPID we just found, SPID = 101:

After clicking Run, Profiler now captures only the SQL statements sent from the Query Analyzer window with SPID = 101. We switch back to Query Analyzer and execute the problematic SP. When it finishes, we return to Profiler and click the Stop button. Every SQL statement executed inside the SP has now been broken out separately!

Suppose the result is saved to a table named JQ2. Reading it back lists every small SQL statement that was executed:
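As a sketch (assuming the second trace was saved to master.dbo.JQ2):

    -- List the statements captured inside the SP, most expensive first.
    SELECT TextData, Reads, Duration
    FROM   master.dbo.JQ2
    WHERE  SPID = 101
    ORDER BY Reads DESC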

The first row is the overall result of the SP execution: its total I/O is 62328, which is the cost of the SP as a whole. The second row is an INSERT into a temporary table, with an I/O of 61514. It is easy to see that this single statement accounts for more than 95% of the SP's total cost. If we fix this INSERT INTO #temptable statement, the cost of the entire SP will naturally come down. So this is the statement we need to work on.

But wait! By default, the Query Analyzer results grid displays only a limited number of characters per column, so the SQL statement in the second row is truncated. We need to change the settings: in the Tools > Options dialog of Query Analyzer, switch to the "Results" tab, change the maximum characters per column to 8192 (the maximum allowed), click OK, and read the data from JQ2 again. You may ask: what if a SQL statement is even longer than that? In that case it is easy enough to write the statement to a log from your application code, or to modify the SP and PRINT it out.
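A purely hypothetical sketch of the PRINT approach (not from the article; the dynamic statement here is invented for illustration):

    -- Inside the SP, PRINT the full statement text before executing it; the
    -- Messages tab is not limited by the results grid's character setting.
    DECLARE @sql nvarchar(4000)
    SET @sql = N'SELECT COUNT(*) FROM goodsmovement'
    PRINT @sql
    EXEC (@sql)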

OK. Once the INSERT statement has been captured, paste it into Query Analyzer. Because we do not have the structure of the temporary table, we comment out the INSERT part and run only the SELECT. After execution, the cost on the goodsmovement table is 57834.
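A hypothetical illustration of that technique (the column names and date filter are invented, since the real statement appears only in a screenshot):

    -- #temptable does not exist in this session, so the INSERT target is
    -- commented out and only the SELECT against goodsmovement is measured.
    SET STATISTICS IO ON
    -- INSERT INTO #temptable (goodsid, qty, movedate)
    SELECT goodsid, qty, movedate
    FROM   goodsmovement
    WHERE  movedate >= '20040101'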

As before, we continue by looking at the execution plan:

At this point we are back at the plain SQL optimization step, so I won't spell out the rest of the work!

 

These steps look very simple. I hope they leave you with a clear picture of index-oriented SQL optimization: what to do first and what to do next. In other words, index optimization is basically the simplest kind of tuning. But simple things deserve the most attention: once you trace a real application, the number of inefficient indexes you find will truly drive you up the wall...
