[Repost] Massive database query optimization and paging algorithm solution (3)

Source: Internet
Author: User

I came across this article and found it genuinely inspiring; the idea behind it is very good. Later, while working on an office automation system (ASP.NET + C# + SQL Server), I suddenly remembered this article and thought that, with some modification, this statement could become a very good paging stored procedure. So I searched the Internet for the article. I did not find the article itself, but I did find a paging stored procedure written according to this statement, one that is in fact quite popular. I regret that I did not rush to turn the statement into a stored procedure myself:


       
        
CREATE PROCEDURE pagination2 (
    @SQL nvarchar(4000),     -- SQL statement without an ORDER BY clause
    @Page int,               -- page number
    @RecsPerPage int,        -- number of records per page
    @ID varchar(255),        -- non-repeating ID column
    @Sort varchar(255)       -- sort field and rule
)
AS
DECLARE @Str nvarchar(4000)
SET @Str = 'SELECT TOP ' + CAST(@RecsPerPage AS varchar(20)) + ' * FROM ('
    + @SQL + ') T WHERE T.' + @ID + ' NOT IN (SELECT TOP '
    + CAST(@RecsPerPage * (@Page - 1) AS varchar(20)) + ' ' + @ID
    + ' FROM (' + @SQL + ') T9 ORDER BY ' + @Sort + ') ORDER BY ' + @Sort
PRINT @Str
EXEC sp_executesql @Str
GO

       

In fact, the preceding statement can be simplified:


        
         
SELECT TOP PageSize *
FROM Table1
WHERE ID NOT IN
    (SELECT TOP PageSize * (PageNumber - 1) ID FROM Table1 ORDER BY ID)
ORDER BY ID

        

However, this stored procedure has a fatal drawback: it contains not in. Although I can transform it:


        
         
SELECT TOP PageSize *
FROM Table1
WHERE NOT EXISTS
    (SELECT * FROM
        (SELECT TOP (PageSize * (PageNumber - 1)) * FROM Table1 ORDER BY ID) b
     WHERE b.ID = Table1.ID)
ORDER BY ID

        

That is to say, not exists is used to replace not in, but we have already discussed that there is no difference in the execution efficiency between the two.

In this case, the combination of top and not in is faster than using a cursor.

Although using not exists cannot rescue the efficiency of the previous stored procedure, using the top keyword in SQL Server is still a wise choice. The ultimate goal of paging optimization is to avoid generating overly large record sets, and we have already mentioned the advantage of top in earlier sections: with top, we can control the volume of data returned.

In the paging algorithm, two key factors affect query speed: top and not in. top can increase our query speed, while not in slows it down. Therefore, to increase the speed of the whole paging algorithm, we must completely rework not in and replace it with another method.

We know that max(field) or min(field) can extract the maximum or minimum value of almost any field, so if a field is non-repeating, we can use the max or min of that field as a watershed: a reference object that separates the pages in the paging algorithm. Here we can use the operators ">" or "<" to accomplish this mission, which also makes the query statement conform to the SARG form. For example:


        
         
SELECT TOP 10 * FROM Table1 WHERE ID > 200

        

The following paging solution is available:


        
         
SELECT TOP PageSize *
FROM Table1
WHERE ID >
    (SELECT MAX(ID)
     FROM (SELECT TOP ((PageNumber - 1) * PageSize) ID FROM Table1 ORDER BY ID) AS T)
ORDER BY ID

        

When selecting a column that has no repeated values and whose values are easy to compare, we usually choose the primary key. Using the primary key of a table with 10 million rows in the office automation system as the sorting column, and extracting the gid, fariqi, and title fields, take pages 1, 10, 100, 500, 1000, 10,000, 100,000, and 250,000 as examples to test the execution speed of the three paging schemes above (unit: milliseconds):


        
         
Page number | Scheme 1 | Scheme 2 | Scheme 3
          1 |       60 |       30 |       76
         10 |       46 |       16 |       63
        100 |     1076 |      720 |      130
        500 |      540 |    12943 |
       1000 |    17110 |      470 |      250
     10,000 |    24796 |     4500 |      140
    100,000 |    38326 |    42283 |     1553
    250,000 |    28140 |   128720 |     2330
    500,000 |   121686 |   127946 |     7168

        

From the table above, we can see that all three stored procedures are trustworthy for paging within the first 100 pages, and the speed is good. However, scheme 1 begins to slow down past 1000 pages, and scheme 2 begins to slow down past 10,000 pages, while scheme 3 never degrades significantly and still has plenty of stamina.

After settling on the third paging scheme, we can write a stored procedure accordingly. As you know, SQL Server stored procedures are compiled in advance, so their execution efficiency is higher than that of SQL statements sent from web pages. The following stored procedure not only implements the paging scheme, but also decides, based on a parameter sent from the page, whether to count the total number of records.


       
        
-- Get the data of the specified page.
CREATE PROCEDURE pagination3
    @tblName varchar(255),             -- table name
    @strGetFields varchar(1000) = '*', -- columns to return
    @fldName varchar(255) = '',        -- name of the sort field
    @PageSize int = 10,                -- page size
    @PageIndex int = 1,                -- page number
    @doCount bit = 0,                  -- return the total record count; non-zero means yes
    @OrderType bit = 0,                -- sort type; non-zero means descending
    @strWhere varchar(1500) = ''       -- query condition (note: do not add WHERE)
AS
DECLARE @strSQL varchar(5000)          -- main statement
DECLARE @strTmp varchar(110)           -- temporary variable
DECLARE @strOrder varchar(400)         -- sort clause
IF @doCount != 0
BEGIN
    IF @strWhere != ''
        SET @strSQL = 'SELECT COUNT(*) AS Total FROM [' + @tblName + '] WHERE ' + @strWhere
    ELSE
        SET @strSQL = 'SELECT COUNT(*) AS Total FROM [' + @tblName + ']'
END
-- The code above counts the total number of records if @doCount is non-zero.
-- All of the code below handles the case where @doCount is 0.
ELSE
BEGIN
    IF @OrderType != 0
    BEGIN
        SET @strTmp = '<(SELECT MIN'
        SET @strOrder = ' ORDER BY [' + @fldName + '] DESC'
        -- If @OrderType is non-zero, descending order is used. This line is very important!
    END
    ELSE
    BEGIN
        SET @strTmp = '>(SELECT MAX'
        SET @strOrder = ' ORDER BY [' + @fldName + '] ASC'
    END
    IF @PageIndex = 1
    BEGIN
        IF @strWhere != ''
            SET @strSQL = 'SELECT TOP ' + STR(@PageSize) + ' ' + @strGetFields +
                ' FROM [' + @tblName + '] WHERE ' + @strWhere + ' ' + @strOrder
        ELSE
            SET @strSQL = 'SELECT TOP ' + STR(@PageSize) + ' ' + @strGetFields +
                ' FROM [' + @tblName + ']' + @strOrder
        -- For the first page, execute the code above; this speeds up execution.
    END
    ELSE
    BEGIN
        -- The code below gives @strSQL the SQL that will actually be executed.
        SET @strSQL = 'SELECT TOP ' + STR(@PageSize) + ' ' + @strGetFields +
            ' FROM [' + @tblName + '] WHERE [' + @fldName + ']' + @strTmp +
            '([' + @fldName + ']) FROM (SELECT TOP ' + STR((@PageIndex - 1) * @PageSize) +
            ' [' + @fldName + '] FROM [' + @tblName + ']' + @strOrder +
            ') AS tblTmp)' + @strOrder
        IF @strWhere != ''
            SET @strSQL = 'SELECT TOP ' + STR(@PageSize) + ' ' + @strGetFields +
                ' FROM [' + @tblName + '] WHERE [' + @fldName + ']' + @strTmp +
                '([' + @fldName + ']) FROM (SELECT TOP ' + STR((@PageIndex - 1) * @PageSize) +
                ' [' + @fldName + '] FROM [' + @tblName + '] WHERE ' + @strWhere + ' ' +
                @strOrder + ') AS tblTmp) AND ' + @strWhere + ' ' + @strOrder
    END
END
EXEC (@strSQL)
GO
       

The above stored procedure is a general stored procedure, and its annotations have been written in it.
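As a sketch of how this general procedure might be called (the table name Tgongwen here is an assumption based on the field names gid, fariqi, and title used elsewhere in this article):

```sql
-- Fetch page 3, 10 rows per page, sorted ascending by gid,
-- returning only the gid, fariqi, and title columns,
-- without counting the total number of records.
EXEC pagination3
    @tblName      = 'Tgongwen',
    @strGetFields = 'gid,fariqi,title',
    @fldName      = 'gid',
    @PageSize     = 10,
    @PageIndex    = 3,
    @doCount      = 0,
    @OrderType    = 0,
    @strWhere     = ''
```

Passing @doCount = 1 instead would make the procedure return only a single Total column with the record count, which the page can use to compute the number of pages.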

In the case of large data volumes, especially when querying the last few pages, the query time generally does not exceed 9 seconds. Other stored procedures may cause timeout in practice, therefore, this stored procedure is very suitable for queries of large-capacity databases.

I hope that through the analysis of the above stored procedures, we can provide some inspiration and improve the efficiency of our work. At the same time, I hope our peers can propose better real-time data paging algorithms.

4. Importance of clustered indexes and how to select clustered Indexes

In the title of the previous section, I wrote: a general paging display stored procedure for both small and massive data volumes. This is because, when applying this stored procedure to the "office automation" system, the author found that the third stored procedure exhibits the following phenomena when the data volume is small:

1. The paging speed is generally between 1 second and 3 seconds.

2. When querying the last page, the speed is generally 5 to 8 seconds, whether the total number of pages is only 3 or as many as 300,000.

Although this paging procedure is very fast at ultra-large capacities, for the first few pages this 1-3 second speed is slower than even the first, unoptimized paging method. In the users' words, it is "not even as fast as an Access database", a verdict sufficient to keep users from using the system you developed.

I have analyzed this. The crux of this problem is so simple, but so important: the sorting field is not a clustered index!

The title of this article is "Query optimization and paging algorithm solution". The author did not put the topics of "query optimization" and "paging algorithm" together by accident: both of them depend on one very important thing, the clustered index.

As we mentioned earlier, clustered indexes have two major advantages:

1. Narrow the query range as quickly as possible.

2. Sort fields as quickly as possible.

The first is mostly used for query optimization, while the second is mostly used for sorting data during paging.

Only one clustered index can be created per table, which makes the clustered index all the more important. The selection of the clustered index can be said to be the most critical factor for both "query optimization" and "efficient paging".

However, clustering index columns must meet both the needs of query columns and the needs of sorting columns. This is usually a contradiction.

In my earlier discussion of indexes, I used fariqi, the date the user posted the document, as the starting column of the clustered index, with a date precision of "day". The advantages of this approach were mentioned earlier: in quick queries over a time range, it has an advantage over using the id primary key column.

However, during paging, duplicate records exist in this clustered index column, so max or min cannot be used as the paging reference object, and more efficient sorting cannot be achieved. On the other hand, if the id primary key column is used as the clustered index, the clustered index is useless for anything except sorting, which is a waste of a valuable resource.

To solve this problem, I later added a date column whose default value is getdate(). When a user writes a record, this column automatically records the current time, down to the millisecond. Even so, to avoid even a small chance of duplicates, a unique constraint must also be created on this column. This date column is then used as the clustered index column.

With this time-based clustered index column, the same column serves two purposes: it narrows queries to a time range when searching inserted data, and, as a unique column, it implements the max- or min-based reference object for the paging algorithm.
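A minimal sketch of this design, assuming a table named Tgongwen and a new column named fariqi2 (both names are illustrative, not from the original schema):

```sql
-- Add a millisecond-precision entry-time column defaulting to getdate().
ALTER TABLE Tgongwen ADD fariqi2 datetime NOT NULL DEFAULT (getdate())

-- A unique clustered index both enforces the uniqueness the paging
-- algorithm needs and keeps the table physically sorted by time.
-- (Any existing clustered index on the table must be dropped first.)
CREATE UNIQUE CLUSTERED INDEX IX_Tgongwen_fariqi2 ON Tgongwen (fariqi2)
```

With this index in place, the max/min-based paging scheme can use fariqi2 as its watershed column, and date-range queries still benefit from the physical ordering.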

After such optimization, the author found that the paging speed is usually dozens of milliseconds or even 0 milliseconds in the case of large data volumes or small data volumes. However, the query speed for narrowing the range by date segments is no slower than the original query speed.

Clustered index is so important and precious, so I have summarized that clustered index must be built on:

1. The most frequently used field to narrow the query scope;

2. The most frequently used field to be sorted.

Conclusion:

This article brings together my recent experience in using databases and is an accumulation of practical experience from the "office automation" system. I hope this article helps not only with your work but also with your methods of analyzing problems; most of all, I hope it sparks everyone's interest in learning and discussion, so that together we can advance the cause of strengthening police work through science and technology and the Golden Shield Project.

Finally, it should be noted that during my experiments I found that the biggest influence on database speed is not memory size but the CPU. On my P4 2.4 GHz machine, watching Task Manager, the CPU usage frequently stayed at 100% while memory usage changed little or not at all. Even on our HP ML 350 G3 server, CPU usage peaked at 90% and typically ran around 70%.

The experimental data in this article came from our HP ML 350 server. Server configuration: dual Intel Xeon hyper-threaded CPUs at 2.4 GHz, 1 GB of memory, operating system Windows Server 2003 Enterprise Edition, database SQL Server 2000 SP3.

You can visit our intranet or Internet web site to experience the office automation system (ASP.NET + C#) built on this "ten-million-row" database.
