Creating a web application almost always requires paging, and the problem is very common in database processing. The typical approach is ADO recordset paging, that is, using the paging functionality that ADO provides (via a cursor). However, this method is only suitable for small data volumes, because the cursor itself has a drawback: it is held in memory and is very memory-intensive. Once a cursor is established, the related records are locked until the cursor is released. A cursor provides a way to scan a given result set row by row; it is generally used to traverse data one row at a time and perform different operations depending on each row's values. For cursors defined over multiple tables or large tables (large data sets), looping this way can easily send the program into a long wait or even a crash.
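As a rough illustration (not code from the original system), a cursor-based traversal over the tgongwen table used later in this article might look like the sketch below; every fetched row passes through the cursor's in-memory working set, which is exactly what makes the approach heavy for large result sets.

    -- Minimal sketch of row-by-row cursor traversal (illustrative only).
    -- The table and column names (tgongwen, gid, fariqi) are the ones used later in this article.
    declare @gid int, @fariqi datetime
    declare cur cursor for
        select gid, fariqi from tgongwen order by fariqi desc
    open cur
    fetch next from cur into @gid, @fariqi
    while @@fetch_status = 0
    begin
        -- per-row processing would go here; each row is handled one at a time,
        -- and the cursor's working set is what consumes memory and holds locks
        fetch next from cur into @gid, @fariqi
    end
    close cur
    deallocate cur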
More importantly, for a very large data set, loading the entire data source on every paging retrieval is a waste of resources. The currently popular approach is to retrieve only a block of data the size of one page, rather than all of the data, and then step through the rows of the current page.
The earliest method of extracting data by page size and page number is probably the "Russian stored procedure". That stored procedure uses a cursor, and because of the cursor's limitations the method never gained wide acceptance.
Later, someone on the Internet modified that stored procedure. The following stored procedure is a paging stored procedure written for our office automation example:
create procedure pagination1
    (@pagesize int,   -- page size, e.g. 20 records per page
     @pageindex int)  -- current page number
as
set nocount on
begin
    declare @indextable table (id int identity(1,1), nid int)  -- define the table variable
    declare @pagelowerbound int  -- lower bound of this page
    declare @pageupperbound int  -- upper bound of this page
    set @pagelowerbound = (@pageindex - 1) * @pagesize
    set @pageupperbound = @pagelowerbound + @pagesize
    set rowcount @pageupperbound
    insert into @indextable (nid)
        select gid from tgongwen
        where fariqi > dateadd(day, -365, getdate())
        order by fariqi desc
    select o.gid, o.mid, o.title, o.fadanwei, o.fariqi
    from tgongwen o, @indextable t
    where o.gid = t.nid
      and t.id > @pagelowerbound
      and t.id <= @pageupperbound
    order by t.id
end
set nocount off
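For example, fetching the third page at 20 records per page would be a call like this (the parameter values are only illustrative):

    -- fetch page 3 with a page size of 20 (illustrative values)
    exec pagination1 @pagesize = 20, @pageindex = 3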
The above stored procedure uses one of SQL Server's newer features, table variables. It is, in fact, a very good paging stored procedure. Of course, the table variable could also be written as a temporary table (create table #temp), but it is well known that in SQL Server temporary tables are not as fast as table variables. So when I first started using this stored procedure I felt it was excellent, and it was faster than the original ADO approach. Later, however, I found an even better method.
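For comparison, here is a sketch of what the temporary-table variant of that index table might look like (an assumption for illustration, not the author's code):

    -- Sketch only: the same index table built as a temporary table instead of a
    -- table variable. Table and column names follow pagination1 above.
    create table #temp (id int identity(1,1), nid int)
    set rowcount 60   -- e.g. the upper bound for page 3 at 20 rows per page
    insert into #temp (nid)
        select gid from tgongwen
        where fariqi > dateadd(day, -365, getdate())
        order by fariqi desc
    set rowcount 0
    -- ...then join #temp back to tgongwen exactly as pagination1 does with its table variable...
    drop table #temp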
The author once saw a short article on the Internet, "How to retrieve records N through M from a data table". The full text is as follows:
Retrieve records N through M from the publish table:

    select top M-N+1 *
    from publish
    where (id not in
        (select top N-1 id
         from publish))

Note: id is the key column of the publish table.
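For instance, substituting N = 21 and M = 40 (records 21 through 40, i.e. the third page at 20 rows per page) gives:

    -- retrieve records 21 through 40 from the publish table (illustrative values)
    select top 20 *                                    -- M - N + 1 = 40 - 21 + 1 = 20
    from publish
    where id not in (select top 20 id from publish)    -- N - 1 = 20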
When I read this article I was truly inspired and thought the idea was excellent. Later, while working on an office automation system (ASP.NET + C# + SQL Server), I suddenly remembered it and thought that if the statement were modified, it could make a very good paging stored procedure. So I searched the Internet for the article. I never found it again, but I did find a paging stored procedure written according to this statement, and that stored procedure is in fact a rather popular one. I regret not having rushed to turn that text into a stored procedure myself:
create procedure pagination2
    (@sql nvarchar(4000),     -- SQL statement without the sorting clause
     @page int,               -- page number
     @recsperpage int,        -- number of records per page
     @id varchar(255),        -- non-duplicated id column
     @sort varchar(255))      -- sorting field and rule
as
declare @str nvarchar(4000)
set @str = 'select top ' + cast(@recsperpage as varchar(20)) + ' * from ('
    + @sql + ') t where t.' + @id + ' not in (select top '
    + cast(@recsperpage * (@page - 1) as varchar(20)) + ' ' + @id
    + ' from (' + @sql + ') t9 order by ' + @sort + ') order by ' + @sort
print @str
exec sp_executesql @str
go
In fact, the preceding statement can be simplified:
    select top <page size> *
    from table1
    where id not in
        (select top <page size * (page number - 1)> id from table1 order by id)
    order by id
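With concrete values, say a page size of 10 and page number 4, the statement becomes:

    -- page size = 10, page number = 4: skip the first 30 rows, take the next 10
    select top 10 *
    from table1
    where id not in (select top 30 id from table1 order by id)
    order by id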
However, this stored procedure has a fatal drawback: it contains NOT IN. Although I could transform it:
    select top <page size> *
    from table1
    where not exists
        (select * from (select top <page size * (page number - 1)> * from table1 order by id) b
         where b.id = table1.id)
    order by id
That is, NOT EXISTS is used in place of NOT IN, but as we discussed earlier, there is no real difference in execution efficiency between the two. Even so, the combination of TOP with NOT IN is already faster than using a cursor.
Although NOT EXISTS cannot rescue the efficiency of the previous stored procedure, using the TOP keyword in SQL Server is a wise choice, because the ultimate goal of paging optimization is to avoid producing overly large record sets, and we have already mentioned TOP's advantage in earlier sections: with TOP we can control the volume of data returned.
In the paging algorithm there are two key factors that affect query speed: TOP and NOT IN. TOP increases our query speed, while NOT IN slows it down. Therefore, to increase the speed of the whole paging algorithm, we need to transform NOT IN completely and replace it with something else.
We know that max(field) or min(field) can extract the maximum or minimum value of almost any field, so if the field has no duplicate values, we can use the max or min of that field as a watershed, a reference point that separates one page from the next in the paging algorithm. Here, the operators ">" or "<" accomplish this mission, and they keep the query statement in SARG form. For example:
    select top 10 * from table1 where id > 200
This yields the following paging scheme:
    select top <page size> *
    from table1
    where id > (select max(id)
                from (select top <(page number - 1) * page size> id
                      from table1 order by id) as t)
    order by id
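For example, with a page size of 10 and page number 100 (so the first 990 rows are skipped by id), the scheme reads:

    -- page size = 10, page number = 100: skip the first (100 - 1) * 10 = 990 rows by id
    select top 10 *
    from table1
    where id > (select max(id)
                from (select top 990 id from table1 order by id) as t)
    order by id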
When choosing a column that has no duplicate values and whose ordering is easy to compare, we usually choose the primary key. The test below uses the table from our office automation system containing 10 million rows, with gid as the sort column and gid, fariqi, and title as the extracted fields. Taking pages 1, 10, 100, 500, 1000, 10000, 100000, 250000, and 500000 as examples, we tested the execution speed of the three paging schemes above (unit: milliseconds):
Page number | Solution 1 | Solution 2 | Solution 3
----------- | ---------- | ---------- | ----------
1           | 60         | 30         | 76
10          | 46         | 16         | 63
100         | 1076       | 720        | 130
500         | 540        | 12943      | 83
1000        | 17110      | 470        | 250
10000       | 24796      | 4500       | 140
100000      | 38326      | 42283      | 1553
250000      | 28140      | 128720     | 2330
500000      | 121686     | 127846     | 7168
From the table above we can see that all three stored procedures are trustworthy when paging within the first 100 pages, and their speed is good. However, solution 1 slows down once more than 1000 pages are involved, and solution 2 starts slowing down past 10,000 pages, whereas solution 3 never degrades significantly and still has plenty of stamina.
Having settled on the third paging scheme, we can write a stored procedure around it. As you know, SQL Server stored procedures are compiled in advance, so their execution efficiency is higher than that of SQL statements sent from a web page. The following stored procedure not only contains the paging scheme, but also decides, based on a parameter passed from the page, whether to count the total number of records.
Obtain data on a specified page:
create procedure pagination3
    @tblName varchar(255),              -- table name
    @strGetFields varchar(1000) = '*',  -- columns to return
    @fldName varchar(255) = '',         -- name of the sort field
    @PageSize int = 10,                 -- page size
    @PageIndex int = 1,                 -- page number
    @doCount bit = 0,                   -- return the total record count; a non-zero value returns it
    @OrderType bit = 0,                 -- sort type; a non-zero value means descending order
    @strWhere varchar(1500) = ''        -- query condition (note: do not add "where")
as
declare @strSQL varchar(5000)   -- main statement
declare @strTmp varchar(110)    -- temporary variable
declare @strOrder varchar(400)  -- sort clause

if @doCount != 0
begin
    if @strWhere != ''
        set @strSQL = "select count(*) as total from [" + @tblName + "] where " + @strWhere
    else
        set @strSQL = "select count(*) as total from [" + @tblName + "]"
end
The code above means that if @doCount was passed in as a non-zero value, the total-count query is executed. All of the code below handles the case where @doCount is 0:
else
begin
    if @OrderType != 0
    begin
        set @strTmp = "<(select min"
        set @strOrder = " order by [" + @fldName + "] desc"
If @OrderType is not 0, descending order is used. This line is very important!
    end
    else
    begin
        set @strTmp = ">(select max"
        set @strOrder = " order by [" + @fldName + "] asc"
    end

    if @PageIndex = 1
    begin
        if @strWhere != ''
            set @strSQL = "select top " + str(@PageSize) + " " + @strGetFields
                + " from [" + @tblName + "] where " + @strWhere + " " + @strOrder
        else
            set @strSQL = "select top " + str(@PageSize) + " " + @strGetFields
                + " from [" + @tblName + "] " + @strOrder
Executing the code above when the request is for the first page speeds things up.
    end
    else
    begin
The following code assigns to @strSQL the SQL that will actually be executed:
        set @strSQL = "select top " + str(@PageSize) + " " + @strGetFields
            + " from [" + @tblName + "] where [" + @fldName + "]" + @strTmp
            + "([" + @fldName + "]) from (select top " + str((@PageIndex - 1) * @PageSize)
            + " [" + @fldName + "] from [" + @tblName + "]" + @strOrder
            + ") as tblTmp)" + @strOrder

        if @strWhere != ''
            set @strSQL = "select top " + str(@PageSize) + " " + @strGetFields
                + " from [" + @tblName + "] where [" + @fldName + "]" + @strTmp
                + "([" + @fldName + "]) from (select top " + str((@PageIndex - 1) * @PageSize)
                + " [" + @fldName + "] from [" + @tblName + "] where " + @strWhere + " "
                + @strOrder + ") as tblTmp) and " + @strWhere + " " + @strOrder
    end
end
exec (@strSQL)
go
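A hypothetical call to pagination3 against the tgongwen table from earlier might look like the following; the column list and parameter values are illustrative and not part of the original procedure:

    -- fetch page 100, 10 rows per page, sorted by gid ascending, no extra condition
    -- (all values here are illustrative)
    exec pagination3
        @tblName      = 'tgongwen',
        @strGetFields = 'gid, fariqi, title',
        @fldName      = 'gid',
        @PageSize     = 10,
        @PageIndex    = 100,
        @doCount      = 0,
        @OrderType    = 0,
        @strWhere     = ''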
This pagination3 procedure is a general-purpose stored procedure, and its comments are written inside it. For large data volumes, especially when querying the last few pages, its query time generally does not exceed 9 seconds, whereas other stored procedures can time out in practice, so it is very suitable for queries against large databases. I hope the analysis of the stored procedures above offers some inspiration and improves the efficiency of your work, and I also hope my peers can propose even better real-time data paging algorithms.
However, in the case of small data volumes, the third stored procedure behaves as follows:
1. The paging speed is generally between 1 second and 3 seconds.
2. When querying the last page, the speed is generally 5 to 8 seconds, whether the total number of pages is only 3 or as many as 300,000.
Although this paging implementation is very fast with very large data volumes, for the first few pages a speed of 1 to 3 seconds is actually slower than the first, unoptimized paging method. In the users' words, "it's not even as fast as an Access database", and that perception is enough to keep users from using the system you developed.
I analyzed this, and the crux of the problem is as simple as it is important: the sort field is not the clustered index!
The reason the author discusses "query optimization" and "paging algorithm", two not-so-closely related topics, together is that both depend on one very important thing: the clustered index.
As we mentioned earlier, clustered indexes have two major advantages:
1. Narrow the query range as quickly as possible.
2. Sort fields as quickly as possible.
The first is mostly used in query optimization, while the second is mostly used for sorting data during paging.
Only one clustered index can be created per table, which makes the clustered index all the more precious. The choice of clustered index can be said to be the most critical factor in both "query optimization" and "efficient paging".
However, it is usually a contradiction for the clustered index column to satisfy both the needs of the query condition and the needs of the sort column. In my earlier discussion of indexes, I made fariqi, the document's issue date, the leading column of the clustered index, with the date accurate to the "day". The advantages of this approach were mentioned earlier: for fast queries over a time range it is more advantageous than using the id primary key column.
However, because the clustered index column contains duplicate values, max or min cannot be used as the paging reference point during pagination, so more efficient sorting cannot be achieved. On the other hand, if the id primary key column were made the clustered index, the clustered index would be useless for anything except sorting, which is a waste of a precious resource.
To resolve this contradiction, I later added a date column whose default value is getdate(). When a user writes a record, this column automatically records the current time, accurate to milliseconds. Even so, to avoid even a small chance of overlap, a unique constraint must also be created on the column. This date column is then used as the clustered index column.
With this time-based clustered index column, you can use it to narrow queries to the time range in which data was inserted, and at the same time, because it is a unique column, use its max or min as the reference point for the paging algorithm.
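A minimal sketch of what such a column might look like on the tgongwen table is given below; the column and index names are assumptions for illustration, not the author's actual schema, and any existing clustered index on the table would first have to be dropped.

    -- Sketch only: add an insert-time column that defaults to the current time,
    -- then build a unique clustered index on it. All object names are illustrative.
    alter table tgongwen add inserttime datetime
        constraint df_tgongwen_inserttime default getdate()
    -- existing rows would first need to be backfilled with distinct values,
    -- and any existing clustered index on tgongwen dropped, before this step
    create unique clustered index idx_tgongwen_inserttime on tgongwen (inserttime)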
After this optimization, the author found that whether the data volume is large or small, the paging speed is usually a few dozen milliseconds or even 0 milliseconds, while the speed of queries that narrow the range by date is no slower than before. The clustered index is that important and that precious, so I conclude that the clustered index must be built on:
1. The most frequently used field to narrow the query scope;
2. The most frequently used field to be sorted.