(Currently verified only on SQL Server 2000.) This technique can improve paging performance considerably.
A RAM disk ("virtual hard disk") uses part of main memory as a disk drive: with 2 GB of RAM, for example, you can set aside 1 GB and use it as a disk.
Ever since I learned about RAM disks, I have been thinking about how to push them to their limit. I wrote a few simple applications, but the efficiency gains were modest and not very satisfying. What I most wanted to speed up was database reads, i.e., paging efficiency. My first idea was to put the database file itself on the RAM disk; surely reads would be fast then? But when I put a database with 2.5 million records on a RAM disk for testing, the results were disappointing.
With 2.5 million records, sorting by the primary key (clustered index) was indeed several times faster. But when sorting by an ordinary field (nvarchar, no index), fetching page 1000 timed out! Is this really RAM? Then I happened to glance at the hard-disk activity LED: it stayed lit the whole time the data was being fetched. That made no sense. If the data is read from the RAM disk, why is the hard-disk light on? It turns out that SQL Server 2000 writes to the hard disk when sorting a large amount of data. A closer look showed that it writes the data to the tempdb database, more than 100 MB of it. That, it seems, was the cause of the timeout.
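One way to confirm that a big sort is spilling into tempdb is to watch tempdb's size with the stock sp_spaceused procedure (a sketch; it assumes you can run an ad-hoc query before and during the paging query and compare the numbers):

USE tempdb
GO
-- Report tempdb's current allocated and used space.
-- Run once before the big sort and again while it executes;
-- a jump of ~100 MB indicates the sort is spilling here.
EXEC sp_spaceused
GO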
So could tempdb itself be moved to the RAM disk? My first clumsy attempt managed to kill SQL Server 2000 outright, and I had to reinstall it. Later I discovered that there are ready-made SQL statements for exactly this. With them you can change tempdb's location and run the test again. The result was much better: turning to page 5000 took 11 seconds, with no timeout.
USE master
GO
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 't:\data\tempdb.mdf')
GO
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 't:\data\templog.ldf')
GO
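The move takes effect only after the SQL Server service is restarted. You can then confirm the new locations by querying tempdb's sysfiles system table (a sketch; sysfiles is the per-database file catalog in SQL Server 2000):

USE tempdb
GO
-- List tempdb's logical file names and physical paths;
-- after the restart they should point at the RAM disk (t:\data\...).
SELECT name, filename FROM sysfiles
GO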
The tempdb database holds no important data, so losing it causes no real harm; after a reboot you only need to copy the files back into place. The cost is small, the efficiency gain is large, and it also reduces hard-disk reads and writes, which protects the disk. All in all it seems like a good method.
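A boot-time batch file along these lines can prepare the RAM disk before the SQL Server service starts (a sketch: the drive letter and directory follow the ALTER DATABASE statements above, and MSSQLSERVER is the default service name; SQL Server rebuilds tempdb at startup as long as the target directory exists):

rem Recreate the tempdb directory on the freshly wiped RAM disk,
rem then start SQL Server, which recreates tempdb there.
md t:\data
net start MSSQLSERVER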
Test details:
SQL Server 2000, Northwind database, Products table containing 2,523,136 records; 15 records per page, 168,210 pages in total.
ProductID is the primary key with the clustered index; no other field is indexed.
1. Using the "reversed TOP" paging algorithm, sorted by ProductName, ProductID. The first query takes more than 20 seconds (the data is being written to tempdb); after that it gets faster, about 11 seconds. That is, turning to page 1000 or page 5000 takes only about 11 seconds, and the hard-disk light barely comes on. Page 10,000: 12 seconds. Page 100,000: 22 seconds. (Note: these are the results for 2.5 million records sorted by an nvarchar field.)
The SQL statement used for page 99,999:
SELECT * FROM Products WHERE ProductID IN
(
    SELECT TOP 15 ProductID FROM
    (
        SELECT TOP 1499985 ProductName, ProductID
        FROM Products
        ORDER BY ProductName, ProductID
    ) AS t
    ORDER BY t.ProductName DESC, t.ProductID DESC
)
ORDER BY ProductName, ProductID
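The query above generalizes as follows: for page @Page (1-based) with @PageSize rows, the inner TOP fetches the first @Page * @PageSize rows in ascending order, the middle TOP takes the last @PageSize of those by reversing the sort, and the outer query restores the original order. Since TOP in SQL Server 2000 does not accept a variable, the statement has to be built dynamically (a sketch; the variable names @Page, @PageSize, and @sql are my own):

-- Build the reversed-TOP paging query for an arbitrary page.
DECLARE @Page int, @PageSize int, @sql nvarchar(4000)
SET @Page = 99999
SET @PageSize = 15
SET @sql =
    'SELECT * FROM Products WHERE ProductID IN ' +
    '( SELECT TOP ' + CAST(@PageSize AS varchar) + ' ProductID FROM ' +
    '  ( SELECT TOP ' + CAST(@Page * @PageSize AS varchar) +
    '    ProductName, ProductID FROM Products ' +
    '    ORDER BY ProductName, ProductID ) AS t ' +
    '  ORDER BY t.ProductName DESC, t.ProductID DESC ) ' +
    'ORDER BY ProductName, ProductID'
EXEC sp_executesql @sql

The last page is the worst case for this algorithm, because the inner TOP must scan and sort nearly the whole table; that is why page 100,000 takes 22 seconds while page 1000 takes 11.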