When SQL pagination reaches deep into a large result set (for example, LIMIT 1000000, 20), performance drops sharply. How can this be optimized?
SQL Server uses something like:

    SELECT TOP 10 * FROM USER ORDER BY UID ASC;
MySQL uses something like:

    SELECT * FROM USER ORDER BY UID ASC LIMIT 0, 10;
To display the second page of data, the common practice is:

    SELECT * FROM USER ORDER BY UID ASC LIMIT 10, 10;
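As a runnable illustration of this classic OFFSET style, here is a Python/SQLite sketch (the USER table and UID column follow the article's examples; SQLite happens to accept MySQL's `LIMIT offset, count` syntax):

```python
# Sketch of classic OFFSET pagination; USER/UID mirror the article's examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE USER (UID INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO USER VALUES (?, ?)",
                 [(i, "user%d" % i) for i in range(1, 101)])

page_size = 10
# Page 1: LIMIT 0, 10   ->   Page 2: LIMIT 10, 10
page1 = conn.execute("SELECT UID FROM USER ORDER BY UID ASC LIMIT 0, ?",
                     (page_size,)).fetchall()
page2 = conn.execute("SELECT UID FROM USER ORDER BY UID ASC LIMIT ?, ?",
                     (page_size, page_size)).fetchall()
print([r[0] for r in page1])  # UIDs 1..10
print([r[0] for r in page2])  # UIDs 11..20
```

The offset in `LIMIT offset, count` is where the cost hides: the server still has to read and throw away every skipped row.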
The problem is that on a large table, the further back the page, the slower the query, because the server must read and discard every skipped row. What are the simple solutions? Several approaches follow:
First, for consecutive paging queries, record the maximum ID of the previous result set and use it to locate the next page of data directly.
1. First page:

    SELECT * FROM USER ORDER BY UID ASC LIMIT 0, 10;
2. Take the UID of the last record returned, $uid.
3. Second page:

    SELECT * FROM USER WHERE uid > $uid ORDER BY uid ASC LIMIT 0, 10;
This way, the server can seek straight to uid > $uid through the primary-key index instead of scanning and discarding the skipped rows, so the query stays fast, and the LIMIT clause never needs to change as the user pages forward.
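The three steps above can be sketched in Python with SQLite (the USER table is hypothetical, built only for the demo):

```python
# Sketch of keyset ("seek") pagination: remember the last UID, seek past it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE USER (UID INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO USER VALUES (?, ?)",
                 [(i, "user%d" % i) for i in range(1, 101)])

def next_page(conn, last_uid, page_size=10):
    # WHERE UID > ? lets the primary-key index seek directly to the page
    # start, instead of scanning and discarding the skipped rows.
    return conn.execute(
        "SELECT UID, name FROM USER WHERE UID > ? ORDER BY UID ASC LIMIT ?",
        (last_uid, page_size)).fetchall()

page1 = next_page(conn, 0)          # step 1: first page (everything after UID 0)
last_uid = page1[-1][0]             # step 2: remember the last UID ($uid)
page2 = next_page(conn, last_uid)   # step 3: WHERE uid > $uid
print([r[0] for r in page2])  # UIDs 11..20
```

The trade-off: this only supports "next page" navigation, not jumping to an arbitrary page number.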
Second, for tables whose IDs are contiguous, the program can compute the starting and ending ID of the requested page and submit the query with BETWEEN ... AND.
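A minimal sketch of that calculation, under the strong assumption that UIDs are contiguous and start at 1 (deleted rows would break it):

```python
# Sketch: compute page boundaries arithmetically, then query with BETWEEN.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE USER (UID INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO USER VALUES (?, ?)",
                 [(i, "user%d" % i) for i in range(1, 101)])

def page_between(conn, page, page_size=10):
    # Assumes UIDs run 1, 2, 3, ... with no gaps.
    lo = (page - 1) * page_size + 1
    hi = page * page_size
    return conn.execute(
        "SELECT UID FROM USER WHERE UID BETWEEN ? AND ? ORDER BY UID",
        (lo, hi)).fetchall()

print([r[0] for r in page_between(conn, 3)])  # UIDs 21..30
```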
Third, use a subquery to fetch only the primary-key values of the page, making full use of the primary-key index, for example:
    SELECT t.* FROM (SELECT id FROM your_table ORDER BY id LIMIT 1000000, 20) s
    JOIN your_table t ON t.id = s.id;

    SELECT * FROM your_table
    WHERE id >= (SELECT id FROM your_table ORDER BY id ASC LIMIT 1000000, 1)
    LIMIT 20;
Fourth, use NoSQL, or keep a separate narrow table as an index table, or split storage into hot and cold tables according to how new the data is.