Anyone who has used MySQL knows that it provides native paging through the LIMIT clause, which forces a SELECT statement to return a specified number of rows. LIMIT accepts one or two numeric arguments, both of which must be integer constants. Given two arguments, the first specifies the offset of the first row to return, and the second specifies the maximum number of rows to return. Tip: the offset of the first row is 0, not 1.
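A minimal sketch of the two LIMIT forms, using SQLite in place of MySQL (SQLite accepts the same `LIMIT offset, count` comma syntax); the table `t` is made up purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO t (id) VALUES (?)", [(i,) for i in range(1, 101)])

# One argument: return the first 5 rows.
first_five = [r[0] for r in conn.execute("SELECT id FROM t ORDER BY id LIMIT 5")]
print(first_five)   # [1, 2, 3, 4, 5]

# Two arguments: skip 10 rows (offset starts at 0), then return at most 5 rows.
page = [r[0] for r in conn.execute("SELECT id FROM t ORDER BY id LIMIT 10, 5")]
print(page)         # [11, 12, 13, 14, 15]
```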
While actually using the LIMIT clause in a project, however, I found that with a large number of pages (tens of thousands), each successive page loaded noticeably slower than the one before. Curious about the cause, I looked into it and discovered that LIMIT 10000, 20 means MySQL scans the first 10020 rows that satisfy the condition, discards the first 10000, and returns the last 20. As shown below:
EXPLAIN SELECT * FROM message ORDER BY id DESC LIMIT 10000, 20\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: message
         type: index
possible_keys: NULL
          key: PRIMARY
      key_len: 4
          ref: NULL
         rows: 10020
        Extra:
1 row in set (0.00 sec)
Paging deep into the table this way is effectively a full-table scan, so in a high-concurrency application the performance certainly won't hold up. A plain LIMIT N, by contrast, is no problem, because only N rows are scanned. So I went looking for solutions, as follows:
Solution1: Using limit N
"Clue": the rough idea is to add a restriction in the WHERE clause based on the id (or another indexed field). This approach only supports "previous page"/"next page" style navigation, not arbitrary jumps, because you must first obtain the boundary id of the previous or next page, filter on it in the WHERE clause, and then apply LIMIT N. Done that way, no matter how far you page, each query scans only 20 rows.
SELECT * FROM message WHERE id >= 9500 ORDER BY id LIMIT 20;
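The "next page" idea above can be sketched as follows, using SQLite as a stand-in for MySQL; the `body` column and the `next_page` helper are illustrative, only the `message` table name comes from the article:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE message (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO message (id, body) VALUES (?, ?)",
                 [(i, f"msg {i}") for i in range(1, 1001)])

def next_page(last_id, page_size=20):
    """Fetch the page after last_id; pass 0 for the first page.

    The WHERE filter lets the query scan only page_size rows,
    regardless of how deep into the table we are.
    """
    return conn.execute(
        "SELECT id, body FROM message WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, page_size)).fetchall()

page1 = next_page(0)                  # ids 1..20
page2 = next_page(page1[-1][0])       # remember the last id, then ids 21..40
print(page2[0][0], page2[-1][0])      # 21 40
```

The trade-off is exactly the one the text notes: without the offset you can only step forward or backward one page at a time, since you need a real boundary id from an adjacent page.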
Solution2: Using subqueries
SELECT * FROM message WHERE id >= (SELECT id FROM message ORDER BY id LIMIT 10000, 1) ORDER BY id LIMIT 20;
The subquery runs entirely against the index, while the outer query reads the data file; since the index file is generally much smaller than the data file, this is more efficient.
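A runnable sketch of the subquery approach, again with SQLite standing in for MySQL (SQLite accepts the same `LIMIT offset, count` syntax); table contents are fabricated for the demonstration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE message (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO message (id, body) VALUES (?, ?)",
                 [(i, "x") for i in range(1, 20001)])

# The inner query walks only the primary-key index to find the starting id;
# the outer query then fetches just the 20 page rows from that id onward.
rows = conn.execute(
    "SELECT id FROM message "
    "WHERE id >= (SELECT id FROM message ORDER BY id LIMIT 10000, 1) "
    "ORDER BY id LIMIT 20").fetchall()
print(rows[0][0], rows[-1][0])   # 10001 10020
```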
Solution3: Using join paging. I have not tried this one yet; I will when I get the chance.
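For reference, join paging (sometimes called a "deferred join") might look like the sketch below: a derived table pages over the index to collect only the ids, then joins back to fetch the full rows. SQLite again stands in for MySQL, and the data is fabricated:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE message (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO message (id, body) VALUES (?, ?)",
                 [(i, "x") for i in range(1, 20001)])

# The derived table t pages over ids only; the join pulls the matching rows.
rows = conn.execute(
    "SELECT m.id FROM message AS m "
    "JOIN (SELECT id FROM message ORDER BY id LIMIT 10000, 20) AS t "
    "ON m.id = t.id ORDER BY m.id").fetchall()
print(rows[0][0], rows[-1][0])   # 10001 10020
```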
In practice you can handle paging with something like the strategy pattern: for example, if the requested page is within the first 100 pages, use the basic paging method; beyond 100 pages, switch to the subquery paging method.