Large Skip
When paging through data, we generally skip some number of records and limit how many are returned, as in this MySQL query:
SELECT * FROM large_table ORDER BY id LIMIT 10000, 30
This query is slow because the database must scan from the first record all the way to the 10,000th before it can return the 30 requested rows, which is time consuming.
The post at http://idning.github.io/point-large-skip.html summarizes two ways to optimize the SQL above:
Method 1:
SELECT t.* FROM (SELECT id FROM large_table ORDER BY id LIMIT 10000, 30) q JOIN large_table t ON t.id = q.id
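Method 1 (a "deferred join") can be sketched with Python's built-in sqlite3 module as a stand-in for MySQL; the table name, column names, and row counts below are invented for illustration, and MySQL's "LIMIT 10000, 30" is written with the equivalent "LIMIT 30 OFFSET 10000" syntax that sqlite accepts:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE large_table (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO large_table (id, payload) VALUES (?, ?)",
    [(i, f"row-{i}") for i in range(1, 20001)],
)

# Deferred join: the inner query touches only the indexed id column to find
# the 30 ids on the requested page; the outer join then fetches just those
# 30 full rows instead of dragging 10,030 rows through the scan.
rows = conn.execute("""
    SELECT t.id, t.payload
    FROM (SELECT id FROM large_table ORDER BY id LIMIT 30 OFFSET 10000) q
    JOIN large_table t ON t.id = q.id
    ORDER BY t.id
""").fetchall()
```

The win comes from letting the expensive skip happen on a narrow, index-only subquery rather than on full rows.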
Method 2:
SELECT * FROM large_table WHERE id > 10000 ORDER BY id LIMIT 30
Method 2 has a problem: it assumes the IDs are continuous. If some IDs are missing (say there is no record with ID 12), then `id > 10000` no longer corresponds to skipping exactly 10,000 rows, and the results will not match the original query.
MongoDB has the same large-skip problem, and the better solution there is essentially the same as MySQL's Method 2: page by remembering the last `_id` seen rather than calling `skip()`.
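A robust variant of Method 2 that tolerates gaps in the ID sequence is keyset (seek) pagination: instead of computing a boundary ID from an offset, remember the last ID actually returned and seek past it. A sketch, again using sqlite3 as a stand-in with invented table data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE large_table (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO large_table (id, payload) VALUES (?, ?)",
    [(i, f"row-{i}") for i in range(1, 101)],
)
conn.execute("DELETE FROM large_table WHERE id % 7 = 0")  # punch gaps in the ids

def fetch_page(last_id, page_size=30):
    # Seek past the last id actually seen; correct even when ids are sparse,
    # because we never guess a boundary id from an offset.
    return conn.execute(
        "SELECT id, payload FROM large_table WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, page_size),
    ).fetchall()

page1 = fetch_page(0)             # first page
page2 = fetch_page(page1[-1][0])  # resume from the last id of page 1
```

The MongoDB analogue filters on `_id` with `$gt` and a `limit()` instead of calling `skip()`.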
Count
In addition, count() queries can also be slow; the optimization options are as follows:
Method 1: try COUNT(id) instead of COUNT(*), where id is an indexed column with no NULLs in it. That may run faster.
Method 2: if you're storing the binary data of files in a LONGBLOB column, your table will be massive, which will slow things down.
Method 3: use MySQL's MyISAM storage engine, which maintains a built-in row counter, so an unfiltered COUNT(*) returns instantly.
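One caveat with Method 1: COUNT(column) counts only non-NULL values, so it is interchangeable with COUNT(*) only when the column has no NULLs, as the text notes. A small sketch of the difference, with an invented table in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (id INTEGER PRIMARY KEY, note TEXT)")
conn.executemany("INSERT INTO files (note) VALUES (?)", [("a",), (None,), ("b",)])

# COUNT(*) counts every row; COUNT(note) silently skips rows where note IS NULL.
total = conn.execute("SELECT COUNT(*) FROM files").fetchone()[0]
non_null = conn.execute("SELECT COUNT(note) FROM files").fetchone()[0]
```

This is why the column used in COUNT(column) must be NOT NULL for the substitution to be safe.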
References:
http://idning.github.io/point-large-skip.html
http://stackoverflow.com/questions/7228169/slow-pagination-over-tons-of-records-in-mongo
http://stackoverflow.com/questions/10764187/mongo-db-skip-takes-too-long-time
http://stackoverflow.com/questions/15402141/mysql-query-very-slow-count-on-indexed-column
http://xue.uplook.cn/database/mysqlsjk/2835.html