As we all know, MySQL pagination is usually written like this:
SELECT * FROM `yourtable` LIMIT start, rows
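As a quick illustration (my own example, not from the original post): with 10 rows per page, the offset is (page - 1) * rows, so page 4 of a table would be fetched like this:

-- Hypothetical example: page 4, 10 rows per page, offset = (4 - 1) * 10 = 30
SELECT * FROM tweet_data LIMIT 30, 10;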
Now I have a table in my database with 9969W rows (9969 * 10,000, about 99.69 million). It's called tweet_data.
SELECT COUNT(*) FROM tweet_data
Run the first form of the SQL statement, fetching 10 rows starting from row 60,000,000, and look at the query time.
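Based on the offset used later in the post, the query being timed was presumably something like the following (a reconstruction, since the original screenshot only showed the result):

-- Presumed first query: 10 rows starting at offset 60,000,000
SELECT * FROM tweet_data LIMIT 60000000, 10;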
It took about 60 seconds. How slow is that!
Solution One
Someone immediately thought of using an index to improve efficiency. So let's use the primary key, which gives the following SQL:
SELECT * FROM tweet_data WHERE id >= (SELECT id FROM tweet_data ORDER BY id LIMIT 60000000, 1) ORDER BY id LIMIT 10
Look at the result:
Sure enough, it works. With data volumes in the tens of millions, this approach can be many times faster, but it still obviously didn't meet our requirements.
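For reference, a common variant of this trick (my own addition, not from the original post) replaces the WHERE >= comparison with a join against the id subquery. It relies on the same idea: scanning only the primary-key index to find the 10 ids is far cheaper than scanning and discarding 60 million full rows.

-- Deferred-join variant: paginate over the primary-key index only,
-- then fetch the full rows for just those 10 ids
SELECT t.*
FROM tweet_data AS t
JOIN (SELECT id FROM tweet_data ORDER BY id LIMIT 60000000, 10) AS page
  ON t.id = page.id;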
Solution Two
That leads to the following SQL statement:
SELECT * FROM tweet_data WHERE id_auto_increase BETWEEN 60000000 AND 60000010
There is a screenshot to prove it!
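Two caveats worth noting (my own observations, not the author's): BETWEEN is inclusive on both ends, so the range above actually covers 11 ids, and the trick only pages correctly if id_auto_increase has no gaps. A parameterized sketch of the same idea, using session variables I introduced for illustration:

-- Hypothetical generalization: rows for a given page, 10 rows per page,
-- assuming id_auto_increase starts at 1 and has no gaps
SET @page := 6000001;   -- page number
SET @rows := 10;        -- rows per page
SELECT * FROM tweet_data
WHERE id_auto_increase BETWEEN (@page - 1) * @rows + 1 AND @page * @rows;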
Here we are only operating on a single table with roughly 100 million rows. But what if the data volume were even larger?
Then a lot of other knowledge would come into play! Just my humble opinion!