Test environment:
Let's first use a basic SQL statement to look at the basic information of the table we are going to test.
Using information_schema:
USE information_schema;
SELECT * FROM TABLES WHERE table_schema = 'dbname' AND table_name = 'product';
Query results:
From the query results we can see the basic information of the table:
Number of table rows: 866,633
Average row length: 5,133 bytes
Table data size: 4,448,700,632 bytes
The average row length and data size are given in bytes, so we can calculate:
Average row length: about 5 KB
Total table size: about 4.1 GB
The table's fields include varchar, datetime, text, and other types; the id field is the primary key.
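For concreteness, a table like the one described might look like the following minimal sketch. The column names other than id are assumptions for illustration; only the column types (varchar, datetime, text) and the id primary key are stated above.

```sql
-- Hypothetical schema matching the description; column names other
-- than id are illustrative assumptions.
CREATE TABLE product (
  id          INT UNSIGNED NOT NULL AUTO_INCREMENT,
  name        VARCHAR(255) NOT NULL,
  created_at  DATETIME NOT NULL,
  description TEXT,
  PRIMARY KEY (id)
) ENGINE = InnoDB;
```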
Test experiment
1. Directly using a LIMIT start, count paging statement, which is also the method used in my program:
SELECT * FROM product LIMIT start, count;
When the start offset is small, the query has no performance problem. Let's look at the execution times for paging from offsets 10, 100, 1000, and 10000 (20 records per page), as follows:
SELECT * FROM product LIMIT 10, 20: 0.016 seconds
SELECT * FROM product LIMIT 100, 20: 0.016 seconds
SELECT * FROM product LIMIT 1000, 20: 0.047 seconds
SELECT * FROM product LIMIT 10000, 20: 0.094 seconds
We can see that as the starting offset increases, the time increases. This shows that the cost of the LIMIT paging statement grows with the starting position, so let's change the starting record to 400,000 (roughly the middle of the table):
SELECT * FROM product LIMIT 400000, 20: 3.229 seconds
Now let's look at the time to fetch the last page of records:
SELECT * FROM product LIMIT 866613, 20: 37.44 seconds
No wonder search engines often report timeouts when crawling our pages: for pages near the end like this, the query time is clearly unbearable.
We can also draw two conclusions from this:
1. The query time of a LIMIT statement is proportional to the position of the starting record.
2. MySQL's LIMIT statement is very convenient, but it is not suitable for direct use on tables with many records.
2. Performance optimization methods for the LIMIT paging problem
Use a covering index to speed up paging queries
As we all know, if a query reads only columns that are contained in an index (a covering index), it can be answered quickly. This saves a lot of time because index lookups are optimized and the data is available in the index itself, so MySQL no longer has to follow the row address to fetch the row data. In addition, MySQL has index-related caches, and under high concurrency the cache makes the effect even better.
In our example, we know that the id field is the primary key, so it is naturally covered by the primary key index. Now let's see how a query using the covering index performs.
This time we query the last page of data using the covering index, i.e. selecting only the id column, as follows:
SELECT id FROM product LIMIT 866613, 20: 0.2 seconds
Compared with the 37.44 seconds needed to query all columns, this is a speedup of roughly 100 times.
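One way to verify that such a query is served entirely from the index is EXPLAIN: when MySQL can satisfy the query from the index alone, the Extra column of the output shows "Using index". A sketch, assuming the product table above:

```sql
-- When the query touches only indexed columns (here the primary key),
-- the Extra column of the EXPLAIN output reports "Using index",
-- meaning no row lookup is needed.
EXPLAIN SELECT id FROM product LIMIT 866613, 20;
```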
So if we want to query all the columns, there are two ways: one uses the id >= form, the other uses a JOIN. Let's look at the actual results:
SELECT * FROM product WHERE id >= (SELECT id FROM product LIMIT 866613, 1) LIMIT 20
The query time is 0.2 seconds, a qualitative leap!
The other way of writing it:
SELECT * FROM product a JOIN (SELECT id FROM product LIMIT 866613, 20) b ON a.id = b.id
The query time is also very short. Nice!
In fact both approaches rely on the same principle: first locate the target ids through the covering index, and only then fetch the full rows. So the effect is similar.
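One caveat worth noting, not stated above but implied by the technique: both rewrites assume the pages are ordered by id. Without an explicit ORDER BY, MySQL does not guarantee row order, so for deterministic paging the ordering can be stated explicitly, for example:

```sql
-- Sketch with an explicit ordering; the inner query still pages over
-- the covering primary key index, and the outer query fetches the
-- full rows only for the 20 matched ids.
SELECT a.*
FROM product a
JOIN (SELECT id FROM product ORDER BY id LIMIT 866613, 20) b
  ON a.id = b.id
ORDER BY a.id;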