How MySQL LIMIT is optimized correctly (2010-05-17, Anonymous, Blog Park)
What we mainly share today is a brief introduction to MySQL's LIMIT clause and how to optimize queries that use it. The main content follows.
The following is a brief introduction to MySQL's LIMIT. As we all know, the LIMIT clause restricts the number of rows actually returned by a SELECT statement. LIMIT takes one or two numeric arguments. If two arguments are given, the first specifies the offset of the first row to return and the second specifies the maximum number of rows to return. The offset of the initial row is 0 (not 1).
- mysql> SELECT * FROM table LIMIT 6, 10;
Returns rows 7-16.
If a single argument is given, it specifies the maximum number of rows to return.
- mysql> SELECT * FROM table LIMIT 5;
Returns the first 5 rows.
In other words, LIMIT n is equivalent to LIMIT 0, n. Optimizing LIMIT is important: it is among the most commonly used clauses and among the most in need of optimization. LIMIT makes paging very convenient in MySQL, but when the data volume is large its performance drops sharply. The following two queries each return 10 rows:
- SELECT * FROM Yanxue8_visit LIMIT 10000, 10
and
- SELECT * FROM Yanxue8_visit LIMIT 0, 10
yet they are in completely different performance classes.
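To make the basic LIMIT semantics above concrete, here is a small runnable sketch. It uses Python's sqlite3 module rather than MySQL purely for convenience (SQLite happens to accept the same MySQL-style `LIMIT offset, count` syntax); the table `t` and its contents are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO t (id) VALUES (?)", [(i,) for i in range(1, 101)])

# LIMIT n is shorthand for LIMIT 0, n: both return the first n rows.
first5       = conn.execute("SELECT id FROM t ORDER BY id LIMIT 5").fetchall()
first5_again = conn.execute("SELECT id FROM t ORDER BY id LIMIT 0, 5").fetchall()
assert first5 == first5_again == [(1,), (2,), (3,), (4,), (5,)]

# LIMIT 6, 10 skips 6 rows and returns the next 10, i.e. rows 7-16.
page = conn.execute("SELECT id FROM t ORDER BY id LIMIT 6, 10").fetchall()
assert [r[0] for r in page] == list(range(7, 17))
```

The assertions encode exactly the row ranges described in the text above.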
There are also many optimization guidelines for LIMIT online, five of them translated from the MySQL manual; they are correct but not very practical. Today I found a well-written article about LIMIT optimization. Original address: http://www.zhenhua.org/article.asp?id=200 (the original text is attached below).
Instead of using LIMIT with a large offset directly, the author first fetches the ID at the offset and then uses LIMIT size from that ID to fetch the data. According to his figures, this is significantly better than using LIMIT directly. Below I test with data in two different situations. (Test environment: Win2003 + P4 dual core 3 GHz + 4 GB RAM, MySQL 5.0.19.)
1. When the offset is relatively small:
- SELECT * FROM Yanxue8_visit LIMIT 10, 10
Run multiple times, the time stays between 0.0004 and 0.0005 seconds.
- SELECT * FROM Yanxue8_visit WHERE vid >= (
-     SELECT vid FROM Yanxue8_visit ORDER BY vid LIMIT 10, 1
- ) LIMIT 10
Run multiple times, the time stays between 0.0005 and 0.0006 seconds, mostly 0.0006. Conclusion: when the offset is small, prefer using LIMIT directly; the extra cost here is clearly caused by the subquery.
2. When the offset is large:
- SELECT * FROM Yanxue8_visit LIMIT 10000, 10
Run multiple times, the time stays around 0.0187 seconds.
- SELECT * FROM Yanxue8_visit WHERE vid >= (
-     SELECT vid FROM Yanxue8_visit ORDER BY vid LIMIT 10000, 1
- ) LIMIT 10
Run multiple times, the time stays around 0.0061 seconds, only about 1/3 of the former. You can expect that the larger the offset, the greater the latter's advantage.
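The equivalence of the two forms above (not their timings) can be checked with an in-memory sketch. The table name Yanxue8_visit and the column vid come from the article; SQLite, the invented payload column, and the scaled-down row count are my stand-ins for the author's MySQL setup, and an ORDER BY is added to the outer queries for deterministic results:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Yanxue8_visit (vid INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO Yanxue8_visit VALUES (?, ?)",
                 [(i, "row %d" % i) for i in range(1, 20001)])

# Plain offset paging: the engine must walk past the first 10000 rows.
plain = conn.execute(
    "SELECT * FROM Yanxue8_visit ORDER BY vid LIMIT 10000, 10").fetchall()

# Deferred lookup: find the boundary vid via the narrow ordered-by-vid
# subquery, then range-scan 10 rows starting from it.
deferred = conn.execute(
    "SELECT * FROM Yanxue8_visit WHERE vid >= ("
    "  SELECT vid FROM Yanxue8_visit ORDER BY vid LIMIT 10000, 1"
    ") ORDER BY vid LIMIT 10").fetchall()

assert plain == deferred           # both return vids 10001..10010
assert [r[0] for r in plain] == list(range(10001, 10011))
```

The speedup the author measures comes from the subquery touching only the vid index instead of full rows; the sketch only verifies both forms return the same page.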
Attached is the original article:
SELECT * FROM table LIMIT 5, 10; # returns rows 6-15
SELECT * FROM table LIMIT 5; # returns the first 5 rows
SELECT * FROM table LIMIT 0, 5; # returns the first 5 rows
Performance optimization:
Based on the good performance of LIMIT in MySQL 5.0, I have a new understanding of data paging.
- SELECT * FROM Cyclopedia WHERE ID >= (
-     SELECT MAX(ID) FROM (
-         SELECT ID FROM Cyclopedia ORDER BY ID LIMIT 90001
-     ) AS tmp
- ) LIMIT 100;
- SELECT * FROM Cyclopedia WHERE ID >= (
-     SELECT MAX(ID) FROM (
-         SELECT ID FROM Cyclopedia ORDER BY ID LIMIT 90000, 1
-     ) AS tmp
- ) LIMIT 100;
Both fetch the 100 records after the 90,000th. Is the 1st statement faster, or the 2nd?
The 1st statement fetches the first 90,001 records, takes the largest ID among them as the starting mark, and then uses it to quickly locate the next 100 records.
The 2nd fetches only the single record after the first 90,000, then uses its ID as the starting point to locate the next 100 records.
Result of the 1st statement: 100 rows in set (0.23 sec)
Result of the 2nd statement: 100 rows in set (0.19 sec)
Clearly the 2nd statement wins. It seems LIMIT does not work the way I had imagined, doing a full-table scan and then returning offset+length records; it also appears that MySQL's LIMIT outperforms MS SQL's TOP.
In fact, the 2nd statement can be simplified to:
- SELECT * FROM Cyclopedia WHERE ID >= (
-     SELECT ID FROM Cyclopedia LIMIT 90000, 1
- ) LIMIT 100;
This uses the ID of the 90,000th record directly, skipping the MAX operation, so it is theoretically more efficient. In practice you can hardly see any difference, because the inner query already returns a single record and MAX has almost no work to do; still, this form is clearer and avoids an unnecessary step.
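The claimed equivalence of the MAX form and the simplified form can be verified with a scaled-down sketch. SQLite stands in for MySQL, the offset is reduced from 90,000 to 900 so the demo runs instantly, and an ORDER BY is added to the inner query of the simplified form for determinism:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Cyclopedia (ID INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO Cyclopedia VALUES (?, ?)",
                 [(i, "entry %d" % i) for i in range(1, 2001)])

OFFSET = 900  # scaled down from the article's 90000

# MAX-based form: largest ID among the first OFFSET+1 ordered IDs.
with_max = conn.execute(
    "SELECT * FROM Cyclopedia WHERE ID >= ("
    "  SELECT MAX(ID) FROM ("
    "    SELECT ID FROM Cyclopedia ORDER BY ID LIMIT %d"
    "  ) AS tmp"
    ") ORDER BY ID LIMIT 100" % (OFFSET + 1)).fetchall()

# Simplified form: the single ID found at the OFFSET position.
simplified = conn.execute(
    "SELECT * FROM Cyclopedia WHERE ID >= ("
    "  SELECT ID FROM Cyclopedia ORDER BY ID LIMIT %d, 1"
    ") ORDER BY ID LIMIT 100" % OFFSET).fetchall()

assert with_max == simplified
assert with_max[0][0] == OFFSET + 1   # page starts at ID 901
```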
However, since MySQL's LIMIT can position directly to a record, why not simply use SELECT * FROM Cyclopedia LIMIT 90000, 1? Wouldn't that be more concise?
That would be a mistake. Try it and you get: 1 row in set (8.88 sec). Scary enough, and it reminds me of the abnormally "high score" this got yesterday under 4.1. Avoid SELECT * as a rule; select only what you need, because the more columns you fetch and the larger their data, the slower the query. The two paging approaches above are much better than this single statement: they look like more queries, but they trade a small price for efficient performance, which is well worth it.
The 1st approach is also usable with MS SQL, and may be the best choice there, because locating the starting segment by the primary key ID is always the fastest.
- SELECT TOP 100 * FROM Cyclopedia WHERE ID >= (
-     SELECT MAX(ID) FROM (
-         SELECT TOP 90001 ID FROM Cyclopedia ORDER BY ID
-     ) AS tmp
- )
However, whether you implement this as a stored procedure or as inline code, the bottleneck remains that MS SQL's TOP always has to return the first N records. This is barely noticeable when the data volume is small, but at the scale of millions of rows efficiency will certainly be low. MySQL's LIMIT has many advantages by comparison. Execute:
- SELECT ID FROM Cyclopedia LIMIT 90000
- SELECT ID FROM Cyclopedia LIMIT 90000, 1
while MS SQL can only use SELECT TOP 90000 ID FROM Cyclopedia: its execution time is 390 ms, whereas MySQL takes only 360 ms for the same operation.
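The boundary-ID trick in this article generalizes to what is now often called keyset (or "seek method") paging: remember the last ID of the current page and start the next page from there, avoiding large offsets entirely. A hypothetical sketch, again using SQLite and an invented table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, data TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, "d%d" % i) for i in range(1, 1001)])

def fetch_page(conn, last_id, page_size=100):
    """Return the next page of rows whose id is greater than last_id."""
    return conn.execute(
        "SELECT id, data FROM t WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, page_size)).fetchall()

pages, last_id = 0, 0
while True:
    page = fetch_page(conn, last_id)
    if not page:
        break
    last_id = page[-1][0]  # remember the boundary for the next page
    pages += 1

assert pages == 10 and last_id == 1000
```

Every page here is a primary-key range scan, so the cost per page stays constant no matter how deep you page; the trade-off is that you can only step forward page by page rather than jump to an arbitrary page number.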