A solution for the problem that LIMIT and IN cannot be used together in MySQL

Source: Internet
Author: User
Tags: mysql manual, mysql tutorial, mysql index

In MySQL 5.1, a subquery cannot use LIMIT; attempting it raises the error: "This version of MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery'".

Statements like this fail to execute:

SELECT * FROM message WHERE id IN (SELECT id FROM message ORDER BY id DESC LIMIT 10);

The workaround is to wrap the subquery in one more derived-table layer. For example:

SELECT * FROM message WHERE id IN (SELECT m.id FROM (SELECT id FROM message ORDER BY id DESC LIMIT 10) AS m) ORDER BY id ASC;

This bypasses the restriction on LIMIT in subqueries.
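For reference, the same result can also be had with a derived-table join instead of IN (a sketch against the same message table; this variant is my addition, not from the original article):

SELECT m.*
FROM message AS m
JOIN (SELECT id FROM message ORDER BY id DESC LIMIT 10) AS t ON m.id = t.id
ORDER BY m.id ASC;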

MySQL LIMIT pagination optimization for large data volumes

Optimizing MySQL matters, and LIMIT is among the most commonly used clauses that need optimizing. LIMIT makes pagination very convenient, but its performance drops sharply as the data volume grows.

These two queries both fetch 10 rows:

SELECT * FROM yanxue8_visit LIMIT 10000, 10;

SELECT * FROM yanxue8_visit LIMIT 0, 10;

but they are not in the same performance class.

There are also widely circulated "five optimization guidelines" for LIMIT, translated from the MySQL manual; they are correct but not very practical. Today I found a well-written article on LIMIT optimization.

The idea: instead of paging with LIMIT directly, first fetch the id at the offset, then use LIMIT size to grab the page. According to the author's data this clearly beats using LIMIT directly. I tested it with my own data in two situations. (Test environment: WIN2003 + P4 dual core (3GHz) + 4GB RAM, MySQL 5.0.19.)

1. When the offset is small.

SELECT * FROM yanxue8_visit LIMIT 10, 10;

Run multiple times, the time holds at 0.0004-0.0005 seconds.

SELECT * FROM yanxue8_visit WHERE vid >= (
    SELECT vid FROM yanxue8_visit ORDER BY vid LIMIT 10, 1
) LIMIT 10;
Run multiple times, the time stays between 0.0005 and 0.0006 seconds, mostly 0.0006.

Conclusion: when the offset is small, using LIMIT directly is faster. This is clearly the overhead of the subquery.

2. When the offset is large.

 

SELECT * FROM yanxue8_visit LIMIT 10000, 10;

Run multiple times, the time stays at around 0.0187 seconds.

SELECT * FROM yanxue8_visit WHERE vid >= (
    SELECT vid FROM yanxue8_visit ORDER BY vid LIMIT 10000, 1
) LIMIT 10;

Run several times, the time stays around 0.0061 seconds, only about a third of the former's. And you can expect that the larger the offset, the bigger the latter's advantage.

Pay attention to writing LIMIT statements this way in the future, to get the best out of MySQL.
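As a general template (a sketch generalizing the two tests above; the outer ORDER BY is my addition, since without it the order of the returned page is not guaranteed):

SELECT * FROM yanxue8_visit
WHERE vid >= (
    SELECT vid FROM yanxue8_visit ORDER BY vid LIMIT 10000, 1  -- page offset goes here
)
ORDER BY vid   -- added for a deterministic page order; the article's version omits it
LIMIT 10;      -- page size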

Take a table collect with four fields (id, title, info, vtype), where title is fixed-length, info is a TEXT column, id is auto-increment, and vtype is a TINYINT with an index on it. This is a simple model of a basic news system. Now fill it with data: 100,000 news rows.
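A sketch of the schema as described (column width and storage engine are my assumptions; the article only gives the four fields and their rough types):

CREATE TABLE collect (
    id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- auto-increment id
    title CHAR(100) NOT NULL,   -- fixed-length title (width assumed)
    info  TEXT,                 -- news body
    vtype TINYINT NOT NULL,     -- type flag
    INDEX idx_vtype (vtype)     -- vtype is indexed, per the text
) ENGINE=MyISAM;                -- engine assumed for this 2011-era setup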

With the 100,000 records in place, the collect table occupies 1.6GB on disk. OK, look at the following SQL statements:

SELECT id, title FROM collect LIMIT 1000, 10;

Very fast, basically 0.01 seconds. Then look at the following:

SELECT id, title FROM collect LIMIT 90000, 10;

Paging from row 90,000: the result? 8-9 seconds to complete. My god, what's wrong? To optimize this case I went looking for answers online. Look at the following statement:

SELECT id FROM collect ORDER BY id LIMIT 90000, 10;

Fast: 0.04 seconds. Why? Because paging along the id primary-key index is naturally quick. The fix circulating online is:

SELECT id, title FROM collect WHERE id >= (SELECT id FROM collect ORDER BY id LIMIT 90000, 1) LIMIT 10;

This is the result of seeking by the id index first. But is the problem really settled that easily? Look at the following statement:

SELECT id FROM collect WHERE vtype = 1 ORDER BY id LIMIT 90000, 10;

Slow again: it took 8-9 seconds!

At this point I believe a lot of people will feel, as I did, like crashing. Isn't vtype indexed? How can it be slow? An index on vtype alone is fine, and a plain

SELECT id FROM collect WHERE vtype = 1 LIMIT 1000, 10;

is very fast, basically 0.05 seconds. But scale the offset up 90 times to start from 90,000, and you would expect roughly 0.05 * 90 = 4.5 seconds, while the measured result was 8-9 seconds, an order of magnitude off. From here the idea of splitting tables came up, the same line of thinking as the discuz forum. The idea is as follows:

Build an index table t (id, title, vtype), make the rows fixed-length, do the paging on it, then take the page of ids back to collect to fetch info. Is it feasible? Let's find out by experiment.
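A sketch of the index table and the two-step lookup it enables (the DDL is inferred from the description; width and engine are assumptions):

CREATE TABLE t (
    id    INT UNSIGNED NOT NULL PRIMARY KEY,  -- same ids as collect
    title CHAR(100) NOT NULL,                 -- fixed-length (width assumed)
    vtype TINYINT NOT NULL,
    INDEX idx_vtype (vtype)
) ENGINE=MyISAM;                              -- engine assumed

-- step 1: page over the slim table
SELECT id, title FROM t WHERE vtype = 1 ORDER BY id LIMIT 90000, 10;
-- step 2: fetch the heavy info column from collect for just that page of ids
SELECT id, info FROM collect WHERE id IN (/* the ten ids from step 1 */);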

Copy the 100,000 records into t (id, title, vtype); the table size is about 20MB. Run:

SELECT id FROM t WHERE vtype = 1 ORDER BY id LIMIT 90000, 10;

Fast enough: basically 0.1-0.2 seconds. Why? I suspect it is because collect simply has too much data per row, so the scan has farther to go. LIMIT's cost is tied directly to the size of the table. In fact this is still a full scan; it is only fast because the data volume is small, just 100,000 rows. OK, time for a crazy experiment: grow it to 1 million rows and test.

With 10 times the data, table t is over 200MB, still fixed-length. The same query still finishes in 0.1-0.2 seconds! No problem with the split-table performance, then? Wrong! Our LIMIT offset was still 90,000, which is why it was fast. Use a big one: start from 900,000.

SELECT id FROM t WHERE vtype = 1 ORDER BY id LIMIT 900000, 10;

Look at the result: 1-2 seconds! Why is the time still this long? Very depressing. Some people claimed fixed-length rows would improve LIMIT performance, and I believed it too: since every record's length is fixed, MySQL should be able to compute the position of row 900,000 arithmetically. But we overestimated MySQL's intelligence; it is not a commercial database, and it turns out fixed versus variable length makes little difference to LIMIT. No wonder people say discuz gets very slow past 1 million records; I believe it now, and it comes down to database design!

Can't MySQL get past the 1 million barrier? Does paging really hit its limit at 1 million rows?

The answer is: NO! It fails past 1 million only because of design, not because of MySQL itself. Below is a method that does not split tables, and a crazy test: one table with 1 million records, a 10GB database, and still fast paging!

Well, the test goes back to the collect table. The conclusion so far: at 300,000 rows the split-table method is feasible; beyond 300,000 the slowdown becomes unbearable. Of course, split tables combined with the method below would be absolutely perfect. But with this method of mine, the problem is solved without splitting tables at all.

The answer is: a composite index! Once, while designing a MySQL index, I noticed in passing that you can name the index and include several fields in it. What could that be used for? The query SELECT id FROM collect ORDER BY id LIMIT 90000, 10 is fast precisely because of the primary-key index, but adding a WHERE clause stops it from using that index. On a hunch I tried an index like search (vtype, id), and then tested:
SELECT id FROM collect WHERE vtype = 1 LIMIT 90000, 10;

Very fast! 0.04 seconds to complete!
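For reference, the composite index could be created like this (the index name search comes from the text; the exact DDL is my reconstruction):

ALTER TABLE collect ADD INDEX search (vtype, id);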

Retest: SELECT id, title FROM collect WHERE vtype = 1 LIMIT 90000, 10;

Very sorry: 8-9 seconds; it did not use the search index!

Retest with the index as search (id, vtype): even the SELECT id statement is, regrettably, 0.5 seconds.
In summary: if a WHERE condition plus LIMIT is to use an index, you must design the index so the WHERE column comes first and the primary key used by LIMIT comes second, and the query must select only the primary key!
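You can check whether a query actually uses the composite index with EXPLAIN (a sketch; the key column should report search, and Extra should show Using index when the scan never has to touch the table rows):

EXPLAIN SELECT id FROM collect WHERE vtype = 1 LIMIT 90000, 10;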

The paging problem is solved perfectly: if you can return the page of ids quickly, there is hope of optimizing LIMIT, and by this logic a LIMIT into the millions should finish in 0.0x seconds. It seems MySQL statement optimization and indexing are very important!
Well, back to the original question: how do we apply this research successfully in development? If every paged query must be a compound query, my lightweight framework is useless, and I would have to hand-write the paging string every time. How much trouble is that? Then I looked at one more example, and the idea fell into place:
SELECT * FROM collect WHERE id IN (9000, 12, 50, 7000);

Checked out in essentially 0 seconds!
My god, MySQL's indexes work for IN queries too! It seems the claim online that IN cannot use an index is wrong. With this conclusion, applying it to the lightweight framework is easy:
The code is as follows:



$db = dblink();
$db->pagesize = 20;
$sql = "SELECT id FROM collect WHERE vtype = $vtype";
$db->execute($sql);
// save the paging string in a temporary variable for easy output later
$strpage = $db->strpage();
$strid = '';
while ($rs = $db->fetch_array()) {
    $strid .= $rs['id'] . ',';
}
// build the id string, trimming the trailing comma
$strid = substr($strid, 0, strlen($strid) - 1);
// critical: clear paging without destroying the class, so a single
// database connection serves both queries and none has to be reopened
$db->pagesize = 0;
$db->execute("SELECT id, title, url, stime, gtime, vtype, tag FROM collect WHERE id IN ($strid)");
<?php while ($rs = $db->fetch_array()): ?>
<tr>
<td><?php echo $rs['id']; ?></td>
<td><?php echo $rs['url']; ?></td>
<td><?php echo $rs['stime']; ?></td>
<td><?php echo $rs['gtime']; ?></td>
<td><?php echo $rs['vtype']; ?></td>
<td><a href="?act=show&id=<?php echo $rs['id']; ?>" target="_blank"><?php echo $rs['title']; ?></a></td>
<td><?php echo $rs['tag']; ?></td>
</tr>
<?php endwhile; ?>
</table>
<?php echo $strpage; ?>

Through this simple transformation the idea is straightforward: 1) use the optimized index to find the page of ids and join them into a string like "123,90000,12000"; 2) run a second query to fetch the full rows by id.
A small index plus a small change in the code lets MySQL support efficient paging over millions, even tens of millions, of rows!
From this example I draw one reflection: for large systems, PHP must absolutely not use frameworks, especially the kind where you cannot even see the SQL statements! My lightweight framework nearly collapsed at first because of this. Frameworks suit only the rapid development of small applications; for ERP, OA, and large websites, nothing in the data layer, or even the logic layer, should rely on a framework. If programmers lose control over the SQL statements, the project's risk grows exponentially! This is especially true with MySQL, which needs a professional DBA to deliver its best performance. The performance difference one index makes can be a factor of thousands!


Performance optimization: based on the high performance of LIMIT in MySQL 5.0, I have reached a new understanding of data paging.

1.
SELECT * FROM Cyclopedia WHERE id >= (
    SELECT MAX(id) FROM (
        SELECT id FROM Cyclopedia ORDER BY id LIMIT 90001
    ) AS tmp
) LIMIT 100;

2.
SELECT * FROM Cyclopedia WHERE id >= (
    SELECT MAX(id) FROM (
        SELECT id FROM Cyclopedia ORDER BY id LIMIT 90000, 1
    ) AS tmp
) LIMIT 100;
Both fetch the 100 records after row 90,000; is the 1st statement faster or the 2nd?
The 1st takes the first 90,001 records and uses the largest id among them as the starting marker, then locates the next 100 records quickly.
The 2nd takes only the single record after the first 90,000 and uses its id as the starting marker to locate the same 100 records.
The 1st statement: 100 rows in set (0.23 sec).
The 2nd statement: 100 rows in set (0.19 sec).

Clearly the 2nd statement wins. It seems LIMIT does not, as I had imagined, scan the whole table and return offset+length records; seen this way, MySQL's LIMIT performs considerably better than MS-SQL's TOP.

In fact, the 2nd statement can be simplified to:

SELECT * FROM Cyclopedia WHERE id >= (
    SELECT id FROM Cyclopedia LIMIT 90000, 1
) LIMIT 100;

It uses the id of the 90,000th record directly and skips the MAX() operation. In theory this is more efficient, though in practice the difference is barely visible, since the inner query already returns a single record and MAX() has almost no work to do; but the simplified form is clearer and spares a superfluous step.

But since MySQL's LIMIT can seek straight to a record's position, why not simply use SELECT * FROM Cyclopedia LIMIT 90000, 1? Wouldn't that be more concise? That would be wrong. Try it and you get: 1 row in set (8.88 sec). Scary enough, and it reminds me of the abnormally "high score" this produced on 4.1 yesterday. Avoid SELECT *; on the principle of selecting only what you need, the more fields you select and the larger their data, the slower the query. The two paging forms above are far better than this single statement: although they look like more queries, they buy efficient performance at a small price, which is well worth it.

The 1st form is also usable with MS-SQL, and there it may be the best choice, because seeking to the starting segment by the primary-key id is always fastest.

SELECT TOP 100 * FROM Cyclopedia WHERE ID >= (
    SELECT MAX(ID) FROM (
        SELECT TOP 90001 ID FROM Cyclopedia ORDER BY ID
    ) AS tmp
);
However, whether this is implemented as a stored procedure or as inline code, the bottleneck remains: MS-SQL's TOP always has to return the first TOP N records. With small data volumes you barely feel it, but at millions of rows efficiency will certainly be low. By contrast, MySQL's LIMIT has a big advantage. Execute:

SELECT id FROM Cyclopedia LIMIT 90000;
SELECT id FROM Cyclopedia LIMIT 90000, 1;

The results:
90000 rows in set (0.36 sec)
1 row in set (0.06 sec)
MS-SQL, meanwhile, can only run SELECT TOP 90000 id FROM Cyclopedia, which takes 390ms; even performing the same operation it falls short of MySQL's 360ms.
