MySQL LIMIT optimization: fast paging from millions to tens of millions of rows with a composite index, applied to a lightweight framework

Source: Internet
Author: User
Tags: mysql, index

How far can MySQL performance be pushed? I have been using PHP for more than half a year, but I only started thinking seriously about this question the day before yesterday. There was pain and despair along the way, and now I am full of confidence! MySQL is a database that really rewards DBA-level mastery. Anyone can write a small news system with 10,000 rows, and some XX framework will get it built quickly. But when the data grows to 100,000, a million, or tens of millions of rows, will the performance hold up? One small mistake can force a rewrite of the entire system, or even leave it unable to run at all. Enough talk; let the facts speak. Look at the example:

The data table collect has four fields (id, title, info, vtype), where title is fixed-length, info is TEXT, id is an auto-increment primary key, and vtype is a TINYINT with an index on it. This is a simple model of a basic news system. Now fill it with data: insert 100,000 news rows.
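For concreteness, the schema described above might look roughly like this; the column sizes and the MyISAM engine are my assumptions, not stated in the original:

```sql
-- Hypothetical DDL for the collect table described above;
-- CHAR keeps title fixed-length, and the sizes are illustrative.
CREATE TABLE collect (
    id    INT UNSIGNED NOT NULL AUTO_INCREMENT,
    title CHAR(100)    NOT NULL,
    info  TEXT,
    vtype TINYINT      NOT NULL,
    PRIMARY KEY (id),
    KEY idx_vtype (vtype)
) ENGINE=MyISAM;
```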

With 100,000 records in collect, the table occupies 1.6 GB on disk. OK, look at the following SQL statement:

select id, title from collect limit 1000, 10; — very fast, basically done in 0.01 seconds. Now look at this one:

select id, title from collect limit 90000, 10; — paging from offset 90,000. The result?

It takes 8-9 seconds to complete. My God, what's wrong? To optimize this, I searched online for answers. Look at the following statement:

select id from collect order by id limit 90000, 10; — fast, about 0.04 seconds. Why? Because paging over the id primary-key index is of course fast. The fix suggested online is:

select id, title from collect where id >= (select id from collect order by id limit 90000, 1) limit 10;

This uses the id index to skip straight to the offset. But the problem is not that simple. Look at the following statement:

select id from collect where vtype=1 order by id limit 90000, 10; — it's slow again, taking 8-9 seconds!

At this point I believe many people will, like me, feel everything collapsing! vtype has an index, doesn't it? How can it be slow? The vtype index does work: a plain select id from collect where vtype=1 limit 1000, 10; is very fast, basically 0.05 seconds. But scale the offset by 90 to start at 90,000, and the naive estimate is 0.05 * 90 = 4.5 seconds, while the measured result was 8-9 seconds, worse by another step. This is where some people propose splitting tables, the same idea the Discuz forum uses. The idea is as follows:

Build an index table t (id, title, vtype) with fixed-length rows, do the paging against it, then take the page of results back to collect to fetch info. Is it feasible? Let's find out by experiment.
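A sketch of that index table, assuming the same column types as collect (the exact DDL is not given in the original):

```sql
-- Hypothetical slim paging table: fixed-length rows, no TEXT column.
CREATE TABLE t (
    id    INT UNSIGNED NOT NULL,
    title CHAR(100)    NOT NULL,
    vtype TINYINT      NOT NULL,
    PRIMARY KEY (id),
    KEY idx_vtype (vtype)
) ENGINE=MyISAM;

-- Copy only the paging columns over from collect.
INSERT INTO t (id, title, vtype)
SELECT id, title, vtype FROM collect;
```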

Copy the 100,000 records into t (id, title, vtype); the table is about 20 MB. Run:

select id from t where vtype=1 order by id limit 90000, 10; — fast enough, basically finishing in 0.1-0.2 seconds. Why? I guess it's because collect has too much data, so paging over it has a long way to travel. The cost of LIMIT is tied directly to the size of the table. In reality this is still a full scan; it is only fast because the table is small, just 100,000 rows. OK, a crazy experiment: grow it to 1 million rows and test performance.

With 10 times the data, the t table is now over 200 MB, and still fixed-length. The same query still finishes in 0.1-0.2 seconds! So the split-table performance is fine? Wrong! Our LIMIT offset is still 90,000; that's why it's fast. Make it big: start from 900,000.

select id from t where vtype=1 order by id limit 900000, 10; — look at the result: 1-2 seconds!

Why is it still so slow? Very depressing! Someone insisted that fixed-length rows would improve LIMIT performance, and at first I believed it too: since every record has the same length, MySQL should be able to compute the position of row 900,000 directly, right? But we overestimated MySQL's intelligence; it is not a commercial-grade database, and it turns out fixed-length vs. variable-length rows make little difference to LIMIT. No wonder people say Discuz becomes very slow at 1 million records. I now believe it, and it comes down to database design!

Can't MySQL get past the 1 million barrier? Is 1 million rows really the limit of its paging?

The answer is: NO! It can't get past 1 million only because the schema wasn't designed for it. Below is a method that needs no table splitting. Time for a crazy test: one table, 1 million records, a 10 GB database, and fast paging!

Now the test goes back to the collect table. The conclusion so far: at 300,000 rows the index-table approach is feasible; beyond 300,000 its speed degrades until it's unbearable! Of course, combining table splitting with the method below would be absolutely perfect. But with this method of mine, the problem can be solved without splitting tables at all.

The answer is: a composite index! Once, while designing a MySQL index, I noticed in passing that you can name an index and pick several fields to include in it. What is that for? The query select id from collect order by id limit 90000, 10; is fast because it runs over the index, but adding a WHERE clause stops it from using that index. In a let's-try-it spirit, I added an index search (vtype, id), then tested:
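Creating that composite index is one statement; the name search follows the text, and the syntax below is standard MySQL:

```sql
-- Composite index: the WHERE column first, the paging key second.
ALTER TABLE collect ADD INDEX search (vtype, id);
```

Because the query filters only on vtype and selects only id, every column it touches lives inside this index, so MySQL can answer it from the index alone without reading table rows, which is why the next test is fast.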

select id from collect where vtype=1 limit 90000, 10; — very fast! Done in 0.04 seconds!

Re-test: select id, title from collect where vtype=1 limit 90000, 10; — very sorry, 8-9 seconds; it did not use the search index!

Re-test with the column order reversed, search (id, vtype): even the select id statement took a regrettable 0.5 seconds.

In summary: if you have a WHERE condition and want LIMIT to use an index, you must design a composite index with the WHERE column in first place and the primary key used by LIMIT in second place, and you can select only the primary key!

This solves the paging problem perfectly. If you can return the ids quickly, there is hope of optimizing LIMIT; by this logic, a LIMIT into the millions should finish in 0.0x seconds. It seems MySQL statement optimization and indexing really matter!

OK, back to the original question: how do we apply this research to real development? If every paging query becomes a compound two-step query, my lightweight framework is useless, and you have to hand-write the paging string every time. How much trouble is that? Let's look at one more example, and the idea emerges:

select * from collect where id in (9000, 12, 50, 7000); — checked in essentially 0 seconds!

My god, MySQL's indexes work for IN clauses too! It seems the claim online that IN cannot use an index is wrong!

With this conclusion, it is easy to plug into the lightweight framework:

The code is as follows:

$db = dblink();
$db->pagesize = 20;

$sql = "SELECT id FROM collect WHERE vtype = $vtype";

$db->execute($sql);
$strpage = $db->strpage(); // save the paging string in a temporary variable for easy output
while ($rs = $db->fetch_array()) {
    $strid .= $rs['id'] . ',';
}
$strid = substr($strid, 0, strlen($strid) - 1); // construct the id string
$db->pagesize = 0; // critical: clear the paging without destroying the object, so only one database connection is needed
$db->execute("SELECT id, title, url, stime, gtime, vtype, tag FROM collect WHERE id IN ($strid)");

<table>
<?php while ($rs = $db->fetch_array()): ?>
<tr>
<td>&nbsp;<?php echo $rs['id']; ?></td>
<td>&nbsp;<?php echo $rs['url']; ?></td>
<td>&nbsp;<?php echo $rs['stime']; ?></td>
<td>&nbsp;<?php echo $rs['gtime']; ?></td>
<td>&nbsp;<?php echo $rs['vtype']; ?></td>
<td>&nbsp;<a href="?act=show&id=<?php echo $rs['id']; ?>" target="_blank"><?php echo $rs['title']; ?></a></td>
<td>&nbsp;<?php echo $rs['tag']; ?></td>
</tr>
<?php endwhile; ?>
</table>
<?php
echo $strpage;

With this simple transformation, the idea is straightforward: 1) use the optimized index to fetch only the ids and join them into a string like "123,90000,12000"; 2) run a second query to fetch the full rows by id.
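Put together, the two-step pattern looks like this in plain SQL; the id list in step 2 stands in for whatever step 1 returns:

```sql
-- Step 1: the covering-index query returns only one page of ids.
SELECT id FROM collect WHERE vtype = 1 ORDER BY id LIMIT 90000, 10;

-- Step 2: fetch the full rows for exactly those ids
-- (this id list is illustrative, not real output).
SELECT id, title, url, stime, gtime, vtype, tag
FROM collect
WHERE id IN (90001, 90004, 90007, 90012, 90015,
             90019, 90023, 90026, 90028, 90031);
```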

A small index plus a little code change makes it possible for MySQL to support efficient paging over millions, even tens of millions, of rows!

From this example I draw one reflection: for large systems, PHP absolutely must not be used with a framework, especially the kind of framework where you can't even see the SQL statements! My lightweight framework nearly collapsed at the start because of this. Frameworks are only suited to rapid development of small applications; for ERP, OA, and large web sites, neither the data layer nor the logic layer should sit on a framework. If programmers lose control of the SQL statements, the project's risk grows exponentially! Especially with MySQL, it takes a professional DBA to get its best performance. A single index can make a performance difference of thousands of times!

PS: After further real testing beyond 1 million rows: at 1.6 million rows, with a 15 GB table and a 190 MB index, even with the index a deep LIMIT takes 0.49 seconds. So it is best not to let anyone page past the first 100,000 rows, or it gets slow even with the index. Even after this optimization, MySQL's paging hits its limit in the millions! Still, such a score is already pretty good; SQL Server would definitely get stuck. And at 1.6 million rows, id IN ($str) is still fast, basically 0 seconds. If so, MySQL should cope easily with tens of millions of rows.
