MySQL LIMIT optimization: fast pagination from a million to tens of millions of rows with a composite index, applied to a lightweight framework

Source: Internet
Author: User
Tags: dba, mysql, index
MySQL is a database that rewards DBA-level attention. For a small system with around 10,000 news records, almost any query you write will do, and with an XX framework you get rapid development. But when the data grows to 100,000, a million, or tens of millions of rows, does performance hold up? One small mistake can force a rewrite of the whole system, or even stop it from running at all. Enough talk; let the facts speak. Consider an example:

Take a table collect (id, title, info, vtype) with these 4 fields: title is fixed-length, info is TEXT, id is an auto-increment primary key, and vtype is TINYINT with an index on it. This is a simple model of a basic news system. Now fill it with data: 100,000 news records.

With 100,000 records in collect, the table occupies 1.6 GB on disk. OK, look at the following SQL statements:

SELECT id, title FROM collect LIMIT 1000, 10; is very fast, basically 0.01 seconds. Now look at the following:

SELECT id, title FROM collect LIMIT 90000, 10; paging starts from row 90,000. The result?

It takes 8-9 seconds to complete. My God, what went wrong? To optimize this, you can find answers online. Look at the following statement:

SELECT id FROM collect ORDER BY id LIMIT 90000, 10; is fast, about 0.04 seconds. Why? Because walking the id primary-key index is of course fast. The fix suggested online is:

SELECT id, title FROM collect WHERE id >= (SELECT id FROM collect ORDER BY id LIMIT 90000, 1) LIMIT 10;

This is the result of seeking with the id index. But the problem is a little more complicated than that. Look at the following statement:

SELECT id FROM collect WHERE vtype = 1 ORDER BY id LIMIT 90000, 10;
It is slow again: 8-9 seconds!

At this point I believe many people will, like me, feel a sense of collapse. vtype has an index, doesn't it? How can it be slow? An index on vtype is fine as far as it goes: a direct SELECT id FROM collect WHERE vtype = 1 LIMIT 1000, 10; is very fast, basically 0.05 seconds. But multiply the offset by 90, starting from 90,000, and at 0.05 * 90 the query should take about 4.5 seconds; the measured 8-9 seconds is the same order of magnitude. From this point, some people proposed the idea of splitting off a separate table, the same approach the Discuz forum uses. The idea is as follows:

Build an index table t (id, title, vtype) with fixed-length rows, do the pagination on it, and then go back to collect to fetch info for the page of results.
Is it feasible? Let's find out by experiment.

Copy the 100,000 records into t (id, title, vtype); the table is about 20 MB. Run

SELECT id FROM t WHERE vtype = 1 ORDER BY id LIMIT 90000, 10;
It is fast: basically 0.1-0.2 seconds. Why? I guess it is because collect has so much data per row that paging over it has a long way to travel; the cost of LIMIT is tied directly to the size of the table. In fact this is still a full table scan, and it is only fast because the data volume is small, just 100,000 rows. OK, time for a crazy experiment: grow it to 1 million rows and test performance.

With 10 times the data, table t immediately grows past 200 MB, still fixed-length. The same query still completes in 0.1-0.2 seconds. So the split-table performance is fine? Wrong! Our LIMIT offset was still 90,000; that is the only reason it was quick. Make it big: start at 900,000.

SELECT id FROM t WHERE vtype = 1 ORDER BY id LIMIT 900000, 10; Look at the result: 1-2 seconds!

Why is it still this slow? Very depressing. Some have claimed that fixed-length rows improve LIMIT performance, and I believed it too: since each record has a fixed length, MySQL should be able to compute the position of row 900,000, right? But we overestimated MySQL's cleverness. It is not a commercial database, and evidently fixed-length rows have little effect on LIMIT. No wonder people say Discuz becomes very slow past 1 million records; I believe that now, and it comes down to database design!

Can MySQL really not break the 1-million barrier? Is 1 million rows truly the limit of its pagination?

The answer is: NO!
The 1-million wall exists only because the pagination was not designed around how MySQL works. Here is a crazy test done without the split-table method: take a table with 1 million records and a 10 GB database, and paginate it fast!

Our test goes back to the collect table. The conclusion of the tests so far: at 300,000 rows, the split-table method is feasible; beyond 300,000 it slows down to the point you cannot stand it. Of course, the split table combined with the method below is absolutely perfect; but with my method alone, the problem is solved perfectly without splitting the table at all!

The answer is: a composite index! Once, while designing a MySQL index, I noticed by accident that an index can be given a name and can cover several fields. What is that good for? SELECT id FROM collect ORDER BY id LIMIT 90000, 10; is fast because it walks the primary-key index, but adding a WHERE clause stops it from using that index. On a whim I tried an index like search (vtype, id), and then tested:

SELECT id FROM collect WHERE vtype = 1 LIMIT 90000, 10; Very fast! 0.04 seconds!

Test again: SELECT id, title FROM collect WHERE vtype = 1 LIMIT 90000, 10;
Very sorry: 8-9 seconds. It did not use the search index!

Test again with the column order reversed, search (id, vtype): even selecting only id, the result is a regrettable 0.5 seconds.

In summary: if there is a WHERE condition and you want LIMIT to use an index, you must design a composite index with the WHERE column first and the primary key used by LIMIT second, and select only the primary key!

This solves the paging problem perfectly. If you can return a page of ids quickly, there is hope for optimizing LIMIT; by this logic, a LIMIT over millions of rows can finish in 0.0x seconds. It seems that SQL statement optimization and indexing really matter!

Well, back to the original question: how can this research be applied quickly in development? With a compound query like the subquery form, my lightweight framework is useless; the pagination string would have to be hand-written, and how much trouble is that? Then another example came to mind, and the idea emerged:

SELECT * FROM collect WHERE id IN (9000, 12, 50, 7000); checked out in 0 seconds!

My god, the MySQL index works for IN statements too! It seems the claim online that IN cannot use an index is wrong!

With this conclusion, it's easy to apply to lightweight frameworks:

The code is as follows:

$db = dblink();
$db->pagesize = 20;

$sql = "SELECT id FROM collect WHERE vtype = $vtype";

$db->execute($sql);
// Save the paging string in a temporary variable for easy output later
$strpage = $db->strpage();
while ($rs = $db->fetch_array()) {
    $strid .= $rs['id'] . ',';
}
// Trim the trailing comma to finish the id string
$strid = substr($strid, 0, strlen($strid) - 1);
// Critical: reset pagesize to 0 without destroying the object, so the
// second query is not paginated; this way only one database connection
// is needed, with no extra open and close.
$db->pagesize = 0;
$db->execute("SELECT id, title, url, stime, gtime, vtype, tag
    FROM collect WHERE id IN ($strid)");

<table>
<?php while ($rs = $db->fetch_array()): ?>
<tr>
<td><?php echo $rs['id']; ?></td>
<td><?php echo $rs['url']; ?></td>
<td><?php echo $rs['stime']; ?></td>
<td><?php echo $rs['gtime']; ?></td>
<td><?php echo $rs['vtype']; ?></td>
<td><a href="?act=show&id=<?php echo $rs['id']; ?>"
    target="_blank"><?php echo $rs['title']; ?></a></td>
<td><?php echo $rs['tag']; ?></td>
</tr>
<?php endwhile; ?>
</table>
<?php
echo $strpage;

After this simple transformation, the idea is simple: (1) use the optimized index to find the page of ids and join them into a string like "123,90000,12000"; (2) run a second query with IN to fetch the full rows.

A small index plus a little code change lets MySQL support efficient paging over millions, even tens of millions, of rows!

This example leaves me reflecting on one point: for large systems, PHP must not be buried in a framework, especially not one that hides the SQL statements from you! My lightweight framework nearly collapsed at the start because of this. Frameworks suit only rapid development of small applications; for ERP, OA, and large web sites, nothing in the data layer, including the logic layer, should go through a framework. If programmers lose control of the SQL statements, the project's risk grows exponentially. This is especially true of MySQL: it takes a professional DBA to get its best performance, and a single index can make a performance difference of thousands of times!

PS:
After real testing: at 1.6 million rows, with a 15 GB table and a 190 MB index, even with the index a deep LIMIT query takes 0.49 seconds. So it is best not to let anyone page past the first 100,000 rows, or it will be slow even with the index. After this optimization, millions-row paging is where MySQL reaches its limit, but a score like this is already quite good if you have been fighting with SQL Server! And on those 1.6 million rows, the
id IN (str) query is still fast, basically 0 seconds. By this logic, MySQL should cope easily even with tens of millions of rows.