Fast paging of large data sets in MySQL


When you first learn SQL, you tend to write pagination like this:

The code is as follows:

SELECT * FROM table ORDER BY id LIMIT 1000, 10;

But once the table reaches millions of rows, this style of query slows down:

The code is as follows:

SELECT * FROM table ORDER BY id LIMIT 1000000, 10;

This can take tens of seconds, because MySQL has to read through and discard the first 1,000,000 rows before returning the 10 you asked for.

Many of the optimizations you find online look like this:

The code is as follows:

SELECT * FROM table WHERE id >= (SELECT id FROM table ORDER BY id LIMIT 1000000, 1) LIMIT 10;

Sure enough, the speed is down to 0.x seconds, and it looks fine.
But it's not perfect!

The following statement is perfect:

The code is as follows:

SELECT * FROM table WHERE id BETWEEN 1000000 AND 1000010;

It runs 5 to 10 times faster than the previous statement.
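Note that the BETWEEN trick only works when the IDs are truly contiguous; a gap left by a deleted row shifts every later page. A minimal sketch of computing the page boundaries under that assumption (the @page and @size variables are illustrative, not from the original):

-- A sketch only: assumes ids run contiguously from 1 with no gaps (no deleted rows).
-- @page is zero-based; @size is the page size.
SET @page := 100000, @size := 10;

SELECT *
FROM `table`
WHERE id BETWEEN @page * @size + 1 AND (@page + 1) * @size
ORDER BY id;

If rows can ever be deleted, the id >= (subquery) form above is the safer choice.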

In addition, if the IDs you need are not a contiguous range, the best approach is to find the IDs first and then query with IN:

The code is as follows:

SELECT * FROM table WHERE id IN (10000, 100000, 1000000 ...);

One more tip to share:
When a column stores long strings, such as URLs, add an extra column for it in the table design. Don't query the string directly, which is inefficient; query on the CRC32 or MD5 of the string instead.
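A minimal sketch of that idea, assuming a hypothetical url_list table (all names here are illustrative): store CRC32(url) in an indexed integer column, filter on the hash, and recheck the full string to guard against collisions.

-- Hypothetical table: an indexed 4-byte hash column next to the long string.
CREATE TABLE url_list (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    url     VARCHAR(255) NOT NULL,
    url_crc INT UNSIGNED NOT NULL,
    KEY idx_url_crc (url_crc)
);

-- Keep the hash in sync on insert.
INSERT INTO url_list (url, url_crc)
VALUES ('http://www.example.com/', CRC32('http://www.example.com/'));

-- Probe the narrow integer index first; recheck the full string because CRC32 can collide.
SELECT id, url
FROM url_list
WHERE url_crc = CRC32('http://www.example.com/')
  AND url = 'http://www.example.com/';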

How to optimize MySQL paging over tens of millions of rows

LIMIT with a large offset really does have a performance problem. In many scenarios you can rewrite it as WHERE id >= xx in one form or another, so the lookup goes through the index on id and runs faster. (by jack)
A solution to MySQL's slow LIMIT paging (LIMIT optimization: fast paging from millions to tens of millions of records)

How high can MySQL performance go? I've been using PHP for more than half a year, but I only started thinking seriously about this question the day before yesterday. I've gone from pain and despair to being full of confidence! MySQL is definitely a database worth a DBA's serious attention. For a small news site with 10,000 records, you can write it however you like, and an XX framework gives you rapid development. But when the data grows to 100,000, a million, tens of millions, can its performance still keep up? One small mistake may force a rewrite of the whole system, or even stop the system from running at all! Enough talk; let the facts speak. Look at this example:
The test table is collect (id, title, info, vtype), just these 4 fields: title is fixed length, info is TEXT, id is auto-increment, and vtype is TINYINT and indexed. This is a simple model of a basic news system. Now fill it with data: 100,000 news records.
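The article only names the fields, so here is an assumed sketch of what the DDL might look like (column sizes and the engine are my guesses, in the MyISAM style of the era):

-- Assumed DDL for the test table; the article specifies only the four fields.
CREATE TABLE collect (
    id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    title CHAR(100) NOT NULL,  -- fixed length, per the article
    info  TEXT NOT NULL,
    vtype TINYINT NOT NULL,
    KEY idx_vtype (vtype)
) ENGINE=MyISAM;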
In the end collect holds 100,000 records and the table occupies 1.6 GB of disk. OK, look at the following SQL statements:

SELECT id, title FROM collect LIMIT 1000, 10; Very fast, basically done in 0.01 seconds. Now look at this:
SELECT id, title FROM collect LIMIT 90000, 10; Paging from row 90,000. The result?
8-9 seconds to complete. My God, what's wrong??? To optimize this, I went looking for answers online. Look at this statement:
SELECT id FROM collect ORDER BY id LIMIT 90000, 10; Fast: 0.04 seconds. Why? Because scanning the id primary key index is of course fast. The fix suggested online is:
SELECT id, title FROM collect WHERE id >= (SELECT id FROM collect ORDER BY id LIMIT 90000, 1) LIMIT 10;
That's the result of using the id index for the seek. But make the problem just a little more complex and it falls apart. Look at the following statement:
SELECT id FROM collect WHERE vtype = 1 ORDER BY id LIMIT 90000, 10; Very slow: it took 8-9 seconds!
At this point I believe many people will, like me, feel a sense of collapse! Isn't vtype indexed? How can it be slow? Yes, vtype is indexed, and a direct SELECT id FROM collect WHERE vtype = 1 LIMIT 1000, 10; is very fast, basically 0.05 seconds. But scale that up 90 times to start from 90,000 and you'd expect roughly 0.05 * 90 = 4.5 seconds, and the measured 8-9 seconds is in the same order of magnitude. From here some people proposed splitting off a separate table, the same idea the Discuz forum software uses. The idea is as follows:
Build an index table t (id, title, vtype) with fixed-length rows, do the pagination against it, and then go back to collect for info using the paged-out IDs. Is it feasible? An experiment will tell; a sketch of the table is below.
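A sketch of that index table under the same assumptions as before (ROW_FORMAT=FIXED keeps MyISAM rows fixed-length, which is what this idea counts on):

-- Hypothetical DDL for the narrow paging table; no TEXT column, so rows stay fixed-length.
CREATE TABLE t (
    id    INT UNSIGNED NOT NULL PRIMARY KEY,
    title CHAR(100) NOT NULL,
    vtype TINYINT NOT NULL,
    KEY idx_vtype (vtype)
) ENGINE=MyISAM ROW_FORMAT=FIXED;

-- Copy the paging columns over from collect.
INSERT INTO t (id, title, vtype)
SELECT id, title, vtype FROM collect;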
With the 100,000 records copied into t (id, title, vtype), the table is only about 20 MB. Using
SELECT id FROM t WHERE vtype = 1 ORDER BY id LIMIT 90000, 10; it's fast again, basically 0.1-0.2 seconds. Why? I guessed it's because collect has too much data, so paging has a long way to travel: LIMIT cost is tied entirely to the size of the table. In fact this is still a full table scan; it's only fast because the table is small, just 100,000 rows. OK, time for a crazy experiment: grow it to 1 million rows and test performance.
With 10 times the data, the t table immediately grew past 200 MB, still fixed-length. The same query statement still completed in 0.1-0.2 seconds! So the split-table approach has no performance problem? Wrong! Our LIMIT was still 90,000, which is why it was quick. Make it big: start at 900,000.
SELECT id FROM t WHERE vtype = 1 ORDER BY id LIMIT 900000, 10; Look at the result: 1-2 seconds!
Why?? Even with the split table the time is still this long. Very depressing! Some people claimed fixed-length rows improve LIMIT performance, and I thought so too: since every record has a fixed length, MySQL should be able to calculate the position of row 900,000 directly, right? But we overestimated MySQL's intelligence. It is not a commercial database, and apparently fixed-length rows have little effect on LIMIT. No wonder some people say Discuz gets very slow past 1 million records. I believe that's true; it comes down to database design!
Can't MySQL break the 1 million barrier??? Is 1 million rows really the limit for paging???
The answer is: NO!!!! It can't break 1 million only because the schema wasn't designed for it. Here comes the no-split-table method, put to a crazy test: one table handling 1 million records, a 10 GB database, and still fast pagination!
So our test went back to the collect table. The conclusion so far: at 300,000 rows, the split-table approach is feasible; beyond 300,000 its speed degrades to a crawl you cannot bear! Of course, combining split tables with the method below would be absolutely perfect. But with my method alone, the problem is solved perfectly without splitting any tables!
The answer: a composite index! While designing a MySQL index once, I noticed by accident that you can name an index and pull several columns into it. What is that good for? The earlier SELECT id FROM collect ORDER BY id LIMIT 90000, 10; is fast because it walks the index, but adding a WHERE clause stops it from using that index. So the idea is to add an index like search (vtype, id), as sketched below, and then test.
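A minimal sketch of adding that index:

-- Composite index: the WHERE column (vtype) first, the primary key (id) second.
ALTER TABLE collect ADD INDEX search (vtype, id);

With the index in place, run the test again: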
SELECT id FROM collect WHERE vtype = 1 LIMIT 90000, 10; Very fast! 0.04 seconds!
Retest: SELECT id, title FROM collect WHERE vtype = 1 LIMIT 90000, 10; Unfortunately, 8-9 seconds again; it didn't use the search index!
Retest with the index reversed as search (id, vtype): even selecting only id, regrettably, it took 0.5 seconds.
In summary: if there is a WHERE condition and you want LIMIT to use an index, you must design the index with the WHERE column in first place and the primary key used by LIMIT in second place, and you may select only the primary key!
That solves the paging problem perfectly. If you can return IDs quickly, there is hope of optimizing LIMIT; by this logic, a LIMIT over millions of rows should finish in 0.0x seconds. It seems SQL statement optimization and indexing really matter!
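If you want to check whether a query is really covered by the composite index, EXPLAIN is the standard tool: when MySQL can satisfy the query from the index alone, the Extra column shows "Using index". A quick sketch, reusing the search (vtype, id) index from above:

-- Expect key = search and Extra: Using index (a covering-index scan).
EXPLAIN SELECT id FROM collect WHERE vtype = 1 LIMIT 90000, 10;

-- Selecting title, which is not in the index, forces row lookups,
-- so "Using index" disappears and the query slows down again.
EXPLAIN SELECT id, title FROM collect WHERE vtype = 1 LIMIT 90000, 10;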
Well, back to the original question: how do we quickly apply this finding in development? If every query has to be a composite one, my lightweight framework is useless; I'd have to hand-write the pagination string every time. What a hassle. Then another example gave me the idea:
SELECT * FROM collect WHERE id IN (9000, 12, 50, 7000); It checks out in 0 seconds!
My god, MySQL's indexes work for IN statements too! The claim online that IN cannot use an index is apparently wrong!
With this conclusion, it's easy to apply to a lightweight framework:

The code is as follows:

<?php
$db = dblink();
$db->pagesize = 20;
$sql = "SELECT id FROM collect WHERE vtype = $vtype";
$db->execute($sql);
$strpage = $db->strpage(); // save the paging string in a temporary variable for easy output
$strid = '';
while ($rs = $db->fetch_array()) {
    $strid .= $rs['id'] . ',';
}
$strid = substr($strid, 0, strlen($strid) - 1); // build the comma-separated ID string
$db->pagesize = 0; // critical: clears paging without destroying the class, so only one DB connection is needed
$db->execute("SELECT id, title, url, stime, gtime, vtype, tag FROM collect WHERE id IN ($strid)");
?>
<table>
<?php while ($rs = $db->fetch_array()): ?>
<tr>
<td><?php echo $rs['id']; ?></td>
<td><?php echo $rs['url']; ?></td>
<td><?php echo $rs['stime']; ?></td>
<td><?php echo $rs['gtime']; ?></td>
<td><?php echo $rs['vtype']; ?></td>
<td><a href="?act=show&id=<?php echo $rs['id']; ?>" target="_blank"><?php echo $rs['title']; ?></a></td>
<td><?php echo $rs['tag']; ?></td>
</tr>
<?php endwhile; ?>
</table>
<?php echo $strpage; ?>

A simple transformation, and the idea itself is simple: 1) use the optimized index to find the IDs and splice them into a string like "123,90000,12000"; 2) run a second query to fetch the actual rows.
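Distilled to bare SQL, the two steps look like this (a sketch; the ID list in step 2 is whatever step 1 returned, with the article's earlier sample IDs reused for illustration):

-- Step 1: walk only the covering index search (vtype, id) to collect one page of IDs.
SELECT id FROM collect WHERE vtype = 1 LIMIT 90000, 10;

-- Step 2: fetch the full rows by primary key using the IDs from step 1.
SELECT id, title, url, stime, gtime, vtype, tag
FROM collect
WHERE id IN (9000, 12, 50, 7000);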
A small index plus a small code change lets MySQL page efficiently over millions or even tens of millions of rows!
This example left me with one reflection: for large systems, PHP must not lean on frameworks, especially ones where you can't even see the SQL statements! My lightweight framework nearly collapsed at the start because of this. Frameworks suit the rapid development of small applications only; for ERP, OA, and large web sites, nothing in the data layer or the logic layer should go through a framework. If programmers lose control of the SQL statements, the project's risk grows exponentially! MySQL in particular needs a professional DBA to reach its best performance. The performance difference one index makes can be a thousand-fold!
PS: After further real testing on 1.6 million rows (a 15 GB table with a 190 MB index), even going through the index, LIMIT still took 0.49 seconds. So it's best not to let users page past the first 100,000 rows, or it gets very slow even with the index. After this optimization, MySQL paging at the millions scale does hit a limit, but a score like this is already pretty good if you insist on going head to head with SQL Server! And id IN (str) on 1.6 million rows remains fast, basically 0 seconds. If so, MySQL should cope easily with tens of millions of rows.
