MySQL single-table million-record query paging optimization


INSERT ... SELECT (generating millions of records)

Before experimenting with queries over millions of rows, first generate millions of records of your own using the INSERT ... SELECT technique.

INSERT is typically used to insert a single row with specified column values into a table. However, there is another form of INSERT that inserts the result of a SELECT statement into a table. This is the INSERT ... SELECT statement, which, as its name implies, combines an INSERT statement with a SELECT statement.

Suppose there is a warning_repaired table with 2,447 records, as follows:

mysql> SELECT COUNT(*) FROM warning_repaired;
+----------+
| COUNT(*) |
+----------+
|     2447 |
+----------+
1 row in set (0.00 sec)

Use this warning_repaired table to build a table with millions of records.

First, create a new table, warning_repaired1:

mysql> CREATE TABLE `warning_repaired1` (
    ->   `id` int(11) NOT NULL AUTO_INCREMENT,
    ->   `device_moid` varchar(36) NOT NULL,
    ->   `device_name` varchar(128) DEFAULT NULL,
    ->   `device_type` varchar(36) DEFAULT NULL,
    ->   `device_ip` varchar(32) DEFAULT NULL,
    ->   `warning_type` enum('0','1','2') NOT NULL,
    ->   `domain_moid` varchar(36) NOT NULL,
    ->   `domain_name` varchar(128) DEFAULT NULL,
    ->   `code` smallint(6) NOT NULL,
    ->   `level` varchar(16) NOT NULL,
    ->   `description` varchar(128) DEFAULT NULL,
    ->   `start_time` datetime NOT NULL,
    ->   `resolve_time` datetime NOT NULL,
    ->   PRIMARY KEY (`id`),
    ->   UNIQUE KEY `id` (`id`)
    -> ) ENGINE=InnoDB AUTO_INCREMENT=4895 DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.39 sec)

mysql> SELECT COUNT(*) FROM warning_repaired1;
+----------+
| COUNT(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)

mysql> SELECT COUNT(*) FROM warning_repaired;
+----------+
| COUNT(*) |
+----------+
|     2447 |
+----------+
1 row in set (0.00 sec)

Next, copy the records from warning_repaired into the warning_repaired1 table with an INSERT ... SELECT statement:

mysql> INSERT INTO warning_repaired1
    ->   (device_moid, device_name, device_type, device_ip, warning_type,
    ->    domain_moid, domain_name, code, level, description,
    ->    start_time, resolve_time)
    -> SELECT device_moid, device_name, device_type, device_ip, warning_type,
    ->        domain_moid, domain_name, code, level, description,
    ->        start_time, resolve_time
    -> FROM warning_repaired;
Query OK, 2447 rows affected (1.07 sec)
Records: 2447  Duplicates: 0  Warnings: 0

mysql> SELECT COUNT(*) FROM warning_repaired;
+----------+
| COUNT(*) |
+----------+
|     2447 |
+----------+
1 row in set (0.00 sec)

mysql> SELECT COUNT(*) FROM warning_repaired1;
+----------+
| COUNT(*) |
+----------+
|     2447 |
+----------+
1 row in set (0.00 sec)

After the insert succeeds, change the source table of the INSERT ... SELECT statement to warning_repaired1 itself, as follows:

INSERT INTO warning_repaired1
  (device_moid, device_name, device_type, device_ip, warning_type,
   domain_moid, domain_name, code, level, description, start_time, resolve_time)
SELECT device_moid, device_name, device_type, device_ip, warning_type,
       domain_moid, domain_name, code, level, description, start_time, resolve_time
FROM warning_repaired1;

Run this several times (the record count doubles with each pass) and you quickly have millions of records.
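Since each pass doubles the table, roughly eleven passes over the initial 2,447 rows already exceed five million. The repetition can be automated with a stored procedure; the sketch below is illustrative and not from the original article (the procedure name grow_to_millions and the 5,000,000 target are made up):

```sql
-- Illustrative sketch: keep doubling warning_repaired1 until it holds
-- at least 5,000,000 rows. The procedure name is hypothetical.
DELIMITER $$
CREATE PROCEDURE grow_to_millions()
BEGIN
  WHILE (SELECT COUNT(*) FROM warning_repaired1) < 5000000 DO
    INSERT INTO warning_repaired1
      (device_moid, device_name, device_type, device_ip, warning_type,
       domain_moid, domain_name, code, level, description,
       start_time, resolve_time)
    SELECT device_moid, device_name, device_type, device_ip, warning_type,
           domain_moid, domain_name, code, level, description,
           start_time, resolve_time
    FROM warning_repaired1;           -- each pass doubles the row count
  END WHILE;
END$$
DELIMITER ;

CALL grow_to_millions();
```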


The most common and most basic MySQL paging method: LIMIT

mysql> SELECT COUNT(*) FROM warning_repaired;
+----------+
| COUNT(*) |
+----------+
|     2447 |
+----------+
1 row in set (0.00 sec)

mysql> SELECT COUNT(*) FROM warning_repaired5;
+----------+
| COUNT(*) |
+----------+
|  5990256 |
+----------+
1 row in set (10.11 sec)

mysql> SELECT code, level, description FROM warning_repaired5 LIMIT 1000,2;
+------+----------+---------------------+
| code | level    | description         |
+------+----------+---------------------+
| 1006 | critical | Register GK Failure |
| 1006 | critical | Register GK Failure |
+------+----------+---------------------+
2 rows in set (0.00 sec)

mysql> SELECT code, level, description FROM warning_repaired5 LIMIT 10000,2;
+------+----------+---------------------+
| code | level    | description         |
+------+----------+---------------------+
| 1006 | critical | Register GK Failure |
| 1006 | critical | Register GK Failure |
+------+----------+---------------------+
2 rows in set (0.05 sec)

mysql> SELECT code, level, description FROM warning_repaired5 LIMIT 100000,2;
+------+----------+------------------------------------------------------------+
| code | level    | description                                                |
+------+----------+------------------------------------------------------------+
| 2003 | critical | server memory average usage exceeds threshold in 5 minutes |
| 2019 | critical | network card throughput over-threshold value               |
+------+----------+------------------------------------------------------------+
2 rows in set (0.26 sec)

mysql> SELECT code, level, description FROM warning_repaired5 LIMIT 1000000,2;
+------+----------+---------------------+
| code | level    | description         |
+------+----------+---------------------+
| 1006 | critical | Register GK Failure |
| 1006 | critical | Register GK Failure |
+------+----------+---------------------+
2 rows in set (1.56 sec)

mysql> SELECT code, level, description FROM warning_repaired5 LIMIT 5000000,2;
+------+----------+---------------------+
| code | level    | description         |
+------+----------+---------------------+
| 1006 | critical | Register GK Failure |
| 1006 | critical | Register GK Failure |
+------+----------+---------------------+
2 rows in set (7.15 sec)

With fewer than about a million records, the time spent is still fairly small, so for small data sets this kind of SQL is sufficient; the only thing to watch is that the query uses an index. As the data grows, however, the page number climbs with it, and cases like LIMIT 5000000,2 start to appear. LIMIT 5000000,2 means: scan the first 5,000,002 rows that satisfy the condition, throw away the first 5,000,000, and return the last 2. That is where the problem lies: in a high-concurrency application, every such query has to scan more than five million rows, and performance inevitably suffers badly.

This approach has several disadvantages: a large offset forces the server to build and discard a large intermediate result set, so far more rows are scanned than are returned, and even a small proportion of such inefficient page requests is enough to create a disk I/O bottleneck.

Simple mitigations: do not display the total record count (few users care about the exact number); do not let users jump to very deep pages, redirect them instead; avoid COUNT(*) by dropping the total and letting users page with a "next page" link; or cache the total, maintain it separately, and increment/decrement it on INSERT and DELETE.
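The last idea, a cached total maintained on INSERT and DELETE, can be implemented with triggers. This is a sketch only; the counter table row_counts and the trigger names are hypothetical, not part of the original article:

```sql
-- Hypothetical counter table: one row holds the cached total for the table.
CREATE TABLE row_counts (
  table_name varchar(64) PRIMARY KEY,
  cnt        bigint NOT NULL
);

-- Seed the counter once with the real count.
INSERT INTO row_counts
  SELECT 'warning_repaired5', COUNT(*) FROM warning_repaired5;

-- Increment on INSERT and decrement on DELETE, so the application can
-- read row_counts instead of running COUNT(*) over millions of rows.
CREATE TRIGGER warning_repaired5_ins AFTER INSERT ON warning_repaired5
  FOR EACH ROW
  UPDATE row_counts SET cnt = cnt + 1 WHERE table_name = 'warning_repaired5';

CREATE TRIGGER warning_repaired5_del AFTER DELETE ON warning_repaired5
  FOR EACH ROW
  UPDATE row_counts SET cnt = cnt - 1 WHERE table_name = 'warning_repaired5';
```

Reading the cached total is then a primary-key lookup on a one-row table rather than a full scan.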

mysql> SELECT code, level, description FROM warning_repaired5 LIMIT 5000000,2;
+------+----------+---------------------+
| code | level    | description         |
+------+----------+---------------------+
| 1006 | critical | Register GK Failure |
| 1006 | critical | Register GK Failure |
+------+----------+---------------------+
2 rows in set (2.98 sec)

mysql> SELECT code, level, description FROM warning_repaired5 ORDER BY id DESC LIMIT 5000000,2;
+------+----------+---------------------+
| code | level    | description         |
+------+----------+---------------------+
| 1006 | critical | Register GK Failure |
| 1006 | critical | Register GK Failure |
+------+----------+---------------------+
2 rows in set (8.04 sec)

As the comparison above shows, adding ORDER BY id DESC makes the query take even longer.


The second method: split the data across tables by a computed hash value; this is not covered here.


The third method: optimizing the offset with a subquery

mysql> SELECT code, level, description FROM warning_repaired5 ORDER BY id DESC LIMIT 5000000,20;
+------+----------+---------------------+
| code | level    | description         |
+------+----------+---------------------+
| 1006 | critical | Register GK Failure |
| ...  | ...      | ...                 |
| 1006 | critical | Register GK Failure |
+------+----------+---------------------+
20 rows in set (4.77 sec)

mysql> SELECT code, level, description FROM warning_repaired5
    -> WHERE id <= (SELECT id FROM warning_repaired5 ORDER BY id DESC LIMIT 5000000,1)
    -> ORDER BY id DESC LIMIT 20;
+------+----------+---------------------+
| code | level    | description         |
+------+----------+---------------------+
| 1006 | critical | Register GK Failure |
| ...  | ...      | ...                 |
| 1006 | critical | Register GK Failure |
+------+----------+---------------------+
20 rows in set (4.26 sec)

As you can see, the subquery form takes slightly less time than the plain ORDER BY ... LIMIT version.
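The subquery still has to walk the index past 5,000,000 entries, though. The "next page" style of paging mentioned earlier avoids the offset entirely by remembering the last id of the previous page; this is a sketch, and the id value 4895000 below is purely illustrative:

```sql
-- Keyset ("seek") pagination sketch: no OFFSET at all.
-- 4895000 is a made-up example: the last id shown on the previous page.
SELECT code, level, description
FROM warning_repaired5
WHERE id < 4895000          -- continue from where the previous page ended
ORDER BY id DESC
LIMIT 20;
```

Because the WHERE clause seeks directly into the primary-key index, the cost of fetching a page no longer grows with the page number; the trade-off is that users can only step page by page rather than jump to an arbitrary page.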

Overall, when facing millions of records, the third method gives a good improvement in system performance. For complex queries, simplify as much as possible and reduce the amount of computation. At the same time, make full use of in-memory caching, and when conditions allow, consider larger-scale solutions such as splitting tables and splitting databases.

Reference articles:

http://www.lvtao.net/database/mysql_page_limit.html

http://my.oschina.net/u/1540325/blog/477126

