Paging in MySQL is relatively simple: a LIMIT offset, length clause is all it takes to fetch a page. But as the offset grows, performance degrades significantly.
1. Sub-Query optimization method
First locate the ID of the first row of the target page, then fetch the rows whose ID is greater than or equal to it.
Disadvantage: the IDs must be continuous. In other words, there can be no WHERE clause, because a WHERE clause filters rows and breaks the continuity of the IDs. See the following session for details:
mysql> set profiling=1;
Query OK, 0 rows affected (0.00 sec)

mysql> select count(*) from member;
+----------+
| count(*) |
+----------+
|   169566 |
+----------+
1 row in set (0.00 sec)

mysql> pager grep !~-
PAGER set to 'grep !~-'

mysql> select * from member limit 10, 100;
100 rows in set (0.00 sec)

mysql> select * from member where MemberID >= (select MemberID from member limit 10, 1) limit 100;
100 rows in set (0.00 sec)

mysql> select * from member limit 1000, 100;
100 rows in set (0.01 sec)

mysql> select * from member where MemberID >= (select MemberID from member limit 1000, 1) limit 100;
100 rows in set (0.00 sec)

mysql> select * from member limit 100000, 100;
100 rows in set (0.10 sec)

mysql> select * from member where MemberID >= (select MemberID from member limit 100000, 1) limit 100;
100 rows in set (0.02 sec)

mysql> nopager
PAGER set to stdout

mysql> show profiles\G
*************************** 1. row ***************************
Query_ID: 1
Duration: 0.00003300
   Query: select count(*) from member
*************************** 2. row ***************************
Query_ID: 2
Duration: 0.00167000
   Query: select * from member limit 10, 100
*************************** 3. row ***************************
Query_ID: 3
Duration: 0.00112400
   Query: select * from member where MemberID >= (select MemberID from member limit 10, 1) limit 100
*************************** 4. row ***************************
Query_ID: 4
Duration: 0.00263200
   Query: select * from member limit 1000, 100
*************************** 5. row ***************************
Query_ID: 5
Duration: 0.00134000
   Query: select * from member where MemberID >= (select MemberID from member limit 1000, 1) limit 100
*************************** 6. row ***************************
Query_ID: 6
Duration: 0.09956700
   Query: select * from member limit 100000, 100
*************************** 7. row ***************************
Query_ID: 7
Duration: 0.02447700
   Query: select * from member where MemberID >= (select MemberID from member limit 100000, 1) limit 100
The results show that the sub-query method gives a noticeable performance improvement once the offset exceeds about 1000.
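The session above can be reproduced in miniature. The sketch below uses SQLite (via Python's sqlite3) as a stand-in for MySQL, keeping the article's member/MemberID names; the 1000-row sample data is invented.

```python
import sqlite3

# Stand-in for the article's session: SQLite instead of MySQL, with a
# 1000-row member table whose MemberID values are continuous (no gaps).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE member (MemberID INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO member (MemberID) VALUES (?)",
                 [(i,) for i in range(1, 1001)])

# Plain deep-offset pagination: the engine reads and discards 500 rows.
plain = conn.execute(
    "SELECT MemberID FROM member ORDER BY MemberID LIMIT 500, 10").fetchall()

# Sub-query version: seek to the page's boundary ID through the primary
# key first, then read 10 rows forward from that ID.
subq = conn.execute(
    "SELECT MemberID FROM member "
    "WHERE MemberID >= (SELECT MemberID FROM member "
    "                   ORDER BY MemberID LIMIT 500, 1) "
    "ORDER BY MemberID LIMIT 10").fetchall()

assert plain == subq   # same page either way
print(plain[0])        # (501,)
```

Both queries return the same page; the sub-query form merely changes a long scan-and-discard into a primary-key seek, which is where the speedup in the profile comes from.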
2. Inverted Table Optimization method
The inverted-table method borrows the idea of an index: maintain the page boundaries in a separate table, then fetch the data through an efficient join.
Disadvantage: only suitable when the record count is fixed; records cannot be deleted, and the page table is hard to maintain.
About inverted tables (the inverted index is called the cornerstone of search-engine algorithms):
An inverted table is an inverted index held in memory, to which inverted records can still be appended; it is a mini inverted index.
A temporary inverted file is an inverted index stored on disk as a file, to which records can no longer be appended; it is a medium-scale inverted index.
The final inverted file is the inverted index produced by merging temporary inverted files, also stored on disk as a file; it is a large-scale inverted index.
The inverted index is the abstract concept, while inverted tables, temporary inverted files, and final inverted files are its three different manifestations.
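As a rough illustration of the page-table idea (not of full inverted-index machinery), the sketch below maintains a side table mapping each page number to the ID where that page starts, so a page fetch becomes an index seek plus a join instead of a deep OFFSET scan. All table and column names here are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE member (MemberID INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO member VALUES (?)",
                 [(i,) for i in range(1, 1001)])

PAGE = 40
# Side table: page number -> first MemberID of that page. This is the part
# that is hard to maintain once rows can be deleted, as the article notes.
conn.execute("CREATE TABLE page_index (page INTEGER PRIMARY KEY, start_id INTEGER)")
conn.executemany("INSERT INTO page_index VALUES (?, ?)",
                 [(p, (p - 1) * PAGE + 1) for p in range(1, 1000 // PAGE + 1)])

def fetch_page(p):
    # Seek to the page's start ID via page_index, then read one page forward.
    return conn.execute(
        "SELECT m.MemberID FROM member m "
        "JOIN page_index pi ON m.MemberID >= pi.start_id "
        "WHERE pi.page = ? ORDER BY m.MemberID LIMIT ?", (p, PAGE)).fetchall()

print(fetch_page(5)[0])  # (161,)
```

Fetching any page now costs the same regardless of how deep it is, at the price of keeping page_index in sync with the data.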
3. Reverse Lookup Optimization method
When the offset exceeds half the total number of records, sort in the opposite direction first, so that the offset is counted from the other end.
Disadvantage: ORDER BY is troublesome to optimize; it requires an index, and the index slows down data modification; you also need to know the total record count to tell when the offset passes the halfway point.
LIMIT offset formulas:
Forward lookup: (current page - 1) * page size
Reverse lookup: total records - current page * page size
Let's run an experiment and look at the performance:
Total records: 1,628,775
Records per page: 40
Total pages: 1,628,775 / 40 ≈ 40,720
Middle page: 40,720 / 2 = 20,360
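The two lookup formulas can be sanity-checked against these numbers (a Python sketch; the constants come from the article's experiment):

```python
# Offset formulas from the article, using its totals: 1,628,775 records,
# 40 records per page.
PAGE_SIZE = 40
TOTAL = 1_628_775

def forward_offset(page):
    # Forward lookup: (current page - 1) * page size
    return (page - 1) * PAGE_SIZE

def reverse_offset(page):
    # Reverse lookup: total records - current page * page size
    # (the same page, counted from the end of the reversed ordering)
    return TOTAL - page * PAGE_SIZE

assert forward_offset(21000) == 839_960    # matches the forward SQL below
assert reverse_offset(21000) == 788_775    # matches the reverse SQL below
assert forward_offset(30000) == 1_199_960
assert reverse_offset(30000) == 428_775
```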
Page 21,000
Forward lookup SQL:
SELECT * FROM `abc` WHERE `BatchId` = 123 LIMIT 839960, 40
Time: 1.8696 seconds
Reverse lookup SQL:
SELECT * FROM `abc` WHERE `BatchId` = 123 ORDER BY Inputdate DESC LIMIT 788775, 40
Time: 1.8336 seconds
Page 30,000
Forward lookup SQL:
SELECT * FROM `abc` WHERE `BatchId` = 123 LIMIT 1199960, 40
Time: 2.6493 seconds
Reverse lookup SQL:
SELECT * FROM `abc` WHERE `BatchId` = 123 ORDER BY Inputdate DESC LIMIT 428775, 40
Time: 1.0035 seconds
Note that the reverse lookup returns results in descending (DESC) order; Inputdate here is the record's insertion time. The primary key or a composite index could be used instead, but that is less convenient.
4. LIMIT Cap Optimization method
Cap the LIMIT offset below a certain number; any offset beyond that cap is treated as having no data. I recall an Alibaba DBA saying they did this.
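A minimal sketch of such a cap, with an invented MAX_OFFSET value and helper name (the article gives no concrete implementation, and this is not a documented Alibaba practice):

```python
# Cap the offset: past the cap, behave as if there were no more data.
MAX_OFFSET = 1000  # invented threshold for illustration

def build_page_query(page, page_size=40):
    offset = (page - 1) * page_size
    if offset > MAX_OFFSET:
        return None  # treat it as "no more data"
    # In real code, bind offset/page_size as parameters instead of formatting.
    return f"SELECT * FROM member LIMIT {offset}, {page_size}"

assert build_page_query(2) == "SELECT * FROM member LIMIT 40, 40"
assert build_page_query(100) is None  # offset 3960 exceeds the cap
```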
5. Index-Only Search method
MySQL's LIMIT works by reading the first N + M records, then discarding the first N and returning the M you asked for; so the larger N is, the worse the performance.
SQL before optimization:
SELECT * FROM member ORDER BY last_active LIMIT 50, 5
SQL after optimization:
SELECT * FROM member INNER JOIN (SELECT member_id FROM member ORDER BY last_active LIMIT 50, 5) AS t USING (member_id)
The difference: the SQL before optimization wastes I/O because it reads the index, reads the full data rows, and then discards the unneeded ones. The optimized SQL's subquery touches only the index (a covering index), and the outer query then reads the required columns via member_id.
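The rewrite can be demonstrated with SQLite standing in for MySQL; the schema, index, and sample data below are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE member (
    member_id INTEGER PRIMARY KEY,
    last_active INTEGER,
    bio TEXT)""")
# Wide rows: the bio column makes discarding full rows expensive.
conn.executemany("INSERT INTO member VALUES (?, ?, ?)",
                 [(i, 10_000 - i, "x" * 200) for i in range(1, 1001)])
# Narrow index that covers the inner query entirely.
conn.execute("CREATE INDEX idx_last_active ON member (last_active, member_id)")

# Deferred join: the inner query pages through the covering index only;
# the outer join fetches full rows for just the 5 surviving IDs.
rows = conn.execute(
    "SELECT m.member_id, m.last_active FROM member m "
    "INNER JOIN (SELECT member_id FROM member "
    "            ORDER BY last_active LIMIT 50, 5) AS t "
    "USING (member_id)").fetchall()
print(sorted(r[0] for r in rows))  # [946, 947, 948, 949, 950]
```

The inner query skips and discards only index entries, never the wide rows, which is exactly the I/O saving the paragraph above describes.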
Summary: LIMIT optimization comes with many restrictions, so analyze each concrete situation on its own. And the later pages? Hardly anyone ever looks at them anyway...