MySQL ID Association and Index Optimization Use Case

Source: Internet
Author: User


A call from customer service came in last night: a user reported that their database was responding very slowly, and my phone had received an abnormal-load alarm. After logging in to the host, I found a large number of SQL statements executing very slowly, some of which had already been running for more than 10 seconds.
Optimization Point 1:

SELECT * FROM `sitevipdb`.`game_shares_buy_list` WHERE price>='2.00' ORDER BY tran_id DESC LIMIT 10;

Table Structure:

CREATE TABLE `game_shares_buy_list` (
  `tran_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  ...
  PRIMARY KEY (`tran_id`),
  KEY `ind_username` (`username`)
) ENGINE=InnoDB AUTO_INCREMENT=3144200 DEFAULT CHARSET=utf8;

Execution Plan:

root@127.0.0.1 : sitevipdb 09:10:22> explain SELECT * FROM `sitevipdb`.`game_shares_buy_list` WHERE price>='2.00' ORDER BY tran_id DESC LIMIT 10;
+----+-------------+----------------------+-------+---------------+---------+---------+------+------+-------------+
| id | select_type | table                | type  | possible_keys | key     | key_len | ref  | rows | Extra       |
+----+-------------+----------------------+-------+---------------+---------+---------+------+------+-------------+
|  1 | SIMPLE      | game_shares_buy_list | index | NULL          | PRIMARY | 4       | NULL |   10 | Using where |
+----+-------------+----------------------+-------+---------------+---------+---------+------+------+-------------+
1 row in set (0.00 sec)

Looking at the execution plan: because tran_id is the table's primary key, the query scans the table in descending primary-key order, so no sort is required, and filters each row against price >= 2.00 as it goes.
In the execution plan this looks very good: as soon as the scan has collected 10 rows that satisfy the condition, it stops.
The problem is that if most rows in the table do not satisfy price >= 2.00, the query has to scan a large number of rows before it finds 10 matching ones; a rough way to estimate that scan depth is sketched below, and the price distribution in the table confirms the concern:
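
As a purely illustrative check (not part of the original troubleshooting session), the following query estimates how deep the descending primary-key scan has to go before it collects 10 rows with price >= 2.00; it counts the rows whose tran_id is at or above the 10th-newest matching row:

SELECT COUNT(*) AS rows_scanned_for_10_matches
FROM game_shares_buy_list
WHERE tran_id >= (
    -- tran_id of the 10th-newest row that satisfies the price condition
    SELECT MIN(tran_id)
    FROM (
        SELECT tran_id
        FROM game_shares_buy_list
        WHERE price >= '2.00'
        ORDER BY tran_id DESC
        LIMIT 10
    ) t
);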

root@127.0.0.1 : sitevipdb 09:17:23> select price,count(*) as cnt from `game_shares_buy_list` group by price order by cnt desc limit 10;
+-------+-------+
| price | cnt   |
+-------+-------+
|  1.75 | 39101 |
|  1.68 | 38477 |
|  1.71 | 34869 |
|  1.66 | 34849 |
|  1.72 | 34718 |
|  1.70 | 33996 |
|  1.76 | 32527 |
|  1.69 | 27189 |
|  1.61 | 25694 |
|  1.25 | 25450 |
+-------+-------+

We can see that the vast majority of rows in the table have a price well below 2.00, so scanning along the primary key will not find matching rows quickly.
Should we create an index on price? First check the selectivity of the condition:

root@127.0.0.1 : sitevipdb 09:09:01> select count(*) from `game_shares_buy_list` where price>'2';
+----------+
| count(*) |
+----------+
|     4087 |
+----------+

root@127.0.0.1 : sitevipdb 09:17:31> select count(*) from `game_shares_buy_list`;
+----------+
| count(*) |
+----------+
|  1572100 |
+----------+

The price distribution above is fairly concentrated: only 4,087 of roughly 1.57 million rows have price > 2. Even so, if we create an index on price, MySQL may estimate that too many rows would have to be looked up back in the table, and that using the price index would require an additional sort for ORDER BY tran_id, so the optimizer may still not choose it:

root@127.0.0.1 : sitevipdb 09:24:53> alter table game_shares_buy_list add index ind_game_shares_buy_list_price(price);
Query OK, 0 rows affected (5.79 sec)
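
To see which plan the optimizer actually picks now that the index exists, we can re-run EXPLAIN on the statement (a sketch; the exact output depends on the data and MySQL version):

-- If the optimizer still prefers the primary key, the key column of the
-- EXPLAIN output will show PRIMARY rather than ind_game_shares_buy_list_price.
EXPLAIN SELECT *
FROM `sitevipdb`.`game_shares_buy_list`
WHERE price >= '2.00'
ORDER BY tran_id DESC
LIMIT 10;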

Although the optimizer is aware of the newly added index, it still chooses the PRIMARY key for the scan, so the new index has no effect and the database load remains high. If we force the price index instead, the result looks like this:

root@127.0.0.1 : sitevipdb 09:35:38> SELECT * FROM `sitevipdb`.`game_shares_buy_list` WHERE price>='2.0' ORDER BY tran_id DESC LIMIT 10;
......
10 rows in set (7.06 sec)

root@127.0.0.1 : sitevipdb 09:36:00> SELECT * FROM `sitevipdb`.`game_shares_buy_list` force index(ind_game_shares_buy_list_price) WHERE price>='2.0' ORDER BY tran_id DESC LIMIT 10;
......
10 rows in set (1.01 sec)

We can see that forcing the index brings the time down noticeably, but the query is still somewhat slow. Can it be faster? We only need to return 10 rows, yet to find those 10 rows the server still has to read, and then discard, a large number of rows.

How do we cut that work down? We only need to rewrite the SQL: first fetch the 10 qualifying primary-key values (tran_id) through the index, then join back to the table to retrieve the full rows:

root@127.0.0.1 : sitevipdb 09:44:45> select * from game_shares_buy_list t1,
    -> ( SELECT tran_id FROM sitevipdb.game_shares_buy_list WHERE price>='2.0' ORDER BY tran_id DESC LIMIT 10) t2
    -> where t1.tran_id=t2.tran_id;
10 rows in set (0.00 sec)

The execution time is now well below a second. After a phone call, the customer agreed to rewrite the SQL statement this way.
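
For readability, the same deferred-join rewrite can also be expressed with an explicit JOIN; this is only a stylistic variant of the statement above, not something the customer ran:

-- The derived table t2 fetches the 10 qualifying tran_id values through the
-- price index, then the outer query joins back to the base table for the
-- full rows. The outer ORDER BY simply makes the final row order explicit.
SELECT t1.*
FROM game_shares_buy_list t1
JOIN (
    SELECT tran_id
    FROM sitevipdb.game_shares_buy_list
    WHERE price >= '2.0'
    ORDER BY tran_id DESC
    LIMIT 10
) t2 ON t1.tran_id = t2.tran_id
ORDER BY t1.tran_id DESC;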

Note: in the rewritten query, ORDER BY tran_id still requires an extra sort. The index could also be created as a composite (tran_id, price) so that the sort is eliminated; this is the usual ORDER BY ... DESC/ASC + LIMIT M,N optimization pattern, sketched below.
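
A sketch of that alternative, assuming a hypothetical composite index named ind_tran_price (the name and the statement are illustrative, not part of the change that was applied):

-- With (tran_id, price), MySQL can walk the index in descending tran_id order
-- and evaluate the price filter from the index entries alone, so the ORDER BY
-- needs no extra sort; depending on its cost estimates, the inner query may
-- still need a FORCE INDEX(ind_tran_price) hint. How well this works depends
-- on how many entries must be skipped before 10 matches are found.
ALTER TABLE game_shares_buy_list ADD INDEX ind_tran_price (tran_id, price);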

Optimization Point 2:

CREATE TABLE `game_session` (
  `session_id` varchar(255) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '',
  `session_expires` int(10) unsigned NOT NULL DEFAULT '0',
  `client_ip` varchar(16) DEFAULT NULL,
  `session_data` text,
  ...
  PRIMARY KEY (`session_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

The query against this table is select `session_data`, `session_expires` from `game_session` where session_id='xxx', and a large number of these queries were stuck waiting; inserts into the table were waiting as well.
The table structure is the root of the problem. After consulting the customer, it was changed to the following:

CREATE TABLE `game_session` (
  id int auto_increment,
  `session_id` varchar(30) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '',
  `session_expires` int(10) unsigned NOT NULL DEFAULT '0',
  `client_ip` varchar(16) DEFAULT NULL,
  `session_data` varchar(200),
  PRIMARY KEY (id),
  KEY ind_session_id (session_id, session_data, session_expires)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Summary:

  1. A new auto-increment id column is used as the table's primary key, which improves insert performance and keeps the primary key (and therefore every secondary index entry) small.
  2. session_data is changed from text to varchar(200); the customer confirmed that this field does not need to store large values, and changing text to varchar allows the column to be carried in an index.
  3. The three columns the query reads are all placed in the index, so the query can be answered by the covering index alone, with no need to go back to the table, as the EXPLAIN sketch below illustrates.
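
A quick way to verify the covering-index behaviour on the new structure (a sketch; the exact output varies by MySQL version):

-- With ind_session_id(session_id, session_data, session_expires), the Extra
-- column of the EXPLAIN output should show "Using index", meaning the query
-- is answered from the index alone and never reads the base rows.
EXPLAIN SELECT session_data, session_expires
FROM game_session
WHERE session_id = 'xxx';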
