How to check and tune MySQL performance

When facing a serious performance problem, there are generally a few possibilities:
1. The indexes are not built well;
2. The SQL is written in an overly complex way;
3. The configuration is wrong;
4. The machine really cannot handle the load.
1. The indexes are not built well
If you see that MySQL is consuming a lot of CPU, you can investigate with the MySQL client tool.
On Linux, run
/usr/local/mysql/bin/mysql -h localhost -u root -p
Enter the password; if there is no password, you can get to the client prompt without the -p parameter.
Then look at what is currently running:
SHOW FULL PROCESSLIST;
Run it a few times.
This command shows the SQL statements that are currently executing; it tells you the SQL text, the database name, the execution state, the client IP, the account used, the elapsed time, and so on.
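If you prefer to filter this output with SQL, the same information is exposed through information_schema (MySQL 5.1 and later); a minimal sketch, where the 5-second threshold is just an arbitrary example:
SELECT id, user, host, db, time, state, info
FROM information_schema.PROCESSLIST
WHERE command <> 'Sleep' AND time > 5
ORDER BY time DESC;
-- lists only queries that have been running for more than 5 seconds, longest first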
On my cache back end, most of the time this shows no SQL statements at all, which I consider normal. If you see a lot of SQL statements piling up, that MySQL server almost certainly has a performance problem.
If there is a performance problem, you can analyze it as follows:
1. Is a SQL statement stuck?
This happens a lot. If the database uses MyISAM, a writing thread may lock the data table; if that statement never finishes, the other statements cannot run either.
Look at the time column in the processlist to see whether any statement has been executing for a long time, and pay attention to those statements.
2. Many copies of the same SQL statement are executing
If this happens, it is likely that those SQL statements execute inefficiently; pay attention to these statements as well.
Then collect the statements you suspect and examine them with DESC (EXPLAIN).
First look at a normal DESC output:
mysql> DESC SELECT * FROM IMGs WHERE imgid=1651768337;
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
| id | select_type | table | type  | possible_keys | key     | key_len | ref   | rows | Extra |
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
|  1 | SIMPLE      | IMGs  | const | PRIMARY       | PRIMARY | 8       | const |    1 |       |
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
1 row in set (0.00 sec)
Note the three entries key, rows, and Extra: they show that this SQL uses the PRIMARY key index for the query, that the result set size is 1, and that Extra shows nothing, so no sorting or other extra work is needed. From this you can infer that MySQL looks up the record with imgid=1651768337 in the index and then fetches all the fields from the actual table, which is a very simple operation.
key is the index the SQL will use; for a simple statement MySQL can use only one index. rows is the size of the returned result set, that is, all rows matched by the search using that index. Extra typically shows how the query and any sorting are carried out.
If no key is used, or rows is large and a filesort is used, efficiency usually suffers, for example:
mysql> DESC SELECT * FROM IMGs WHERE userid='7mini' ORDER BY clicks DESC LIMIT 10;
+----+-------------+-------+------+---------------+------+---------+------+-------+-----------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows  | Extra                       |
+----+-------------+-------+------+---------------+------+---------+------+-------+-----------------------------+
|  1 | SIMPLE      | IMGs  | ALL  | NULL          | NULL | NULL    | NULL | 12506 | Using where; Using filesort |
+----+-------------+-------+------+---------------+------+---------+------+-------+-----------------------------+
1 row in set (0.00 sec)
The result set of this SQL is 12,506 rows and a filesort is used, so it is very inefficient to execute. MySQL scans the whole table, finds the records matching userid='7mini' one by one, and then sorts those records by clicks; you can imagine the efficiency. If it actually runs fairly quickly, that is only because the server has enough memory to read all 12,506 fairly short records into memory, so it looks fast; with more concurrency or a bigger table the efficiency problem becomes serious.
So I added an index on userid:
CREATE INDEX UserID ON IMGs (userid);
Then check again:
mysql> DESC SELECT * FROM IMGs WHERE userid='7mini' ORDER BY clicks DESC LIMIT 10;
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
| id | select_type | table | type | possible_keys | key    | key_len | ref   | rows | Extra                       |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
|  1 | SIMPLE      | IMGs  | ref  | UserID        | UserID | 51      | const |    8 | Using where; Using filesort |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
1 row in set (0.00 sec)
Now you can see that MySQL searches with the UserID index, and after that search the result set has 8 rows. A filesort is still used, but because the result set is only 8 rows, the efficiency problem is mitigated.
However, if I query with a different userid, the result is quite different:
mysql> DESC SELECT * FROM IMGs WHERE userid='admin' ORDER BY clicks DESC LIMIT 10;
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
| id | select_type | table | type | possible_keys | key    | key_len | ref   | rows | Extra                       |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
|  1 | SIMPLE      | IMGs  | ref  | UserID        | UserID | 51      | const | 2944 | Using where; Using filesort |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
1 row in set (0.00 sec)
This looks basically the same as the userid='7mini' result, but after searching with the UserID index the result set size is 2,944, and all 2,944 records have to be put into memory for the filesort, so efficiency is much worse than in the 7mini case. There are two ways to solve this. The first is to add another index plus an extra condition: since I only need the 10 records with the most clicks, there is a lot of data I never need to sort at all, for example records with fewer than 10 clicks, which may make up a large share of the table.
So I add an index on clicks, intending to re-query with an extra WHERE condition:
CREATE INDEX clicks ON IMGs (clicks);
mysql> DESC SELECT * FROM IMGs WHERE userid='admin' ORDER BY clicks DESC LIMIT 10;
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
| id | select_type | table | type | possible_keys | key    | key_len | ref   | rows | Extra                       |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
|  1 | SIMPLE      | IMGs  | ref  | UserID,clicks | UserID | 51      | const | 2944 | Using where; Using filesort |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
1 row in set (0.00 sec)
Now possible_keys has become UserID,clicks. possible_keys lists all the indexes that could match; MySQL picks one of them to execute the statement, and it is worth noting that the index MySQL chooses is not necessarily the best one. Here MySQL still chose the UserID index, not what I intended, so the result has not changed. So I change the SQL, adding USE INDEX to make MySQL use the clicks index:
mysql> DESC SELECT * FROM IMGs USE INDEX (clicks) WHERE userid='admin' AND clicks>10 ORDER BY clicks DESC LIMIT 10;
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
| id | select_type | table | type  | possible_keys | key    | key_len | ref  | rows | Extra       |
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
|  1 | SIMPLE      | IMGs  | range | clicks        | clicks | 4       | NULL | 5455 | Using where |
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
1 row in set (0.00 sec)
Now MySQL uses the clicks index for the query, but the result set is even larger than with the UserID index! So the condition needs to be tightened further:
mysql> DESC SELECT * FROM IMGs USE INDEX (clicks) WHERE userid='admin' AND clicks>1000 ORDER BY clicks DESC LIMIT 10;
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
| id | select_type | table | type  | possible_keys | key    | key_len | ref  | rows | Extra       |
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
|  1 | SIMPLE      | IMGs  | range | clicks        | clicks | 4       | NULL |  312 | Using where |
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
1 row in set (0.00 sec)
With the threshold raised to 1000, the result set drops to 312, and sorting that should be acceptably fast.
However, this optimization needs a sampling point, the number 1000 in this example. That means finding a suitable threshold for every userid, which is very hard to do from the application. And if a fixed sample of 1000 were used, then in the userid='7mini' example the result would not be 8 rows but 2, which would confuse users.
Of course, there is another way: add a composite two-column index:
CREATE INDEX userid_clicks ON IMGs (userid, clicks);
mysql> DESC SELECT * FROM IMGs WHERE userid='admin' ORDER BY clicks DESC LIMIT 10;
+----+-------------+-------+------+----------------------+---------------+---------+-------+------+-------------+
| id | select_type | table | type | possible_keys        | key           | key_len | ref   | rows | Extra       |
+----+-------------+-------+------+----------------------+---------------+---------+-------+------+-------------+
|  1 | SIMPLE      | IMGs  | ref  | UserID,userid_clicks | userid_clicks | 51      | const | 2944 | Using where |
+----+-------------+-------+------+----------------------+---------------+---------+-------+------+-------------+
1 row in set (0.00 sec)
As you can see, the result set is still 2,944, but the filesort in Extra is gone. MySQL now uses the userid_clicks index, which not only finds all records with userid='admin' quickly but also returns them already ordered by clicks! So the result set no longer has to be read into memory and sorted record by record, and efficiency is much higher.
The drawback of multi-column indexes is that if there are many different query patterns, you have to plan them carefully, otherwise you end up with a great many indexes, which not only slows down inserts and updates but also makes the data table more prone to corruption.
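Before piling on more composite indexes, it can help to review what the table already has; a quick check, reusing the IMGs table from the examples above:
SHOW INDEX FROM IMGs;
-- lists every index and the columns it covers
SHOW TABLE STATUS LIKE 'IMGs';
-- the Index_length column shows how much space the indexes already take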
That is the approach to index optimization. Because the causes can be fairly complex, this part ran long; but once the indexes are well optimized, MySQL efficiency generally improves by orders of magnitude, and there is no need to consider adding machines to solve the problem.
However, MySQL, and perhaps any database, cannot fully solve the LIMIT problem. In MySQL, LIMIT 0,10 is no problem as long as the index fits, but LIMIT 100000,10 is very slow, because MySQL scans through the ordered result, walks to the 100,000th row, and only then takes 10 rows to return. Reaching that point means scanning and discarding 100,000 records, which is a time-consuming loop. I do not know of any good algorithm in the scan engine to optimize this, and I cannot think of one myself. For LIMIT, now and for the foreseeable future, I think the only optimizations come from the business logic, the application, and the table design; none of the approaches I can think of is foolproof, something to discuss further.
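One common application-level workaround (my own sketch, not from the original text, reusing the schema above) is to page by value instead of by offset: remember the clicks value of the last row already shown and seek from there, so the (userid, clicks) index does the work instead of scanning past 100,000 rows:
SELECT * FROM IMGs
WHERE userid = 'admin' AND clicks < 1000
ORDER BY clicks DESC
LIMIT 10;
-- 1000 is a placeholder for the clicks value of the last row on the previous page;
-- rows with identical clicks values need an extra tie-breaker column in practice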
2. The SQL is written too complexly
If the SQL uses special constructs such as GROUP BY, or joins several tables, you can likewise use DESC to analyze how MySQL will execute it. On my side complex SQL is not that common, so I do not analyze it often and have no particularly good advice for the moment.
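As a sketch of what that looks like, here is DESC on a two-table join; the users table and its nickname column are hypothetical, purely to illustrate reading one EXPLAIN row per table:
DESC SELECT i.imgid, i.clicks, u.nickname
FROM IMGs i JOIN users u ON u.userid = i.userid
WHERE i.userid = '7mini'
ORDER BY i.clicks DESC LIMIT 10;
-- EXPLAIN prints one row per table; check the key and rows values of each row
-- exactly as in the single-table examples above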
3. Configuration error
The main parameters are key_buffer and sort_buffer_size/myisam_sort_buffer_size. They mean:
key_buffer=128M: the indexes of all tables are kept in this area of memory as much as possible. If the indexes are large, make it somewhat bigger; I generally set it to 128M. A good tip is to move rarely used, larger tables somewhere else, which can noticeably reduce MySQL's memory footprint.
sort_buffer_size=1M: the memory a single thread uses for sorting; the result set to be sorted is put into this memory. If it is too small, MySQL has to sort in several passes, so make it a little larger. More important, though, is to optimize the indexes and the queries so they do not produce overly large result sets.
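A rough way to check whether key_buffer is big enough (my own habit, not from the original text) is to compare index read requests with actual disk reads using the standard status counters:
SHOW VARIABLES LIKE 'key_buffer_size';
SHOW STATUS LIKE 'Key_read%';
-- Key_reads / Key_read_requests is the key cache miss rate;
-- if it is much above roughly 1%, a larger key_buffer (or fewer/smaller indexes) may help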
Some other settings:
thread_concurrency=8: the usual rule of thumb is CPU count x 2.
interactive_timeout=30
wait_timeout=30: set these two to 10-30 seconds; this releases connection resources as soon as possible. Note that connections in active use are not cut off; these settings only close connections that have been idle for a long time.
query_cache: I do not use this feature. Many people get excited as soon as they see the word "cache", but that is not well founded. MySQL's query_cache clears all cached entries for a table every time that table's data changes; if updates are frequent, query_cache not only does not help but seriously hurts efficiency. This parameter only suits a read-only database. If you still want it, use query_cache_type=2 and cache only selected SQL with SQL_CACHE (see the example after this list).
max_connections: the default is 100, which is generally enough, but it is common to set it larger, say 400-600. If you need more than 600, you generally have an efficiency problem and need a different countermeasure; simply raising this number is not the answer.
The other settings can stay at their defaults; personally I do not think they matter that much. Two reminders: 1. although configuration is important, in most cases it is not the real culprit behind efficiency problems; 2. MySQL is a database, and what matters most for a database is not efficiency but stability and data correctness.
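A minimal sketch of the on-demand caching (query_cache_type=2) mentioned above, reusing the earlier example query:
-- with query_cache_type=2, only statements marked SQL_CACHE go into the query cache
SELECT SQL_CACHE * FROM IMGs WHERE imgid = 1651768337;
-- every other SELECT bypasses the query cache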
4. The machine really cannot handle the load
If, after all the adjustments above, the server still cannot cope, the only option is optimization at the architecture level.
1. MySQL replication
MySQL's replication feature synchronizes data to one or more slave databases: writes go to the master database, and reads are served from the slaves.
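As a rough sketch of what setting this up involves (the host, the repl account, and the binlog coordinates below are placeholders, not values from this article): create a replication user on the master, read its SHOW MASTER STATUS, then on each slave run:
CHANGE MASTER TO
  MASTER_HOST='192.168.0.1',
  MASTER_USER='repl',
  MASTER_PASSWORD='***',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;
-- the application then sends writes to the master and spreads reads over the slaves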
I am not that keen on using MySQL replication, because it adds complexity to the application and frequently causes data inconsistencies. Under high load a crash can be fixed with a quick restart, but recovering from bad data is much more troublesome.
2. Add a cache
After adding a cache, concurrency problems can be absorbed, and the effect is obvious. For a near-real-time system, consider refreshing the cache actively so it stays current.
Putting Squid in front is an approach I recommend; in applications with a high hit rate it basically solves the problem.
If caching is done in the program's logic layer, it adds a lot of complexity and the problems become harder and harder to solve, so I do not suggest adjusting at that level.
3. Adjust the program architecture to support connecting to multiple databases at once
If the problem is still serious even with web caching in place, the only option is to adjust the program architecture, split the application up, and serve it with several machines at the same time.
Splitting affects the business somewhat. If some features must see all the data, you can use an architecture of one complete database plus N partial databases: every modification is applied both to the complete database and to the relevant partial database, or the complete database is rebuilt from the parts periodically.
Of course, there is also the crudest approach: keep several complete copies of the database, have the program execute every SQL against all of the copies, and poll across them for access. I think this is actually more reliable than MySQL replication.
4. Use MySQL Proxy
MySQL Proxy can spread the tables of a database across several servers through the proxy, but it does not solve the hot-table problem. If the hot content is spread across multiple tables, this approach solves the problem more easily.
I have not used this software and have not examined it closely, but I have some doubts about its functionality, namely: how does it implement joins across multiple tables, and if it can, how efficiently?
5. Use memcachedb
Switching the database to a memcachedb that supports MySQL is an idea worth trying. Judging from memcachedb's implementation approach and its maturity, it does not affect the data, so there is nothing for users to worry about.
As for me, since I do not currently have many database problems, I have not tested it. But as long as it supports most of MySQL's main syntax and is itself stable, its usability is not in question.
