MySQL performance checking and tuning method


Published: October 4, 2009, by Ourmysql. Source: Sudone.com

I have been using MySQL for a while; it is generally stable and efficient. When serious performance problems do appear, they usually come down to one of a few causes:

1. Poorly built indexes;
2. Overly complex SQL;
3. Configuration errors;
4. The machine cannot handle the load.

1. Indexes are not built well

If MySQL is consuming a lot of CPU, you can investigate with the MySQL client tool.

Under Linux, execute:

/usr/local/mysql/bin/mysql -h localhost -u root -p

Enter the password when prompted; if there is no password, you can enter the client without the -p option.

Look at the current activity:

SHOW FULL PROCESSLIST;

You can run this several times in a row.

This command shows the SQL statements currently executing: the statement text, the database name, the execution state, the client IP it came from, the account used, the elapsed time, and so on.

On my cache backend, most of the time no SQL statements are visible at all, which I consider normal. If you see many statements at once, MySQL almost certainly has a performance problem.
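To make that check repeatable, here is a minimal sketch (my own illustration, not from the article) that filters already-fetched SHOW FULL PROCESSLIST rows for long-running statements. The dicts mirror the command's columns (Id, Command, Time, Info), and `slow_queries`/`min_seconds` are hypothetical names:

```python
def slow_queries(processlist_rows, min_seconds=5):
    """Return the SHOW FULL PROCESSLIST rows worth investigating:
    active statements that have been running for a while."""
    return [
        row for row in processlist_rows
        if row.get("Info")              # idle threads have no statement text
        and row["Command"] != "Sleep"   # skip connections that are just idle
        and row["Time"] >= min_seconds  # only long-running statements
    ]

# Example rows shaped like the processlist columns:
rows = [
    {"Id": 1, "Command": "Query", "Time": 42, "Info": "SELECT * FROM imgs ORDER BY clicks"},
    {"Id": 2, "Command": "Sleep", "Time": 300, "Info": None},
    {"Id": 3, "Command": "Query", "Time": 1, "Info": "SELECT 1"},
]
print([r["Id"] for r in slow_queries(rows)])  # only thread 1 qualifies
```

Polling this a few times and watching which thread Ids persist is the programmatic version of running the command repeatedly by hand.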

If there is a performance problem, you can analyze it:

1. Is a SQL statement stuck?

This happens a lot. If the database uses MyISAM, a writing thread may lock the whole data table; until that statement ends, the other statements cannot run.

Look at the Time column in the processlist and take note of any long-running statements; a runaway statement can be terminated with KILL <id>.

2. Many identical SQL statements are executing

If so, the statement is probably inefficient; take note of these statements as well.

Then collect all the statements you suspect and examine them with DESC (EXPLAIN).

First, a normal DESC output:

mysql> DESC SELECT * FROM imgs WHERE imgid=1651768337;
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
| id | select_type | table | type  | possible_keys | key     | key_len | ref   | rows | Extra |
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
|  1 | SIMPLE      | imgs  | const | PRIMARY       | PRIMARY | 8       | const |    1 |       |
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
1 row in set (0.00 sec)

Pay attention to three columns: key, rows, and Extra. This result shows that the query used the PRIMARY key index, the result set contains 1 row, and Extra is empty, so no sorting or other extra work was needed. It is easy to infer that MySQL looks up imgid=1651768337 in the index and then fetches all the fields of that record from the table.

key is the index the current SQL actually uses (when executing a simple statement, MySQL can use only one index); rows is MySQL's estimate of the result-set size, i.e. all the rows matched by a single lookup on that index; Extra generally describes how the query is filtered and sorted.

If no key is used, or rows is large and a filesort is involved, efficiency generally suffers. For example:

mysql> DESC SELECT * FROM imgs WHERE userid='7mini' ORDER BY clicks DESC LIMIT 10;
+----+-------------+-------+------+---------------+------+---------+------+-------+-----------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows  | Extra                       |
+----+-------------+-------+------+---------------+------+---------+------+-------+-----------------------------+
|  1 | SIMPLE      | imgs  | ALL  | NULL          | NULL | NULL    | NULL | 12506 | Using where; Using filesort |
+----+-------------+-------+------+---------------+------+---------+------+-------+-----------------------------+
1 row in set (0.00 sec)

This statement matches 12,506 rows and uses a filesort, so it executes very inefficiently: MySQL scans the whole table once, finds the records matching userid='7mini', and then sorts those records by clicks. If real execution seems fast, that is only because the server has enough memory to hold all 12,506 fairly short records; once concurrency rises or the table grows, the efficiency problem becomes serious.

So I add an index on userid:

CREATE INDEX userid ON imgs (userid);

Then check again:

mysql> DESC SELECT * FROM imgs WHERE userid='7mini' ORDER BY clicks DESC LIMIT 10;
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
| id | select_type | table | type | possible_keys | key    | key_len | ref   | rows | Extra                       |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
|  1 | SIMPLE      | imgs  | ref  | userid        | userid | 51      | const |    8 | Using where; Using filesort |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
1 row in set (0.00 sec)

Now MySQL searches with the userid index, and the lookup matches only 8 rows. A filesort still runs once, but with only 8 rows to sort, the efficiency problem is greatly reduced.

However, querying with another userid gives a different result:

mysql> DESC SELECT * FROM imgs WHERE userid='admin' ORDER BY clicks DESC LIMIT 10;
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
| id | select_type | table | type | possible_keys | key    | key_len | ref   | rows | Extra                       |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
|  1 | SIMPLE      | imgs  | ref  | userid        | userid | 51      | const | 2944 | Using where; Using filesort |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
1 row in set (0.00 sec)

The plan is basically the same as for userid='7mini', but this time the userid index lookup matches 2,944 rows, and all 2,944 records are fed into the in-memory filesort, so it is much slower than the 7mini case. There are two ways to solve this. The first is to add another index plus a filter condition: since I only need the top 10 records by clicks, a large share of the data, for example rows with fewer than 10 clicks, never needs to enter the sort at all.

I add an index on clicks, planning to also add a WHERE condition, and check again:

CREATE INDEX clicks ON imgs (clicks);

mysql> DESC SELECT * FROM imgs WHERE userid='admin' ORDER BY clicks DESC LIMIT 10;
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
| id | select_type | table | type | possible_keys | key    | key_len | ref   | rows | Extra                       |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
|  1 | SIMPLE      | imgs  | ref  | userid,clicks | userid | 51      | const | 2944 | Using where; Using filesort |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
1 row in set (0.00 sec)

Now possible_keys has become userid,clicks. possible_keys lists all the indexes that could serve this statement; MySQL picks one of them to execute with, and it is worth noting that the index MySQL chooses is not necessarily the optimal one. Here MySQL still queries through the userid index, against my wishes, so the result is unchanged. So I change the SQL, adding USE INDEX to force MySQL onto the clicks index:

mysql> DESC SELECT * FROM imgs USE INDEX (clicks) WHERE userid='admin' AND clicks>10 ORDER BY clicks DESC LIMIT 10;
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
| id | select_type | table | type  | possible_keys | key    | key_len | ref  | rows | Extra       |
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
|  1 | SIMPLE      | imgs  | range | clicks        | clicks | 4       | NULL | 5455 | Using where |
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
1 row in set (0.00 sec)

MySQL now queries through the clicks index, but the matched set (5,455 rows) is even larger than with the userid index! The condition needs to be tightened further:

mysql> DESC SELECT * FROM imgs USE INDEX (clicks) WHERE userid='admin' AND clicks>1000 ORDER BY clicks DESC LIMIT 10;
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
| id | select_type | table | type  | possible_keys | key    | key_len | ref  | rows | Extra       |
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
|  1 | SIMPLE      | imgs  | range | clicks        | clicks | 4       | NULL |  312 | Using where |
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
1 row in set (0.00 sec)

Raising the cutoff to 1000 shrinks the matched set to 312 rows, and sorting that should be acceptably fast.

However, this optimization requires picking a sampling point, such as the value 1000 in this example, and a suitable cutoff would have to be found for the clicks distribution of every distinct userid, which is hard to do in a program. Worse, if you sample at 1000 across the board, then for userid='7mini' in this example the query would return not 8 rows but 2, confusing the user.
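The sampling-point difficulty can be made concrete with a small helper. This is my hypothetical `pick_clicks_threshold`, not part of the article: given one userid's clicks values, find the largest `clicks > t` cutoff that still leaves at least LIMIT rows to sort, falling back to no cutoff when the user has fewer rows than the limit (the 7mini case):

```python
def pick_clicks_threshold(clicks, limit=10):
    """Largest t such that 'clicks > t' still matches at least `limit`
    rows; returns 0 (no cutoff) when there are fewer than `limit` rows."""
    ordered = sorted(clicks, reverse=True)
    if len(ordered) < limit:
        return 0
    # Anything strictly below the limit-th largest value can be cut away.
    return max(ordered[limit - 1] - 1, 0)
```

The catch the article points out remains: the right cutoff differs per userid, so a single global value like 1000 silently truncates the results of users with few clicks.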

Of course, there is another way: add a composite two-column index:

CREATE INDEX userid_clicks ON imgs (userid, clicks);

mysql> DESC SELECT * FROM imgs WHERE userid='admin' ORDER BY clicks DESC LIMIT 10;
+----+-------------+-------+------+----------------------+---------------+---------+-------+------+-------------+
| id | select_type | table | type | possible_keys        | key           | key_len | ref   | rows | Extra       |
+----+-------------+-------+------+----------------------+---------------+---------+-------+------+-------------+
|  1 | SIMPLE      | imgs  | ref  | userid,userid_clicks | userid_clicks | 51      | const | 2944 | Using where |
+----+-------------+-------+------+----------------------+---------------+---------+-------+------+-------------+
1 row in set (0.00 sec)

The matched set is still 2,944 rows, but the filesort in Extra is gone. MySQL now uses the userid_clicks index, which not only finds all the records with userid='admin' quickly but also returns them already ordered by clicks. There is no need to read the whole result set into memory and sort it record by record, which is much more efficient.

Multi-column indexes have a cost, though: if your application issues many different query shapes, you have to plan the indexes carefully, or you will end up with so many that they slow down inserts and updates and make the data tables more prone to corruption.

That covers index optimization. Because the causes can be complex, this section ran long. In general, with well-optimized indexes MySQL's efficiency improves by orders of magnitude, so there is no need to solve the problem by adding machines.

However, MySQL, and perhaps every database, struggles with the LIMIT problem. LIMIT 0, 10 is fine as long as the index is appropriate, but LIMIT 100000, 10 is very slow, because MySQL scans the ordered results in sequence until it reaches row 100,000 and only then returns 10 rows. Walking past 100,000 records on every request is expensive, and I don't know of a good storage-engine algorithm to optimize the scan. For LIMIT, I think the only realistic optimizations for the foreseeable future are at the level of business rules, application code, and table design; none of the approaches I know of is a sure-fire plan, so I will discuss them later.
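One common application-level workaround, not mentioned in the article and added here as my own sketch, is keyset ("seek") pagination: instead of LIMIT 100000, 10, remember the last sort-key value of the previous page and filter with an indexed range, e.g. `WHERE clicks < ? ORDER BY clicks DESC LIMIT 10`. The Python below simulates both approaches over rows already sorted by clicks descending, and assumes clicks values are unique (in practice you tie-break on the primary key):

```python
def page_by_offset(rows, offset, limit):
    # What LIMIT offset, n costs conceptually: the engine walks past
    # `offset` ordered rows before returning `limit` of them.
    return rows[offset:offset + limit]

def page_by_seek(rows, last_clicks, limit):
    # Keyset pagination: an indexed range scan starts right where the
    # previous page ended, so no rows are skipped over. (The list scan
    # here stands in for what an index range lookup does cheaply.)
    return [r for r in rows if r["clicks"] < last_clicks][:limit]

rows = [{"clicks": c} for c in range(1000, 0, -1)]  # sorted DESC
page1 = page_by_offset(rows, 0, 10)
page2 = page_by_seek(rows, page1[-1]["clicks"], 10)
assert page2 == page_by_offset(rows, 10, 10)  # same page, no offset walk
```

The trade-off is that you can only step page by page (no random jump to page 10,000), which is often acceptable for feeds and rankings.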

2. SQL syntax is too complex

If the SQL uses special constructs such as GROUP BY or multi-table joins, you can analyze the query plan MySQL chooses with DESC in the same way. On my side there are not many complex-SQL cases, so I rarely analyze them and have no good advice to offer for now.

3. Configuration Error

The main configuration parameters are key_buffer and sort_buffer_size/myisam_sort_buffer_size. They mean:

key_buffer=128M: MySQL tries to keep all table indexes in this region of memory; if your indexes are large, make it somewhat bigger. I generally set it to 128M. A good further suggestion is to move rarely used, oversized tables elsewhere, which noticeably reduces MySQL's memory footprint.
sort_buffer_size=1M: the memory a single thread uses for sorting; sort data for a query is placed here, and if the buffer is too small MySQL has to sort in several passes. So open it up a little, but what matters most is optimizing indexes and statements so they never produce oversized result sets.

Some other configurations:

thread_concurrency=8: the standard setting is CPU count x 2.
interactive_timeout=30
wait_timeout=30: 10-30 seconds is fine for both; it frees memory resources as soon as possible. Note that connections actually in use are not cut; these settings only disconnect connections that have been idle for a long time.
query_cache: I do not use this feature. Many people light up at the word "cache", but it is no treasure here: whenever a table's data changes, MySQL clears every cached result that touches that table, so with frequent updates query_cache not only fails to help but measurably hurts efficiency. The parameter only suits a mostly read-only database; if you must use it, set query_cache_type=2 and mark individual statements for caching with SQL_CACHE.
max_connections: the default is 100, usually enough, but it is common to raise it; 400-600 is fine. Needing more than about 600 usually signals an efficiency problem elsewhere; simply raising the number is not the answer.
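Collected in one place, the settings discussed above might look like this in the [mysqld] section of my.cnf. The values are the article's suggestions for its MyISAM-era workload, not universal defaults; adjust per machine:

```ini
[mysqld]
key_buffer_size         = 128M  # MyISAM index cache
sort_buffer_size        = 1M    # per-thread sort memory
myisam_sort_buffer_size = 1M
thread_concurrency      = 8     # roughly CPU count x 2
interactive_timeout     = 30    # free idle connections quickly
wait_timeout            = 30
query_cache_type        = 0     # off unless the data is mostly read-only
max_connections         = 400
```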

Other settings can stay at their defaults; personally I don't think they matter much. Two reminders: 1) configuration is important, but in most cases it is not the culprit behind efficiency problems; 2) MySQL is a database, and what matters most for a database is not efficiency but stability and data correctness.

4. The machine cannot handle the load

If, after all the adjustments above, the server still cannot cope, the only remaining option is architecture-level tuning.

1. MySQL replication

MySQL's replication feature synchronizes data from a primary database to one or more secondary databases; writes go to the primary, and reads go to the secondaries.

Personally I am not keen on replication, because it adds complexity to the application and often leads to data errors. Under high load, a crashed service can be restarted quickly, but corrupted data is much more troublesome to recover.

2. Add a cache

Once a cache is added, concurrency pressure can be absorbed and the effect is obvious. For a near-real-time system, consider actively refreshing the cache to keep it up to date.
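The cache-aside pattern the author is describing can be sketched as follows. This is an illustrative toy, not the author's code: a dict stands in for memcached, `fetch_from_db` stands in for a MySQL query, and the TTL implements the "refresh caching" idea for near-real-time systems:

```python
import time

TTL = 60.0      # seconds before a cached entry must be refreshed
_cache = {}     # key -> (value, stored_at); a dict stands in for memcached

def get_user_imgs(userid, fetch_from_db, now=None):
    """Cache-aside read: serve from cache while fresh, otherwise run the
    (hypothetical) DB query once and remember its result."""
    now = time.time() if now is None else now
    hit = _cache.get(userid)
    if hit is not None and now - hit[1] < TTL:
        return hit[0]                    # cache hit: no database round trip
    value = fetch_from_db(userid)        # cache miss or stale: query once...
    _cache[userid] = (value, now)        # ...and store it with a timestamp
    return value
```

Write paths can also delete the key explicitly so the very next read refetches, which is how you keep the cache consistent without waiting for the TTL.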

Putting squid at the front end of the architecture is worth advocating; in applications with a high hit ratio it can basically solve the problem.

Caching inside the program's logic layer adds a lot of complexity, and the resulting problems become harder and harder to solve, so I do not recommend tuning at that level.

3. Adjust the program architecture to use multiple databases simultaneously

If problems remain serious even after the web tier gains a cache, the only way out is to adjust the program architecture: split the application and serve it from several machines at once.

Splitting has some impact on the business. If some functions must see all the data, you can use a structure of one complete library plus N split libraries: apply every modification to both the full library and the relevant split library, or rebuild the full library from the splits periodically.
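The "complete library + N split libraries" write path might look like this toy sketch; `shard_for` and `write` are my hypothetical names, and plain lists stand in for databases:

```python
import zlib

N_SHARDS = 4  # number of split libraries

def shard_for(userid):
    # Stable hash so the same userid always lands on the same split library.
    return zlib.crc32(userid.encode()) % N_SHARDS

def write(full_db, shards, userid, row):
    full_db.append((userid, row))                    # full copy sees every write
    shards[shard_for(userid)].append((userid, row))  # split library serves hot reads
```

Whole-dataset queries go to the full library; per-user reads go to `shards[shard_for(userid)]`. The cost is the double write (or the periodic rebuild of the full library), which is exactly the trade-off described above.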

Of course, there is also the crudest approach: keep several complete copies of the database, have the program execute every write against all of them, and poll the copies for reads. I consider this safer than MySQL replication.

4. Use MySQL Proxy

MySQL Proxy can spread a database's tables across several servers. The problem is that it does not itself solve hot-table contention; it works best when the hot content is already scattered across multiple tables.

I have not used this software or examined it closely, but I have a small doubt about its functionality: how does it implement joins across tables that live on different servers, and if it can, how efficient is that?

5. Use memcachedb

Swapping the database out for memcachedb is an idea worth trying. As far as memcachedb's implementation goes, it has no impact on the data layer and causes no trouble for users.

For me, the database is not much of a problem right now, so I have not tried it. But as long as it supports most of the main syntax we need and is itself stable, its usability is beyond doubt.
