How to check and optimize MySQL performance in Linux

I have been using MySQL for a long time, and it is stable and efficient. When serious performance problems do appear, they usually come down to one of four causes:

1. No index has been created;

2. The SQL statements are too complex;

3. Configuration errors;

4. The machine cannot handle the load.

1. No index created

If you see that MySQL is consuming a lot of CPU, you can use the MySQL client tool to investigate.

Run

/usr/local/mysql/bin/mysql -h localhost -u root -p

Enter the password when prompted. If the account has no password, omit the -p parameter and you go straight into the client.

Check the current running state:

show full processlist;

Run it several times in a row to get a feel for what is executing.

This command lists the SQL statements currently executing, along with the database name, execution state, client IP address, the account used, and how long each statement has been running.
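
The output looks roughly like the following. This row is purely illustrative; the ID, host, and statement are made up to show which columns to read:

mysql> show full processlist;
+----+------+----------------+------+---------+------+----------------+------------------------------------------+
| Id | User | Host           | db   | Command | Time | State          | Info                                     |
+----+------+----------------+------+---------+------+----------------+------------------------------------------+
|  3 | web  | 10.0.0.5:41200 | imgs | Query   |   12 | Sorting result | select * from imgs order by clicks desc  |
+----+------+----------------+------+---------+------+----------------+------------------------------------------+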

On my cached backend, most of the time no SQL statements show up at all, which I consider normal. If you see many statements in flight, this MySQL instance almost certainly has a performance problem.

If there is a performance problem, you can analyze it:

1. Is any SQL statement stuck?

This happens often. With MyISAM tables, a write thread can lock the whole table; until that statement finishes, other statements cannot run.
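
A quick way to see which tables are currently locked or in use (the where clause on show statements requires a newer server, MySQL 5.1 or later):

show open tables where in_use > 0;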

Check the Time column in the processlist output for statements that have been running a long time, and pay attention to them.
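
If the server is new enough to have information_schema (MySQL 5.1+), the same data can be filtered with plain SQL; the 30-second threshold here is an arbitrary example:

select id, user, host, db, time, state, info
from information_schema.processlist
where command <> 'Sleep' and time > 30
order by time desc;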

2. A large number of identical SQL statements are being executed

In this case, the execution efficiency of the SQL statement may be low. Pay attention to these statements as well.

Then collect the statements you suspect and use desc (an alias for explain) to analyze them.

First, let's look at a normal desc output:

mysql> desc select * from imgs where imgid = 1651768337;

+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
| id | select_type | table | type  | possible_keys | key     | key_len | ref   | rows | Extra |
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
|  1 | SIMPLE      | imgs  | const | PRIMARY       | PRIMARY | 8       | const |    1 |       |
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------+

1 row in set (0.00 sec)

Note the key, rows, and Extra columns. This output shows that the statement uses the PRIMARY key index, the result set contains a single row, and Extra is empty, meaning no sorting or other extra work is needed. From this we can infer that MySQL looks up the record with imgid = 1651768337 in the index and then fetches all the fields from the actual table row. This is a very cheap operation.

key shows which index the statement uses; when executing a simple statement, MySQL can use only one index per table, a restriction worth remembering. rows indicates how large the result set is, i.e. all the matches found through that index. Extra generally shows how the query and any sorting are performed.
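
As an aside, to see which indexes a table has, and therefore what the optimizer can choose from, you can run:

show index from imgs;

The Cardinality column gives a rough idea of how selective each index is.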

If no key is used, or filesort is applied to a large number of rows, efficiency generally suffers. For example:

mysql> desc select * from imgs where userid = "7mini" order by clicks desc limit 10;

+----+-------------+-------+------+---------------+------+---------+------+-------+-----------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows  | Extra                       |
+----+-------------+-------+------+---------------+------+---------+------+-------+-----------------------------+
|  1 | SIMPLE      | imgs  | ALL  | NULL          | NULL | NULL    | NULL | 12506 | Using where; Using filesort |
+----+-------------+-------+------+---------------+------+---------+------+-------+-----------------------------+

1 row in set (0.00 sec)

This query touches 12506 records and uses filesort, so execution is very inefficient: MySQL scans the entire table, finds the records matching userid = "7mini" one by one, and then sorts their clicks values. You can imagine the efficiency. If it still runs fairly fast in practice, that is only because the server has enough memory to read all 12506 short records into memory; once concurrency rises or the table grows, the efficiency problem becomes serious.

Then I add an index on userid:

create index userid on imgs (userid);

Then check:

mysql> desc select * from imgs where userid = "7mini" order by clicks desc limit 10;

+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
| id | select_type | table | type | possible_keys | key    | key_len | ref   | rows | Extra                       |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
|  1 | SIMPLE      | imgs  | ref  | userid        | userid | 51      | const |    8 | Using where; Using filesort |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+

1 row in set (0.00 sec)

Now MySQL uses the userid index for the lookup, and that single index search yields a result set of 8 rows. filesort is still used to sort them, but with only eight rows in the result set, the efficiency problem is largely gone.

However, querying with a different userid gives a different picture:

mysql> desc select * from imgs where userid = "admin" order by clicks desc limit 10;

+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
| id | select_type | table | type | possible_keys | key    | key_len | ref   | rows | Extra                       |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
|  1 | SIMPLE      | imgs  | ref  | userid        | userid | 51      | const | 2944 | Using where; Using filesort |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+

1 row in set (0.00 sec)

The plan is basically the same as for userid = "7mini", but this time the index search returns 2944 rows, and all 2944 records must be pulled into memory for filesort, so efficiency is much worse than for 7mini. There are two ways to solve this. The first is to add another index plus a filter condition: since I only need the top 10 records by clicks, a lot of the data never needs to be sorted at all. For example, records with fewer than 10 clicks may account for a large share of the table.

I add an index on clicks, then add a where condition to the query:

create index clicks on imgs (clicks);

mysql> desc select * from imgs where userid = "admin" order by clicks desc limit 10;

+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
| id | select_type | table | type | possible_keys | key    | key_len | ref   | rows | Extra                       |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
|  1 | SIMPLE      | imgs  | ref  | userid,clicks | userid | 51      | const | 2944 | Using where; Using filesort |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+

1 row in set (0.00 sec)

Now possible_keys shows userid,clicks. possible_keys lists all the indexes that could match, and MySQL picks one of them to execute the statement. Note that the index MySQL chooses is not necessarily the optimal one: here it still chose the userid index, so the plan is unchanged. Let us modify the SQL statement with use index to force MySQL onto the clicks index:

mysql> desc select * from imgs use index (clicks) where userid = 'admin' and clicks > 10 order by clicks desc limit 10;

+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
| id | select_type | table | type  | possible_keys | key    | key_len | ref  | rows | Extra       |
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
|  1 | SIMPLE      | imgs  | range | clicks        | clicks | 4       | NULL | 5455 | Using where |
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+

1 row in set (0.00 sec)

Now MySQL uses the clicks index, but the result set is even larger than with the userid index! Raising the threshold:

mysql> desc select * from imgs use index (clicks) where userid = 'admin' and clicks > 1000 order by clicks desc limit 10;

+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
| id | select_type | table | type  | possible_keys | key    | key_len | ref  | rows | Extra       |
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
|  1 | SIMPLE      | imgs  | range | clicks        | clicks | 4       | NULL |  312 | Using where |
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+

1 row in set (0.00 sec)

With the threshold at 1000, the result set shrinks to 312 rows, and sorting at that size should be acceptable.

However, this index-switching optimization requires a sampling threshold, like the number 1000 in this example, and a suitable threshold would have to be found for every userid value, which is awkward for the program. If 1000 were used across the board, then in the userid = '7mini' example the query would return 2 rows instead of 8, confusing the user.

Of course, there is another option: a composite (two-column) index:

create index userid_clicks on imgs (userid, clicks);

mysql> desc select * from imgs where userid = "admin" order by clicks desc limit 10;

+----+-------------+-------+------+----------------------+---------------+---------+-------+------+-------------+
| id | select_type | table | type | possible_keys        | key           | key_len | ref   | rows | Extra       |
+----+-------------+-------+------+----------------------+---------------+---------+-------+------+-------------+
|  1 | SIMPLE      | imgs  | ref  | userid,userid_clicks | userid_clicks | 51      | const | 2944 | Using where |
+----+-------------+-------+------+----------------------+---------------+---------+-------+------+-------------+

1 row in set (0.00 sec)

There are still 2944 rows, but filesort has disappeared from Extra. MySQL now uses the userid_clicks index, which not only finds all records with userid = "admin" quickly but also returns them already sorted by clicks! So the result set no longer has to be read into memory and sorted, which is far more efficient.

However, the multi-column index approach has a drawback: if the application issues many different kinds of SQL queries, you have to plan the indexes carefully, or you will end up creating a great many of them, which hurts insert and update performance and bloats the data files.
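
One mitigation, using the index names from the examples above: once userid_clicks exists, the single-column userid index is redundant, since userid is its leftmost prefix, and dropping it wins back some insert/update cost:

alter table imgs drop index userid;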

That covers index optimization. The underlying causes can be complicated, which is why this section is fairly long. Once the indexes are well tuned, MySQL's efficiency usually improves by orders of magnitude, and there is no need to think about adding machines to solve the problem.

However, MySQL, and perhaps every database, still struggles with deep limit offsets. With appropriate indexes, limit 10 is no problem, but limit 100000, 10 will be slow, because MySQL scans through the sorted results to find the offset point and only then takes 10 rows: locating the 100000th row means scanning past 100000 records, and that loop is time-consuming. I don't know whether a clever algorithm will ever optimize this scan inside the engine; for now I know of no good general solution. For limit, I still think the only real fixes are in the business logic, the program, and the data-table design. None of the workarounds I have come up with is a complete solution; I will discuss them another time.
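
One partial workaround (my addition, not the original author's, and assuming imgid is the primary key as in the first example) is the "deferred join": page through just the imgid column first, then join back for the full rows, so only the final 10 wide rows are ever fetched:

select i.*
from imgs i
join (
    select imgid
    from imgs
    order by clicks desc
    limit 100000, 10
) t on i.imgid = t.imgid;

The inner query still scans past the offset, but over narrow index entries rather than full rows, which is usually much cheaper.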

2. SQL statements are too complex

If the SQL uses special constructs such as group by or multi-table joins, desc can again be used to analyze how MySQL executes it. I don't have many complicated SQL statements on my side, so I rarely analyze them, and I have no strong suggestions for the moment.
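
For what it's worth, the same desc technique applies; a hypothetical aggregation like the one below should be checked for "Using temporary; Using filesort" in the Extra column, which signals an expensive plan:

desc select userid, count(*) as cnt
from imgs
group by userid
order by cnt desc
limit 10;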

3. Configuration errors

The main configuration parameters are key_buffer and sort_buffer_size/myisam_sort_buffer_size. They mean the following:

key_buffer = 128M: the cache for index blocks (of MyISAM tables). If memory is tight, a good tip is to move rarely used, relatively large tables elsewhere, which can noticeably reduce MySQL's memory usage.

sort_buffer_size = 1M: the memory a single thread uses for sorting; the rows to be sorted are placed in this buffer. If the buffer is too small, MySQL has to sort in several passes, so rather than simply increasing this value, optimize indexes and queries so they do not produce oversized result sets.

Other configurations:

thread_concurrency = 8: the usual rule of thumb for this setting is number of CPUs x 2.

interactive_timeout = 30

wait_timeout = 30: values of 10-30 seconds are fine for both; they release connection resources as soon as possible. Note that connections actively in use are not broken; these settings only close connections that have sat idle for a long time.

query_cache: whenever a table's data changes, MySQL invalidates every cached result that touches that table. With frequent updates, the query cache not only fails to help but actively hurts efficiency. It is really only suitable for read-only or read-mostly databases. If you must use it, set query_cache_type = 2 so that only explicitly marked SQL statements are cached.
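
With query_cache_type = 2 (on-demand mode), only statements that ask for caching with the sql_cache keyword are cached, for example:

select sql_cache * from imgs where imgid = 1651768337;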

max_connections: the default is 100, which is usually enough; if not, raise it, up to around 600. If even that is not enough, there is usually an efficiency problem underneath, and you must look for another remedy: raising this number alone is not a solution.
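
Pulling the values above together, a minimal my.cnf sketch might look like this (these are the starting points discussed in this article, not universal recommendations; key_buffer is an alias for key_buffer_size):

[mysqld]
key_buffer_size     = 128M
sort_buffer_size    = 1M
thread_concurrency  = 8    # rule of thumb: number of CPUs x 2
interactive_timeout = 30
wait_timeout        = 30
query_cache_type    = 2    # cache only statements marked with sql_cache
max_connections     = 600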

Other settings can generally be left at their defaults; personally I don't think they matter much. Two notes: 1. although configuration is important, in most cases it is not the cause of efficiency problems; 2. MySQL is a database, and what matters most in a database is not efficiency but stability and data correctness.

4. The machine cannot handle the load

If, after all the adjustments above, the server still cannot keep up, the only remaining optimizations are at the architecture level.

1. MySQL replication

Use MySQL's replication feature to synchronize data to several slave databases: write to the primary and read from the slaves.
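
A sketch of the slave side only, assuming binary logging and a replication account already exist on the primary (the host, user, password, and log coordinates below are placeholders):

change master to
    master_host = '192.168.0.1',
    master_user = 'repl',
    master_password = '***',
    master_log_file = 'mysql-bin.000001',
    master_log_pos = 4;
start slave;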

Personally I am not keen on replication, because it adds complexity to the application and often leads to data inconsistencies. In a high-load service, a crash can be fixed by restarting immediately, but recovering from incorrect data is much more troublesome.

2. Add Cache

Adding a cache can absorb the concurrency problem, and the effect is obvious. For a system that must stay real-time, refresh the cache on every update to keep it current.

A squid-based front-end architecture is widely used; for applications with a high hit rate, it basically solves the problem.

Caching in the application logic layer adds a lot of complexity, and the problems it causes are numerous and hard to fix; I do not recommend making adjustments at this layer.

3. Adjust the program architecture to support multiple databases simultaneously

If the problem persists even with caching in front of the web tier, the only option left is to adjust the program architecture: split the application and serve it from multiple machines at once.

Splitting has a slight impact on the business. If some features must see all of the data, you can use one complete database plus n distributed databases: every modification is applied both to the complete database and to the relevant distributed one, or the complete database is rebuilt from the distributed ones on a regular schedule.

Of course, there is also the crudest approach: duplicate the entire database on several machines, have the program execute every write against all of them, and spread read requests across them. I think this is actually safer than MySQL replication.

4. Use MySQL Proxy

MySQL Proxy can spread the tables of a database across several servers through a proxy layer, but it cannot solve the hot-table problem. If the popular content is scattered across multiple tables, this solution works better.

I have never used this software or examined it closely, but I do have a doubt about it: how does it implement joins across tables that live on different servers, and if it can, how efficient is that?

5. Use memcachedb

Using a MySQL-compatible memcachedb for the database is worth a try. Judging from memcachedb's implementation approach and the level at which it operates, it has no impact on the data and will not affect users.

I have not tried it myself, since I have few database problems at the moment. But as long as it supports most of MySQL's major syntax and is itself stable, its usability is not in doubt.
