This article describes how to analyze MySQL slow query logs, and how to set the parameters that control them, in order to find the causes of performance problems. Most databases provide performance analysis tools; Oracle, for example, can help you find slow statements and even suggest optimizations. In MySQL, you need to enable slow query logging yourself before you can analyze anything (records can be saved in a table or in a file; by default they are saved in a file, and our system uses the default).
First, let's take a look at what the records in the MySQL slow query log look like:
Time                 Id Command    Argument
# Time: 141010  9:33:57
# User@Host: root[root] @ localhost []  Id: 1
# Query_time: 0.000342  Lock_time: 0.000142 Rows_sent: 3  Rows_examined: 3
use test;
SET timestamp=1412904837;
select * from t;
This log is easy to read. The first # line records the timestamp, the second records the user and host that executed the statement, and the third records the query execution time, the lock time, the number of rows sent, and the number of rows examined. The lines that follow are the SQL statements that were actually executed. You can also use the following command to see what each field means in the CSV/table storage format.
SHOW CREATE TABLE mysql.slow_log;
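If you would rather keep the records in that table instead of in a file, you can switch the log output and query the table directly. A minimal sketch, assuming a MySQL version that supports log_output (the column list below is only a subset, for illustration):

-- Write slow query records to the mysql.slow_log table instead of a file
SET GLOBAL log_output = 'TABLE';
-- Inspect the most recent entries
SELECT start_time, query_time, rows_sent, rows_examined, sql_text
FROM mysql.slow_log
ORDER BY start_time DESC
LIMIT 10;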
Next, let's talk about how to obtain and analyze slow logs.
View MySQL slow log parameters
Log in to the running MySQL instance and run the following command:
mysql> show variables like '%slow_query%';
+---------------------------+----------------------------------------+
| Variable_name             | Value                                  |
+---------------------------+----------------------------------------+
| slow_query_log            | OFF                                    |
| slow_query_log_file       | /usr/local/mysql/data/cloudlu-slow.log |
+---------------------------+----------------------------------------+
This shows whether the slow query log is enabled and where it is stored.
So which queries get logged? In MySQL, queries that do not use an index, as well as queries that exceed a specified execution time or a specified number of examined rows, are recorded in the slow query log.
How can we view their parameters?
Switch for logging queries that do not use indexes
mysql> show global variables like '%indexes%';
+----------------------------------------+-------+
| Variable_name                          | Value |
+----------------------------------------+-------+
| log_queries_not_using_indexes          | OFF   |
| log_throttle_queries_not_using_indexes | 0     |
+----------------------------------------+-------+
The first parameter controls whether queries that do not use an index are logged. The second parameter throttles how many such queries may be written to the log per minute; the default value 0 means no limit.
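For example, if you want to keep at most a handful of unindexed-query entries per minute, a minimal sketch would be the following (10 is purely an illustrative value):

-- Allow at most 10 queries-without-index entries per minute;
-- further occurrences within that minute are suppressed and summarized
set global log_throttle_queries_not_using_indexes = 10;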
Switch for queries that exceed the specified execution time
mysql> show global variables like '%long_query%';
+-----------------+-----------+
| Variable_name   | Value     |
+-----------------+-----------+
| long_query_time | 10.000000 |
+-----------------+-----------+
1 row in set (0.00 sec)
Queries that run longer than this threshold (10 seconds by default) are recorded.
Switch for queries that examine more than the specified number of rows
mysql> show variables like '%min_examined_row_limit%';
+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| min_examined_row_limit | 0     |
+------------------------+-------+
1 row in set (0.00 sec)
The default value is 0, meaning there is no minimum: queries are logged regardless of how many rows they examine.
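If you only want to see slow queries that scan a substantial number of rows, you can raise this threshold. A minimal sketch, with 100 as a purely illustrative cutoff:

-- Only log slow queries that examined at least 100 rows
set global min_examined_row_limit = 100;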
Set MySQL slow log parameters
You can change these settings by running commands inside MySQL, by editing the MySQL startup configuration file, or by adding startup parameters to mysqld. The following changes are made from inside MySQL:
set global long_query_time=0.1;
set global log_queries_not_using_indexes=on;
set global slow_query_log = on;
There are two points to consider here. First, what execution time counts as "too slow" depends on your system's requirements. Second, decide how many unindexed queries may be logged per minute, to prevent excessive logging from hurting performance.
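Also note that changes made with SET GLOBAL are lost when the server restarts. To make them permanent, the equivalent options can go into the startup configuration file. A minimal sketch of the [mysqld] section, assuming MySQL 5.6 or later for the throttle option, the same log path as above, and an illustrative throttle of 10 per minute:

[mysqld]
slow_query_log = 1
slow_query_log_file = /usr/local/mysql/data/cloudlu-slow.log
long_query_time = 0.1
log_queries_not_using_indexes = 1
log_throttle_queries_not_using_indexes = 10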
In real-world analysis the slow query log usually contains a large number of entries, with many records for the same statement, so we need to find the queries that are most problematic and most worth optimizing. There are many analysis tools for this. The most basic is mysqldumpslow (a Perl script) that ships with MySQL; an example of its output:
[root@cloudlu bin]# ./mysqldumpslow -s t -t 1 /usr/local/mysql/data/cloudlu-slow.log
Reading mysql slow query log from /usr/local/mysql/data/cloudlu-slow.log
Count: 1  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=3.0 (3), root[root]@localhost
  select * from t
The output is easy to read at a glance. For each distinct slow SQL statement it reports the number of occurrences (Count: 1), the average execution time (Time=0.00s) with the total in parentheses ((0s)), the average lock wait time (Lock=0.00s) with the total in parentheses ((0s)), the average number of rows sent to the client (Rows=3.0) with the total in parentheses ((3)), the user and host (root[root]@localhost), and the SQL statement itself. Its most commonly used options are:
- -s sort order: c sorts by query count, r by rows returned, t by query time
- -t n: display only the top n queries (see the example below)
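For example, assuming the same log file as above, the following invocations show the ten statements that took the most time and the ten that occurred most often:

# top 10 slow statements by query time
./mysqldumpslow -s t -t 10 /usr/local/mysql/data/cloudlu-slow.log
# top 10 slow statements by number of occurrences
./mysqldumpslow -s c -t 10 /usr/local/mysql/data/cloudlu-slow.log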
That covers the basics of the analysis, but mysqldumpslow does not report percentages or other richer statistics. That is why more complete MySQL slow log analysis tools exist, such as mysqlsla (a Perl script) and pt-query-digest (a Perl script). For each query they can provide:
- Count: the number of times the SQL statement was executed and its percentage of all entries in the slow log
- Time: execution time, including the total, average, minimum, and maximum, as well as its percentage of the total time spent on slow SQL
- 95% of Time: the execution time covering 95% of executions, with the fastest and slowest outliers removed
- Lock Time: lock wait time, and 95% of Lock: the lock wait time covering 95% of executions
- Rows sent: the number of result rows, including average, minimum, and maximum
- Rows examined: the number of rows scanned
They can also generate tabular reports and store the analysis results. We will not introduce them one by one here.
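As an illustration, pt-query-digest (part of Percona Toolkit, installed separately) is typically pointed straight at the slow log file; the report file name here is just an example:

# Summarize the slow log and save the report
pt-query-digest /usr/local/mysql/data/cloudlu-slow.log > slow_report.txt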
Once these slow log analysis tools have located the slow statements, a large part of the SQL optimization work is already done. Run the explain or desc command in MySQL to examine a slow statement:
mysql> explain select * from test.t \G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: t
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 2
        Extra: NULL
1 row in set (0.00 sec)
The details of the output format are described in the MySQL documentation on EXPLAIN output; the most important fields are:
1. type: ALL means a full table scan, the least efficient access type and the one most worth attention
2. key: which index, if any, is used, and key_len, the length of the index that is used
3. Extra: it is best not to see Using filesort or Using temporary here; pay particular attention to statements with ORDER BY and GROUP BY, which tend to produce them (a sketch of fixing such a plan follows this list)
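As a sketch of what fixing such a plan can look like, suppose the slow query filters on a column of test.t; the column name, index name, and value below are assumptions for illustration only:

-- Hypothetical: assume t has a column named name that the slow query filters on
alter table test.t add index idx_name (name);
-- Re-check the plan; type should change from ALL to ref and key should show idx_name
explain select * from test.t where name = 'abc' \G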
Note: SQL optimization is a complicated process and can easily turn into robbing Peter to pay Paul. For example, adding an index to a table makes queries faster but increases storage use and slows down insert and delete operations; in a system with many writes and few reads, such an "optimization" may be counterproductive. So do not walk away after optimizing: keep monitoring the system to make sure no new bottlenecks have been introduced.