MySQL Slow Query Log

Source: Internet
Author: User
Tags: mysql, slow query, log


Wherever there are databases, there is SQL to optimize, and the road of SQL optimization is a long one. Bad SQL statements are inevitable, so what can we do? Fortunately, every major database can collect its bad SQL statements, and MySQL's slow query log does exactly that. Combined with analysis of SQL execution plans, this is where optimization starts.

Enable MySQL slow query log

1. View the current slow query settings

# Check the slow query threshold. The default is 10 s; reducing it to 1 s or less is recommended.
mysql> show variables like 'long_query_time';
+-----------------+----------+
| Variable_name   | Value    |
+-----------------+----------+
| long_query_time | 1.000000 |
+-----------------+----------+
1 row in set (0.00 sec)

# View the slow query configuration
mysql> show variables like '%slow%';
+-----------------------------------+----------------------+
| Variable_name                     | Value                |
+-----------------------------------+----------------------+
| log_slow_admin_statements         | OFF                  |
| log_slow_filter                   |                      |
| log_slow_rate_limit               | 1                    |
| log_slow_rate_type                | session              |
| log_slow_slave_statements         | OFF                  |
| log_slow_sp_statements            | ON                   |
| log_slow_verbosity                |                      |
| max_slowlog_files                 | 0                    |
| max_slowlog_size                  | 0                    |
| slow_launch_time                  | 2                    |
| slow_query_log                    | ON                   |
| slow_query_log_always_write_time  | 10.000000            |
| slow_query_log_file               | /tmp/slow_querys.log |
| slow_query_log_use_global_control |                      |
+-----------------------------------+----------------------+
14 rows in set (0.01 sec)

The value of slow_query_log is ON, which means the slow query log is enabled.

2. How to enable the slow query feature

Method 1: edit the MySQL configuration file my.cnf on the server and append the settings to the [mysqld] section. The drawback is that the mysql process must be restarted for them to take effect.

vim my.cnf
[mysqld]
slow_query_log = ON
# Path of the slow query log file
slow_query_log_file = /tmp/slow_querys.log
# Threshold, in seconds, beyond which a query is logged as slow. Here it is 1 second;
# since 5.6, values below 1 second are allowed, e.g. 0.1.
long_query_time = 1
# Whether to log queries that do not use indexes. Disabled by default; enabling it
# generates a lot of log entries. Can be changed dynamically.
# log-queries-not-using-indexes
# Whether to log administrative statements, e.g. OPTIMIZE TABLE and ALTER TABLE,
# as slow queries. Disabled by default; generates a lot of entries. Can be changed
# dynamically as needed.
# log-slow-admin-statements

Restart the mysql server, then watch the slow query log with:

tail -f /tmp/slow_querys.log

Method 2: modify MySQL global variables. The advantage is that no restart of the mysql server is needed: just log in to mysql and run the SQL statements. The drawback is that the settings are lost after a restart.

# Enable the slow query log: 1 = on, 0 = off
mysql> set global slow_query_log = 1;
# Threshold, in seconds, beyond which a query is logged as slow. Here it is 1 second;
# since 5.6, values below 1 second are allowed, e.g. 0.1.
mysql> set global long_query_time = 1;
# Path of the slow query log file
mysql> set global slow_query_log_file = '/tmp/slow_querys.log';
# To disable: set global slow_query_log = 0;

Run the following to verify that the settings took effect:

mysql> show variables like 'long%';
mysql> show variables like 'slow%';

# To write slow query records to a table instead of a file:
# set global log_output = 'table';

Of course, you can combine the two methods: the settings take effect without restarting the mysql process, and you do not have to worry about losing them after a restart; the effect is the same either way.

Pay special attention to the long_query_time setting. Since 5.6 it can be set below 1 second (e.g. 0.1), so how fine-grained to go depends on your needs; on a large, busy database, a threshold as low as 0.1 seconds can produce an unreasonable volume of log entries.

Slow query log format

View the MySQL slow query log directly, for example with tail -f:

tail -f slow_query.log
# Time: 110107 16:22:11
# User@Host: root[root] @ localhost []
# Query_time: 9.869362  Lock_time: 0.000035  Rows_sent: 1  Rows_examined: 6261774
SET timestamp=1294388531;
select count(*) from ep_friends;

Field meanings:

Line 1: the time at which the SQL statement was executed.
Line 2: connection information for the query: the user and the client host/IP.
Line 3 records several useful metrics:
Query_time: how long the SQL statement took to execute; the larger the value, the slower the query.
Lock_time: time spent waiting for table locks at the MySQL server level (not in the storage engine).
Rows_sent: the number of rows returned by the query.
Rows_examined: the number of rows the server examined to produce the result.

Line 4 sets the timestamp; it has no practical significance beyond corresponding to the execution time on line 1.

Line 5 and all following lines (up to the next "# Time:") record the SQL statement itself, which may be very long.
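As a quick sketch of how these fields can be consumed programmatically, awk can pull the metrics out of the header line. The file path is made up for illustration; the entry is the sample shown above, reproduced inline so the script is self-contained.

```shell
# Write the sample slow-log entry from above to a scratch file.
cat > /tmp/sample_slow.log <<'EOF'
# Time: 110107 16:22:11
# User@Host: root[root] @ localhost []
# Query_time: 9.869362  Lock_time: 0.000035 Rows_sent: 1  Rows_examined: 6261774
SET timestamp=1294388531;
select count(*) from ep_friends;
EOF

# For each "# Query_time" header line, print the execution time and rows examined.
awk '/^# Query_time:/ { print "query_time=" $3, "rows_examined=" $9 }' /tmp/sample_slow.log
```

The same one-liner works on a real slow log; piping its output through sort gives a crude "slowest first" list without any extra tooling.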

Software for analyzing slow queries

Although the slow query log format is clear enough, the log usually records not just one SQL statement but a great many. Without aggregation, you could be reading it forever, so statistical analysis is required.

Method 1: use the mysqldumpslow command that ships with MySQL. For example:

mysqldumpslow -s c -t 10 /tmp/slow-log

This outputs the 10 most frequently recorded SQL statements; the output format is similar to the raw slow query records above, so it is not repeated here.

Parameter parsing:

-s: the sort order. The sub-options are:

c, t, l, r: sort by query count, query time, lock time, and rows returned respectively;

at, al, ar: sort by the average of the corresponding value.

-t: the number of top entries to return; -t 10 returns the first 10 records.

-g: a regular-expression pattern to match, case-insensitive. For example:

/path/mysqldumpslow -s r -t 10 /tmp/slow-log    # the 10 queries returning the most rows
/path/mysqldumpslow -s t -t 10 -g "left join" /tmp/slow-log    # the top 10 queries containing "left join", sorted by query time
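mysqldumpslow's count-based sort can be roughly imitated with standard shell tools, which also illustrates what grouping by a "parameterized" query means: strip the header lines, replace literal numbers with a placeholder, then group and count. This is a crude sketch over a tiny made-up log (path and statements are hypothetical), not a replacement for the real tool:

```shell
# A hypothetical three-entry slow log, reduced to headers + statements.
cat > /tmp/slow_sample.log <<'EOF'
# Query_time: 1.2 Lock_time: 0.0 Rows_sent: 1 Rows_examined: 100
select * from t where id = 5;
# Query_time: 2.0 Lock_time: 0.0 Rows_sent: 1 Rows_examined: 100
select * from t where id = 9;
# Query_time: 1.5 Lock_time: 0.0 Rows_sent: 3 Rows_examined: 300
select name from u where id = 7;
EOF

# Drop comment lines, replace numeric literals with N (a crude fingerprint),
# then count duplicates, most frequent first -- roughly mysqldumpslow -s c.
grep -v '^#' /tmp/slow_sample.log \
  | grep -vi '^SET timestamp' \
  | sed 's/[0-9][0-9]*/N/g' \
  | sort | uniq -c | sort -rn
```

The two `select * from t where id = ...` statements collapse into one fingerprint with a count of 2; the real tool normalizes strings, whitespace, and IN-lists as well.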

Method 2: use pt-query-digest from Percona Toolkit (pt) for statistical analysis. It is a Perl script written by Percona, and only requires the Percona Toolkit to be installed. If you are interested, install the toolkit first, then analyze the slow query file directly:

pt-query-digest slow_querys.log > t.txt

Because the log may contain a lot of SQL and the report is long, it is best to redirect the output to a file. The output is divided into three parts.

Part 1: Overall statistics

# 580ms user time, 0 system time, 35.46M rss, 100.01M vsz
# Current date: Wed Jul 19 14:32:40 2017
# Hostname: yztserver1
# Files: slow_querys.log
# Overall: 2.63k total, 36 unique, 0.03 QPS, 0.03x concurrency ___________
# Time range: 2017-07-18T03:17:17 to 2017-07-19T06:30:18
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Exec time          3145s      1s      5s      1s      2s   258ms      1s
# Lock time          677ms       0    64ms   257us   260us     2ms   144us
# Rows sent          8.44k       0   5.50k    3.29    0.99  108.92
# Rows examine       1.06G       0   2.12M 421.02k 619.64k 155.33k
# Rows affecte           0       0       0       0       0       0       0
# Bytes sent         9.00M      11   6.24M   3.51k  13.78k 119.76k   65.89
# Query size       735.85k       6   2.19k  286.72  463.90  128.05  246.02

This section gives overall statistics for all slow queries in the log file; usually a quick glance suffices:

Overall: the total number of queries in the file. The example above contains 2.63k queries, i.e. 2.63k slow queries.
Time range: the time range over which the queries were executed.
Unique: the number of unique queries, i.e. the number of distinct statements left after the query conditions are parameterized. Here it is 36, meaning the 2.63k slow queries fall into 36 classes.

Attribute: the statistics columns. Sub-items:

total: sum; min: minimum; max: maximum; avg: average;

95%: the value at the 95th-percentile position when all values are sorted ascending; usually the most useful reference number;

median: the value at the middle position when all values are sorted ascending.

The rest are self-explanatory.
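The 95% and median columns are plain order statistics, which can be illustrated with standard shell tools over a made-up list of query times (the numbers are purely illustrative, not taken from the report above):

```shell
# Ten hypothetical query times in seconds, one per line.
printf '%s\n' 1 1 1 1 1 1 2 2 3 5 > /tmp/times.txt

# median: the middle element of the sorted list;
# 95%: the element at position ceil(0.95 * n) of the sorted list.
sort -n /tmp/times.txt | awk '
  { v[NR] = $1 }
  END {
    n = NR
    median = v[int((n + 1) / 2)]
    p95 = v[int(0.95 * n + 0.999999)]   # poor-man s ceil()
    print "median=" median, "p95=" p95
  }'
```

Here the median is 1 while the 95th percentile is 5: one outlier barely moves the median, which is why the 95% column is the more useful number for spotting tail latency.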

Part 2: Query group statistics

# Profile
# Rank Query ID            Response time   Calls R/Call V/M   Item
# ==== =================== =============== ===== ====== ===== ================
#    1 0x8965CC929FB1C7B2  2080.0546 66.1%  1816 1.1454       SELECT 1
#    2 0x9425d04cea1970b4   228.4754  7.3%   131 1.7441       SELECT 2
#    3 0x94b4d7aa44921364   138.5964  4.4%   112 1.2375       SELECT 3
#    4 0x6BD09392D1D0B5D7    84.1681  2.7%    70 1.2024  0.03 SELECT 4
#    5 0x1E9926677DBA3657    81.2260  2.6%    68 1.1945  0.03 SELECT 5
#    6 0xBBCE31227D8C6A93    69.6594  2.2%    56 1.2439  0.05 SELECT 6
#    7 0x9A691CB1A14640F4    60.4517  1.9%    51 1.1853  0.04 SELECT 7
#    8 0xDA99A20C8BE81B9C    55.7751  1.8%    46 1.2125  0.05 SELECT 8
#    9 0x1F53AC684A365326    55.6378  1.8%    45 1.2364  0.03 SELECT 9
#   10 0x664E0C18531443A5    38.6894  1.2%    31 1.2480  0.05 SELECT way_bill_main
#   11 0xF591153EC390D8CA    32.5370  1.0%    28 1.1620  0.01 SELECT way_bill_main
#   12 0xA3A82D5207F1BC2E    24.6582  0.8%    20 1.2329  0.06 SELECT way_bill_main
#   13 0xFCED276145371CE4    22.2543  0.7%    18 1.2363  0.05 SELECT way_bill_main
#   14 0x4895df1_2d5a600     21.3184  0.7%    17 1.2540  0.07 SELECT way_bill_main
#   16 0xA4DD833DF8C96D04    14.6107  0.5%    13 1.1239  0.01 SELECT way_bill_main
#   17 0x0133a3c3538cbba8    13.9799  0.4%    13 1.0754  0.00 SELECT way_bill_main
#   18 0x2C52F334EF3D8D2D    12.5960  0.4%    10 1.2596  0.03 SELECT way_bill_main
# MISC 0xMISC               110.2030  3.5%    83 1.3277   0.0 <19 ITEMS>

In this part, the queries are parameterized and grouped, and the execution of each query class is analyzed. By default the results are sorted by total execution time in descending order.
Response time: total response time.
time: the proportion of total time taken by this query class in the analysis.
Calls: the number of executions, i.e. the total count of queries of this class in the analysis.
R/Call: average response time per execution.
V/M: the variance-to-mean ratio of the response time.
Item: the query object.
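As a sanity check on the profile above, R/Call is simply the total response time divided by Calls; for rank #1 that is 2080.0546 s over 1816 calls:

```shell
# Average response time per call for rank #1 in the profile:
# total response time / number of calls.
awk 'BEGIN { printf "R/Call = %.4f\n", 2080.0546 / 1816 }'
```

The result, 1.1454, matches the R/Call column, which is a quick way to convince yourself how the report's derived columns are computed.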

Part 3: detailed statistical results of each query.

# Query 1: 0.02 QPS, 0.02x concurrency, ID 0x8965CC929FB1C7B2 at byte 868609
# This item is included in the report because it matches --limit.
# Scores: V/M = 0.03
# Time range: 2017-07-18T03:17:56 to 2017-07-19T06:30:18
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count         69    1816
# Exec time     66   2080s      1s      4s      1s      1s   189ms      1s
# Lock time     51   349ms    67us    19ms   192us   194us   760us   144us
# Rows sent     21   1.77k       1       1       1       1       0       1
# Rows examine  71 771.37M 262.54k 440.03k 434.96k 419.40k  24.34k 419.40k
# Rows affecte   0       0       0       0       0       0       0       0
# Bytes sent     1 120.49k      65      68   67.94   65.89    0.35   65.89
# Query size    60 443.31k     248     250  249.97  246.02    0.00
# String:
# Databases    ytttt
# Hosts        10.25.28.2
# Last errno   0
# Users        gztttttt
# Query_time distribution
#   1us
#  10us
# 100us
#   1ms
#  10ms
# 100ms
#   1s  ################################################################
#  10s+
# Tables
#    SHOW TABLE STATUS FROM `tttttt` LIKE 'way_bill_main'\G
#    SHOW CREATE TABLE `tttttt`.`way_bill_main`\G
#    SHOW TABLE STATUS FROM `tttttt` LIKE 'scheduler_task'\G
#    SHOW CREATE TABLE `tttttt`.`scheduler_task`\G
# EXPLAIN /*!50100 PARTITIONS*/
select count(1) FROM ... as w inner join ... (the rest is omitted)

The header of this part is similar to Part 1, but here the statistics describe the overall behavior of a single recorded SQL class:

Databases: the database name.
Users: the users who executed the query, with execution counts (as percentages). There is only one user here, because I grant each database its own dedicated user.
Query_time distribution: the distribution of query times; the bar length represents the proportion of queries in that interval. In this example nearly everything falls in the 1 s bucket.
Tables: the tables involved in the query, with ready-made statements for inspecting them.
EXPLAIN: a sample of the SQL statement, prefixed so it can be explained directly.

The remaining query classes are presented in the same way, each with its own SQL statement.

pt-query-digest parameter description:

--create-review-table  When --review is used to save results to a table, create the table automatically if it does not exist.
--create-history-table When --history is used to save results to a table, create the table automatically if it does not exist.
--filter    Match and filter the input slow queries by the given expression before analysis.
--limit     Limit the output to a number or percentage of results. The default is 20, i.e. the 20 slowest statements are output; with 50%, results are sorted by total response time in descending order and output until the cumulative share reaches 50%.
--host      MySQL server address.
--user      MySQL user name.
--password  MySQL user password.
--history   Save the analysis results to a table, in considerable detail. If the same statement appears in a later --history run with a time range different from what is in the history table, it is recorded again, so you can compare the historical behavior of a query class by querying on the same CHECKSUM.
--review    Save the analysis results to a table. This analysis only parameterizes the query conditions and stores one record per query class, so it is relatively simple. On a later --review run, a statement that has already been analyzed is not recorded again.
--output    Output format: report (the standard analysis report), slowlog (MySQL slow log format), json, or json-anon. report is usually used for readability.
--since     Start time of the analysis. The value is a string: either a point in time in "yyyy-mm-dd [hh:mm:ss]" format, or a simple relative value using s (seconds), h (hours), m (minutes), or d (days); e.g. 12h means starting from 12 hours ago.
--until     End time; combined with --since, analyzes slow queries within a time window.

Other command examples:

1. Analyze the queries from the last 12 hours:
pt-query-digest --since=12h slow.log > slow_report2.log
2. Analyze queries within a specified time range:
pt-query-digest slow.log --since '2017-04-17 09:30:00' --until '2017-04-17 10:00:00' > slow_report3.log
3. Analyze only slow queries containing select statements:
pt-query-digest --filter '$event->{fingerprint} =~ m/^select/i' slow.log > slow_report4.log
4. Slow queries for a given user:
pt-query-digest --filter '($event->{user} || "") =~ m/^root/i' slow.log > slow_report5.log
5. All full-table-scan or full-join slow queries:
pt-query-digest --filter '(($event->{Full_scan} || "") eq "yes") || (($event->{Full_join} || "") eq "yes")' slow.log > slow_report6.log
6. Save the queries to the query_review table:
pt-query-digest --user=root --password=abc123 --review h=localhost,D=test,t=query_review --create-review-table slow.log
7. Save the queries to the query_history table:
pt-query-digest --user=root --password=abc123 --history h=localhost,D=test,t=query_history --create-history-table slow.log_20140401
pt-query-digest --user=root --password=abc123 --history h=localhost,D=test,t=query_history --create-history-table slow.log_20140402
8. Capture MySQL TCP traffic with tcpdump, then analyze it:
tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt
pt-query-digest --type tcpdump mysql.tcp.txt > slow_report9.log
9. Analyze a binlog:
mysqlbinlog mysql-bin.000093 > mysql-bin000093.sql
pt-query-digest --type=binlog mysql-bin000093.sql > slow_report10.log
10. Analyze the general log:
pt-query-digest --type=genlog localhost.log > slow_report11.log

In fact, although pt-query-digest produces a lot of information and can be dazzling at first, its output is far more useful than mysqldumpslow's; the price is that an extra toolkit must be installed. It is a great aid when discussing efficiency with developers: how many times each slow SQL statement ran, and at what cost, is laid out clearly. Which tool to use in practice is your call.

Digression

If the slow query log cannot pinpoint the problem, it is recommended to enable the general query log (general_log) to trace SQL statements.
The operation is basically the same as above. Check the current status first:

show variables like 'general%';

Add to my.cnf:

general-log = 1              # 1 = enabled, 0 = disabled
log = /log/mysql_query.log   # log path

Or set the variable dynamically in the same way as before:

set global general_log = 1;  # 1 = enabled, 0 = disabled

This article permanently updates link: https://www.bkjia.com/Linux/2018-03/151340.htm
