MySQL Slow Query (2): Analyzing the Slow Query Log with pt-query-digest


I. Introduction

pt-query-digest is a tool for analyzing MySQL slow queries. It can analyze the binlog, general log, and slow log, as well as MySQL protocol data captured via SHOW PROCESSLIST or tcpdump, and it can write the analysis results to a file. The analysis works by first parameterizing the query conditions (replacing literals with placeholders), then grouping the parameterized queries and computing the execution time, execution count, and time proportion of each query class, so that problem queries can be found and optimized.
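The "parameterization" step above (Percona calls it fingerprinting) can be illustrated with a crude sketch. This is not pt-query-digest's actual algorithm, just a simplified approximation to show the idea: literals are stripped so that structurally identical queries group together.

```shell
# Rough illustration of query fingerprinting (NOT pt-query-digest's real
# algorithm): lowercase, replace quoted strings and numbers with ?,
# collapse runs of whitespace.
fingerprint() {
  printf '%s\n' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -e "s/'[^']*'/?/g" \
          -e 's/[0-9][0-9]*/?/g' \
          -e 's/[[:space:]][[:space:]]*/ /g'
}

fingerprint "SELECT * FROM users WHERE id = 42 AND name = 'bob'"
# prints: select * from users where id = ? and name = ?
```

Every query that reduces to the same fingerprint is counted as one query class in the report.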

II. Installing pt-query-digest

1. Download page: https://www.percona.com/doc/percona-toolkit/2.2/installation.html
2. Install the required Perl modules:

yum install -y perl-CPAN perl-Time-HiRes

3. Installation steps
Method 1: rpm installation

cd /usr/local/src
wget percona.com/get/percona-toolkit.rpm
yum install -y percona-toolkit.rpm

The tools are installed to: /usr/bin

Method 2: source installation

cd /usr/local/src
wget percona.com/get/percona-toolkit.tar.gz
tar zxf percona-toolkit.tar.gz
cd percona-toolkit-2.2.19
perl Makefile.PL PREFIX=/usr/local/percona-toolkit
make && make install

The tools are installed to: /usr/local/percona-toolkit/bin
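With the source install, the binaries are not on PATH by default. A small sketch of making them resolvable, assuming the PREFIX used in the steps above:

```shell
# Make the source-installed tools available in this shell; add this line
# to ~/.bash_profile to make it permanent.
export PATH="$PATH:/usr/local/percona-toolkit/bin"

# Quick sanity check: prints the version if the tool resolves.
command -v pt-query-digest >/dev/null \
  && pt-query-digest --version \
  || echo "pt-query-digest not on PATH yet"
```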

4. Introduction to the usage of each tool (details: https://www.percona.com/doc/percona-toolkit/2.2/index.html)
(1) Slow query log analysis statistics

pt-query-digest /usr/local/mysql/data/slow.log

(2) Server summary

pt-summary

(3) Server disk monitoring

pt-diskstats

(4) Summary of mysql service status

pt-mysql-summary --user=root --password=root

III. pt-query-digest syntax and important options

pt-query-digest [OPTIONS] [FILES] [DSN]
--create-review-table When using the --review parameter to save analysis results to a table, create the table automatically if it does not exist.
--create-history-table When using the --history parameter to save analysis results to a table, create the table automatically if it does not exist.
--filter Match and filter the input slow queries against the specified expression before analyzing them.
--limit Limit the output to a number or percentage of results. The default is 20, i.e. output the 20 slowest statements. With a value like 50%, queries are sorted by total response time in descending order and output stops once the running total reaches 50%.
--host MySQL server address
--user MySQL username
--password MySQL user password
--history Save the analysis results to a table; the results are more detailed than with --review. The next time --history is used, if the same statement appears in a different time interval than what is in the history table, a new row is recorded, so you can track the historical changes of a query class by querying rows with the same CHECKSUM.
--review Save the analysis results to a table. This analysis only parameterizes the query conditions, with one record per query class, which keeps it simple. The next time --review is used, an already-analyzed statement is not recorded again.
--output Analysis result output type. The value can be report (standard analysis report), slowlog (MySQL slow log), json, or json-anon. report is generally used because it is easy to read.
--since The time to start the analysis from. The value is a string: either a specific point in time in the format "yyyy-mm-dd [hh:mm:ss]", or a simple offset using s (seconds), h (hours), m (minutes), or d (days). For example, 12h means statistics start from 12 hours ago.
--until The end time; combined with --since it restricts the analysis to slow queries within a time window.
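The options above combine freely on one command line. The sketch below first writes a minimal, simplified slow-log fragment (the real slow-log format has more header variants) so there is something to analyze, then shows a typical invocation; the /tmp paths and report name are arbitrary choices for illustration, and the call is guarded in case the toolkit is not installed.

```shell
# Write a tiny slow-log fragment (simplified MySQL slow-log format).
cat > /tmp/demo-slow.log <<'EOF'
# Time: 2016-11-22T06:11:40
# User@Host: mysql[mysql] @ [192.168.8.1]
# Query_time: 2.000000  Lock_time: 0.000000 Rows_sent: 1  Rows_examined: 0
SET timestamp=1479795100;
select sleep(2);
EOF

# Typical invocation: the 10 slowest query classes from the last 12 hours.
if command -v pt-query-digest >/dev/null; then
  pt-query-digest --limit=10 --since=12h /tmp/demo-slow.log > /tmp/slow_report.log
fi
```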

IV. Analyzing the output of pt-query-digest

Part I: Overall statistics
Overall: how many queries there are in total
Time range: the time range over which the queries executed
unique: the number of unique queries, i.e. how many different queries remain after parameterizing the query conditions
total: sum  min: minimum  max: maximum  avg: average
95%: sort all values from small to large and take the value at the 95% position; this number is generally the most useful
median: the median; sort all values from small to large and take the middle value
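The 95% and median columns are plain order statistics. A sketch with awk on a sample of execution times (the values are invented, and the lower-middle convention for even-length medians is one simple choice, not necessarily the tool's exact one):

```shell
# Order statistics as reported: sort the values, take the value 95% of
# the way through (95th percentile) and the middle value (median).
times="0.1 0.2 0.2 0.3 0.4 0.5 0.9 1.2 2.0 8.0"

printf '%s\n' $times | sort -n | awk '
  { v[NR] = $1 }
  END {
    p95 = v[int(NR * 0.95)]       # value at the 95% position
    med = v[int((NR + 1) / 2)]    # lower-middle value for even counts
    printf "95%%=%s median=%s\n", p95, med
  }'
# prints: 95%=2.0 median=0.4
```

The single 8.0s outlier barely moves the median, which is why the 95% and median columns are often more informative than avg.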


# User time, system time, physical memory footprint, and virtual memory footprint of the tool for log analysis
# 340ms user time, 140ms system time, 23.99M rss, 203.11M vsz
# Tool execution time
# Current date: Fri Nov 25 02:37:18 2016
# Hostname of the analysis tool
# Hostname: localhost.localdomain
# The analyzed file name
# Files: slow.log
# Total number of statements, number of unique statements, QPS, number of concurrency
# Overall: 2 total, 2 unique, 0.01 QPS, 0.01x concurrency ________________
# Time range for logging
# Time range: 2016-11-22 06:06:18 to 06:11:40
# Column headers: attribute, percentage, total, min, max, avg, 95th percentile, standard deviation, median
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Statement execution time
# Exec time 3s 640ms 2s 1s 2s 999ms 1s
# Lock time
# Lock time 1ms 0 1ms 723us 1ms 1ms 723us
# Number of lines sent to the client
# Rows sent 5 1 4 2.50 4 2.12 2.50
# select statement scan rows
# Rows examine 186.17k 0 186.17k 93.09k 186.17k 131.64k 93.09k
# Number of characters to query
# Query size 455 15 440 227.50 440 300.52 227.50


Part II: Query group statistics
Rank: the rank of the statement among all statements, by default ordered by query time descending; set with --order-by
Query ID: the ID of the statement (a hash computed after removing extra whitespace and literal text)
Response: total response time
time: this query's share of the total time in this analysis
calls: the number of executions, i.e. how many statements of this query class the analysis contains
R/Call: average response time per execution
V/M: variance-to-mean ratio of the response time
Item: the query object

# Profile
# Rank Query ID           Response time Calls R/Call V/M   Item
# ==== ================== ============= ===== ====== ===== ================
# 1 0xF9A57DD5A41825CA 2.0529 76.2% 1 2.0529 0.00 SELECT
# 2 0x4194D8F83F4F9365 0.6401 23.8% 1 0.6401 0.00 SELECT wx_member_base
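The V/M column (variance-to-mean ratio, also called the index of dispersion) flags queries with erratic response times: a value near 0 means stable timing, larger values mean the query sometimes runs much slower than its average. A sketch of the computation on invented response times (population variance is used here; the tool's exact estimator may differ):

```shell
# V/M = variance / mean of the response times for one query class.
echo "1.0 1.1 0.9 1.0 5.0" | tr ' ' '\n' | awk '
  { sum += $1; sumsq += $1 * $1; n++ }
  END {
    mean = sum / n
    var  = sumsq / n - mean * mean   # population variance
    printf "mean=%.2f variance=%.2f V/M=%.2f\n", mean, var, var / mean
  }'
# prints: mean=1.80 variance=2.56 V/M=1.42
```

A query with this profile averages 1.8s but is dominated by a 5s outlier; a high V/M like this suggests intermittent contention or cache effects worth investigating.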

Part III: Detailed statistics for each query
For each query below, the table at the top lists statistics such as execution count, maximum, minimum, average, and 95th percentile.
ID: the query's ID number, corresponding to the Query ID above
Databases: database name(s)
Users: the number of executions per user (and proportion)
Query_time distribution: the distribution of query times; the length of each bar reflects the proportion of queries in that interval. In this example, the queries fall in the 1s-10s interval.
Tables: the tables involved in the query
Explain: the SQL statement


# Query 1: 0 QPS, 0x concurrency, ID 0xF9A57DD5A41825CA at byte 802 ______
# This item is included in the report because it matches --limit.
# Scores: V/M = 0.00
# Time range: all events occurred at 2016-11-22 06:11:40
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count         50       1
# Exec time     76      2s      2s      2s      2s      2s       0      2s
# Lock time      0       0       0       0       0       0       0       0
# Rows sent     20       1       1       1       1       1       0       1
# Rows examine   0       0       0       0       0       0       0       0
# Query size     3      15      15      15      15      15       0      15
# String:
# Databases test
# Hosts 192.168.8.1
# Users mysql
# Query_time distribution
# 1us
# 10us
# 100us
# 1ms
# 10ms
# 100ms
# 1s  ################################################################
# 10s +
# EXPLAIN /*!50100 PARTITIONS*/
select sleep(2)\G


V. Usage examples

1. Direct analysis of slow query files:

pt-query-digest slow.log > slow_report.log

2. Analysis of queries within the last 12 hours:

pt-query-digest --since=12h slow.log > slow_report2.log

3. Analyze queries within a specified time range:

pt-query-digest slow.log --since '2017-01-07 09:30:00' --until '2017-01-07 10:00:00' >> slow_report3.log

4. Analyze slow queries whose statements begin with select

pt-query-digest --filter '$event->{fingerprint} =~ m/^select/i' slow.log > slow_report4.log

5. Slow queries from a specific user

pt-query-digest --filter '($event->{user} || "") =~ m/^root/i' slow.log > slow_report5.log

6. Query all slow queries for full table scans or full joins

pt-query-digest --filter '(($event->{Full_scan} || "") eq "yes") || (($event->{Full_join} || "") eq "yes")' slow.log > slow_report6.log

7. Save the query to the query_review table

pt-query-digest --user=root --password=abc123 --review h=localhost,D=test,t=query_review --create-review-table slow.log

8. Save the query to the query_history table

pt-query-digest --user=root --password=abc123 --history h=localhost,D=test,t=query_history --create-history-table slow.log_0001
pt-query-digest --user=root --password=abc123 --history h=localhost,D=test,t=query_history --create-history-table slow.log_0002

9. Capture tcp protocol data of mysql through tcpdump, and then analyze

tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt
pt-query-digest --type tcpdump mysql.tcp.txt > slow_report9.log

10. Analyze the binlog

mysqlbinlog mysql-bin.000093 > mysql-bin000093.sql
pt-query-digest --type=binlog mysql-bin000093.sql > slow_report10.log

11. Analyze the general log

pt-query-digest --type=genlog localhost.log > slow_report11.log
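For recurring analysis (for example a daily cron job), the --since/--until strings from example 3 can be generated rather than hard-coded. A sketch assuming GNU date; the log path and report naming are arbitrary choices, not anything the tool requires:

```shell
# Daily report covering the previous 24 hours (GNU date syntax).
SINCE=$(date -d '24 hours ago' '+%Y-%m-%d %H:%M:%S')
UNTIL=$(date '+%Y-%m-%d %H:%M:%S')

if command -v pt-query-digest >/dev/null; then
  pt-query-digest --since "$SINCE" --until "$UNTIL" \
    /usr/local/mysql/data/slow.log > "/tmp/slow_report_$(date +%F).log"
fi
```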

Previous article: MySQL Slow Query (1): Enabling the Slow Query Log

