MySQL Slow Queries -- Analyzing the Slow Query Log in Detail with pt-query-digest (Linux)


I. Introduction

pt-query-digest is a tool for analyzing MySQL slow queries. It can analyze the binlog, the general log, and the slow log, and it can also analyze SHOW PROCESSLIST output or MySQL protocol data captured with tcpdump. The analysis results can be written to a file. During analysis, the tool parameterizes the query statements, groups the parameterized queries, and computes statistics such as execution time, frequency, and percentage of the total, which makes it easy to identify problem queries and optimize them.
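The examples later in this article all read from files (slow log, general log, binlog, tcpdump capture). For the processlist mode, a run might look like the sketch below; the host, credentials, and 60-second run time are placeholder values, not taken from this article.

pt-query-digest --processlist h=localhost --user=root --password=abc123 --run-time=60 > processlist_report.log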

II. Installing pt-query-digest

1. Download page: https://www.percona.com/doc/percona-toolkit/2.2/installation.html
2. Perl modules

yum install -y perl-CPAN perl-Time-HiRes

3. Installation Steps
Method One: RPM installation

cd /usr/local/src
wget percona.com/get/percona-toolkit.rpm
yum install -y percona-toolkit.rpm

The tools are installed in /usr/bin.

Method Two: Source code installation

cd /usr/local/src
wget percona.com/get/percona-toolkit.tar.gz
tar zxf percona-toolkit.tar.gz
cd percona-toolkit-2.2.
perl Makefile.PL PREFIX=/usr/local/percona-toolkit
make && make install

The tools are installed in /usr/local/percona-toolkit/bin.
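With a source install the tools are not on the PATH by default, so a quick check along the following lines (the PATH entry assumes the PREFIX used above) confirms the installation works:

export PATH=$PATH:/usr/local/percona-toolkit/bin
pt-query-digest --version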

4. Basic usage of each tool (details: https://www.percona.com/doc/percona-toolkit/2.2/index.html)
(1) Slow query log analysis statistics

pt-query-digest /usr/local/mysql/data/slow.log

(2) Server Summary
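The original text does not show the command here; in Percona Toolkit the server summary is produced by pt-summary, which can be run without arguments:

pt-summary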

(3) Server disk monitoring
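Disk activity is covered by pt-diskstats; again, the command itself is inferred from the toolkit rather than shown in the original text:

pt-diskstats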

(4) MySQL Service status summary
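The MySQL status summary corresponds to pt-mysql-summary; the credentials below are placeholders:

pt-mysql-summary --user=root --password=abc123 --host=localhost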

III. pt-query-digest syntax and important options

pt-query-digest [OPTIONS] [FILES] [DSN]

--create-review-table: when --review is used to write the analysis results to a table, the table is created automatically if it does not exist.
--create-history-table: when --history is used to write the analysis results to a table, the table is created automatically if it does not exist.
--filter: match the input slow queries against the specified string and only analyze the ones that match.
--limit: limit the output to a number or percentage of results. The default is 20, i.e. the 20 slowest statements are printed; with 50%, statements are sorted by total response time in descending order and printed until the cumulative total reaches 50%.
--host: MySQL server address.
--user: MySQL user name.
--password: MySQL user password.
--history: save the analysis results to a table. The results are quite detailed. The next time --history is used, if the same statement appears but in a different time range than what is already in the history table, it is recorded as a new row, so the historical change of a given type of query can be tracked by querying the rows with the same checksum.
--review: save the analysis results to a table. This analysis only parameterizes the query conditions, with one record per type of query, so it is relatively simple. The next time --review is used, a statement that has already been analyzed is not recorded again.
--output: output type of the analysis results. Values can be report (standard analysis report), slowlog (MySQL slow log format), json, or json-anon; report is generally used for readability.
--since: the point from which to start the analysis. The value is a string, either a point in time in "yyyy-mm-dd [hh:mm:ss]" format or a simple interval using s (seconds), h (hours), m (minutes), or d (days); for example, 12h means statistics start from 12 hours ago.
--until: the end time; combined with --since, slow queries over a specific period can be analyzed.
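As a sketch of how these options combine (the file name, time window, and limit are placeholders, not values from this article), the following reports only the 10 statements with the largest total response time seen in the last 24 hours:

pt-query-digest --since=24h --limit=10 --output=report slow.log > slow_report_24h.log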
IV. Interpreting the pt-query-digest output

Part I: Overall statistical results
Overall: the total number of queries
Time range: the time range over which the queries were executed
Unique: the number of unique queries, i.e. the number of distinct queries left after the query conditions are parameterized
Total: total   min: minimum   max: maximum   avg: average
95%: with all values sorted from small to large, the value at the 95% position; this is usually the most useful reference number
Median: the median, i.e. the value in the middle after sorting all values from small to large

# Resources used by the tool for this analysis: user time, system time, physical memory (rss), virtual memory (vsz)
# 340ms user time, 140ms system time, 23.99M rss, 203.11M vsz
# Date the tool was run
# Current date: Fri Nov 25 02:37:18 2016
# Hostname of the host running the tool
# Hostname: localhost.localdomain
# File that was analyzed
# Files: slow.log
# Total number of statements, number of unique statements, QPS, concurrency
# Overall: 2 total, 2 unique, 0.01 QPS, 0.01x concurrency
# Time range covered by the log
# Time range: 2016-11-22 06:06 to 06:11:40
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Statement execution time
# Exec time             3s   640ms      2s      1s      2s   999ms      1s
# Time spent waiting for locks
# Lock time            1ms       0     1ms   723us     1ms     1ms   723us
# Rows sent to the client
# Rows sent              5       1       4    2.50       4    2.12    2.50
# Rows scanned by the SELECT statements
# Rows examine     186.17k       0 186.17k  93.09k 186.17k 131.64k  93.09k
# Bytes in the queries
# Query size           455      15     440  227.50     440  300.52  227.50

Part II: Grouped query statistics
Rank: the rank of the statement among all statements, by default in descending order of query time; set with --order-by
Query ID: the ID of the statement (a hash computed after removing extra whitespace and literal text)
Response: total response time
time: the percentage of the total query time that this query accounts for in this analysis
Calls: the number of executions, i.e. the total number of statements of this type in this analysis
R/Call: average response time per execution
V/M: variance-to-mean ratio of the response time
Item: the query object

# Profile
# Rank Query ID           Response time Calls R/Call V/M   Item
# ==== ================== ============= ===== ====== ===== ===============
#    1 0xF9A57DD5A41825CA  2.0529 76.2%     1 2.0529  0.00 select
#    2 0x4194D8F83F4F9365  0.6401 23.8%     1 0.6401  0.00 select wx_member_base

Part III: Detailed statistical results for each query
Detailed statistics follow for each query; the table at the top lists the execution count, maximum, minimum, average, 95th percentile and other statistics.
ID: the ID number of the query, corresponding to the Query ID in the table above
Databases: the database name
Users: the number of executions by each user (and the percentage)
Query_time distribution: the distribution of query times; the bar length reflects the proportion of queries in each interval. In this example, the number of queries taking between 1s and 10s is twice the number taking more than 10s.
Tables: the tables involved in the query
Explain: the SQL statement itself

# Query 1: 0 QPS, 0x concurrency, ID 0xF9A57DD5A41825CA at byte 802 ______
# This item is included in the report because it matches --limit.
# Scores: V/M = 0.00
# Time range: all events occurred at 2016-11-22 06:11:40
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count         50       1
# Exec time     76      2s      2s      2s      2s      2s       0      2s
# Lock time      0       0       0       0       0       0       0       0
# Rows sent     20       1       1       1       1       1       0       1
# Rows examine   0       0       0       0       0       0       0       0
# Query size     3      15      15      15      15      15       0      15
# String:
# Databases    test
# Hosts        192.168.8.1
# Users        mysql
# Query_time distribution
#   1us
#  10us
# 100us
#   1ms
#  10ms
# 100ms
#    1s  ################################################################
#  10s+
# EXPLAIN /*!50100 PARTITIONS*/
select sleep(2)\G
V. Examples of usage

1. Analyze a slow query file directly:

pt-query-digest slow.log > slow_report.log

2. Analyze queries within the last 12 hours:

pt-query-digest --since=12h slow.log > slow_report2.log

3. Analyze queries within a specified time range:

pt-query-digest --since='2017-01-07 09:30:00' --until='2017-01-07 10:00:00' slow.log > slow_report3.log

4. Analyze only slow queries containing SELECT statements

pt-query-digest --filter '$event->{fingerprint} =~ m/^select/i' slow.log > slow_report4.log

5. Slow queries for a specific user

pt-query-digest --filter '($event->{user} || "") =~ m/^root/i' slow.log > slow_report5.log

6. Find all slow queries with full table scans or full joins

pt-query-digest --filter '(($event->{full_scan} || "") eq "yes") || (($event->{full_join} || "") eq "yes")' slow.log > slow_report6.log

7. Save the analysis results to the query_review table

pt-query-digest --user=root --password=abc123 --review h=localhost,D=test,t=query_review --create-review-table slow.log
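The review table can then be inspected with an ordinary query; the column names below (checksum, fingerprint, first_seen, last_seen) are the defaults documented for pt-query-digest and should be verified against your version:

mysql -uroot -pabc123 -e 'SELECT checksum, first_seen, last_seen, LEFT(fingerprint, 60) FROM test.query_review\G'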

8. Save the analysis results to the query_history table

pt-query-digest --user=root --password=abc123 --history h=localhost,D=test,t=query_history --create-history-table slow.log_0001
pt-query-digest --user=root --password=abc123 --history h=localhost,D=test,t=query_history --create-history-table slow.log_0002

9. Capture MySQL TCP traffic with tcpdump, then analyze it

tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt
pt-query-digest --type tcpdump mysql.tcp.txt > slow_report9.log

10. Analyze a binlog

mysqlbinlog mysql-bin.000093 > mysql-bin000093.sql
pt-query-digest --type=binlog mysql-bin000093.sql > slow_report10.log

11. Analyze the general log

pt-query-digest --type=genlog localhost.log > slow_report11.log
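This assumes the general query log is being written to localhost.log; if it is not enabled yet, it can be switched on at runtime before collecting (standard MySQL settings, not specific to pt-query-digest):

mysql -uroot -p -e "SET GLOBAL general_log_file = 'localhost.log'; SET GLOBAL general_log = ON;"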

