MySQL slow query log analysis tool: Percona Toolkit (pt-query-digest)

Source: Internet
Author: User
Tags: percona, perl, script

Memo from: http://blog.csdn.net/seteor/article/details/24017913

1. Introduction to the tool

pt-query-digest is a tool for analyzing MySQL slow queries. It can analyze the binlog, the general log, and the slow log, and it can also analyze queries captured via SHOW PROCESSLIST or MySQL protocol data grabbed with tcpdump. The analysis results can be written to a file. The analysis first parameterizes the query statements, then groups and aggregates the parameterized queries, reporting execution time, execution count, percentage of total, and so on, which makes it easy to identify problem queries for optimization. pt-query-digest is a Perl script; you only need to download it and make it executable:

wget percona.com/get/pt-query-digest
chmod u+x pt-query-digest
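To have something to analyze, the MySQL slow query log usually needs to be enabled first. A minimal sketch of doing this at runtime (the 1-second threshold and the mysql client invocation are illustrative assumptions, not part of the original post):

mysql -uroot -p -e "SET GLOBAL slow_query_log = 1; SET GLOBAL long_query_time = 1; SHOW VARIABLES LIKE 'slow_query_log_file';"

The last statement shows where the server writes the slow log, which is the file you would then pass to pt-query-digest.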
2. Syntax and important options

pt-query-digest [OPTIONS] [FILES] [DSN]

--create-review-table  When the --review parameter is used to write analysis results to a table, the table is created automatically if it does not exist.
--create-history-table  When the --history parameter is used to write analysis results to a table, the table is created automatically if it does not exist.
--filter  Matches and filters the input slow queries against the specified expression before analysis.
--limit  Limits the percentage or number of queries in the output. The default is 20, i.e. the 20 slowest statements are output; if set to 50%, queries are sorted by total response time from largest to smallest and output stops once the cumulative total reaches 50%.
--host  MySQL server address.
--user  MySQL user name.
--password  MySQL user password.
--history  Saves the analysis results to a table; these results are quite detailed. The next time --history is used, if the same statement appears but in a different time period than what is already in the history table, a new row is recorded, so you can compare how a given type of query changes over time by querying on the same checksum.
--review  Saves the analysis results to a table. This analysis only parameterizes the query conditions, with one record per query type, so it is relatively simple. The next time --review is used, a statement that has already been analyzed is not recorded again.
--output  The output type of the analysis results. Possible values are report (the standard analysis report), slowlog (MySQL slow log), json, and json-anon; report is generally used because it is the easiest to read.
--since  The time to start parsing from. The value is a string: either a point in time in "yyyy-mm-dd [hh:mm:ss]" format, or a simple interval using s (seconds), h (hours), m (minutes), or d (days); for example, 12h means statistics start from 12 hours ago.
--until  The end time; combined with --since it lets you analyze slow queries within a given time range.
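Several of these options can be combined in a single run. As a sketch (the file names and the 12-hour window are illustrative, not taken from the original post), the following reports only the 10 slowest query types seen in the last 12 hours:

pt-query-digest --since=12h --limit=10 --output=report slow.log > top10_report.log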

3. Explanation of the standard analysis report

Part 1: overall statistics, for example:

Overall: the total number of queries; the example above has 266 queries in total.
Time range: the time range over which the queries were executed.
Unique: the number of unique queries, i.e. how many distinct queries remain after the query conditions are parameterized; 55 in the example.
Total / min / max / avg: total, minimum, maximum, and average values.
95%: all values sorted from smallest to largest, the value at the 95% position; this is generally the most valuable reference figure.
Median: the median, i.e. all values sorted from smallest to largest, the value in the middle position.

Part 2: grouped query statistics, for example:

As can be seen, this part parameterizes and groups the queries, then analyzes the execution of each query type; the results are sorted by total execution time from largest to smallest.
Response: total response time.
Time: the percentage of the overall analysis period taken up by this type of query.
Calls: number of executions, i.e. the total number of statements of this query type in this analysis.
R/Call: average response time per execution.
Item: the query object.

Part 3: detailed statistics for each query, for example:

As can be seen, in the detailed statistics for the 12th query, the table at the top lists the execution count, maximum, minimum, average, 95th percentile, and other statistics.
Databases: database name.
Users: number of executions per user (and percentage).
Query_time distribution: the distribution of query times; the bar length reflects the proportion of queries in each interval. In this example, the number of queries taking between 1s and 10s is more than twice the number taking over 10s.
Tables: the tables involved in the query.
Explain: an example SQL statement.
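The original post illustrated these three parts with screenshots that are not reproduced here. As a rough orientation, the first part of a pt-query-digest report begins with lines of the following shape, and the second part with a "Profile" table; apart from the 266 total and 55 unique counts quoted above, the numbers and dates below are placeholders, not real measurements:

# Overall: 266 total, 55 unique, 0.05 QPS, 0.01x concurrency
# Time range: 2014-04-17 09:30:00 to 2014-04-17 10:00:00
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Exec time           100s     1ms     10s      2s      5s      3s      1s

# Profile
# Rank Query ID           Response time  Calls R/Call V/M   Item
# ==== ================== ============== ===== ====== ===== ==========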
4. Usage examples

(1) Analyze a slow query file directly:
pt-query-digest slow.log > slow_report.log
(2) Analyze queries from the last 12 hours:

pt-query-digest --since=12h slow.log > slow_report2.log

(3) Analyze queries within a specified time range:

pt-query-digest slow.log --since '2014-04-17 09:30:00' --until '2014-04-17 10:00:00' > slow_report3.log

(4) Analyze slow queries containing SELECT statements:
pt-query-digest --filter '$event->{fingerprint} =~ m/^select/i' slow.log > slow_report4.log

(5) Slow queries for a specific user:
pt-query-digest --filter '($event->{user} || "") =~ m/^root/i' slow.log > slow_report5.log

(6) Query all slow queries that perform full table scans or full joins:
pt-query-digest --filter '(($event->{Full_scan} || "") eq "Yes") || (($event->{Full_join} || "") eq "Yes")' slow.log > slow_report6.log
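The --filter expressions are arbitrary Perl snippets evaluated against the $event hash, so other attributes can be used as well. For example, a sketch that keeps only queries slower than 5 seconds (the 5-second threshold and the output file name are illustrative assumptions):

pt-query-digest --filter '$event->{Query_time} > 5' slow.log > slow_report_gt5s.log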

(7) Save the query analysis to the query_review table:
pt-query-digest --user=root --password=abc123 --review h=localhost,D=test,t=query_review --create-review-table slow.log

(8) Save the query analysis to the query_history table:
pt-query-digest --user=root --password=abc123 --history h=localhost,D=test,t=query_history --create-history-table slow.log_20140401
pt-query-digest --user=root --password=abc123 --history h=localhost,D=test,t=query_history --create-history-table slow.log_20140402
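Once results have been written with --history, the trend of each query type can be inspected directly in MySQL by looking at rows sharing the same checksum. A sketch (the column names follow the default --history table layout; verify them against the table actually created in your environment):

mysql -uroot -p -e "SELECT checksum, ts_min, ts_max, Query_time_max, Query_time_pct_95 FROM test.query_history ORDER BY checksum, ts_min;"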

(9) Capture MySQL TCP protocol data with tcpdump, then analyze it:
tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt
pt-query-digest --type tcpdump mysql.tcp.txt > slow_report9.log

(10) Analyze the binlog:
mysqlbinlog mysql-bin.000093 > mysql-bin000093.sql
pt-query-digest --type=binlog mysql-bin000093.sql > slow_report10.log

(11) Analyze the general log:
pt-query-digest --type=genlog localhost.log > slow_report11.log

