Original: http://blog.csdn.net/seteor/article/details/24017913
1. percona-toolkit Installation
wget http://www.percona.com/redir/downloads/percona-toolkit/2.1.1/percona-toolkit-2.1.1-1.noarch.rpm
wget http://pkgs.repoforge.org/perl-TermReadKey/perl-TermReadKey-2.30-1.el5.rf.x86_64.rpm
rpm -ivh perl-TermReadKey-2.30-1.el5.rf.x86_64.rpm
rpm -ivh percona-toolkit-2.1.1-1.noarch.rpm
2. Syntax and important options
pt-query-digest [OPTIONS] [FILES] [DSN]
--create-review-table  When --review is used to save the analysis results to a table, create that table automatically if it does not exist.
--create-history-table  When --history is used to save the analysis results to a table, create that table automatically if it does not exist.
--filter  Match and filter the input slow queries against the given expression before analysis.
--limit  Limit the output to a count or percentage. The default is 20, i.e. output the 20 slowest statements; a value such as 50% means sort by total response time descending and output until the cumulative share reaches 50%.
--host  MySQL server address.
--user  MySQL user name.
--password  MySQL user password.
--history  Save the analysis results to a table; the results are quite detailed. On later runs with --history, if the same statement appears in a time period different from the one recorded in the history table, it is recorded again, so you can query the same checksum to compare how a query class changes over time.
--review  Save the analysis results to a table; this analysis only parameterizes the query conditions, one record per query class, so it is relatively simple. On later runs with --review, a statement that has already been analyzed is not recorded again.
--output  Output format of the analysis. Values are report (standard analysis report), slowlog (MySQL slow log), json, json-anon; report is the usual choice and the easiest to read.
--since  Starting point of the analysis. The value is a string: either a point in time in "yyyy-mm-dd [hh:mm:ss]" format, or a simple interval with a unit suffix of s (seconds), h (hours), m (minutes), or d (days); e.g. 12h means statistics start from 12 hours ago.
--until  End point of the analysis; combined with --since it lets you analyze slow queries within a time range.
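The --review and --history options group statements by query class: literals are stripped out ("parameterized") and a checksum identifies each class. A minimal Python sketch of the idea (far simpler than pt-query-digest's real fingerprint rules; the regexes and the 16-character checksum here are illustrative only):

```python
import hashlib
import re

def fingerprint(sql: str) -> str:
    """Crude query fingerprint: lowercase and strip literals
    (a simplified version of what pt-query-digest does)."""
    s = sql.strip().lower()
    s = re.sub(r"'[^']*'", "?", s)   # string literals -> ?
    s = re.sub(r"\b\d+\b", "?", s)   # numeric literals -> ?
    s = re.sub(r"\s+", " ", s)       # collapse whitespace
    return s

def checksum(sql: str) -> str:
    """Identifier for a query class, playing the role of the checksum
    column in the query_review / query_history tables."""
    return hashlib.md5(fingerprint(sql).encode()).hexdigest()[:16]

a = "SELECT * FROM users WHERE id = 42"
b = "select *  from users where id = 7"
print(fingerprint(a))              # select * from users where id = ?
print(checksum(a) == checksum(b))  # True: same query class
```

Two statements that differ only in their literal values collapse to one fingerprint, which is why --review stores a single record per query class.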
3. The standard analysis report explained
Part I: overall statistical results, for example:
Overall: the total number of queries; in this example, 266. Time range: the time span over which the queries executed. Unique: the number of unique queries, i.e. how many distinct query classes remain after the query conditions are parameterized; here, 55. total, min, max, avg: the usual aggregates. 95%: with all values sorted ascending, the value at the 95% position; this is generally the most useful reference number. Median: the middle value when all values are sorted ascending. Part II: grouped query statistics, for example:
As shown, this part parameterizes and groups the queries, then analyzes the execution of each query class, with results sorted by total execution time, descending. Response: total response time. time: this query class's share of the total time in the analyzed period. Calls: the number of executions, i.e. how many statements of this class appear in the analysis. R/Call: average response time per execution. Item: the query object. Part III: detailed statistics for each query, for example:
As shown, these are the detailed statistics for query #12. The table at the top lists the execution count, max, min, avg, 95% and other aggregates. Databases: the database name. Users: the number of executions per user (with percentages). Query_time distribution: the distribution of query times, where bar length reflects each interval's share of executions; in this example, there are twice as many queries in the 1s-10s range as there are over 10s. Tables: the tables involved in the query. Explain: a sample of the query.
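The Part I columns can be made concrete: 95% is the value at the 95% position of the ascending-sorted list, and Median is the middle value. A small sketch (the millisecond timings are made up, and the exact position convention may differ slightly from pt-query-digest's internals):

```python
def percentile(values, pct):
    """Value at the pct% position of the ascending-sorted list."""
    s = sorted(values)
    idx = max(0, round(pct / 100 * len(s)) - 1)
    return s[idx]

def median(values):
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

# hypothetical query times in milliseconds
times = [10, 12, 15, 18, 20, 22, 25, 30, 35, 40,
         45, 50, 60, 80, 100, 150, 200, 400, 1200, 9500]
print(max(times))             # 9500 -- dominated by one outlier
print(percentile(times, 95))  # 1200 -- excludes the worst outlier
print(median(times))          # 42.5 -- the typical query is fast
```

This is why the report's 95% column is usually a better reference than max: a single pathological execution cannot drag it up.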
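Part II's grouping can be sketched as: fingerprint each event, accumulate total response time and call count per query class, then sort descending by total time. The event data below is made up for illustration, and the fingerprint is a crude stand-in for pt-query-digest's:

```python
import re
from collections import defaultdict

def fingerprint(sql):
    # crude parameterization: numeric literals -> ?
    return re.sub(r"\b\d+\b", "?", sql.lower())

# hypothetical (query, response-time-in-seconds) events from a slow log
events = [
    ("SELECT * FROM t WHERE id = 1", 0.5),
    ("SELECT * FROM t WHERE id = 2", 1.5),
    ("UPDATE t SET x = 1 WHERE id = 3", 4.0),
]

groups = defaultdict(lambda: {"response": 0.0, "calls": 0})
for sql, t in events:
    g = groups[fingerprint(sql)]
    g["response"] += t
    g["calls"] += 1

total = sum(g["response"] for g in groups.values())
# Part II ordering: total response time, largest first
for fp, g in sorted(groups.items(), key=lambda kv: -kv[1]["response"]):
    print(f"{g['response']:.1f}s {100 * g['response'] / total:.0f}% "
          f"{g['calls']} calls, R/Call={g['response'] / g['calls']:.2f}s  {fp}")
# the UPDATE class comes first (4.0s, 67%), then the SELECT class (2.0s, 33%)
```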
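The query_time distribution in Part III is a histogram over order-of-magnitude intervals (1us, 10us, ..., 1s, 10s+), each bar's length reflecting how many executions fall in that interval. A sketch with made-up times, arranged so the 1s bucket is twice as tall as the 10s+ bucket, as in the example described above:

```python
from collections import Counter

# interval lower bounds in seconds, each labeled by that lower bound
BOUNDS = [1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0]
LABELS = ["1us", "10us", "100us", "1ms", "10ms", "100ms", "1s", "10s+"]

def bucket(seconds):
    """Label of the interval a query time falls into."""
    for upper, label in zip(BOUNDS[1:], LABELS[:-1]):
        if seconds < upper:
            return label
    return LABELS[-1]

# hypothetical query times in seconds
times = [0.0005, 0.003, 0.02, 0.02, 2.0, 3.0, 15.0]
hist = Counter(bucket(t) for t in times)
for label in LABELS:
    print(f"{label:>6} {'#' * hist.get(label, 0)}")
```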
4. Usage examples
(1) Analyze a slow query file directly:
pt-query-digest slow.log > slow_report.log
(2) Analyze queries from the last 12 hours:
pt-query-digest --since=12h slow.log > slow_report2.log
(3) Analyze queries within a specified time range:
pt-query-digest slow.log --since '2014-04-17 09:30:00' --until '2014-04-17 10:00:00' > slow_report3.log
(4) Analyze only slow queries that are SELECT statements:
pt-query-digest --filter '$event->{fingerprint} =~ m/^select/i' slow.log > slow_report4.log
(5) Slow queries issued by a particular user:
pt-query-digest --filter '($event->{user} || "") =~ m/^root/i' slow.log > slow_report5.log
(6) All slow queries doing a full table scan or a full join:
pt-query-digest --filter '(($event->{full_scan} || "") eq "yes") || (($event->{full_join} || "") eq "yes")' slow.log > slow_report6.log
(7) Save the query analysis to the query_review table:
pt-query-digest --user=root --password=abc123 --review h=localhost,D=test,t=query_review --create-review-table slow.log
(8) Save the query analysis to the query_history table:
pt-query-digest --user=root --password=abc123 --review h=localhost,D=test,t=query_history --create-review-table slow.log_20140401
pt-query-digest --user=root --password=abc123 --review h=localhost,D=test,t=query_history --create-review-table slow.log_20140402
(9) Capture MySQL TCP protocol traffic with tcpdump, then analyze it:
tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt
pt-query-digest --type tcpdump mysql.tcp.txt > slow_report9.log
(10) Analyze a binlog:
mysqlbinlog mysql-bin.000093 > mysql-bin000093.sql
pt-query-digest --type=binlog mysql-bin000093.sql > slow_report10.log
(11) Analyze the general log:
pt-query-digest --type=genlog localhost.log > slow_report11.log
Official documentation: http://www.percona.com/doc/percona-toolkit/2.2/pt-query-digest.html