One. Slow log related parameters
long_query_time: sets the slow query threshold in seconds; any SQL statement that runs longer than this value is written to the slow query log (the MySQL default is 10s)
log_slow_queries: 1/0, whether to enable the slow query log (this parameter is deprecated and kept only for compatibility; it has been replaced by slow_query_log)
slow_query_log_file: specifies the slow log file location; it can be left empty, in which case the system supplies a default file name
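To make these settings survive a restart, they can also go into the server configuration file. A minimal my.cnf fragment, assuming the log directory used later in this article (the path is illustrative):

```ini
[mysqld]
# Enable the slow query log (replaces the deprecated log_slow_queries)
slow_query_log = 1
# Statements running longer than this many seconds are logged
long_query_time = 1
# If omitted, MySQL defaults to host_name-slow.log in the data directory
slow_query_log_file = /data/log/mysql/slowlog/mysql-slow.log
```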
Two. Slow log processing schemes
1 Rotate the slow log daily and insert it into a remote database for storage with pt-query-digest
2 Keep 30 days of rotated slow logs; purge anything older
3 Send the slow logs daily by e-mail
4 For databases with special needs, filter manually with pt-query-digest on the database side
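Step 2 above (keep 30 days, purge the rest) can be sketched as a small cron job; the directory and the `slowquery_*.log` naming pattern are assumptions matching the hourly file names used later:

```shell
#!/bin/sh
# Purge rotated slow logs older than 30 days; keep everything newer.
# Directory and file pattern are illustrative assumptions.
slowlog_dir="/data/log/mysql/slowlog"
if [ -d "$slowlog_dir" ]; then
    find "$slowlog_dir" -name 'slowquery_*.log' -type f -mtime +30 -delete
fi
```

Run once a day from cron; `-mtime +30` matches files last modified more than 30 days ago.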
Three. Slow log rotation schemes
1 Rotate hourly with the Lepus (天兔) script
Core principle of the script:
tmp_log=`$mysql_client -h$mysql_host -P$mysql_port -u$mysql_user -p$mysql_password -e "select concat('$slowquery_dir','slowquery_',date_format(now(),'%Y%m%d%H'),'.log');" | grep log | sed -n -e '2p'`
set global slow_query_log = 1;
set global long_query_time = 1;
set global slow_query_log_file = '$tmp_log';
From this code we can see that the approach is to repoint the server at a new slow_log file, then have pt-query-digest read the old file before the next rotation produces a new one. This approach is recommended.
The script is on the Internet.
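The hourly file-naming part of the script above can be reproduced locally without a MySQL connection; this sketch substitutes the shell's `date` for the server-side `DATE_FORMAT(NOW(), ...)`, and the directory is an assumption:

```shell
#!/bin/sh
# Build the hourly slow-log file name, e.g. slowquery_2024051317.log
# (directory is illustrative; the real script derives the timestamp server-side).
slowquery_dir="/data/log/mysql/slowlog/"
tmp_log="${slowquery_dir}slowquery_$(date +%Y%m%d%H).log"
echo "$tmp_log"
# The rotation script would then point MySQL at the new file:
#   mysql -e "set global slow_query_log_file='$tmp_log';"
```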
2 Rotation with logrotate
/data/log/mysql/slowlog/mysql-slow.log {
    # rotate by date, once a day; keep 30 rotations (30 days)
    daily
    rotate 30
    # if the log file is missing, move on to the next one without raising an error
    missingok
    # postpone compression until the next rotation cycle
    delaycompress
    # see the note below
    copytruncate
    postrotate
        pt-query-digest --user=anemometer --password=anemometer \
            --review h=node17,D=slow_query_log,t=global_query_review \
            --history h=node17,D=slow_query_log,t=global_query_review_history \
            --no-report --limit=0% \
            --filter="\$event->{Bytes} = length(\$event->{arg}) and \$event->{hostname}=\"$HOSTNAME\"" \
            /data/log/mysql/slowlog/mysql-slow.log-`date +%Y%m%d`
        #/usr/local/mysql/bin/mysql -e "flush slow logs;"
    endscript
}
This approach inserts into the remote database before rotating; the configuration feels more complex. Readers who are familiar with this tool can adopt it.
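The copytruncate directive deserves a word here: instead of renaming the file and asking the writer to reopen it, logrotate copies the live file aside and then truncates the original in place, so mysqld keeps writing to the same file descriptor. A minimal shell illustration of that copy-then-truncate sequence, on a scratch file:

```shell
#!/bin/sh
# Illustrate what logrotate's copytruncate does, using a scratch file.
log=$(mktemp)
printf 'select sleep(2);\n' >> "$log"
cp "$log" "$log.1"    # 1. copy the current contents aside
: > "$log"            # 2. truncate the original without replacing the inode
# The writer never sees the file change, unlike a rename-and-reopen scheme;
# the trade-off is that lines written between the copy and the truncate are lost.
```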
Four. Notes on related issues
The four collection modes:
1 MySQL built-in
2 Lepus (天兔)
3 Anemometer
4 Custom collection under a client/server architecture
1 Collection scripts
What they have in common: pt-query-digest
Where they differ: the --filter rules and the history table fields are not the same
2 Handling of hostname
1 The built-in approach cannot solve this problem; the corresponding field is missing
2 Lepus adds server_id:
--filter="\$event->{add_column} = length(\$event->{arg}) and \$event->{serverid}=$lepus_server_id"
3 Anemometer adds hostname:
--filter="\$event->{Bytes} = length(\$event->{arg}) and \$event->{hostname}=\"$HOSTNAME\""
4 pt-query-digest: differences between the two tables
--history saves the analysis results to a table, and the results are detailed. The next time you use --history, if the same statement appears but the query falls in a different time period from what is already in the history table, it is recorded in the table again, so you can compare the historical changes of a class of queries by querying on the same checksum.
--review saves the analysis results to a table; this analysis only parameterizes the query conditions, one record per class of queries, which is relatively simple. The next time you use --review, if the same statement is analyzed again, it is not recorded in the table.
Five. Summary
The core tool is always pt-query-digest; the schemes differ in their filter rules, the design of the related tables, and the front-end display.
MySQL series, article 36: slow log scheme interpretation (1)