Slow query log in MySQL


First of all, let's look at the kinds of logs in MySQL. The main ones are the error log, the general query log, the slow query log, the transaction log, and the binary log.

Logs are an important part of a MySQL database. The log files record the changes that occur while MySQL is running: client connection activity, the execution of SQL statements, and error information. When the database is accidentally damaged, the logs help you find the cause of the failure, and they can also be used for data recovery.

First, look at the log-related parameters:

mysql> show variables like '%log%';
+------------------------------------------+---------------------------------------+
| Variable_name | Value |
+------------------------------------------+---------------------------------------+
| back_log | 80 |
| binlog_cache_size | 32768 |
| binlog_checksum | CRC32 |
| binlog_direct_non_transactional_updates | OFF |
| binlog_error_action | IGNORE_ERROR |
| binlog_format | STATEMENT |
| binlog_gtid_simple_recovery | OFF |
| binlog_max_flush_queue_time | 0 |
| binlog_order_commits | ON |
| binlog_row_image | FULL |
| binlog_rows_query_log_events | OFF |
| binlog_stmt_cache_size | 32768 |
| binlogging_impossible_mode | IGNORE_ERROR |
| expire_logs_days | 0 |
| general_log | OFF |
| general_log_file | /data/mysql/localhost.log |
| innodb_api_enable_binlog | OFF |
| innodb_flush_log_at_timeout | 1 |
| innodb_flush_log_at_trx_commit | 1 |
| innodb_locks_unsafe_for_binlog | OFF |
| innodb_log_buffer_size | 8388608 |
| innodb_log_compressed_pages | ON |
| innodb_log_file_size | 50331648 |
| innodb_log_files_in_group | 2 |
| innodb_log_group_home_dir | ./ |
| innodb_mirrored_log_groups | 1 |
| innodb_online_alter_log_max_size | 134217728 |
| innodb_undo_logs | 128 |
| log_bin | OFF |
| log_bin_basename | |
| log_bin_index | |
| log_bin_trust_function_creators | OFF |
| log_bin_use_v1_row_events | OFF |
| log_error | /data/mysql/localhost.localdomain.err |
| log_output | FILE |
| log_queries_not_using_indexes | OFF |
| log_slave_updates | OFF |
| log_slow_admin_statements | OFF |
| log_slow_slave_statements | OFF |
| log_throttle_queries_not_using_indexes | 0 |
| log_warnings | 1 |
| max_binlog_cache_size | 18446744073709547520 |
| max_binlog_size | 1073741824 |
| max_binlog_stmt_cache_size | 18446744073709547520 |
| max_relay_log_size | 0 |
| relay_log | |
| relay_log_basename | |
| relay_log_index | |
| relay_log_info_file | relay-log.info |
| relay_log_info_repository | FILE |
| relay_log_purge | ON |
| relay_log_recovery | OFF |
| relay_log_space_limit | 0 |
| simplified_binlog_gtid_recovery | OFF |
| slow_query_log | OFF |
| slow_query_log_file | /data/mysql/localhost-slow.log |
| sql_log_bin | ON |
| sql_log_off | OFF |
| sync_binlog | 0 |
| sync_relay_log | 10000 |
| sync_relay_log_info | 10000 |
+------------------------------------------+---------------------------------------+
61 rows in set (0.03 sec)
Take a look at the slow-query-related parameters:

mysql> show variables like '%slow%';
+---------------------------+--------------------------------+
| Variable_name | Value |
+---------------------------+--------------------------------+
| log_slow_admin_statements | OFF |
| log_slow_slave_statements | OFF |
| slow_launch_time | 2 |
| slow_query_log | OFF |
| slow_query_log_file | /data/mysql/localhost-slow.log |
+---------------------------+--------------------------------+
5 rows in set (0.00 sec)
And the global slow-query status counters:

mysql> show global status like '%slow%';
+---------------------+-------+
| Variable_name | Value |
+---------------------+-------+
| Slow_launch_threads | 0 |
| Slow_queries | 0 |
+---------------------+-------+
 

First, explanation of some parameters

1. slow_launch_time: if creating a thread takes longer than this many seconds, the server increments the Slow_launch_threads status counter.
2. slow_query_log: whether to record slow queries; a query counts as "slow" when it runs longer than long_query_time.
3. slow_query_log_file: path of the slow query log file.
4. long_query_time: the threshold, in seconds, above which a query is written to the slow log (see the quick check after this list).
5. log-slow-admin-statements: also record administrative statements such as OPTIMIZE TABLE, ALTER TABLE, and so on.
6. log-queries-not-using-indexes: log queries that do not use an index, no matter how fast they run. Sometimes a query without an index is still very fast (for example, a scan of a very small table), but logging all of them may slow the server down and use a lot of disk space.
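A minimal check of the threshold, sketched on the assumption of a reasonably recent server (long_query_time accepts fractional seconds when the log is written to a file, since MySQL 5.1.21):

mysql> show variables like 'long_query_time';   -- current threshold, 10 seconds by default
mysql> set global long_query_time = 0.5;        -- applies to new connections
mysql> set session long_query_time = 0.5;       -- the current session keeps its old value unless set explicitly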

 

Second, configuration

# Place the following configuration in my.cnf
# (the older log_slow_queries option was removed in MySQL 5.6; slow_query_log works on 5.1 and later)

[mysqld]
slow_query_log = ON
slow_query_log_file = /var/lib/mysql/slow-queries.log
long_query_time = 1
log-queries-not-using-indexes
log-slow-admin-statements
We can also configure this at runtime after logging in (a quick verification sketch follows these statements):

set global slow_query_log = on;
set global slow_query_log_file = '/var/lib/mysql/slow-queries.log';
set global long_query_time = 1;
set global log_slow_admin_statements = on;
set global log_queries_not_using_indexes = on;
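A simple way to confirm that logging works, sketched under the settings above (SLEEP is used only to produce a statement that is deliberately slower than the 1-second long_query_time):

mysql> select sleep(2);                          -- takes about 2 seconds, above the threshold
mysql> show global status like 'Slow_queries';   -- the counter should have increased by 1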
View the log:

$ sudo tail -f /var/lib/mysql/dev-slow.log
SET timestamp = 1437572901;
show warnings;
# Time: 150722 21:48:49
# User@Host: root[root] @ localhost []
# Query_time: 0.021936 Lock_time: 0.000121 Rows_sent: 1 Rows_examined: 5294
SET timestamp = 1437572929;
select sql_no_cache id, bid, pic, cateid, title, tag, indextitle, dateline, summary from acc_info_article where status = 1 and find_in_set("84", bid) order by sort asc limit 1;
/usr/libexec/mysqld, Version: 5.5.29-log (MySQL Community Server (GPL)). started with:
Tcp port: 3306 Unix socket: /var/lib/mysql/mysql.sock
Explanation:

First line: the time the query was executed.
Second line: the user that executed the query.
Third line (the important one), whose fields are explained below, with a follow-up sketch after the list:

Query_time: how long the SQL statement took to execute; the longer, the slower.
Lock_time: time spent waiting for table locks at the MySQL server level (not inside the storage engine).
Rows_sent: number of rows the query returned.
Rows_examined: number of rows examined while executing the query.
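In the sample entry above, Rows_examined is 5294 while Rows_sent is only 1, which usually means the statement scans far more rows than it returns. A minimal follow-up sketch, assuming the acc_info_article table from the sample entry exists on your server:

mysql> explain select id, bid, pic, cateid, title
    ->        from acc_info_article
    ->        where status = 1 and find_in_set("84", bid)
    ->        order by sort asc limit 1\G

If the plan shows type: ALL with an empty key column, the statement is doing a full table scan.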

1. A slow-log entry does not always mean the statement itself is bad; it may be caused by table locks or an occasionally busy system:

a. A table lock keeps the query waiting; Lock_time shows how long the query waited for the lock to be released.
b. The data or indexes are not yet cached, which is common right after the server starts or when the server has not been tuned.
c. A database backup is running, so I/O becomes slow.
d. Other queries running at the same time slow down the current query.
e. Of course, if a SQL statement shows up in the slow log again and again, it is safe to conclude that it can be optimized.


2. Do not enable log-queries-not-using-indexes (logging of queries that do not use an index) lightly:

a. The option is less useful than it sounds: it simply records every query that does not use an index, regardless of its cost.
b. Whether the lack of an index actually hurts query speed depends on the amount of data involved.
c. Once it is enabled, queries such as select * from blog are also written to the log, and the file quickly fills with noise that drowns out the genuinely slow queries (if you do need it, see the throttling note after this list).
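If you do turn it on, MySQL 5.6.5 and later can cap the noise with log_throttle_queries_not_using_indexes (shown as 0, i.e. unthrottled, in the variable listing above); a minimal sketch:

mysql> set global log_queries_not_using_indexes = on;
mysql> set global log_throttle_queries_not_using_indexes = 10;   -- at most 10 such queries logged per minute; the rest are summarized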

 

Third, log analysis (to do a good job, one must first sharpen one's tools)

Two main tools are recommended.

1. mysqldumpslow

# mysqldumpslow slow-query.log

Reading mysql slow query log from slow-query.log
Count: 2  Time=11.00s (22s)  Lock=0.00s (0s)  Rows=1.0 (2), root[root]@mysql
  select count(N) from t_user;

mysqldumpslow -s c -t 10 /database/mysql/slow-query.log

This outputs the 10 most frequently recorded SQL statements, where:
-s specifies the sort order: c, t, l, r sort by count, query time, lock time, and rows returned; at, al, ar sort by the corresponding average values;
-t means "top N", i.e. how many statements to return;
-g takes a regular-expression pattern to match against; matching is case insensitive;

mysqldumpslow -s r -t 10 /database/mysql/slow-log
Get the 10 queries that return the most records.

mysqldumpslow -s t -t 10 -g "left join" /database/mysql/slow-log
Get the top 10 queries containing a left join, sorted by time.
With the mysqldumpslow command we can clearly extract whatever query statements we need. Monitoring, analyzing, and optimizing slow queries is a very important step in MySQL optimization. Once the slow query log is enabled, the logging itself consumes some CPU and affects MySQL performance, so it can instead be enabled periodically to locate performance bottlenecks.
2. pt-query-digest

Download and install

# wget https://www.percona.com/get/pt-query-digest
# chmod u+x pt-query-digest
Parameter explanation:

--create-review-table: when using --review to save results to a table, create the table automatically if it does not exist.
--create-history-table: when using --history to save results to a table, create the table automatically if it does not exist.
--filter: filter the slow-log events with the given expression before analysis.
--limit: limit the output to a number or percentage of results. The default is 20, i.e. the 20 slowest statements are output; a value such as 50% outputs the statements accounting for that share of total response time, sorted from largest to smallest.
--host: MySQL server address.
--user: MySQL user name.
--password: MySQL user password.
--history: save the analysis results to a table. The results are quite detailed; the next time --history is used, a statement with the same fingerprint but a different time interval is recorded as a new row, so the historical trend of a given query type can be compared by querying rows with the same CHECKSUM.
--review: save the analysis results to a table. This analysis only parameterizes the query conditions, one row per query type, which keeps it simple; the next time --review is used, a statement that has already been analyzed is not recorded again.
--output: output format of the analysis. Possible values are report (the standard analysis report), slowlog (MySQL slow log format), json, and json-anon; report is normally used because it is the easiest to read.
--since: where to start the analysis. The value is a string: either a point in time in the format "yyyy-mm-dd [hh:mm:ss]", or a simple duration using s (seconds), h (hours), m (minutes), d (days); for example, 12h means statistics start from 12 hours ago.
--until: end time; combined with --since it restricts the analysis to slow queries within a time window.
Usage:

(1) Direct analysis of slow query files:
pt-query-digest slow.log > slow_report.log

(2) Analyze the queries within the last 12 hours:
pt-query-digest --since=12h slow.log > slow_report2.log

(3) Analyze the query within the specified time range:
pt-query-digest slow.log --since '2014-04-17 09:30:00' --until '2014-04-17 10:00:00' >> slow_report3.log

(4) Analyze only the slow queries that are select statements:
pt-query-digest --filter '$event->{fingerprint} =~ m/^select/i' slow.log > slow_report4.log

(5) Slow queries for a given user:
pt-query-digest --filter '($event->{user} || "") =~ m/^root/i' slow.log > slow_report5.log

(6) All slow queries that do a full table scan or a full join:
pt-query-digest --filter '(($event->{Full_scan} || "") eq "yes") || (($event->{Full_join} || "") eq "yes")' slow.log > slow_report6.log

(7) Save the query to the query_review table
pt-query-digest --user=root --password=abc123 --review h=localhost,D=test,t=query_review --create-review-table slow.log

(8) Save the query to the query_history table
pt-query-digest --user=root --password=abc123 --review h=localhost,D=test,t=query_history --create-review-table slow.log_20140401
pt-query-digest --user=root --password=abc123 --review h=localhost,D=test,t=query_history --create-review-table slow.log_20140402

(9) Capture MySQL traffic on TCP port 3306 with tcpdump, then analyze it:
tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt
pt-query-digest --type=tcpdump mysql.tcp.txt > slow_report9.log

(10) Analyze a binlog:
mysqlbinlog mysql-bin.000093 > mysql-bin000093.sql
pt-query-digest --type=binlog mysql-bin000093.sql > slow_report10.log

(11) Analyze the general query log:
pt-query-digest --type=genlog localhost.log > slow_report11.log


Official documentation: http://www.percona.com/doc/percona-toolkit/2.2/pt-query-digest.html
 
