Finding slow SQL statements in MySQL
How can you find slow SQL statements in MySQL? This is a problem for many people. MySQL uses the slow query log to locate SQL statements with low execution efficiency: when the --log-slow-queries[=file_name] option is enabled, mysqld writes a log file containing every SQL statement whose execution time exceeds long_query_time. You can examine this log file to locate the inefficient statements. The following describes how to find slow SQL statements in MySQL.
I. MySQL database has several configuration options to help us capture inefficient SQL statements in a timely manner
1, slow_query_log
Set this parameter to ON to capture SQL statements whose execution time exceeds the configured threshold.
2, long_query_time
When an SQL statement's execution time exceeds this value, it is recorded in the log. A setting of 1 second or less is recommended.
3, slow_query_log_file
The log file name.
4, log_queries_not_using_indexes
Set this parameter to ON to capture all SQL statements that do not use an index, even though such statements may execute very quickly.
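Taken together, the four options above amount to a small [mysqld] fragment. As a rough illustration, here is a Python sketch that assembles such a fragment; the log path and 1-second threshold are example values, not required settings.

```python
# Sketch: build a [mysqld] fragment for the four slow-query options
# described above. Path and threshold are illustrative defaults.

def slow_log_config(log_file="/var/log/mysql/slow.log",
                    threshold_seconds=1,
                    log_unindexed=True):
    """Return my.cnf lines that enable the slow query log."""
    lines = [
        "[mysqld]",
        "slow_query_log = ON",
        "long_query_time = %s" % threshold_seconds,
        "slow_query_log_file = %s" % log_file,
    ]
    if log_unindexed:
        # Also capture statements that use no index, even fast ones.
        lines.append("log_queries_not_using_indexes = ON")
    return "\n".join(lines)

print(slow_log_config())
```

The directory holding the log file must be writable by the account MySQL runs under, as noted below.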
II. How to check the efficiency of SQL statements in MySQL
1. Slow query logs
(1) enabling MySQL slow query in Windows
In Windows, the MySQL configuration file is usually my.ini. Find the [mysqld] section and add:
The code is as follows:
log-slow-queries = F:/MySQL/log/mysqlslowquery.log
long_query_time = 2
(2) Enable MySQL slow query in Linux
In Linux, the MySQL configuration file is usually my.cnf. Find the [mysqld] section and add:
The code is as follows:
log-slow-queries = /data/mysqldata/slowquery.log
long_query_time = 2
Description
log-slow-queries specifies where the slow query log is stored. The account MySQL runs under generally needs write permission on this directory, which is usually set to the MySQL data directory;
long_query_time = 2 means that queries taking more than 2 seconds are recorded.
2. show processlist command
show processlist shows which threads are running. You can also use the mysqladmin processlist command to obtain this information.
Meaning and purpose of each column:
ID column
An identifier, useful when you want to kill a statement: mysqladmin kill <process id>.
User Column
If the user is not the root user, this command only displays the SQL statements within your permission range.
Host Column
Displays the host and port the statement was sent from; useful for tracking down the user behind a problematic statement.
Db column
Displays the database to which the process is currently connected.
Command Column
Displays the command being executed by the current connection, usually sleep, query, or connect.
Time column
The duration of this status, in seconds.
State Column
Displays the state of the SQL statement on the current connection. This is an important column, described in more detail below. Note that the state is only one stage in the execution of a statement; a single query, for example, may pass through states such as Copying to tmp table, Sorting result, and Sending data before it completes.
Info Column
Displays the SQL statement itself. Because of the length limit, long statements are not shown in full, but this column is an important basis for identifying the problematic statement.
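As an illustration of how the Id, Command, Time, and Info columns are typically used together, here is a minimal Python sketch that filters a processlist (modeled as a list of dicts) down to long-running, non-sleeping threads. The threshold and sample rows are invented for the example.

```python
# Sketch: given rows shaped like SHOW PROCESSLIST output (one dict per
# thread), pick out queries that have been running too long.

def long_runners(processlist, threshold=5):
    """Return (Id, Time, Info) for non-sleeping threads at or over threshold seconds."""
    hits = []
    for row in processlist:
        if row["Command"] != "Sleep" and row["Time"] >= threshold:
            hits.append((row["Id"], row["Time"], row["Info"]))
    return sorted(hits, key=lambda r: -r[1])  # longest-running first

sample = [
    {"Id": 7, "Command": "Query", "Time": 123, "Info": "SELECT * FROM big_table"},
    {"Id": 8, "Command": "Sleep", "Time": 600, "Info": None},
    {"Id": 9, "Command": "Query", "Time": 2,   "Info": "SELECT 1"},
]
print(long_runners(sample))
```

A real script would feed this from a `SHOW PROCESSLIST` query; the filtering idea is the same.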
The most important part of this command's output is the state column. MySQL lists the following states:
Checking table
Checking the data table (this happens automatically).
Closing tables
Flushing the modified data in the table to disk and closing the used tables. This is a very fast operation; if it is not, check whether the disk is full or under heavy load.
Connect Out
The replication slave server is connecting to the master server.
Copying to tmp table on disk
Because the temporary result set is larger than tmp_table_size, the temporary table is being converted from memory storage to disk storage to save memory.
Creating tmp table
Creating a temporary table to store some query results.
Deleting from main table
The server is executing the first part of a multi-table delete and is deleting only from the first table.
Deleting from reference tables
The server is executing the second part of a multi-table delete and is deleting records from the other tables.
Flushing tables
Executing FLUSH TABLES and waiting for other threads to close their tables.
Killed
A kill request has been sent to the thread; it will abort the next time it checks the kill flag. MySQL checks the flag in each main loop, but in some cases the thread may still take a short while to die. If the thread is locked by another thread, the kill takes effect as soon as that lock is released.
Locked
It is locked by other queries.
Sending data
Processing the record of the SELECT query and sending the result to the client.
Sorting for group
Sorting for a GROUP BY.
Sorting for order
Sorting for an ORDER BY.
Opening tables
Opening a data table. This should be fast unless something interferes; for example, an ALTER TABLE or LOCK TABLE statement can prevent a table from being opened until it completes.
Removing duplicates
A SELECT DISTINCT query is being executed, and MySQL could not optimize away the duplicates in an earlier stage, so it must remove duplicate records before sending the results to the client.
Reopen table
The thread obtained a lock on the table, but the lock was granted only after the table structure had been modified. The lock has been released, the table closed, and the thread is now reopening the table.
Repair by sorting
The REPAIR command is sorting in order to create indexes.
Repair with keycache
The REPAIR command is creating new indexes one by one using the key cache. This is slower than Repair by sorting.
Searching rows for update
The thread is searching for rows that match the conditions of an UPDATE; this must finish before the matching records can be updated.
Sleeping
Waiting for the client to send a new request.
System lock
Waiting to obtain an external system lock. If you are not running multiple mysqld servers that request the same tables, you can disable external system locks with the --skip-external-locking option.
Upgrading lock
INSERT DELAYED is trying to obtain a table lock in order to insert new records.
Updating
Searching for matched records and modifying them.
User Lock
Waiting for GET_LOCK ().
Waiting for tables
The thread has been notified that the table structure has been modified and it needs to reopen the table to get the new structure. To reopen it, however, it must wait until all other threads have closed the table. This notification is generated by: FLUSH TABLES tbl_name, ALTER TABLE, RENAME TABLE, REPAIR TABLE, ANALYZE TABLE, or OPTIMIZE TABLE.
Waiting for handler insert
INSERT DELAYED has processed all pending inserts and is waiting for new ones.
Most states correspond to fast operations. If a thread stays in the same state for several seconds, there may be a problem that needs checking.
There are other states not listed above, but most of them matter only when checking for server errors.
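To make the state list above actionable, a monitoring script might simply tally states across the processlist and flag the suspicious ones. A minimal sketch, with an illustrative (not exhaustive) watch list and made-up rows:

```python
# Sketch: tally SHOW PROCESSLIST states and flag the trouble signs
# singled out in the text above. Watch list and rows are illustrative.
from collections import Counter

WATCH = {"Locked", "Copying to tmp table on disk", "Waiting for tables"}

def state_summary(processlist):
    """Return (counts of all non-empty states, subset matching the watch list)."""
    counts = Counter(row["State"] for row in processlist if row["State"])
    flagged = {s: n for s, n in counts.items() if s in WATCH}
    return counts, flagged

rows = [
    {"State": "Sending data"},
    {"State": "Locked"},
    {"State": "Locked"},
    {"State": ""},
]
counts, flagged = state_summary(rows)
print(flagged)  # {'Locked': 2}
```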
3. Using EXPLAIN to analyze SQL execution
EXPLAIN shows how MySQL uses indexes to process a SELECT statement and join tables. It can help you choose better indexes and write better-optimized queries.
Usage: prepend EXPLAIN to the SELECT statement.
For example:
explain select surname, first_name from a, b where a.id = b.id;
Description of the EXPLAIN Column
Table
The table that this row of output refers to.
Type
This is an important column: it shows the join type used. The join types, from best to worst, are: const, eq_ref, ref, range, index, and ALL.
Possible_keys
Displays the indexes that might apply to this table. If empty, there is no usable index; in that case you can often improve performance by creating an index on the columns used in the WHERE clause.
Key
The index actually used. If NULL, no index was used. In rare cases MySQL chooses a poorly optimized index; you can then add USE INDEX (indexname) to the SELECT statement to force an index, or IGNORE INDEX (indexname) to make MySQL ignore it.
Key_len
The length of the index used. The shorter the length, the better.
Ref
Shows which column of the index is used, or a constant where possible.
Rows
The number of rows MySQL believes it must examine to return the requested data.
Extra
Additional information about how MySQL resolves the query (discussed in table 4.3). The bad values to see here are Using temporary and Using filesort, which mean MySQL cannot use an index at all and retrieval will be slow.
Meanings of the values returned in the Extra column
Distinct
Once MySQL finds the first matching row, it stops searching for more.
Not exists
MySQL has optimized the LEFT JOIN: once it finds a row that matches the LEFT JOIN criteria, it stops searching for more.
Range checked for each record (index map: #)
No ideal index was found, so for each row combination from the preceding tables, MySQL checks which index to use and uses it to fetch rows from this table. This is one of the slowest index-based joins.
Using filesort
When you see this, the query needs optimizing. MySQL needs an extra pass to work out how to return the rows in sorted order: it walks all rows according to the join type and stores the sort key and row pointer for every row matching the condition.
Using index
Column data is returned using only the information in the index, without reading the actual row. This happens when all requested columns of the table are part of the same index.
Using temporary
When you see this, the query needs optimizing. MySQL needs to create a temporary table to hold the result. This typically happens when ORDER BY is applied to a different column set than the GROUP BY.
Where used
A WHERE clause is used to restrict which rows are matched against the next table or returned to the client. Unless you deliberately intend to fetch all rows of the table, a join type of ALL or index here usually indicates a problem with the query.
Interpretation of the different join types (sorted from best to worst):
Const
At most one row of the table can match this query (the index may be the primary key or a unique index). Because only one row matches, its values are effectively constants: MySQL reads the row once and then treats it as a constant.
Eq_ref
When joining, MySQL reads exactly one record from this table for each combination of rows from the preceding tables. It is used when the join uses all parts of a primary key or unique index.
Ref
This join type occurs when the query uses a key that is neither unique nor a primary key, or uses only a leftmost prefix of such a key. For each row combination from the preceding tables, all matching records are read from this table. This type depends heavily on how many records the index matches: the fewer, the better.
Range
This join type uses an index to return rows in a range, for example when searching with > or <.
Index
This join type does a full index scan for each combination of rows from the preceding tables (better than ALL, because the index is usually smaller than the table data).
ALL
This join type does a full table scan for each combination of rows from the preceding tables. This is generally bad and should be avoided whenever possible.
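The best-to-worst ordering of join types and the bad Extra values can be encoded in a small helper that grades one EXPLAIN row. This is only a sketch, not an official scoring scheme; the sample row is invented.

```python
# Sketch: rank an EXPLAIN row using the join-type order given above
# (const best, ALL worst) and the Extra warnings discussed in the text.

TYPE_RANK = ["system", "const", "eq_ref", "ref", "range", "index", "ALL"]
BAD_EXTRA = ("Using temporary", "Using filesort")

def grade_explain_row(row):
    """Return (rank, warnings). Lower rank is better."""
    rank = TYPE_RANK.index(row["type"]) if row["type"] in TYPE_RANK else len(TYPE_RANK)
    warnings = [w for w in BAD_EXTRA if w in row.get("Extra", "")]
    if row["type"] == "ALL":
        warnings.append("full table scan")
    return rank, warnings

row = {"table": "a", "type": "ALL", "Extra": "Using where; Using filesort"}
print(grade_explain_row(row))  # (6, ['Using filesort', 'full table scan'])
```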
MySQL: viewing slow SQL statements
1. Check whether MySQL has slow query logging enabled
(1) Check whether slow SQL logs are enabled
mysql> show variables like 'log_slow_queries';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| log_slow_queries | ON    |
+------------------+-------+
1 row in set (0.00 sec)
(2) Check how many seconds an SQL statement must run before it is recorded in the log file.
mysql> show variables like 'long_query_time';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| long_query_time | 1     |
+-----------------+-------+
1 row in set (0.00 sec)
Here Value = 1 means 1 second.
2. Configure the my.ini file (named my.cnf on Linux): find the [mysqld] section and add the log settings, as in the following example:
[mysqld]
log = "C:/temp/mysql.log"
log_slow_queries = "C:/temp/mysql_slow.log"
long_query_time = 1
log specifies the general query log file;
log_slow_queries specifies the file that records slow SQL statements;
long_query_time is the execution-time threshold, in seconds.
In Linux, these configuration items usually already exist but are commented out; you can remove the comments, although adding the items directly also works.
After an inefficient SQL statement has been found, you can use the EXPLAIN or DESC command to see how MySQL executes the SELECT statement, including the order in which tables are read and joined. For example, to calculate the total sales of all companies in 2006, we need to join the sales table with the company table and SUM the profit column. The execution plan of the corresponding SQL statement is as follows:
mysql> explain select sum(profit) from sales a, company b where a.company_id = b.id and a.year = 2006 \G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: a
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 12
        Extra: Using where
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: b
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 12
        Extra: Using where
2 rows in set (0.00 sec)
Each column is interpreted as follows:
• select_type: the type of SELECT. Common values include SIMPLE (a simple SELECT, without UNION or subqueries), PRIMARY (the outermost query), UNION (the second or later SELECT in a UNION), SUBQUERY (the first SELECT in a subquery), and so on.
• Table: The table of the output result set.
• type: the access type of the table. From best to worst performance, the types are: system (the table has only one row, i.e. a constant table); const (at most one matching row in the table, e.g. a lookup by primary key or unique index); eq_ref (for each row from the preceding tables, exactly one record is read from this table — in short, a primary key or unique index used in a multi-table join); ref (like eq_ref, but using an ordinary index instead of a primary key or unique index); ref_or_null (like ref, but the condition also includes a search for NULL); index_merge (the index merge optimization); unique_subquery (an IN subquery on a primary key column); index_subquery (like unique_subquery, but the IN subquery is on a non-unique index column); range (a range scan on a single table); index (for each row from the preceding tables, data is obtained by scanning the index); and ALL (for each row from the preceding tables, data is obtained by a full table scan).
• Possible_keys: indicates the index that may be used during query.
• Key: indicates the actually used index.
• Key_len: the length of the index field.
• Rows: number of rows scanned.
• Extra: Description and description of execution.
From the example above, we can confirm that the full table scan of table a causes the unsatisfactory efficiency. Create an index on the year column of table a as follows:
mysql> create index idx_sales_year on sales (year);
Query OK, 12 rows affected (0.01 sec)
Records: 12 Duplicates: 0 Warnings: 0
After an index is created, the execution plan of this statement is as follows:
mysql> explain select sum(profit) from sales a, company b where a.company_id = b.id and a.year = 2006 \G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: a
         type: ref
possible_keys: idx_sales_year
          key: idx_sales_year
      key_len: 4
          ref: const
         rows: 3
        Extra:
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: b
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 12
        Extra: Using where
2 rows in set (0.00 sec)
After the index is created, the number of rows scanned in table a drops significantly (from a full table scan to 3 rows). Clearly, indexes can greatly improve database access speed, and this advantage becomes even more pronounced as tables grow. Using indexes is a basic, commonly used technique for optimizing problematic SQL; later sections introduce index-based SQL optimization in more detail.
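A rough way to see why the plan improved: since MySQL nests the scans, multiply the rows estimates across the EXPLAIN output rows. This is only a back-of-the-envelope cost figure, using the numbers from the two plans above (12 and 12 before the index, 3 and 12 after).

```python
# Sketch: a crude join-cost figure is the product of the "rows" estimates
# across the EXPLAIN output rows, since the scans are nested.

def estimated_examined_rows(rows_per_table):
    total = 1
    for r in rows_per_table:
        total *= r
    return total

before = estimated_examined_rows([12, 12])  # both tables full-scanned
after = estimated_examined_rows([3, 12])    # idx_sales_year narrows table a
print(before, after)  # 144 36
```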
This article mainly introduces methods for analyzing MySQL slow queries. A few days ago I set up logging for SQL statements that run slower than 1 second in a MySQL database. There are several useful ways to configure this, and I can never remember the parameter names, so I am noting them down again.
For troubleshooting and identifying performance bottlenecks, the most common problems are slow MySQL queries and queries without indexes.
OK, let's start hunting for the SQL statements in MySQL that are not performing well.
MySQL slow query Analysis Method 1:
I am using this method.
Recent versions of MySQL support recording slow SQL statements.
mysql> show variables like 'long%';
Note: long_query_time defines how many seconds a query must take to count as a "slow query".
+-----------------+-----------+
| Variable_name   | Value     |
+-----------------+-----------+
| long_query_time | 10.000000 |
+-----------------+-----------+
1 row in set (0.00 sec)
mysql> set long_query_time = 1;
Note: I set it to 1, meaning any query taking more than 1 second counts as slow.
Query OK, 0 rows affected (0.00 sec)
mysql> show variables like 'slow%';
+---------------------+---------------+
| Variable_name       | Value         |
+---------------------+---------------+
| slow_launch_time    | 2             |
| slow_query_log      | ON            |
| slow_query_log_file | /tmp/slow.log |
+---------------------+---------------+
3 rows in set (0.00 sec)
Note: slow_query_log controls whether logging is enabled; slow_query_log_file sets where the log is written.
mysql> set global slow_query_log = 'ON';
Note: this enables logging.
Once the slow_query_log variable is set to ON, MySQL starts recording immediately.
You can set initial values for these global variables in /etc/my.cnf:
long_query_time = 1
slow_query_log_file = /tmp/slow.log
MySQL slow query Analysis Method 2:
The mysqldumpslow command
/path/mysqldumpslow -s c -t 10 /tmp/slow-log
This outputs the 10 SQL statements that appear most often in the log, where:
-s specifies the sort order: c, t, l, and r sort by record count, time, query time, and rows returned; ac, at, al, and ar are the corresponding reverse sorts;
-t means top n, i.e. how many entries to return;
-g takes a regular expression pattern; matching is case-insensitive.
For example
/path/mysqldumpslow -s r -t 10 /tmp/slow-log
This returns the 10 queries that returned the most rows.
/path/mysqldumpslow -s t -t 10 -g "left join" /tmp/slow-log
This returns the first 10 statements, sorted by time, that contain a left join.
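mysqldumpslow groups similar statements by abstracting away their literal values before counting. The following Python sketch mimics that idea with a deliberately crude normalization; the log lines are made up, and the real tool does considerably more.

```python
# Sketch of what mysqldumpslow's "-s c" does: normalize literals so
# similar queries group together, then count occurrences.
import re
from collections import Counter

def normalize(sql):
    sql = re.sub(r"'[^']*'", "'S'", sql)   # string literals -> 'S'
    sql = re.sub(r"\b\d+\b", "N", sql)     # numeric literals -> N
    return sql.strip()

def top_by_count(statements, n=10):
    counts = Counter(normalize(s) for s in statements)
    return counts.most_common(n)

log = [
    "SELECT * FROM orders WHERE id = 1",
    "SELECT * FROM orders WHERE id = 42",
    "SELECT name FROM users WHERE email = 'a@b.c'",
]
print(top_by_count(log, 2))
```

The two orders queries collapse into one pattern with a count of 2, which is exactly the grouping mysqldumpslow reports.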
Simple Method:
Open my.ini, find the [mysqld] section, and add below it:
long_query_time = 2
log-slow-queries = D:/mysql/logs/slow.log
# Sets where the log is written. It can be left empty, in which case the system uses a default host_name-slow.log file.
# On Linux: log-slow-queries = /var/youpath/slow.log
log-queries-not-using-indexes
long_query_time is the execution-time threshold in seconds beyond which a statement is logged; here it is set to 2 seconds.
The following describes common mysqldumpslow parameters (run mysqldumpslow --help for details):
-s specifies the sort order: c, t, l, and r sort by record count, time, query time, and rows returned (largest first); ac, at, al, and ar are the reverse.
-t means top n, i.e. how many entries to return.
-g takes a regular expression pattern; matching is case-insensitive.
Next, use MySQL's bundled slow-log tool mysqldumpslow (in the MySQL bin directory) to analyze the log; here the log file name is host-slow.log.
List the 10 statements logged most often:
mysqldumpslow -s c -t 10 host-slow.log
List the 10 statements that returned the most rows:
mysqldumpslow -s r -t 10 host-slow.log
List the first 10 statements, sorted by time, that contain a left join:
mysqldumpslow -s t -t 10 -g "left join" host-slow.log
mysqldumpslow makes it easy to extract whichever query statements we need, and it is a great help in monitoring, analyzing, and optimizing MySQL queries.
During daily development, pages often open very slowly; after eliminating other causes, the database turns out to be the culprit. To quickly find the specific SQL statements, you can use MySQL's logging features.
-- Enable SQL execution logging
set global log_output = 'TABLE';                -- write logs to tables
set global log = ON;                            -- general_log: records every statement, successful or not
set global log_slow_queries = ON;               -- slow_log: records successfully executed slow queries and queries that use no index
set global long_query_time = 0.1;               -- slow query threshold (seconds)
set global log_queries_not_using_indexes = ON;  -- also record SQL statements that use no index
-- Query the recorded SQL statements
select * from mysql.slow_log order by 1;        -- successful slow queries and statements that used no index
select * from mysql.general_log order by 1;     -- all statements, successful or not
-- Disable SQL execution logging
set global log = OFF;
set global log_slow_queries = OFF;
-- Notes on the long_query_time parameter
-- 4.0, 4.1, 5.0, and 5.1 up to and including 5.1.20: millisecond-level thresholds are not supported (granularity is whole seconds, 1-10);
-- 5.1.21 and later: millisecond-level thresholds such as 0.1 are supported;
-- 6.0 to 6.0.3: millisecond-level thresholds are not supported (granularity is whole seconds, 1-10);
-- 6.0.4 and later: millisecond-level thresholds are supported.
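The version notes above can be encoded as a small helper, assuming version numbers compare as tuples; this is only a restatement of the list, not an exhaustive version matrix.

```python
# Sketch encoding the version notes above: whether a given server version
# accepts sub-second long_query_time values.

def supports_subsecond(version):
    """version is a tuple like (5, 1, 21)."""
    if version >= (6, 0, 4):
        return True
    if version >= (6, 0, 0):
        return False          # 6.0.0 - 6.0.3: whole seconds only
    if version >= (5, 1, 21):
        return True           # 5.1.21+ supports values like 0.1
    return False              # 4.0/4.1/5.0 and 5.1 up to 5.1.20

print(supports_subsecond((5, 1, 21)), supports_subsecond((5, 0, 96)))
```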
Using the SQL statements recorded in the log, you can quickly locate the specific code, optimize the SQL, and then check whether the speed has improved.
This article analyzes the problem of slow query on the MySQL database server and puts forward the corresponding solutions. The specific solutions are as follows:
Developers often write statements that use no index or lack LIMIT n; such statements have a large impact on the database. For example, a large table with tens of millions of records may be fully scanned, or filesort may run constantly, hammering the database and server with I/O. This was the case with our image library.
Besides statements without indexes or LIMIT clauses in the online database, another case is an excessive number of MySQL connections. Let's first look at our previous monitoring practices.
1. Deploy zabbix or another open-source distributed monitoring system to track the database's daily I/O, CPU, and connection counts
2. Deploy weekly performance statistics, including data growth, iostat, vmstat, and data size
3. Collect MySQL slow logs and list the top 10
I used to think this monitoring was complete. Only after deploying per-node MySQL process monitoring did many drawbacks come to light.
Drawback of the first approach: zabbix is heavyweight and not specialized for MySQL-internal monitoring; much of its data is not very accurate, and it is mostly used to review historical trends.
Drawback of the second approach: because it runs only once a week, many problems cannot be detected or alerted on in time.
Drawback of the third approach: with many nodes, a slow-log top 10 becomes meaningless; much of it consists of scheduled-task statements that must run anyway, offering little reference value.
How can we solve and query these problems?
Summary of the benefits of node monitoring
1. Lightweight, real-time monitoring that can be customized to the actual situation
2. A filter can be set up to exclude the statements that are required to run
3. Queries that use no index or are otherwise illegal are discovered promptly. Handling slow statements takes a lot of time, but it is worth it to avoid a database crash
4. When the database has too many connections, the program automatically saves the current processlist, a powerful tool when the DBA is hunting for the cause
5. Combined with mysqlbinlog analysis, you can pinpoint the time period when the database state was abnormal
Some people also tune settings in the MySQL configuration file.
While adjusting tmp_table_size, I came across some other status variables:
Qcache_queries_in_cache: the number of queries registered in the cache
Qcache_inserts: the number of queries added to the cache
Qcache_hits: the number of cache hits
Qcache_lowmem_prunes: the number of queries removed from the cache due to lack of memory
Qcache_not_cached: the number of queries not cached (not cacheable, or excluded by query_cache_type)
Qcache_free_memory: the amount of free memory in the query cache
Qcache_free_blocks: the number of free memory blocks in the query cache
Qcache_total_blocks: the total number of blocks in the query cache
The query cache stores common queries: if an SQL statement is used frequently, its result is kept in memory, which speeds up database access.
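One common rule of thumb for reading these counters is a cache hit ratio computed from Qcache_hits, Qcache_inserts, and Qcache_not_cached. This formula is a heuristic, not an official MySQL metric, and the counter values below are invented.

```python
# Sketch: a rule-of-thumb hit ratio from the Qcache_* counters described
# above. Sample values are made up for illustration.

def qcache_hit_ratio(status):
    attempts = (status["Qcache_hits"]
                + status["Qcache_inserts"]
                + status["Qcache_not_cached"])
    if attempts == 0:
        return 0.0
    return status["Qcache_hits"] / attempts

status = {"Qcache_hits": 300, "Qcache_inserts": 150, "Qcache_not_cached": 50}
print(round(qcache_hit_ratio(status), 2))  # 0.6
```

A low ratio, or a steadily rising Qcache_lowmem_prunes, suggests the cache is too small or the workload does not benefit from query caching.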