Developers who work with MySQL will sometimes find that a query runs very slowly. Of course, I am talking about tables with millions of rows, not dozens. Below we look at how to track down slow queries.
It is common to find developer-written statements that use no index or no LIMIT N. Such statements hit the database hard: a full scan of a table with tens of millions of rows, or endless filesorts, puts heavy I/O pressure on the database server. That was the situation on the mirrored (replica) library.
On the online (production) library, besides statements that use no index or no LIMIT, there was another problem: too many MySQL connections. Speaking of which, let's first look at how we used to do monitoring.
1. Deploy Zabbix or another open-source distributed monitoring system to collect daily database IO, CPU, and connection counts
2. Run weekly performance statistics, including data growth, iostat, vmstat, and data size
3. Collect the MySQL slow log and list the top 10 statements
We used to think this monitoring was quite complete; only after deploying per-node MySQL process monitoring did we discover its many drawbacks.
Drawback of the first approach: Zabbix is heavyweight and does not look inside MySQL, so much of the data is not very accurate; nowadays it is mostly used to check historical data.
Drawback of the second approach: because it only runs once a week, many problems cannot be discovered or alerted on in time.
Drawback of the third approach: when a node's slow log is very large, the top 10 becomes meaningless, and it often just hands you the scheduled-task statements that are bound to run anyway; its reference value is small.
So how do we troubleshoot and locate these problems?
When troubleshooting performance bottlenecks, the problems that are easiest to find and fix are MySQL slow queries and queries that use no index.
OK, let's start hunting for the SQL statements in MySQL that are misbehaving.
=========================================================
Method One: this is the method I am using, hehe; I like this kind of immediate, real-time check.
The code is as follows
MySQL 5.0 and later can log SQL statements that execute slowly.
mysql> show variables like 'long%';    Note: long_query_time defines how many seconds a statement must take before it counts as a "slow query".
+-----------------+-----------+
| Variable_name   | Value     |
+-----------------+-----------+
| long_query_time | 10.000000 |
+-----------------+-----------+
1 row in set (0.00 sec)
mysql> set long_query_time=1;    Note: I set it to 1, so any statement that takes more than 1 second to execute is a slow query.
Query OK, 0 rows affected (0.00 sec)
mysql> show variables like 'slow%';
+---------------------+---------------+
| Variable_name       | Value         |
+---------------------+---------------+
| slow_launch_time    | 2             |
| slow_query_log      | ON            |    Note: slow query logging is enabled
| slow_query_log_file | /tmp/slow.log |    Note: where the slow log is written
+---------------------+---------------+
3 rows in set (0.00 sec)
mysql> set global slow_query_log='ON';    Note: turn on slow query logging
Once the slow_query_log variable is set to ON, MySQL starts logging immediately.
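To confirm that logging works, a quick check (just an illustration; the 2-second sleep is chosen only because it exceeds the 1-second threshold set above) is to run a deliberately slow statement and then look at the log file:
The code is as follows
mysql> select sleep(2);    Note: takes longer than long_query_time=1, so it should be written to the slow log
Then, on the shell:
tail /tmp/slow.log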
Initial values for the MySQL global variables above can be set in /etc/my.cnf:
long_query_time=1
slow_query_log_file=/tmp/slow.log
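These lines go under the [mysqld] section. Putting it together, a minimal my.cnf fragment might look like the following (the last line is an optional extra, not part of the original setup, that also logs the unindexed queries mentioned earlier):
The code is as follows
[mysqld]
long_query_time=1
slow_query_log=1
slow_query_log_file=/tmp/slow.log
log_queries_not_using_indexes=1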
====================================================
Method Two: the mysqldumpslow command
The code is as follows
/path/mysqldumpslow -s c -t 10 /tmp/slow.log
This outputs the 10 SQL statements with the highest execution count in the log:
-s specifies the sort order: c, t, l, r sort by count, query time, lock time, and rows returned respectively; at, al, ar sort by the corresponding averages;
-t means top N, i.e., return only the first N entries;
-g takes a regular-expression pattern to match against, case-insensitive;
For example:
/path/mysqldumpslow -s r -t 10 /tmp/slow.log
This gets the 10 queries that return the most rows.
/path/mysqldumpslow -s t -t 10 -g "left join" /tmp/slow.log
This gets the first 10 statements, sorted by query time, that contain a LEFT JOIN.
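As noted earlier, a top 10 sorted by count tends to be dominated by the scheduled statements that are bound to run; sorting by average query time instead (a variation not in the original examples, using mysqldumpslow's "at" sort key) can surface the genuinely slow ones:
The code is as follows
/path/mysqldumpslow -s at -t 10 /tmp/slow.log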
Finally, a summary of the benefits of node monitoring:
1. Lightweight, real-time monitoring that can be customized and adapted to the actual situation
2. Filters are set up for the statements that are known to have to run (scheduled tasks)
3. Queries that use no index, or are otherwise illegitimate, are discovered in time; although dealing with those slow statements takes time, it is worth it to keep the database from going down
4. When the database has too many connections, the program automatically saves the current processlist (a minimal sketch of this follows the list); for the DBA looking for the cause, it is a sharp weapon
5. When analyzing with mysqlbinlog, you have a clear time window for the abnormal database state
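A minimal sketch of that processlist capture (the connection threshold of 500 and the output path are assumptions for illustration, not the actual monitoring program):
The code is as follows
#!/bin/sh
# Dump the current processlist when there are too many connections (threshold is illustrative).
CONN=$(mysql -N -e "show status like 'Threads_connected'" | awk '{print $2}')
if [ "$CONN" -gt 500 ]; then
    mysql -e "show full processlist" > /tmp/processlist.$(date +%Y%m%d%H%M%S).log
fi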
Some people also suggest that we tune the MySQL configuration file settings.
While adjusting tmp_table_size, the following other (query cache) status variables came up:
Qcache_queries_in_cache: number of queries registered in the cache
Qcache_inserts: number of queries added to the cache
Qcache_hits: number of cache hits
Qcache_lowmem_prunes: number of queries removed from the cache due to lack of memory
Qcache_not_cached: number of queries not cached (not cacheable, or not cached due to query_cache_type)
Qcache_free_memory: total free memory in the query cache
Qcache_free_blocks: number of free memory blocks in the query cache
Qcache_total_blocks: total number of blocks in the query cache
With enough Qcache_free_memory, commonly used queries can be cached; keeping frequently used SQL results in memory speeds up database access.
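These counters can be checked directly with standard status queries (this applies to MySQL versions that still have the query cache; it was removed in MySQL 8.0):
The code is as follows
mysql> show status like 'Qcache%';    Note: shows the counters listed above
mysql> show variables like 'query_cache%';    Note: query_cache_size, query_cache_type, etc.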