MySQL DBA Interview Questions
A friend sent me a post with some MySQL DBA interview questions that few people had answered. He gave me the link, but I don't have an account on that site, so I have collated my answers here in my blog. Feel free to refer to them, and more reasonable or more efficient solutions are welcome.
1. For a DB server, what are the basic metrics that must be monitored, and how do you get these values?
Metrics that must be monitored: CPU load, memory usage, disk usage, disk I/O, network traffic, traffic on the DB port, number of database client connections, etc.
How to get these values: through third-party tools such as Cacti, Zabbix, etc.;
Cacti monitoring MySQL Reference: http://blog.csdn.net/mchdba/article/details/27404109
Zabbix monitoring MySQL Reference: http://blog.csdn.net/mchdba/article/details/51447750
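Besides full monitoring systems like Cacti and Zabbix, the host-level metrics above can also be pulled by hand. Below is a minimal sketch (Linux-only; paths like /proc/loadavg are standard, but the commented-out mysql credentials are placeholders) that collects CPU load, memory usage, and disk usage the way a monitoring agent would:

```shell
#!/bin/sh
# Minimal sketch: gather basic host metrics that a monitoring agent
# (Cacti/Zabbix poller) would also collect.

# 1-minute load average from /proc/loadavg
loadavg=$(cut -d ' ' -f1 /proc/loadavg)

# Memory usage (%) computed from /proc/meminfo
mem_total=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
mem_avail=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
mem_used_pct=$(( (mem_total - mem_avail) * 100 / mem_total ))

# Disk usage (%) of the root filesystem
disk_used_pct=$(df -P / | awk 'NR==2 {gsub("%","",$5); print $5}')

echo "load1=$loadavg mem_used_pct=$mem_used_pct disk_used_pct=$disk_used_pct"

# DB-level counters come from the server itself, e.g. (placeholder credentials):
#   mysql -uroot -pxxxxxx -e "SHOW GLOBAL STATUS LIKE 'Threads_connected';"
```

In practice a tool like Zabbix runs checks of exactly this shape on a schedule and graphs the results.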
2. Briefly introduce a MySQL status detection or monitoring tool you have used (third-party or self-written), and say what you think its strengths are
I have used Nagios, Cacti, and Zabbix; typically Nagios and Cacti are deployed together, while Zabbix can be used on its own.
(1) Nagios's advantage is its rich alerting and mature third-party plugins; Cacti's graphs are clearer and more realistic;
(2) Zabbix's advantages are distributed deployment and quick setup, but its monitoring graphs are rough, many templates have to be implemented yourself, and templates are not compatible across major version changes.
For my Zabbix experience, see my blog: http://blog.csdn.net/mchdba/article/category/2220809
Nagios Experience Reference: http://blog.csdn.net/mchdba/article/category/2247105
Cacti Experience Reference: http://blog.csdn.net/mchdba/article/category/2292853
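For illustration, a Nagios-style check for MySQL boils down to mapping a probe result to the standard plugin exit codes (0=OK, 2=CRITICAL). This is a hypothetical minimal sketch, not a real Nagios plugin; the credentials in the comment are placeholders, and the ping output is passed in as an argument so the mapping itself is testable:

```shell
#!/bin/sh
# Hypothetical minimal Nagios-style check: maps the output of
# `mysqladmin ping` to Nagios exit codes (0=OK, 2=CRITICAL).

check_mysql_alive() {
    # $1: output of `mysqladmin ping` (passed in to keep the mapping testable)
    case "$1" in
        *"mysqld is alive"*) echo "OK - mysqld is alive"; return 0 ;;
        *)                   echo "CRITICAL - mysqld not responding"; return 2 ;;
    esac
}

# In a real plugin you would call (placeholder credentials):
#   check_mysql_alive "$(mysqladmin -uadmin -pxxxxxx ping 2>&1)"
```

Nagios schedules such checks and turns the non-zero exit codes into the alerts mentioned above.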
3. Briefly introduce a MySQL log (slow/general/binary log) analysis tool you have used (third-party or self-written), and say what you think its strengths are
The general log and binary log are only occasionally used for verification and are rarely analyzed routinely, since the performance overhead is too high; the slow log, however, is often used as the basis for analysis.
A relatively simple script for processing the slow log:
# check_slow.sh — analyzes the slow log, sorted by execution count and by execution time
# vim /home/data/mysql/script/check_slow.sh

datestr=`date -d "1 day ago" +"%Y-%m-%d"`
mysql_command=/usr/local/mysql/bin/mysql
mysqlslow_command=/usr/local/mysql/bin/mysqldumpslow

# su - mysql
cd /home/data/mysql/slowlog/
mkdir $datestr
cd $datestr

$mysql_command -uroot --password='' -e 'set global slow_query_log=0;'
$mysql_command -uroot --password='' -e 'SHOW VARIABLES like "slow_query_log";'
cp /home/data/mysql/data/db-master-1-slow.log .
> /home/data/mysql/data/db-master-1-slow.log
# > /data/mysql/data/localhost-slow.log
$mysql_command -uroot --password='' -e 'set global slow_query_log=1;'
$mysql_command -uroot --password='' -e 'SHOW VARIABLES like "slow_query_log";'

# -t takes a top-N count; 10 is assumed here, the original value was lost
$mysqlslow_command -s c -t 10 db-master-1-slow.log > business_db_count_$datestr.log
$mysqlslow_command -s at -t 10 db-master-1-slow.log > business_db_time_$datestr.log

# crontab entry, executed once a day (time fields elided in the original):
# XX * * * /home/data/mysql/script/check_slow.sh >> /tmp/check_slow.log 2>&1
4. Describe a DB server failure you have encountered
I have encountered quite a few. Some I recorded, some I didn't have time to record. I can't say which left the deepest impression, because once a problem is resolved it slowly fades with time; if an incident is truly unforgettable, it probably means you got into a big mess and paid more than the usual price.
Fortunately, I am fairly cautious and careful, so I haven't caused any major incidents for the time being — mostly small problems, and plenty of them. But small problems accumulate, so I recorded some past failures at: http://blog.csdn.net/mchdba/article/category/1596355. You're welcome to refer to them; as the saying goes, "among any three people walking, there is one who can be my teacher" — let's communicate and improve together.
5. If "Too many connections" appears, what measures should be taken?
I have encountered this; it was the result of high concurrency and a lot of slow SQL.
(1) The first step is to raise the connection limit (max_connections) online, as follows:
mysql> set global max_connections=20000;
Query OK, 0 rows affected (0.10 sec)

mysql>
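Before and after raising the limit, it helps to know how close you are to it. The two inputs come from `SHOW VARIABLES LIKE 'max_connections'` and `SHOW GLOBAL STATUS LIKE 'Threads_connected'`; this sketch (sample numbers are made up) just computes the usage percentage:

```shell
#!/bin/sh
# Sketch: check how close connection usage is to max_connections.
# In practice the two inputs come from:
#   SHOW VARIABLES LIKE 'max_connections';
#   SHOW GLOBAL STATUS LIKE 'Threads_connected';

conn_usage_pct() {
    # $1: Threads_connected, $2: max_connections
    echo $(( $1 * 100 / $2 ))
}

pct=$(conn_usage_pct 18500 20000)   # sample numbers for illustration
echo "connection usage: ${pct}%"
if [ "$pct" -ge 90 ]; then
    echo "WARNING: close to max_connections, raise it or kill idle sessions"
fi
```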
(2) Then check which connections are invalid and kill them directly;
I have previously written a script to kill invalid (sleeping) connections; you can refer to the following:
#!/bin/sh
# It is used to kill sleeping connections in the MySQL processlist
while :
do
    n=`mysqladmin processlist -uadmin -pxxxxxx | grep -i sleep | wc -l`
    date=`date +%Y%m%d\[%H:%M:%S\]`
    echo $n
    if [ "$n" -gt 10 ]
    then
        # $2 is the Id column of mysqladmin's table output (field number assumed; it was lost in the original)
        for i in `mysqladmin processlist -uadmin -pxxxxxx | grep -i sleep | awk '{print $2}'`
        do
            mysqladmin -uadmin -pxxxxxx kill $i
        done
        echo "sleep is too many, i killed it" >> /tmp/sleep.log
        echo "$date : $n" >> /tmp/sleep.log
    fi
    sleep 1
done
(3) Then look for the real reason behind so many invalid connections; there are several possible scenarios:
(a) If an index was deleted by mistake, re-create the index.
(b) If it is caused by high-concurrency SQL, optimize the SQL and add indexes; if that still doesn't reduce the connection count, find which application is generating the high-concurrency SQL and temporarily stop that application.
(c) If the connections originate from an unrecognized IP that is not an application server, restrict access to the database port directly at the network layer.
(d) If slow temporary SELECT statements are consuming temporary memory resources, kill those SELECT statements.
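For scenario (d), one common trick is to let information_schema.PROCESSLIST generate the KILL statements for you. This is a hypothetical helper, not the author's script; the threshold and the commented-out credentials are placeholders:

```shell
#!/bin/sh
# Hypothetical helper for scenario (d): build KILL statements for SELECTs
# that have been running longer than a threshold, using
# information_schema.PROCESSLIST.

threshold=60   # seconds (placeholder)

sql="SELECT CONCAT('KILL ', id, ';')
FROM information_schema.PROCESSLIST
WHERE command='Query' AND info LIKE 'SELECT%' AND time > $threshold;"

echo "$sql"
# Then pipe the generated statements back into mysql (placeholder credentials):
#   mysql -uroot -pxxxxxx -N -e "$sql" | mysql -uroot -pxxxxxx
```

Review the generated list before feeding it back in, so you don't kill a legitimate long-running query.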
PS: There can be many causes; adapt to the situation and handle it safely. Don't panic when it happens — stay calm and deal with each issue one by one.
MySQL DBA Interview Questions and Answers (01)