First, install an LNMP environment; PHP 5.3 or above is required.
Reference: http://isadba.com/?p=82 or http://isadba.com/?p=572
Then download Anemometer
git clone https://github.com/box/Anemometer.git anemometer
Configure LNMP to serve the downloaded anemometer directory. At this point, opening the page in a browser will show a prompt that the site has not been configured yet.
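For reference, a minimal nginx vhost sketch for this step. It assumes php-fpm is listening on 127.0.0.1:9000; the vhost file path and server_name are placeholders for your own LNMP layout:

cat > /usr/local/nginx/conf/vhost/anemometer.conf <<'EOF'
server {
    listen       80;
    server_name  anemometer.example.com;   # placeholder hostname
    root         /var/www/anemometer;      # the cloned directory
    index        index.php;

    # hand .php requests to php-fpm
    location ~ \.php$ {
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }
}
EOF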
mysql -uroot -p < mysql56-install.sql   # import the required table structure
mysql -uroot -p < install.sql   # you must import this table structure yourself as well; otherwise two columns (hostname_max and db_max) are missing and the host and db cannot be distinguished
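To confirm those two columns are actually present after the import, a quick check, assuming install.sql created the slow_query_log database used throughout this article:

mysql -uroot -p slow_query_log -e "SHOW COLUMNS FROM global_query_review_history WHERE Field IN ('hostname_max','db_max');"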
Install percona toolkit
yum install percona-toolkit -y
Modify the configuration file:
cd /var/www/anemometer/conf/
cp sample.config.inc.php config.inc.php
Modify the database configuration on lines 35-40 so it connects to the anemometer database. You also need to grant the corresponding privileges on that database.
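A minimal sketch of that authorization step; the client subnet (192.168.11.%) is an assumption, and the account and password match the values used in the rest of this article (MySQL 5.x GRANT syntax):

mysql -uroot -p -e "GRANT ALL PRIVILEGES ON slow_query_log.* TO 'anemometer'@'192.168.11.%' IDENTIFIED BY 'anemometerpass'; FLUSH PRIVILEGES;"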
# Use the pt-query-digest tool to import slow query data into the database
pt-query-digest --user=anemometer --password=anemometerpass \
    --review h=192.168.11.28,D=slow_query_log,t=global_query_review \
    --history h=192.168.11.28,D=slow_query_log,t=global_query_review_history \
    --no-report --limit=0% \
    --filter="\$event->{Bytes} = length(\$event->{arg}) and \$event->{hostname}=\"$HOSTNAME\"" \
    /usr/local/mariamysql/data/localhost-slow.log
After the command finishes, the global_query_review and global_query_review_history tables in the slow_query_log database on 192.168.11.28 contain data; you can query them manually to verify.
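For example, a quick sanity check (checksum, first_seen, and last_seen are standard columns of the pt-query-digest review table):

mysql -uanemometer -panemometerpass -h192.168.11.28 -e "SELECT checksum, first_seen, last_seen FROM slow_query_log.global_query_review ORDER BY last_seen DESC LIMIT 5;"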
Now you can access anemometer through the web to see the relevant information.
At this point anemometer is up and running; the next step is to deploy it for real use.
In our environment, the slow logs of multiple MySQL servers need to be monitored.
We have three options:
1. The global_query_review_history table from the manually imported install.sql contains hostname_max and db_max columns, so data from multiple sources can be stored in one table and distinguished by hostname_max.
2. Each MySQL server stores its processed slow log data on itself, and anemometer connects to each server to fetch the data.
3. All MySQL servers store their processed slow log data in the database where anemometer lives, distinguished by table name or database name.
If you choose the second or third option, you need to define multiple data sources in config.inc.php and set the matching history_db_name in anemometer_collect.sh.
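For the third option, the per-host databases on the anemometer server can be prepared like this. The database name follows the slow_query_log_<ip> convention used in the config below; running this as root is an assumption:

mysql -uroot -p -h192.168.11.28 -e "CREATE DATABASE slow_query_log_192_168_11_17;"
mysql -uroot -p -h192.168.11.28 slow_query_log_192_168_11_17 < install.sql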
Modify the datasource configuration in conf/config.inc.php. Note that different data sources can point to different databases, which lets you separate them by project.
$conf['datasources']['192.168.11.28'] = array(
    'host'     => '192.168.11.28',
    'port'     => 3306,
    'db'       => 'slow_query_log',
    'user'     => 'anemometer',
    'password' => 'anemometerpass',
    'tables'   => array(
        'global_query_review'         => 'fact',
        'global_query_review_history' => 'dimension',
    ),
    'source_type' => 'slow_query_log',
);

$conf['datasources']['192.168.11.17'] = array(
    'host'     => '192.168.11.28',
    'port'     => 3306,
    'db'       => 'slow_query_log_192_168_11_17',
    'user'     => 'anemometer',
    'password' => 'anemometerpass',
    'tables'   => array(
        'global_query_review'         => 'fact',
        'global_query_review_history' => 'dimension',
    ),
    'source_type' => 'slow_query_log',
);
Deploy scripts/anemometer_collect.sh on all MySQL instances to collect slow query logs.
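One way to push the collector out, assuming SSH access as root and the /opt path used in the crontab entry below; the host list is illustrative:

for h in 192.168.11.17 192.168.11.28; do
    scp scripts/anemometer_collect.sh root@$h:/opt/
done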
Notes for the anemometer_collect.sh script (the snippet after this list shows where these settings live):
1. Make sure the mysql binary directory is on PATH.
2. history_db_name may need to differ per machine; it must match the datasource definitions in config.inc.php.
3. Check that the path to the original slow query log is correct.
4. Set long_query_time to the slow query threshold you want to capture.
5. Install percona-toolkit on all MySQL servers.
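These knobs sit near the top of the modified script shown at the end of this article; the values are the ones used here, so adjust them per machine:

PATH=/usr/local/mysql-6/bin/:$PATH        # note 1: mysql binaries on PATH
history_db_name='slow_query_log'          # note 2: must match config.inc.php
LOG_PREFIX='/usr/local/mariamysql/data/'  # note 3: prefix of the slow log path
long_query_time=0.1                       # note 4: slow query threshold, in seconds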
Create the following two configuration files in the same directory as anemometer_collect.sh.
The account used to connect to the local MySQL server requires the SUPER privilege (the script sets global variables such as slow_query_log).
[root@localhost scripts]# cat anemometer-localhost.cnf
[client]
user=root
password=
host=localhost
socket=/tmp/mysql.sock
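If you prefer not to use root locally, a sketch of a dedicated account with just the needed privilege (the account name and password are hypothetical; SUPER is what lets the script toggle the global slow log settings):

mysql -uroot -p -e "GRANT SUPER ON *.* TO 'collector'@'localhost' IDENTIFIED BY 'collectorpass';"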
The account used to connect to the anemometer server's database must have ALL PRIVILEGES on the database that history_db_name in anemometer_collect.sh points to.
[root@localhost scripts]# cat anemometer-server.cnf
[client]
user=anemometer
password=anemometerpass
Run the following command to debug the script.
sh -x ./anemometer_collect.sh --interval 30 --history-db-host 192.168.11.17 --defaults-file ./anemometer-localhost.cnf --history-defaults-file ./anemometer-server.cnf
After debugging, add the following entry to crontab to collect slow query logs into the database periodically. You can set the interval to suit your needs; my settings collect essentially all slow queries (a 59-second window every minute).
*/1 * * * * /opt/anemometer_collect.sh --interval 59 --history-db-host 192.168.11.28 --defaults-file /opt/anemometer-localhost.cnf --history-defaults-file /opt/anemometer-server.cnf
Because anemometer cannot tell which account and host a given SQL statement came from, the original logs should be kept for incident analysis.
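Since the script appends every collection window to LOG_HISTORY_FILE, that archive grows without bound; a logrotate sketch for it (the path matches LOG_HISTORY_FILE in the script below, while the rotation policy is my assumption):

cat > /etc/logrotate.d/anemometer-slowlog <<'EOF'
/var/log/comment {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}
EOF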
The following is my modified anemometer_collect.sh. It adds configurable settings for the slow log path prefix, the mysql binary path, and the slow query threshold, and it archives the collected slow query logs.
[root@localhost scripts]# cat anemometer_collect.sh
#!/usr/bin/env bash
# anemometer collection script to gather and digest slow query logs
# this is a quick draft script so please give feedback!
#
# Basic usage would be to add this to cron like this:
# */5 * * * * anemometer_collect.sh --interval 15 --history-db-host anemometer-db.example.com
#
# This will have to run as a user which has write privileges to the mysql slow log.
#
# Additionally there are two sets of permissions to worry about: the local mysql
# instance, and the remote digest storage instance. These are handled through
# defaults files; just create a file in the my.cnf format such as:
# [client]
# user=
# password=
# Use --defaults-file for permissions to the local mysql instance,
# and use --history-defaults-file for permissions to the remote digest storage instance.

PATH=/usr/local/mysql-6/bin/:$PATH

socket=
defaults_file=
rate_limit=
mysqlopts=
interval=30
digest='/usr/bin/pt-query-digest'

# set log prefix
LOG_PREFIX='/usr/local/mariamysql/data/'
# set slow log history file
LOG_HISTORY_FILE='/var/log/comment'
# slow query threshold in seconds
long_query_time=0.1

# set ip: take the first 172.* or 192.* address and tag it with the port,
# so each server's queries can be told apart in anemometer
HOSTNAME=`/sbin/ifconfig | grep 'inet addr' | egrep '172\.|192\.' | awk '{print $2}' | awk -F ":" '{print $2}'`
PORT=3306
HOSTNAME="$HOSTNAME\:$PORT"

history_db_host=
history_db_port=3306
history_db_name='slow_query_log'
history_defaults_file=

help () {
cat <<EOF
Usage: $0 --interval <seconds> --history-db-host <host> [--defaults-file <file>] [--history-defaults-file <file>]
EOF
}

while test $# -gt 0
do
    case $1 in
        --socket|-S)             socket=$2;                shift ;;
        --defaults-file)         defaults_file=$2;         shift ;;
        --rate-limit)            rate_limit=$2;            shift ;;
        --interval|-i)           interval=$2;              shift ;;
        --history-db-host)       history_db_host=$2;       shift ;;
        --history-db-port)       history_db_port=$2;       shift ;;
        --history-db-name)       history_db_name=$2;       shift ;;
        --history-defaults-file) history_defaults_file=$2; shift ;;
        --help)                  help; exit 0 ;;
        *) echo >&2 "Invalid argument: $1" ;;
    esac
    shift
done

if [ ! -e "${digest}" ]; then
    echo "Error: cannot find digest script at: ${digest}"
    exit 1
fi

if [ ! -z "${defaults_file}" ]; then
    mysqlopts="--defaults-file=${defaults_file}"
fi

# path to the slow query log
LOG=$(mysql $mysqlopts -e "show global variables like 'slow_query_log_file'" -B | tail -n1 | awk '{print $2}')
if [ $? -ne 0 ]; then
    echo "Error getting slow log file location"
    exit 1
fi
LOG="$LOG_PREFIX$LOG"
echo "Collecting from slow query log file: ${LOG}"

# timed collection window
if [ ! -z "${rate_limit}" ]; then
    # log_slow_rate_limit is a Percona Server / MariaDB variable
    mysql $mysqlopts -e "set global log_slow_rate_limit=${rate_limit}"
fi
mysql $mysqlopts -e "set global long_query_time=$long_query_time"
mysql $mysqlopts -e "set global slow_query_log=1"
if [ $? -ne 0 ]; then
    echo "Error: cannot enable slow log. aborting"
    exit 1
fi
echo "Slow log enabled; sleeping for ${interval} seconds"
sleep "${interval}"
mysql $mysqlopts -e "set global slow_query_log=0"
echo "Done. processing log and saving to ${history_db_host}:${history_db_port}/${history_db_name}"

# process the log
if [[ ! -e "$LOG" ]]; then
    echo "No slow log to process"
    exit
fi
mv "$LOG" /tmp/tmp_slow_log

if [ ! -z "${history_defaults_file}" ]; then
    pass_opt="--defaults-file=${history_defaults_file}"
fi

"${digest}" $pass_opt \
    --review h="${history_db_host}",D="$history_db_name",t=global_query_review \
    --history h="${history_db_host}",D="$history_db_name",t=global_query_review_history \
    --no-report --limit=0% \
    --filter="\$event->{Bytes} = length(\$event->{arg}) and \$event->{hostname}=\"$HOSTNAME\"" \
    "/tmp/tmp_slow_log"

# append to the archived history of raw slow logs
cat /tmp/tmp_slow_log >> $LOG_HISTORY_FILE
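After deploying, a smoke test: run one collection cycle by hand and confirm that new rows arrive (host and credentials as configured earlier):

/opt/anemometer_collect.sh --interval 30 --history-db-host 192.168.11.28 \
    --defaults-file /opt/anemometer-localhost.cnf \
    --history-defaults-file /opt/anemometer-server.cnf
mysql -uanemometer -panemometerpass -h192.168.11.28 -e "SELECT COUNT(*) FROM slow_query_log.global_query_review_history;"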