Logstash push MySQL slow query log


This article shows how to use Logstash to collect the MySQL slow query log, push it to Elasticsearch under a custom index, and finally display it in the Kibana web UI.

Environment introduction:

Operating system: CentOS 6.6 64-bit

MySQL versions: 5.6.17 and 5.1.36

Logstash version: logstash-2.0.0.tar.gz

Elasticsearch version: elasticsearch-2.1.0.tar.gz

Kibana version: Kibana 4.2.1

Java version: 1.8.0_45

One: MySQL 5.1.36

1: Configure the slow query log for MySQL 5.1.36. For testing purposes, any query taking longer than 0.1s is recorded in the slow query log.

mysql> show variables like '%slow%';
mysql> show variables like '%long%';

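For reference, the thresholds above can be set at runtime. This is a minimal sketch, assuming a privileged session; the variable names are the standard MySQL ones, and the log path matches the one Logstash tails later in this article:

```sql
-- Enable the slow query log and record anything slower than 0.1s (for testing)
SET GLOBAL slow_query_log = ON;
SET GLOBAL slow_query_log_file = '/mydata/slow-query.log';
SET GLOBAL long_query_time = 0.1;
```

On older 5.1 builds some of these may need to go in my.cnf instead of being set dynamically.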

2: Configure Logstash

# cat /usr/local/logstash/etc/logstach.conf
input {
  file {
    type => "mysql-slow"
    path => "/mydata/slow-query.log"
    codec => multiline {
      pattern => "^# User@Host:"
      negate => true
      what => "previous"
    }
  }
}
# The input section defines the log type (MySQL slow query log) and the log path,
# merging the multi-line entries of one event into a single record. The negate
# option is a switch that inverts the pattern match.

filter {
  # drop sleep events
  grok {
    match => { "message" => "SELECT SLEEP" }
    add_tag => [ "sleep_drop" ]
    tag_on_failure => [] # prevent default _grokparsefailure tag on real records
  }
  if "sleep_drop" in [tags] {
    drop {}
  }
  # The filter above drops SQL statements that are just sleeping connections.
  grok {
    match => [ "message", "(?m)^# User@Host: %{USER:user}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s*# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\s*(?:use %{DATA:database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<query>(?<action>\w+)\s+.*)\n# Time:.*$" ]
  }
  date {
    match => [ "timestamp", "UNIX" ]
    remove_field => [ "timestamp" ]
  }
  # The grok section defines the regular expression that cuts the slow query log
  # into fields -- it is easy to get dizzy reading it!
}

output {
  stdout { codec => rubydebug {} }
  elasticsearch {
    hosts => "192.168.1.226:9200"
    index => "mysql-server81-%{+YYYY.MM.dd}"
  }
  # The output section prints events to the screen and also sends them to
  # Elasticsearch under a custom index name.
}
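The grok pattern is easiest to reason about outside Logstash first. Below is a minimal Python sketch (hypothetical, not part of Logstash) that mirrors the pattern above as a plain regex and runs it against a 5.1-style event:

```python
import re

# Python mirror of the grok pattern above, with %{USER}, %{NUMBER}, etc.
# expanded into plain named groups for quick offline testing.
SLOW_LOG_RE = re.compile(
    r"^# User@Host: (?P<user>\w+)\[[^\]]+\] @ (?:(?P<clienthost>\S*) )?"
    r"\[(?P<clientip>[\d.]*)\]\s*"
    r"# Query_time: (?P<query_time>[\d.]+)\s+Lock_time: (?P<lock_time>[\d.]+)\s+"
    r"Rows_sent: (?P<rows_sent>\d+)\s+Rows_examined: (?P<rows_examined>\d+)\s*"
    r"(?:use (?P<database>\w+);\s*)?"
    r"SET timestamp=(?P<timestamp>\d+);\s*"
    r"(?P<query>(?P<action>\w+)\s+.*)",
    re.M | re.S,
)

# One multiline event, as the multiline codec would hand it to grok
sample = """# User@Host: root[root] @  [192.168.1.156]
# Query_time: 6.578696  Lock_time: 0.000039  Rows_sent: 999424  Rows_examined: 999424
SET timestamp=1449048564;
select * from users_test;"""

m = SLOW_LOG_RE.search(sample)
event = {k: v for k, v in m.groupdict().items() if v is not None}
print(event["query_time"], event["rows_sent"], event["action"])
```

This is only a debugging aid; Logstash still applies the grok pattern itself at ingest time.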

3: Start the test

# /usr/local/logstash/bin/logstash -f /usr/local/logstash/etc/logstach.conf


# tail -f /mydata/slow-query.log


Two: MySQL 5.6.17

Because the MySQL 5.6.17 slow log carries an extra Id field, the regular expression in the grok section needs to be adjusted.

MySQL 5.1.36's slow log:

# tail -f /mydata/slow-query.log
# Time: 151202 17:29:24
# User@Host: root[root] @  [192.168.1.156]
# Query_time: 6.578696  Lock_time: 0.000039  Rows_sent: 999424  Rows_examined: 999424
SET timestamp=1449048564;
select * from users_test;

MySQL 5.6.17's slow log, which has an extra Id: 84589 field compared with the 5.1.36 output:

# tail -f /mydata/slow-query.log
# Time: 151202 16:09:54
# User@Host: root[root] @  [192.168.1.156]  Id: 84589
# Query_time: 7.089324  Lock_time: 0.000112  Rows_sent: 1  Rows_examined: 33554432
SET timestamp=1449043794;
select count(*) from t1;

As an aside, Percona Server 5.5.34 was also tested earlier; its slow query log carries four extra fields: Thread_id, Schema, Last_errno, and Killed.

# tail -f /mydata5.5/slow-query.log
# User@Host: root[root] @  [192.168.1.228]
# Thread_id: 1164217  Schema: mgr  Last_errno: 0  Killed: 0
# Query_time: 0.371185  Lock_time: 0.000056  Rows_sent: 0  Rows_examined: 0  Rows_affected: 2  Rows_read: 0
# Bytes_sent: 11
SET timestamp=1449105655;
REPLACE INTO edgemgr_dbcache (id, type, data, expire_time) VALUES (unhex('ec124ee5766c4a31819719c645dab895'), 'sermap', '{\"storages\":{\"sg1-s1\":[{\"download_port\":9083,\"p2p_port\":9035,\"rtmp_port\":9035,\"addr\":\"{\\\"l\\\":{\\\"https://192.168.1.227:9184/storage\\\":\\\"\\\"},\\\"m\\\":{},\\\"i\\\":{\\\"https://192.168.1.227:9184/storage\\\":\\\"\\\"}}\",\"cpu\":6,\"mem\":100,\"bandwidth\":0,\"disk\":0,\"dead\":0}]},\"lives\":{}}', '2016-01-02 09:20:55');

Thus, for the 5.6.17 version, you only need to modify the grok section of the logstach.conf configuration file as follows and restart the Logstash process.

grok {
  match => [ "message", "(?m)^# User@Host: %{USER:user}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s*Id: %{NUMBER:id:int}\s+# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\s*(?:use %{DATA:database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<query>(?<action>\w+)\s+.*)\n# Time:.*$" ]
}
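As a quick sanity check on the adjusted expression, here is a Python sketch (hypothetical, mirroring the grok pattern as a plain regex) run against a 5.6-style event that carries the Id field:

```python
import re

# Mirror of the 5.6-adjusted pattern: the only structural change versus the
# 5.1 version is the extra "Id: NNN" token on the User@Host line.
SLOW_LOG_56_RE = re.compile(
    r"^# User@Host: (?P<user>\w+)\[[^\]]+\] @ (?:(?P<clienthost>\S*) )?"
    r"\[(?P<clientip>[\d.]*)\]\s+Id:\s+(?P<id>\d+)\s*"
    r"# Query_time: (?P<query_time>[\d.]+)\s+Lock_time: (?P<lock_time>[\d.]+)\s+"
    r"Rows_sent: (?P<rows_sent>\d+)\s+Rows_examined: (?P<rows_examined>\d+)\s*"
    r"SET timestamp=(?P<timestamp>\d+);\s*"
    r"(?P<query>(?P<action>\w+)\s+.*)",
    re.M | re.S,
)

# One multiline event in the 5.6 format
sample = """# User@Host: root[root] @  [192.168.1.156]  Id: 84589
# Query_time: 7.089324  Lock_time: 0.000112  Rows_sent: 1  Rows_examined: 33554432
SET timestamp=1449043794;
select count(*) from t1;"""

m = SLOW_LOG_56_RE.search(sample)
print(m.group("id"), m.group("query_time"), m.group("action"))
```

If the Id group fails to match on your build, compare the User@Host line of your actual log byte by byte, since spacing differs slightly across MySQL builds.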


Kibana Log Output


This article is from the "Chop Month" blog; reprinting is declined.
