One: Install Logstash (installing from the tar package is fine; I installed it directly with yum)
# yum install logstash-2.1.1
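If yum cannot find the package, the Elastic repository for the Logstash 2.x series needs to be configured first. A minimal sketch, assuming the packages.elastic.co repo layout that Elastic documented for Logstash 2.1 (verify the baseurl against the docs for your version):

# cat > /etc/yum.repos.d/logstash.repo <<'EOF'
[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=http://packages.elastic.co/logstash/2.1/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF
# yum install logstash-2.1.1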
Two: Clone the code from GitHub
# git clone https://github.com/heqin5136/logstash-output-webhdfs-discontinued.git
# ls
logstash-output-webhdfs-discontinued
Three: Install the logstash-output-webhdfs plugin
Enter the cloned directory. Logstash ships a plugin script in its bin directory; use it to install the plugin:
# cd logstash-output-webhdfs-discontinued
# /opt/logstash/bin/plugin install logstash-output-webhdfs
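To confirm the plugin was registered, the same script can list the installed plugins (a quick check, assuming the /opt/logstash layout of the yum package):

# /opt/logstash/bin/plugin list | grep webhdfs
logstash-output-webhdfs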
Four: Configure Logstash
# vim /etc/logstash/conf.d/logstash.conf
input {
  kafka {
    zk_connect => "10.10.10.1:2181,10.10.10.2:2181,10.10.10.3:2181"  # Kafka's ZooKeeper cluster address
    group_id => "hdfs"                 # consumer group; keep it different from the ELK consumers
    topic_id => "apiappwebcms-topic"   # topic to consume
    consumer_id => "logstash-consumer-10.10.8.8"  # consumer id, free-form; I use the machine's IP
    consumer_threads => 1
    queue_size => 200
    codec => "json"
  }
}
output {
  # If one topic carries several kinds of logs, they can be split out and stored on HDFS separately.
  if [type] == "apinginxlog" {
    webhdfs {
      workers => 2
      host => "10.10.8.1"   # the HDFS NameNode address
      port => 50070         # webhdfs port
      user => "hdfs"        # the user HDFS runs as; files are written with this user's permissions
      path => "/data/logstash/apinginxlog-%{+YYYY}-%{+MM}-%{+dd}/logstash-%{+HH}.log"  # one directory per day, one log file per hour
      flush_size => 500
      # compression => "snappy"   # compression format; can be left disabled
      idle_flush_time => 10
      retry_interval => 0.5
    }
  }
  if [type] == "apiapplog" {
    webhdfs {
      workers => 2
      host => "10.64.8.1"
      port => 50070
      user => "hdfs"
      path => "/data/logstash/api/apiapplog-%{+YYYY}-%{+MM}-%{+dd}.log"
      flush_size => 500
      # compression => "snappy"
      idle_flush_time => 10
      retry_interval => 0.5
    }
  }
  stdout { codec => rubydebug }
}
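Before starting the service, the file can be syntax-checked with the --configtest flag that Logstash 2.x supports (a quick sanity check; the binary path assumes the yum install layout):

# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --configtest
Configuration OK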
Five: Start Logstash
#/etc/init.d/logstash start
Logs can now be written to HDFS successfully.
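This can be verified from the Hadoop side by listing the target directory with the hdfs client (a sketch; the paths follow the path settings above, the date in the file name is illustrative, and hdfs dfs assumes a configured Hadoop client on the machine):

# hdfs dfs -ls /data/logstash/
# hdfs dfs -cat /data/logstash/apinginxlog-2016-07-06/logstash-11.log | head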
This article is from the "8931355" blog; please be sure to keep this source: http://8941355.blog.51cto.com/8931355/1796776
Logstash: subscribing to log data in Kafka and writing it to HDFS