Logstash analysis: httpd_log

httpd or nginx log format
Logstash's grok ships with two built-in patterns, COMMONAPACHELOG and COMBINEDAPACHELOG, which match the common and combined log formats of httpd:
COMMONAPACHELOG %{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
COMBINEDAPACHELOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}
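For illustration, a made-up access-log entry in combined format looks like this (all values below are hypothetical):

192.0.2.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.1" 200 2326 "http://www.example.com/start.html" "Mozilla/4.08 [en] (Win98; I ;Nav)"

Matching it against COMBINEDAPACHELOG would yield clientip 192.0.2.1, auth frank, verb GET, request /index.html, httpversion 1.1, response 200, bytes 2326, plus the two quoted strings as referrer and agent.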
These two grok patterns correspond to the Apache httpd LogFormat definitions:

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
On nginx, this is equivalent to the main log_format with "$http_x_forwarded_for" removed:
log_format combined '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';
Configure Logstash on the machine being collected and have it ship its output to Redis on the Elasticsearch machine.
input {
  file {
    type => "apache_log"
    path => ["/var/log/httpd/access_log"]
  }
}
output {
  redis {
    host => "xx.xx.xx.xx"
    data_type => "list"
    key => "logstash:redis"
  }
  stdout { codec => rubydebug }
}
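Assuming the configuration above is saved as shipper.conf (the file name is arbitrary), a minimal way to start it is:

bin/logstash -f shipper.conf    # older releases need: bin/logstash agent -f shipper.conf

The stdout output with the rubydebug codec prints every parsed event to the console, which makes it easy to confirm that events are being read from the access log.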
Make sure this machine can connect to the Redis port (6379 by default):

telnet xx.xx.xx.xx 6379
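As an optional extra check, assuming redis-cli is available, you can confirm that events are actually being queued; the list should grow while the shipper runs and shrink as the indexer consumes it:

redis-cli -h xx.xx.xx.xx llen logstash:redis    # number of events currently waiting in the queue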
On the Elasticsearch machine, Logstash reads the events from the Redis queue and writes them to Elasticsearch:
input {
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash:redis"
  }
}
filter {
  grok {
    match => { "message" => "%{COMMONAPACHELOG:apachelog}" }
    add_field => [ "response", "%{NUMBER:response}" ]
  }
}
output {
  elasticsearch { host => "localhost" }
  stdout { codec => rubydebug }
}
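Once the indexer is running, a rough sanity check is to ask Elasticsearch for one of the indexed events; the query below is only a sketch, relying on Logstash's default daily logstash-YYYY.MM.dd indices and the "type" field set by the shipper:

curl 'http://localhost:9200/logstash-*/_search?q=type:apache_log&size=1&pretty'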
Finally, the data can be visualized in Kibana.