How do we perform Nginx log analysis with Logstash, Elasticsearch, and Kibana? First, the overall scheme: Nginx already records the status of each request and other details in its log files. Second, we need a queue, and a Redis list can serve as one. Analysis and querying are then handled by Elasticsearch.
What we want is a distributed log collection and analysis system. Logstash plays two roles here: agent and indexer. An agent is placed on each web machine and continuously reads the Nginx log file; whenever it reads new log lines, it pushes them to a Redis queue on the network. Several Logstash indexers then pull these unprocessed logs from the queue and parse them, and the parsed results are stored in Elasticsearch for search and analysis. Finally, Kibana provides a unified web interface for browsing the logs.
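To make the "Redis list as a queue" idea concrete, here is a minimal redis-cli sketch. The key name logstash:redis matches the configuration used later; the commands only illustrate the list-as-queue idea and are not exactly what Logstash does internally:

# producer side: append one event to the tail of the list
redis-cli RPUSH logstash:redis '{"message":"one raw nginx log line"}'
# how many unprocessed events are waiting
redis-cli LLEN logstash:redis
# consumer side: take the oldest event from the head of the list
redis-cli LPOP logstash:redis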
First, define a log format for Logstash in the Nginx configuration:

log_format logstash '$http_host $server_addr $remote_addr [$time_local] "$request" '
                    '$request_body $status $body_bytes_sent "$http_referer" "$http_user_agent" '
                    '$request_time $upstream_response_time';
To set up an access log in vhost/test.conf:
access_log /usr/share/nginx/logs/test.access.log logstash;
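After changing the Nginx configuration, check the syntax and reload Nginx (standard nginx commands; paths and privileges may differ on your system):

# check the configuration syntax
nginx -t
# reload without dropping connections
nginx -s reload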
Set up the Logstash agent
Note: this can also be done without a Logstash agent, by shipping the logs directly with rsyslog.
Create the Logstash agent configuration file
/usr/local/logstash/etc/logstash_agent.conf
The code is as follows:
Input { type"nginx_access" Path = = ["/usr/share/nginx/logs/test.access.log"] }} Output { Redis { "localhost""list""Logstash:redis" }}
Start the Logstash agent:
/usr/local/logstash/bin/logstash -f /usr/local/logstash/etc/logstash_agent.conf
At this point the agent sends the new data in test.access.log to Redis, much like a continuous tail -f.
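You can check that log lines are really arriving in Redis with redis-cli (assuming Redis runs on localhost, as in the agent configuration above). While no indexer is consuming the queue yet, the list should grow as requests hit Nginx:

# number of queued events
redis-cli LLEN logstash:redis
# peek at the oldest queued event
redis-cli LRANGE logstash:redis 0 0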
Set up the Logstash indexer
Create a logstash indexer configuration file
/usr/local/logstash/etc/logstash_indexer.conf
The code is as follows:
input {
    redis {
        host => "localhost"
        data_type => "list"
        key => "logstash:redis"
        type => "redis-input"
    }
}
filter {
    grok {
        match => ["message", "%{WORD:http_host} %{URIHOST:api_domain} %{IP:inner_ip} %{IP:lvs_ip} \[%{HTTPDATE:timestamp}\] \"%{WORD:http_verb} %{URIPATH:baseurl}(?:\?%{NOTSPACE:request}|) HTTP/%{NUMBER:http_version}\" (?:-|%{NOTSPACE:request}) %{NUMBER:http_status_code} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{NUMBER:time_duration:float} (?:%{NUMBER:time_backend_response:float}|-)"]
    }
    kv {
        prefix => "request."
        field_split => "&"
        source => "request"
    }
    urldecode {
        all_fields => true
    }
    date {
        type => "log-date"
        match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
    }
}
output {
    elasticsearch {
        embedded => false
        protocol => "http"
        host => "localhost"
        port => "9200"
        index => "access-%{+YYYY.MM.dd}"
    }
}
This configuration takes the nginx_access log lines, structures them, and feeds them into Elasticsearch.
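The indexer is started the same way as the agent (by analogy with the agent start command above), pointing at this configuration file:

/usr/local/logstash/bin/logstash -f /usr/local/logstash/etc/logstash_indexer.conf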
This configuration is described below:
- The grok match parses each log line exactly, whether it is a GET or a POST request.
- kv expands the a=b&c=d pairs of the request query string into separate key/value fields, and the schema-free nature of ES guarantees that a newly added parameter takes effect immediately (a small example follows this list).
- urldecode makes sure that parameters containing Chinese characters are URL-decoded.
- date makes the document's time in ES the time the request was logged rather than the time it was inserted into ES.
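As a hypothetical illustration of the kv and urldecode steps, a request whose query string is

uid=42&city=%E5%8C%97%E4%BA%AC

would end up in Elasticsearch with the extra fields

request.uid = 42
request.city = 北京

so every query parameter becomes its own searchable field.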
Now the whole pipeline is in place: once you visit test.dev, you can see the log of that request in the Kibana console, already nicely structured and very convenient to search.
Use Kibana to view the logs
After Elasticsearch, Logstash, and Kibana are all running, you can use the Elasticsearch head plugin to confirm that the access-xx.xx.xx index exists in ES. Then open the Kibana page; the first time you enter, it asks you to choose an index pattern. Fill in access-* as the index name and Kibana will create the mapping automatically.
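If the head plugin is not installed, the same check can be done with Elasticsearch's cat API:

curl 'http://localhost:9200/_cat/indices?v'

Any index whose name starts with access- should show up in the output.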
Have fun!