Using ELK+Redis to Build an Nginx Log Analysis Platform

Source: Internet
Author: User
Tags: kibana, logstash

How do you analyze Nginx logs with Logstash, Elasticsearch, and Kibana? First, the architecture: Nginx writes a log file that records the status of every request. Second, we need a queue, and a Redis list works naturally as one. Elasticsearch then takes care of analysis and querying.
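To see why a Redis list works as a queue, here is a minimal sketch with redis-cli (the key name logstash:redis matches the configuration used later; the JSON payloads are just placeholders):

    redis-cli lpush logstash:redis '{"message":"log line 1"}'
    redis-cli lpush logstash:redis '{"message":"log line 2"}'
    redis-cli rpop logstash:redis    # returns "log line 1" first: FIFO

Producers push onto one end and consumers pop from the other, so the two sides never have to run at the same speed.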

What we need is a distributed log collection and analysis system. Logstash plays two roles: agent and indexer. An agent sits on each web machine and continuously reads the Nginx log file; whenever it sees new log lines, it sends them to a Redis queue over the network. Several Logstash indexers pull these unprocessed logs from the queue and parse them, and the parsed events are stored in Elasticsearch for search and analysis. Kibana then provides the unified web interface for browsing the logs.

  • Redis is installed and listening on port 6379
  • Elasticsearch is installed and listening on port 9200
  • Kibana is installed and its web interface is running
  • Logstash is installed in /usr/local/logstash
  • Nginx logging is enabled; the log file is /usr/share/nginx/logs/test.access.log
  • Set the Nginx log format

    Define a log format named logstash in nginx.conf:

    log_format logstash '$http_host $server_addr $remote_addr [$time_local] "$request" '
                        '$request_body $status $body_bytes_sent "$http_referer" "$http_user_agent" '
                        '$request_time $upstream_response_time';

    Then point the access log at it in vhost/test.conf:

    access_log /usr/share/nginx/logs/test.access.log logstash;
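    For reference, a log line produced by this format might look like the following (all values are hypothetical):

    test.dev 192.168.1.10 10.0.0.5 [07/Feb/2015:12:00:00 +0800] "GET /api/test?a=1&b=2 HTTP/1.1" - 200 612 "-" "Mozilla/5.0" 0.015 0.012

    This is the line the indexer's grok pattern will have to parse later.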
    Start the Logstash agent

    Note: you can also skip the Logstash agent and ship the logs to Redis directly with rsyslog.

    Create the Logstash agent configuration file:

    /usr/local/logstash/etc/logstash_agent.conf

    The code is as follows:

    input {
      file {
        type => "nginx_access"
        path => ["/usr/share/nginx/logs/test.access.log"]
      }
    }
    output {
      redis {
        host => "localhost"        # the Redis instance from the prerequisites
        data_type => "list"
        key => "logstash:redis"    # the queue the indexers will read from
      }
    }

    Start the Logstash agent:

    /usr/local/logstash/bin/logstash -f /usr/local/logstash/etc/logstash_agent.conf

    At this point the agent ships every new line in test.access.log to Redis, much like a tail -f.
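    To confirm that events are actually queuing up, you can peek at the list with redis-cli (run on the Redis host):

    redis-cli llen logstash:redis        # number of queued events
    redis-cli lrange logstash:redis 0 0  # inspect one entry

    The length grows as requests hit Nginx and shrinks as the indexers drain the queue.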

    Start the Logstash indexer

    Create the Logstash indexer configuration file:

    /usr/local/logstash/etc/logstash_indexer.conf

    The code is as follows:

    input {
      redis {
        host => "localhost"
        data_type => "list"
        key => "logstash:redis"    # same queue the agents write to
        type => "redis-input"
      }
    }
    filter {
      grok {
        # Parses the nginx log format defined above into named fields
        match => ["message", "%{WORD:http_host} %{URIHOST:api_domain} %{IP:inner_ip} %{IP:lvs_ip} \[%{HTTPDATE:timestamp}\] \"%{WORD:http_verb} %{URIPATH:baseurl}(?:\?%{NOTSPACE:request}|) HTTP/%{NUMBER:http_version}\" (?:-|%{NOTSPACE:request}) %{NUMBER:http_status_code} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{NUMBER:time_duration:float} (?:%{NUMBER:time_backend_response:float}|-)"]
      }
      kv {
        prefix => "request."
        field_split => "&"
        source => "request"
      }
      urldecode {
        all_fields => true
      }
      date {
        type => "log-date"
        match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
      }
    }
    output {
      elasticsearch {
        embedded => false
        protocol => "http"
        host => "localhost"
        port => "9200"
        index => "access-%{+YYYY.MM.dd}"    # one index per day
      }
    }

    This configuration structures the nginx_access events and feeds them into Elasticsearch.
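    The indexer is started the same way as the agent (mirroring the earlier command and assuming the same installation layout):

    /usr/local/logstash/bin/logstash -f /usr/local/logstash/etc/logstash_indexer.conf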

    The filters are described below:

      • grok's match parses the full log line, whether the request is a GET or a POST (both the query string and the POST body are captured into the request field).
      • kv expands the a=b&c=d pairs in request into individual keys and values; because ES is schema-free, a newly added parameter takes effect immediately (see the sample event after this list).
      • urldecode makes sure that parameters containing Chinese characters are URL-decoded.
      • date makes the document's timestamp in ES the time recorded in the log entry, rather than the time it was inserted into ES.
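    To make the filter chain concrete, a request like GET /api/test?a=1&b=2 would end up in ES roughly as the following document (a hypothetical sketch; the exact fields depend on the grok pattern):

    {
      "http_host": "test.dev",
      "http_verb": "GET",
      "baseurl": "/api/test",
      "request": "a=1&b=2",
      "request.a": "1",
      "request.b": "2",
      "http_status_code": "200",
      "time_duration": 0.015,
      "@timestamp": "2015-02-07T04:00:00.000Z"
    }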

    With that, the pipeline is complete. Once you visit test.dev, you can see the access log for that request in the Kibana console, and because the data is fully structured, searching it is very convenient.

    View the logs with Kibana

    After starting ES, Logstash, and Kibana, you can use the ES head plugin to confirm that the access-xx.xx.xx index exists in ES. Then open the Kibana page; on first entry it asks you to choose a mapping. Fill in access-* as the index name and Kibana will create the mapping automatically.
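    You can also verify from the command line that documents are arriving, without the head plugin (assuming ES listens on localhost:9200 as set up above):

    curl 'http://localhost:9200/access-*/_count?pretty'

    A non-zero count means the indexer is writing into the daily access-* indices.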

    Have fun!
