ELK Log Collection and Analysis System Configuration

Source: Internet
Author: User
Tags: kibana, logstash

ELK is a powerful toolchain for log collection and analysis.

1. Elasticsearch Cluster Construction

(Omitted here.)

2. Logstash Log Collection

I implement this in the following two stages, with a Redis queue as a buffer in the middle, which effectively avoids putting too much pressure on ES:

1. n agents for the logs of n services (one agent per service) parse data from the log files and push it into the broker; here the broker is a Redis message queue in subscribe mode. Of course, you could also choose Kafka; Redis is just more convenient.

2. An indexer aggregates the logs, pulling data from the Redis queue and writing it into ES.
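The agent/broker/indexer hand-off above can be sketched with a small in-memory stand-in for Redis. Agents publish events to per-service channels ("logstash:<server>") and the indexer subscribes with a glob pattern, mirroring Redis's PUBLISH / PSUBSCRIBE semantics. The class, channel names, and messages below are illustrative, not part of Logstash or Redis:

```python
# Toy in-memory broker mimicking Redis's pattern-subscribe semantics.
from fnmatch import fnmatch


class PatternBroker:
    def __init__(self):
        self.subscribers = []  # list of (pattern, queue) pairs

    def psubscribe(self, pattern):
        # Register a glob pattern and return the queue messages land in.
        queue = []
        self.subscribers.append((pattern, queue))
        return queue

    def publish(self, channel, message):
        # Deliver to every subscriber whose pattern matches the channel.
        for pattern, queue in self.subscribers:
            if fnmatch(channel, pattern):
                queue.append((channel, message))


broker = PatternBroker()
indexer_inbox = broker.psubscribe("logstash:*")           # indexer side
broker.publish("logstash:driver_schedule", "log line A")  # agent side
broker.publish("other:channel", "not for the indexer")
print(indexer_inbox)  # [('logstash:driver_schedule', 'log line A')]
```

In the real setup, the queue decouples the two sides: if ES slows down, events accumulate in Redis instead of stalling the agents.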

A sample configuration for the agent and the indexer is given below:

1. driver_schedule.conf

input {
  file {
    # log paths
    path => [
      "/home/xiaoju/driver-schedule-api/logs/driver-schedule-api.info.*",
      "/home/xiaoju/driver-schedule-api/logs/driver-schedule-api.error.*"
    ]
    # excluded paths; glob expansion is supported, but not recursively
    exclude => ["access.*"]
    # start reading from the beginning of the log
    start_position => "beginning"
    # the sincedb file records the read position
    sincedb_path => "/home/xiaoju/yangfan/local/logstash-1.4.2/sincedb/driver_schedule_progress"
    # add a field to every record
    add_field => {
      "server" => "driver_schedule"
    }
    # codec: merge multi-line entries by regex pattern
    codec => multiline {
      pattern => "^\d+:\d+"
      negate => true
      what => "previous"
    }
  }
}

filter {
  # the path contains "info"
  if [path] =~ "info" {
    # mutate replaces a field's value
    mutate {
      replace => { "type" => "info" }
    }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  } else if [path] =~ "error" {
    mutate {
      replace => { "type" => "error" }
    }
  } else {
    mutate {
      replace => { "type" => "unknown" }
    }
  }
  date {
    match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
  }
}

output {
  # for debugging, pretty-print events
  # stdout { codec => rubydebug }
  redis {
    host => "10.94.99.55"
    # Redis subscribe mode is used here; the indexer's data_type should be pattern_channel
    data_type => "channel"
    port => 6379
    db => 5
    key => "logstash:%{[server]}"
  }
}
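The multiline codec above merges continuation lines (for example, stack traces) into the preceding event: with negate => true and what => "previous", any line that does NOT match ^\d+:\d+ is appended to the previous line. A rough Python sketch of that logic, with invented sample log lines:

```python
# Approximation of Logstash's multiline codec with
# pattern => "^\d+:\d+", negate => true, what => "previous".
import re

PATTERN = re.compile(r"^\d+:\d+")


def merge_multiline(lines):
    events = []
    for line in lines:
        # negate=true: a line matching the pattern starts a new event;
        # a non-matching line is folded into the previous event.
        if PATTERN.match(line) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events


lines = [
    "12:01 service started",
    "12:02 error occurred",
    "  stack frame 1",  # continuation: no leading "\d+:\d+"
    "  stack frame 2",
    "12:03 recovered",
]
print(merge_multiline(lines))
# ['12:01 service started',
#  '12:02 error occurred\n  stack frame 1\n  stack frame 2',
#  '12:03 recovered']
```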

Start it up:

nohup ./bin/logstash -f ./conf/agent/driver_schedule.conf &

2. indexer.conf

input {
  redis {
    host => "10.94.99.55"
    port => 6379
    db => 5
    # pattern_channel uses the Redis pattern-subscribe mode; the agent's data_type should be channel
    data_type => "pattern_channel"
    # this pattern matches all of the agents' keys
    key => "logstash:*"
  }
}

output {
  elasticsearch {
    embedded => false
    protocol => "http"
    host => "10.94.99.56"
    port => 9211
    # ES index name
    index => "%{[server]}"
    # ES index type
    index_type => "%{[type]}"
  }
  # for debugging, pretty-print events
  # stdout { codec => rubydebug }
}
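The "%{[server]}" and "%{[type]}" strings above are Logstash field references, expanded per event, so each service's logs land in their own index. A toy re-implementation of that substitution (flat fields only; real Logstash also supports nested fields):

```python
# Toy version of Logstash's %{[field]} interpolation for flat events.
import re


def interpolate(template, event):
    # Replace each %{[name]} with the event's value for that field
    # (empty string if the field is missing).
    return re.sub(
        r"%\{\[(\w+)\]\}",
        lambda m: str(event.get(m.group(1), "")),
        template,
    )


event = {"server": "driver_schedule", "type": "info"}
print(interpolate("%{[server]}", event))           # driver_schedule
print(interpolate("logstash:%{[server]}", event))  # logstash:driver_schedule
```

This is the same mechanism the agent uses to build its Redis key, "logstash:%{[server]}", which is why the indexer's "logstash:*" pattern picks up every agent's channel.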

Start it up:

nohup ./bin/logstash -f ./conf/indexer/indexer.conf &

3. Kibana Configuration

There are plenty of tutorials online, so here I only note the fixes for a few problems I ran into:

1. Connection failure

Checklist:

1. Configure Kibana's ES address in config.js

2. If the ES version is > 1.4, you also need to add the following to the ES configuration:

http.cors.allow-origin: "/.*/"

http.cors.enabled: true

Precautions:

1. ES and Logstash should ideally be on the same major version; otherwise writes may fail.

2. Logstash writes a sincedb file that records how far it has read in each file.

