Build an ELK log platform (elasticsearch-2.x, logstash-2.x, kibana-4.5.x, Kafka) as a message center on Linux



Introduction

ELK is the industry-standard solution for log collection, storage/indexing, and display/analysis.
Logstash provides flexible plugins that support a wide variety of inputs and outputs.
The mainstream approach uses Redis or Kafka as the broker between log producers and consumers.
If you already have a Kafka environment, Kafka is a better choice than Redis.
Below is one of the simplest working configurations, recorded here as a note; Elastic's official website offers very rich documentation.
Rather than relying on search engines, which turn up little of use, read the official web documentation directly.

ELK/Kafka versions in use

elasticsearch-2.x
logstash-2.3
kibana-4.5.1

Kafka 0.9.0.1

Application/network environment

Nginx Machine
10.0.0.1

Kafka Cluster
10.0.0.11
10.0.0.12
10.0.0.13

Elasticsearch Machine
10.0.0.21

Overall description

Data flow

Log/message overall flow:
Logstash => Kafka => logstash => elasticsearch => Kibana

Installation

All ELK components can be installed from the official RPM binary packages; add Elastic's official yum repository and install with yum.

For Elasticsearch, see:
https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-repositories.html

For Logstash, see:
https://www.elastic.co/guide/en/logstash/current/installing-logstash.html

For Kibana, see:
https://www.elastic.co/guide/en/kibana/current/setup.html
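As a sketch, adding the Elasticsearch 2.x yum repository and installing can look like the following (CentOS-style paths; the baseurl values are from memory of the 2.x-era packaging, so verify them against the pages linked above):

```shell
# Import Elastic's package-signing key
rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

# Elasticsearch 2.x repository (the Logstash 2.3 and Kibana 4.5
# repos are analogous; see the pages above for their baseurl values)
cat > /etc/yum.repos.d/elasticsearch.repo <<'EOF'
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=https://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF

yum install -y elasticsearch
```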

Installation Overview

Nginx Machine 10.0.0.1
Configure the Nginx log format to output JSON
Run Logstash, reading the Nginx JSON log as input and outputting to Kafka

Kafka Cluster 10.0.0.11 10.0.0.12 10.0.0.13
Kafka cluster with a topic named logstash

Elasticsearch Machine 10.0.0.21
Run Elasticsearch
Run Logstash, reading from Kafka as input and outputting to Elasticsearch

Nginx Machine

Nginx log format to JSON

Define a logstash_json log format in Nginx's http{} block to format the access log as JSON:

log_format logstash_json '{"@timestamp":"$time_local",'
    '"@fields":{'
    '"remote_addr":"$remote_addr",'
    '"remote_user":"$remote_user",'
    '"body_bytes_sent":"$body_bytes_sent",'
    '"request_time":"$request_time",'
    '"status":"$status",'
    '"request":"$request",'
    '"request_method":"$request_method",'
    '"http_referrer":"$http_referer",'
    '"http_x_forwarded_for":"$http_x_forwarded_for",'
    '"http_user_agent":"$http_user_agent"}}';

Add the logstash_json access log in server{}; it can coexist with the original log output:

access_log /data/wwwlogs/iamle.log log_format;
access_log /data/wwwlogs/nginx_json.log logstash_json;
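To sanity-check the format, a sample line shaped like the logstash_json output above (all values made up) should parse as valid JSON:

```shell
# A made-up sample line in the logstash_json shape defined above
sample='{"@timestamp":"07/May/2016:10:00:00 +0800","@fields":{"remote_addr":"10.0.0.2","status":"200","request":"GET / HTTP/1.1"}}'

# json.tool exits non-zero on invalid JSON
echo "$sample" | python3 -m json.tool >/dev/null && echo "valid JSON"
```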
Logstash log collection configuration

/etc/logstash/conf.d/nginx.conf

input {
  file {
    path => "/data/wwwlogs/nginx_json.log"
    codec => "json"
  }
}
filter {
  mutate {
    split => ["upstreamtime", ","]
  }
  mutate {
    convert => ["upstreamtime", "float"]
  }
}
output {
  kafka {
    bootstrap_servers => "10.0.0.11:9092"
    topic_id => "logstash"
    compression_type => "gzip"
  }
}
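Once Nginx is writing JSON and this Logstash instance is running, delivery into Kafka can be spot-checked with the console consumer bundled with Kafka 0.9 (the installation directory is an assumption):

```shell
# Run from the Kafka installation directory on any broker;
# tails the logstash topic from the beginning, Ctrl-C to stop
bin/kafka-console-consumer.sh --zookeeper 10.0.0.13:2181 \
    --topic logstash --from-beginning
```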
Kafka Cluster

Create a new topic

Create a new topic named logstash.

Topic
Every message published to a Kafka cluster has a category, called a topic. (Physically, messages of different topics are stored separately; logically, a topic's messages may live on one or more brokers, but producers and consumers only need to specify a message's topic, without caring where the data is stored.)
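As a sketch, the topic can be created with the kafka-topics.sh tool shipped with Kafka 0.9 (the replication factor and partition count here are illustrative choices, not from the original setup):

```shell
# Run from the Kafka installation directory on any broker
bin/kafka-topics.sh --create --zookeeper 10.0.0.13:2181 \
    --replication-factor 3 --partitions 3 --topic logstash

# Verify the topic exists
bin/kafka-topics.sh --list --zookeeper 10.0.0.13:2181
```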

Elasticsearch Machine

Logstash configuration to move data from Kafka into Elasticsearch

Use the IP of any Kafka cluster node that runs ZooKeeper for the connection.
topic_id is the topic named logstash created in Kafka above.
/etc/logstash/conf.d/logstashes.conf

input {
  kafka {
    zk_connect => "10.0.0.13:2181"
    topic_id => "logstash"
  }
}
filter {
  mutate {
    split => ["upstreamtime", ","]
  }
  mutate {
    convert => ["upstreamtime", "float"]
  }
}
output {
  elasticsearch {
    hosts => ["10.0.0.21"]
    index => "logstash-iamle-%{+YYYY.MM.dd}"
    document_type => "iamle"
    workers => 5
    template_overwrite => true
  }
}
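To confirm documents are arriving, Elasticsearch's cat and count APIs can be queried (the dated index name below is an example that follows the index pattern in the output section above):

```shell
# List all indices with document counts
curl -s 'http://10.0.0.21:9200/_cat/indices?v'

# Count documents in one day's index (the date is illustrative)
curl -s 'http://10.0.0.21:9200/logstash-iamle-2016.05.07/_count'
```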
Supplementary notes

The above is the main configuration; what remains is viewing and displaying the data in Kibana.

Kibana

Here, Kibana runs on the same machine as Elasticsearch.
With the official yum install, the Kibana configuration file is
/opt/kibana/config/kibana.yml
Two settings need to be changed: the listen address and the Elasticsearch connection.

server.host: "10.0.0.21"
elasticsearch.url: "http://10.0.0.21:9200"

After starting Kibana with /etc/init.d/kibana start, it can be accessed at http://10.0.0.21:5601.

For Kibana usage, read the official web documentation; there is little Chinese material online. For ELK more broadly, there is the ELKstack Chinese Guide written by Rao Chenlin:

https://www.gitbook.com/book/chenryn/kibana-guide-cn/details

To filter out static files in Kibana Discover:
not \/static and not \/upload\/

Elasticsearch

With the official yum install, the Elasticsearch configuration file is

/etc/elasticsearch/elasticsearch.yml

Configure the listen IP; the default is 127.0.0.1.

network.host: 10.0.0.21
path.data: /data

After installing the head plugin, the ES state can be viewed at
http://10.0.0.21:9200/_plugin/head/
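With the RPM layout, the head plugin can be installed using the plugin script bundled with Elasticsearch 2.x:

```shell
# The plugin script ships with the Elasticsearch RPM
/usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
```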

Security issues

Pay special attention to the ports that all ELK components listen on: do not expose them to the public network, and even on an intranet, configure access restrictions.
