The ELK stack consists of three main parts: Elasticsearch, Logstash, and Kibana.
After the ELK system receives a pushed log, Logstash first parses the fields in the log into individual keywords. Elasticsearch associates the keywords with the log information and stores the data on disk in a specific format. Kibana provides an interactive interface for the user: it reads information from Elasticsearch and displays it on a web page according to the user's needs.
This article uses Red Hat as an example to walk through the steps of building a very simple ELK system:
Logstash reads information from a local log file
Elasticsearch stores the information
Kibana displays the full information
All work is done locally, i.e. all server and client addresses are 127.0.0.1
I. Installing the tools

1. Installing Elasticsearch
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.tar.gz
tar -xvzf elasticsearch-1.7.1.tar.gz
cp -a elasticsearch-1.7.1 /usr/local
cd /usr/local
ln -s elasticsearch-1.7.1 elasticsearch
2. Installing Logstash
wget https://download.elastic.co/logstash/logstash/logstash-1.5.4.tar.gz
tar -xvzf logstash-1.5.4.tar.gz
cp -a logstash-1.5.4 /usr/local
cd /usr/local
ln -s logstash-1.5.4 logstash
3. Installing Kibana
wget https://download.elastic.co/kibana/kibana/kibana-4.1.2-linux-x64.tar.gz
tar -xvzf kibana-4.1.2-linux-x64.tar.gz
cp -a kibana-4.1.2-linux-x64 /usr/local
cd /usr/local
ln -s kibana-4.1.2-linux-x64 kibana
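After installing all three, a quick sanity check (a sketch, using the symlink paths created in the steps above) confirms that everything is in place:

```shell
# Check that each symlink created above exists; prints one "ok" or
# "missing" line per expected path.
for d in /usr/local/elasticsearch /usr/local/logstash /usr/local/kibana; do
  if [ -e "$d" ]; then
    echo "$d ok"
  else
    echo "$d missing"
  fi
done
```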
II. Configuring Logstash
cd /usr/local/logstash
mkdir etc
touch etc/central.conf
central.conf is Logstash's configuration file; the file name can be chosen freely. Its contents are as follows:
input {
  file {
    path => "/tmp/*.log"
    start_position => beginning
  }
}
output {
  stdout {}
  elasticsearch {
    cluster => "elasticsearch"
    codec => "json"
    protocol => "http"
  }
}
To start the Logstash program:
/usr/local/logstash/bin/logstash agent --verbose --config /usr/local/logstash/etc/central.conf
With this configuration, Logstash reads logs from /tmp/*.log and sends them to both Elasticsearch and standard output. Elasticsearch is not set up yet, but the standard-output window can still be observed: if log content appears there, the Logstash setup succeeded.
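To check this quickly, append a test line to a file matched by /tmp/*.log while Logstash runs in another terminal (the file name below is just an example):

```shell
# Append a timestamped test line to a file covered by the /tmp/*.log pattern.
echo "elk test message $(date '+%Y-%m-%d %H:%M:%S')" >> /tmp/test.log

# The same line should appear in Logstash's standard output shortly after.
tail -n 1 /tmp/test.log
```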
III. Configuring Elasticsearch
Since Elasticsearch and Logstash are installed on the same machine, Elasticsearch can run with its default configuration. Start it with:
/usr/local/elasticsearch/bin/elasticsearch -d   # -d starts Elasticsearch in daemon mode
Open 127.0.0.1:9200; if you see the following content, the Elasticsearch setup succeeded:
{
  "status" : 200,
  "name" : "Blaquesmith",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.7.1",
    "build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
    "build_timestamp" : "2015-07-29T09:54:16Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
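The same check can be scripted instead of opening a browser. A small sketch, assuming Elasticsearch listens on the default port 9200 and that curl is available:

```shell
# Probe the Elasticsearch root endpoint; print a one-line verdict.
if curl -s --max-time 2 http://127.0.0.1:9200 >/dev/null 2>&1; then
  echo "elasticsearch is up"
else
  echo "elasticsearch is not reachable"
fi
```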
IV. Configuring Kibana
Kibana likewise needs no configuration and can be started directly:
/usr/local/kibana/bin/kibana
Open 127.0.0.1:5601 to see the Kibana page. Choose the default configuration and press Enter; when the information from /tmp/*.log is displayed in Kibana, the Kibana setup succeeded.
V. Problems encountered
1. 127.0.0.1:9200 or 127.0.0.1:5601 does not open in the browser, even though the Kibana and Elasticsearch servers have started.
Workaround: turn off the proxy.
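If the proxy in question is a system-wide proxy environment variable (an assumption; the original only says to turn the proxy off), traffic to 127.0.0.1 may be routed through a proxy that cannot reach it. One sketch of a fix for the current shell session:

```shell
# Clear proxy environment variables so local addresses are contacted directly.
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY no_proxy NO_PROXY

# Confirm nothing proxy-related remains in the environment.
env | grep -i proxy || echo "no proxy configured"

# Then retry, e.g.: curl --noproxy '*' http://127.0.0.1:9200
```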
2. /tmp/*.log exists, but no data appears in Kibana, and nothing shows in Logstash's stdout.
Workaround: Logstash only reads logs modified within a recent window; updating the log file's modification time resolves this.
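The workaround above can be applied by appending a line or touching the file so its modification time becomes current (the file name /tmp/test.log is just an example):

```shell
# Appending a line updates the file's mtime, making Logstash pick it up again.
echo "refresh $(date '+%Y-%m-%d %H:%M:%S')" >> /tmp/test.log

# touch alone also updates the modification time without changing content.
touch /tmp/test.log

# Confirm the timestamp is now current.
stat -c '%y %n' /tmp/test.log
```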
VI. Reference articles
http://my.oschina.net/lenglingx/blog/504883?fromerr=a2z8OWmY
Building an ELK system locally