Building a Real-Time Log Analysis Platform with ELK

Source: Internet
Author: User
Tags: kibana, logstash

Introduction
ELK consists of three open-source tools: Elasticsearch, Logstash, and Kibana. Together they cover log collection, log search, and log analysis. Manually analyzing and processing the volume of logs generated in a production environment is clearly not a workable approach, which is where ELK comes in.
https://www.elastic.co/

1). Elasticsearch is an open-source distributed search engine. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful interface, multiple data sources, and automatic search load balancing.

2). Logstash is a tool for receiving, processing, and forwarding logs.

3). Kibana is an open-source analysis and visualization platform designed for use with Elasticsearch. You can use Kibana to search, view, and interact with data stored in Elasticsearch indexes, and visualize it using various charts, tables, and maps.

First, the architecture diagram:

From "Ferry"

I believe anyone who knows big data will understand the picture above at a glance; it is similar to our webserver => Flume => Kafka pipeline. Here, Logstash collects the logs generated by apps and web servers and stores them in the ES cluster, and Kibana builds charts from the indexes defined in the ES cluster and returns them to the browser.

Elk Platform Construction

Component version selection:
CentOS 6.4+

JRE 1.7+ [I already have JDK 1.8 installed, so this step can be skipped.]

Elasticsearch: 2.1.1

Logstash: 2.1.1 [requires a Java environment to run]

Kibana: 4.3.1

Download:
Download from the official website (https://www.elastic.co/), or use the following link:
Link: http://pan.baidu.com/s/1eROKwDk password: w7uy

After the download is complete, start the installation.

Note: Elasticsearch refuses to run as the root user, so we need to create a new user first.

[root@hadoop000 ~]# useradd liuge
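To follow along, here is a minimal sketch of the rest of the user setup (the password step and the su switch are my own additions, assuming the hadoop000 host used throughout):

[root@hadoop000 ~]# passwd liuge   # set a password for the new user
[root@hadoop000 ~]# su - liuge     # switch to the new user for everything below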

I generally like to put downloaded software in a software directory under the current user's home directory, so it is easy to see where everything is stored.

[liuge@hadoop000 ~]$ cd software/
[liuge@hadoop000 software]$ ls
elasticsearch-2.1.1.zip  kibana-4.3.1-linux-x64.tar.gz  logstash-2.1.1.zip
[liuge@hadoop000 software]$

Next, we will decompress and install each of them in turn.

Elasticsearch
Decompress:

[liuge@hadoop000 software]$ unzip elasticsearch-2.1.1.zip -d ../app/

Install the head plug-in:

[liuge@hadoop000 app]$ cd elasticsearch/
[liuge@hadoop000 elasticsearch]$ ./bin/plugin install mobz/elasticsearch-head

Check whether the installation is successful:

[liuge@hadoop000 elasticsearch]$ cd plugins/
[liuge@hadoop000 plugins]$ ls
head
[liuge@hadoop000 plugins]$ # if you see head listed, the plug-in is installed

Next, let's start simple configuration.

[liuge@hadoop000 elasticsearch]$ vim config/elasticsearch.yml

# You only need to modify the following entries.
# The cluster name can be anything:
cluster.name: liuge_cluster
node.name: node0000
# The data/logs directories here need to be created by yourself:
path.data: /home/liuge/elasticsearch/data
path.logs: /home/liuge/elasticsearch/logs
# Set the host to your own host name:
network.host: hadoop000
http.port: 9200
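As the comments note, the data and logs directories do not exist yet; a quick sketch of creating them before the first start (paths taken from the config above):

[liuge@hadoop000 elasticsearch]$ mkdir -p /home/liuge/elasticsearch/data /home/liuge/elasticsearch/logs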

After the above configuration is set, you can start it:

[liuge@hadoop000 elasticsearch]$ ./bin/elasticsearch
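This starts Elasticsearch in the foreground and ties up the terminal. If you prefer to run it in the background, ES 2.x also accepts a daemon flag (a sketch; check ./bin/elasticsearch --help on your version):

[liuge@hadoop000 elasticsearch]$ ./bin/elasticsearch -d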

Check whether the service started properly with jps:

[liuge@hadoop000 ~]$ jps
9954 Elasticsearch
10086 Jps
[liuge@hadoop000 ~]$ # as you can see, the Elasticsearch process is running

At the same time, the service can also be accessed over the web at host:9200.
Here I am using http://hadoop000:9200

The node name, cluster_name, and installed version are displayed in the response.

The head plug-in we just installed lets you interact with the ES cluster from a browser: you can view the cluster status and document contents, run searches, and issue common REST requests. Access address: http://hadoop000:9200/_plugin/head
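Besides the head plug-in, the cluster also answers plain REST requests; for example, a quick health check with curl (using the hadoop000:9200 address from above):

[liuge@hadoop000 ~]$ curl 'http://hadoop000:9200/_cluster/health?pretty'
# "status" : "green" (or "yellow" on a single node with replicas) means the cluster is up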

Logstash
Logstash is a tool for receiving, processing, and forwarding logs.

Next, start the installation. Logstash can in fact be extracted and used directly, which keeps things simple.

[liuge@hadoop000 software]$ unzip logstash-2.1.1.zip

After decompression, you can write our configuration file. I keep my configuration file in the config directory under the extraction directory; you can create it yourself:

[liuge@hadoop000 logstash]$ vim config/test1.conf

Then let's write the configuration. I will paste it directly here; take a look and walk through it.
Note: our goal is to read the log file from the nginx server and ship it to ES. It is as simple as this:

input {
  file {
    # the location of your own nginx log file
    path => "/home/hadoop/data/project/logs/access.log"
    start_position => "beginning"
  }
}

filter {
}

output {
  elasticsearch {
    # our ES address
    hosts => ["hadoop000:9200"]
    # set an Elasticsearch index name, which Kibana will use later
    index => "python-logstash-elasticsearch-kibana"
  }
  stdout { codec => rubydebug }
}

I adapted this from an example on the official website; it is basically the simplest one possible. PS: the official website is a good learning resource, and I hope you will make good use of it.

Analysis: as the example above shows, writing a configuration file basically means writing three sections: input, filter, and output.
This is also like the three components of a Flume agent (source, channel, sink).
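Before pointing it at the real log file, you can smoke-test Logstash with a throwaway stdin/stdout pipeline, and check test1.conf for syntax errors with the --configtest flag (a sketch based on the Logstash 2.x command line):

# type a line and Logstash should echo it back as a structured event
[liuge@hadoop000 logstash]$ ./bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'

# validate the configuration file without starting the pipeline
[liuge@hadoop000 logstash]$ ./bin/logstash agent -f config/test1.conf --configtest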

Next, we will start our configuration file and start log collection.

[liuge@hadoop000 logstash]$ ./bin/logstash agent -f config/test1.conf
Settings: Default filter workers: 1
Logstash startup completed
# seeing the output above means the startup succeeded
Kibana

Kibana is an open-source analysis and visualization platform. It is a log visualization tool.

Similarly, kibana can be directly decompressed for installation.

[liuge@hadoop000 software]$ tar -zxvf kibana-4.3.1-linux-x64.tar.gz -C ../app/

Go to the decompressed directory and modify the default configuration file.

[liuge@hadoop000 kibana]$ cd config/
[liuge@hadoop000 config]$ ls
kibana.yml
[liuge@hadoop000 config]$ vim kibana.yml

# modify the following entries
server.port: 5601
server.host: "hadoop000"
elasticsearch.url: "http://hadoop000:9200"
kibana.index: ".kibana"

# :wq to save and exit

Start it:

[liuge@hadoop000 ~]$ cd app/kibana/
[liuge@hadoop000 kibana]$ ./bin/kibana
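Kibana runs in the foreground by default. If you want it to keep running after you log out, a minimal sketch using nohup (the log file name is my own choice):

[liuge@hadoop000 kibana]$ nohup ./bin/kibana > kibana.log 2>&1 &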

Use a browser to access the web interface. Address: http://hadoop000:5601

With that, we have successfully installed the ELK platform on our own machine.

Next, start real-time log statistics:

Task requirement: count the number of 404 responses in the log, and update the result on the page in real time.
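Note: the filter block in our test1.conf is empty, so each nginx log line is indexed as one raw message field, and the simplest Kibana filter condition is a full-text query such as message:404. If you would rather have the status code as its own field, here is a hedged sketch of a grok filter you could add to the filter block, assuming your access log uses the standard combined format:

filter {
  grok {
    # parse the combined access-log format;
    # the status code ends up in the "response" field
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

With this in place, the filter condition in step 3 below can simply be response:404.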

Start

1. Create an index pattern on the Settings page of Kibana

2. Go to the Discover page

3. Enter the filter condition here and save it.


Give the saved search a name:

4. Go to the Visualize page


In Visualize, we can create various types of charts.
Here we select a pie chart.

5. Design our chart

6. Save as a dashboard

7. Go to the dashboard

8. Set the refresh interval for the chart data

After setting the refresh interval, we can watch the data change dynamically. It is quite fun; if you are interested, give it a try.

Since I am also a beginner, I can only do something this simple for now. But this example at least exercises the entire pipeline, and I will keep learning from here.

