Install Logstash + Kibana + Elasticsearch + Redis to Build a Centralized Log Analysis Platform


This article follows the practice described in the official Logstash documentation. The environment and required components are as follows:

  • RedHat 5.7 64-bit / CentOS 5.x
  • JDK 1.6.0_45
  • Logstash 1.3.2 (with Kibana)
  • Elasticsearch 0.90.10
  • Redis 2.8.4

The overall flow of the centralized log analysis platform is: a Logstash shipper collects log events and pushes them into Redis, which acts as a broker/queue; a Logstash indexer pulls the events out of Redis and writes them to Elasticsearch; and Kibana provides the web interface for searching and visualizing what Elasticsearch has indexed. The steps to build it are as follows.

Elasticsearch

1. Download elasticsearch.

wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.10.tar.gz

2. Decompress the package and enter the bin directory. Run the following command to start Elasticsearch in the foreground:

./elasticsearch -f
[2014-01-16 16:21:31,825][INFO ][node                     ] [Saint Elmo] version[0.90.10], pid[32269], build[0a5781f/2014-01-10T10:18:37Z]
[2014-01-16 16:21:31,826][INFO ][node                     ] [Saint Elmo] initializing ...
[2014-01-16 16:21:31,836][INFO ][plugins                  ] [Saint Elmo] loaded [], sites []
[2014-01-16 16:21:35,425][INFO ][node                     ] [Saint Elmo] initialized
[2014-01-16 16:21:35,425][INFO ][node                     ] [Saint Elmo] starting ...
[2014-01-16 16:21:35,578][INFO ][transport                ] [Saint Elmo] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[/10.0.2.15:9300]}
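Elasticsearch also answers HTTP requests on port 9200 (the 9300 port in the log above is the internal transport port), so a quick sanity check from another terminal is a plain curl, which should return a small JSON document including the version number:

curl http://127.0.0.1:9200/
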
Redis

1. For the installation method, refer to my other article on compiling and installing Redis.

2. Go to the directory containing the redis-server binary and run the following command; with the verbose log level, debug information is printed to the console:

./redis-server --loglevel verbose
[32470] 16 Jan 16:45:57.330 * The server is now ready to accept connections on port 6379
[32470] 16 Jan 16:45:57.330 - 0 clients connected (0 slaves), 283536 bytes in use
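Before wiring Logstash to it, you can confirm that Redis is reachable with a quick ping from redis-cli, which should answer PONG:

./redis-cli -h 127.0.0.1 -p 6379 ping
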
Logstash log generator (shipper)

1. Create a configuration file shipper.conf with the following content:

input {
    stdin {
        type => "example"
    }
}

output {
    stdout {
        codec => rubydebug
    }
    redis {
        host => "127.0.0.1"
        port => 6379
        data_type => "list"
        key => "logstash"
    }
}
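
The stdin input above is only for demonstration. In a real shipper you would usually read log files instead; a minimal sketch (the path and type are just placeholders to adapt) might look like this:

input {
    file {
        type => "syslog"
        path => "/var/log/messages"
    }
}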

2. Start the shipper with the following command:

java -jar logstash-1.3.2-flatjar.jar agent -f shipper.conf

The following message is displayed in the terminal window:

Using milestone 2 output plugin 'redis'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.3.2/plugin-milestones {:level=>:warn}

Press Enter in the terminal window (this sends an empty line as an event via stdin), and the following information is displayed:

{
       "message" => "",
      "@version" => "1",
    "@timestamp" => "2014-01-16T08:15:19.400Z",
          "type" => "example",
          "host" => "redhat"
}

This JSON event is also sent to Redis, and the following output appears in the Redis terminal window:

[32470] 16 Jan 17:09:23.604 - Accepted 127.0.0.1:44640
[32470] 16 Jan 17:09:27.127 - DB 0: 1 keys (0 volatile) in 4 slots HT.
[32470] 16 Jan 17:09:27.127 - 1 clients connected (0 slaves), 304752 bytes in use
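You can also inspect the queue directly: the shipper pushes each event onto the Redis list named "logstash", so its length and first element can be checked with redis-cli (as long as nothing has consumed them yet):

./redis-cli llen logstash
./redis-cli lrange logstash 0 0
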
Logstash log indexer (indexer)

1. Create a configuration file indexer.conf with the following content:

input {
  redis {
    host => "127.0.0.1"
    # these settings should match the output of the agent
    data_type => "list"
    key => "logstash"

    # We use the 'json' codec here because we expect to read
    # JSON events from Redis.
    codec => json
  }
}

output {
  # print each event to stdout as JSON
  stdout { codec => json }

  elasticsearch {
    host => "127.0.0.1"
  }
}
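
By default, the elasticsearch output writes events into one index per day, named logstash-YYYY.MM.dd. Once the indexer has processed at least one event, the indices it created can be listed with, for example:

curl 'http://127.0.0.1:9200/_aliases?pretty=1'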

2. Start the log indexer. Run the following command:

java -jar logstash-1.3.2-flatjar.jar agent -f indexer.conf

After the indexer connects to Redis, the following messages appear in the Redis terminal window:

[32470] 16 Jan 17:09:23.604 - Accepted 127.0.0.1:44640
[32470] 16 Jan 17:09:27.127 - DB 0: 1 keys (0 volatile) in 4 slots HT.
[32470] 16 Jan 17:09:27.127 - 1 clients connected (0 slaves), 304752 bytes in use

When the indexer receives events from Redis, the following information is displayed in the indexer's terminal window:

{"message":"","@version":"1","@timestamp":"2014-01-16T17:10:03.831+08:00","type":"example","host":"redhat"}
{"message":"","@version":"1","@timestamp":"2014-01-16T17:13:20.545+08:00","type":"example","host":"redhat"}
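
To double-check that the events really reached Elasticsearch, they can be queried back through the search API; the wildcard matches the daily logstash-* indices:

curl 'http://127.0.0.1:9200/logstash-*/_search?q=type:example&pretty=true'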
Logstash web interface (Kibana)

1. Start kibana. Run the following command:

java -jar logstash-1.3.2-flatjar.jar web

2. Open a browser that supports HTML5 and go to http://127.0.0.1:9292/index.html#/dashboard/file/logstash.json to view the default Logstash dashboard.

 
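Kibana's query box accepts the same Lucene query-string syntax as the search API, so the sample events generated earlier can be isolated with a query such as:

type:"example"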

References

  • Logstash documentation: Getting Started (Centralized Server Architecture)
  • Interview and book review: "Logstash makes log management simpler"

 

From: http://aofengblog.blog.163.com/blog/static/6317021201401664935685/
