Our production environment (Ubuntu servers) finally has a working Logstash log collection system deployed.

Tags: logstash, log4j, saltstack

After a week spent with the Logstash documentation, I finally set up a Logstash environment on our Ubuntu production servers. Here I share my experience.

About Logstash

Logstash is quite popular right now. Sitting under the big tree that is Elasticsearch, it gets a lot of attention and the project is actively developed. Logstash is a system for log collection and analysis, and its architecture is designed to be flexible enough to meet needs at any scale.

The logical architecture of Logstash

The logical architecture of Logstash is not complex at all: a simple three-step pipeline of input, filter, and output collects, filters, and manages logs, and with a design sized to your business this three-step process can meet needs at any scale.

The Logstash configuration file is also very simple

input { }
filter { }
output { }

Once these three basic configuration sections are filled in, you can filter out exactly the logs that meet your requirements.
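As a sketch of what filling in those sections looks like (assuming a local Logstash install; the stdin/stdout plugins and the rubydebug codec here are just for experimenting, not part of my setup):

```conf
# Minimal do-nothing pipeline: read lines from stdin,
# pass them through unfiltered, and pretty-print each
# event to stdout for inspection.
input {
  stdin { }
}
filter {
  # empty: events pass through unchanged
}
output {
  stdout { codec => rubydebug }
}
```

Typing a line into the running process shows the event Logstash builds from it, which is a handy way to see what a filter will receive.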

Some references

To tell the truth, the official Logstash documentation is too simplistic. After my first read I was completely lost and had no idea how to write a config; only after reading the officially recommended "The Logstash Book" did I slowly understand what Logstash is all about, so I highly recommend the book. I could not find a free copy of the newest edition, so I read the 1.3.4 edition. That version is somewhat old and differs from today's Logstash in places (it is no longer packaged as a single flat jar, but launched as Ruby scripts via a bash wrapper), but the main functionality has not changed much, and some of its explanations are still useful for learning alongside the official documentation.

Beyond that, you can Google other people's configurations for reference.

The deployment model I used

My scale is actually very small: I just collect the Java application logs (log4j output) from a few servers and push them to a central server for viewing, with alerts when necessary.

So the setup I use is this: on the production servers (input = file, filter = grok, output = tcp), read the log files as input, extract fields from each entry with a grok regular expression, and send the result directly over TCP to the central log server for collection and display.

On the log server (input = tcp, output = elasticsearch and email): to avoid putting too much load on the central server, the regex filtering is done on the production servers, and the central server just collects the already-filtered logs, indexes them, and displays them. The architecture does not use Redis at all, because my scale is small and the log volume is low; there is no need for Redis as a log buffer, and indexing the logs so they can be queried is enough.

My configuration

Once you have settled on the architecture you're using, writing the configuration is easy.

On the production servers, the configuration is as follows:

input {
    file {
        type => "my_app"
        path => "/var/log/tomcat7/my_app.log"
        tags => [ "my_app", "log4j" ]
        codec => multiline {
            pattern => "^%{TIMESTAMP_ISO8601}"
            negate => true
            what => "previous"
        }
    }
}
filter {
    if [type] == "my_app" {
        grok {
            match => { "message" =>
                "%{TIMESTAMP_ISO8601:date} \[(?<thread_name>.+?)\] (?<log_level>\w+)\s*(?<content>.*)"
            }
        }
        if [log_level] == "ERROR" and "Invalid password for user" not in [content] {
            throttle {
                after_count => 2
                key => "%{content}"
                add_tag => "throttled"
            }
        }
    }
}
output {
    tcp {
        codec => json_lines
        host => "center.com"
        mode => "client"
        port => "tcp_port"
    }
}

This is meant to filter logs like these:

          date                 thread_name            log_level                      content
           |                         |                   |                              |
2015-01-27 10:37:32,131 [ajp-apr-127.0.0.1-8009-exec-2] INFO  mobile.ShortMessageService  - {success=true}
2015-01-27 10:41:18,447 [ajp-apr-127.0.0.1-8009-exec-1] ERROR security.UserService  - grails.validation.ValidationException: Validation Error(s) occurred during save():
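The grok pattern is essentially a regular expression, so it can be sanity-checked outside Logstash. A quick Python sketch of the equivalent plain regex against the sample ERROR line (note the `%{TIMESTAMP_ISO8601}` grok pattern is approximated here by an explicit date-time expression, which is a simplification):

```python
import re

# Plain-regex equivalent of the grok pattern above; the date part
# is a simplified stand-in for grok's %{TIMESTAMP_ISO8601}.
LOG_RE = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) "
    r"\[(?P<thread_name>.+?)\] (?P<log_level>\w+)\s*(?P<content>.*)"
)

line = ("2015-01-27 10:41:18,447 [ajp-apr-127.0.0.1-8009-exec-1] "
        "ERROR security.UserService  - grails.validation.ValidationException: "
        "Validation Error(s) occurred during save():")

m = LOG_RE.match(line)
fields = m.groupdict()
print(fields["log_level"])    # ERROR
print(fields["thread_name"])  # ajp-apr-127.0.0.1-8009-exec-1
```

If the regex fails to match here, the grok filter will tag the event with `_grokparsefailure`, so a check like this saves a deploy round-trip.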

The log server is configured as follows:

input {
    tcp {
        codec => json_lines
        host => "0.0.0.0"
        mode => "server"
        port => "tcp_port"
    }
}
filter {
    if ![tags] or ![type] {
        drop { }
    }
}
output {
    elasticsearch_http {
        host => "localhost"
    }
    if "throttled" not in [tags] and [type] == "my_app" and [log_level] == "ERROR" and "Invalid password for user" not in [content] {
        email {
            body => "%{message}"
            from => "[email protected]"
            contenttype => "text/plain; charset=utf-8"
            options => [ "smtpIporHost", "smtp.email.com",
                         "userName", "login_name",
                         "password", "password",
                         "authenticationType", "login" ]
            subject => "Server %{host} %{type} log found exception: %{content}"
            to => "[email protected]"
        }
    }
}
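The json_lines codec used on both ends of the TCP link frames each event as one JSON document per newline-terminated line. A minimal Python sketch of how such a stream decodes, useful when debugging the shipper with a netcat-style listener (the two sample events and their field values are hypothetical, with names matching the grok captures above):

```python
import json

# Hypothetical raw data as it would arrive over the TCP connection:
# one JSON-encoded event per line (json_lines framing).
stream = (
    '{"log_level": "INFO", "content": "{success=true}", "type": "my_app"}\n'
    '{"log_level": "ERROR", "content": "Validation Error(s) occurred", "type": "my_app"}\n'
)

# Split on newlines and decode each non-empty line independently.
events = [json.loads(line) for line in stream.splitlines() if line]

errors = [e for e in events if e["log_level"] == "ERROR"]
print(len(events), len(errors))  # 2 1
```

Seeing the events in this raw form also makes it obvious why the server-side filter can simply drop anything missing `tags` or `type`: those fields travel inside each JSON line.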

With that, a preliminary but usable Logstash setup is in place, and it fits my current scale. For displaying the logs on a web page there is Kibana, another Elasticsearch project; it is very simple to use and configure, so I won't elaborate on it here.

Batch Deployment

Once the configuration has been tested, all that remains is batch deployment. I use SaltStack (http://docs.saltstack.com/en/latest/); adding a state for Logstash is also very simple:

cat /srv/salt/logstash.sls

#logstash_repo:
#  pkgrepo.managed:
#    - name: deb http://packages.elasticsearch.org/logstash/1.4/debian stable main
#    - file: /etc/apt/sources.list.d/logstash.list
#    - key_url: http://packages.elasticsearch.org/GPG-KEY-elasticsearch
#
#logstash:
#  pkg.latest:
#    - refresh: True

logstash:
  pkg.installed:
    - sources:
      - logstash: http://myserver/downloads/logstash_1.4.2-1-2c0f5a1_all.deb

logstash-config:
  file.recurse:
    - name: /etc/logstash/conf.d
    - source: salt://servers/logstash/conf.d/
    - makedirs: True

logstash-service:
  service.running:
    - name: logstash
    - enable: True
    - watch:
      - pkg: logstash
      - file: logstash-config

The commented-out section uses the official Logstash apt repository; refer to it if you like, but downloads from it were too slow on my side, which is bad for batch deployment. I was forced to host the deb package on my own server instead, using the sources argument of pkg.installed to download and install the deb from there.

Finally, run salt '*' state.sls logstash and the batch deployment is complete; the logs start coming in.

