LogStash Log Analysis and Display System

Source: Internet
Author: User
Tags: apache, log, kibana, logstash, install, redis

Introduction

Log management is usually neglected until logs matter most, that is, when problems arise. Log management generally goes through three phases:

  1. Administrators first examine logs with traditional tools (such as cat, tail, sed, awk, perl, and grep), but this approach only scales to a small number of hosts and log file types;
  2. As scalability requirements grow, log management evolves toward centralized collection with tools such as rsyslog and syslog-ng;
  3. As log volume keeps increasing, it becomes difficult to extract the required information from the rapidly growing log stream and correlate it with other related events; at this stage, LogStash provides a good solution.

Advantages of LogStash:

  • Better syntax analysis of log data;
  • More flexible log storage options;
  • Additional search and indexing functions;
  • Easy installation, good scalability, and good performance.

Design and architecture
LogStash has a simple, message-based architecture. It is written in JRuby and runs on the Java Virtual Machine (JVM). Rather than shipping separate agent and server programs, LogStash distributes a single jar that can be configured, together with other open-source software, to play different roles.
In the LogStash ecosystem, there are four main components:
Shipper: sends events to LogStash. A remote agent generally only needs to run this component;

Broker and Indexer: receives and indexes events;

Search and Storage: allows you to search and store events;

Web Interface: a web-based display interface.

The above components can be deployed independently in the LogStash architecture to provide better cluster scalability.

In most cases, LogStash hosts can be divided into two categories:

Agent host: acts as the event sender (shipper), sending various log data to the central host; it only needs to run the LogStash agent program;

Central host: runs the Broker, Indexer, Search and Storage, and Web Interface components to receive, process, and store log data.

Deployment
Basic Environment
yum install java-1.7.0-openjdk
java -version # make sure the Java version is 1.7
Deploy LogStash
# Download
wget https://download.elasticsearch.org/logstash/logstash/logstash-1.3.1-flatjar.jar -O logstash.jar
# Start
java -jar logstash.jar agent -v -f shipper.conf # start a shipper
java -jar logstash.jar agent -v -f indexer.conf # start an indexer
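The shipper.conf and indexer.conf files referenced above are not shown in the original article. A minimal sketch of the pair, assuming Redis (deployed below) acts as the broker on the central host at 192.168.12.24 and that the default "logstash" list key is used, might look like this:

```
# shipper.conf -- on each agent host (hypothetical example)
input {
  file {
    type => "syslog"
    path => ["/var/log/messages", "/var/log/secure"]
  }
}
output {
  # push events to the Redis broker on the central host
  redis {
    host => "192.168.12.24"
    data_type => "list"
    key => "logstash"
  }
}

# indexer.conf -- on the central host (hypothetical example)
input {
  # pull events off the local Redis broker
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash"
  }
}
output {
  elasticsearch {
    cluster => "logstash"
  }
}
```

Splitting the pipeline this way keeps agents lightweight: they only tail files and hand events to the broker, while all parsing and indexing load stays on the central host.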

Deploy Redis
# Install
yum install redis-server
# Start
/etc/init.d/redis-server start
# Test
$ redis-cli -h 192.168.12.24
redis 192.168.12.24:6379> PING
PONG
Deploy Elasticsearch
# Download
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.8.noarch.rpm
# Install
rpm -ivh elasticsearch-0.90.8.noarch.rpm
# Start
/etc/init.d/elasticsearch start
Start Kibana
# Start (LogStash 1.3.1 ships with Kibana, so no separate installation is needed)
java -jar logstash.jar web
# View
http://192.168.12.24:9292

Logstash configuration file and plug-in
input {
  stdin {}
  file {
    type => "syslog"
    path => ["/var/log/secure", "/var/log/messages"]
    exclude => ["*.gz", "shipper.log"]
  }
  zeromq {
    address => ["tcp://192.168.8.145:8889"]
    mode => "client"
    type => "zmq-input"
    topic => "weblog"
    topology => "pubsub"
    codec => "json"
  }
}
filter {
  mutate {
    gsub => [
      "message", "APPS weblog", "",
      "message", "{", "",
      "message", "}", ""
    ]
  }
}
output {
  stdout { debug => true debug_format => "json" }

  elasticsearch {
    cluster => "logstash"
    codec => "json"
  }
}

Log category and Processing Method
Apache logs: customize the Apache output log format to emit JSON, so no filter is needed;

Postfix logs: the log format cannot be customized, so filters such as grok must be used;

Tomcat logs: multiple lines need to be merged into a single event, and blank lines excluded.
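For the Postfix and Tomcat cases above, the filter section might be sketched as follows. This is an illustrative example for the LogStash 1.3-era plugin syntax used elsewhere in this article; the grok pattern is a generic syslog pattern, not a full Postfix parser:

```
filter {
  # Postfix: parse the syslog-formatted line with grok
  # (%{SYSLOGBASE} covers timestamp, host, and program; refine as needed)
  grok {
    type => "postfix"
    pattern => "%{SYSLOGBASE} %{GREEDYDATA:msg}"
  }
  # Tomcat: fold continuation lines (lines starting with whitespace,
  # e.g. Java stack-trace frames) into the previous event
  multiline {
    type => "tomcat"
    pattern => "^\s"
    what => "previous"
  }
}
```

Blank lines can then be dropped with a grep/drop filter so they never reach the indexer.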

Cluster Expansion
Extended Architecture

Notes
Redis: deployed on multiple servers, it only provides high availability and does not share the load. It can be replaced by ZeroMQ.
ElasticSearch:
# Check node status:
curl -XGET 'http://127.0.0.1:9200/_cluster/health?pretty=true'
Green status: all shards (primary and replica) are allocated and running properly.
Yellow status: only the primary shards are allocated, for example while the cluster is replicating data between nodes.
Red status: some shards are unallocated.
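When scripting health checks, it is handy to pull out just the status field from the response. The sketch below uses a canned sample response in HEALTH_JSON so it runs standalone; in practice the variable would be filled from the live curl call shown above:

```shell
#!/bin/sh
# Extract the "status" field from a cluster-health JSON response.
# HEALTH_JSON is a canned sample; in a real check it would come from:
#   curl -s 'http://127.0.0.1:9200/_cluster/health'
HEALTH_JSON='{"cluster_name":"logstash","status":"green","number_of_nodes":2}'
STATUS=$(echo "$HEALTH_JSON" | sed 's/.*"status":"\([a-z]*\)".*/\1/')
echo "cluster status: $STATUS"
```

A cron job can then alert whenever $STATUS is not "green".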
# Cluster Monitoring:
Paramedic tool:
Install: /usr/share/elasticsearch/bin/plugin -install karmi/elasticsearch-paramedic
View: http://log.bkjia.net:9200/_plugin/paramedic/index.html
Bigdesk tool:
Install: /usr/share/elasticsearch/bin/plugin -install lukas-vlcek/bigdesk
View: http://log.bkjia.net:9200/_plugin/bigdesk/index.html

# Data retention policy:
1. By default, LogStash creates one index per day. Old indexes can be deleted manually:
curl -XDELETE http://127.0.0.1:9200/logstash-2013.12.19
Shell cleanup script: https://github.com/cnf/logstash-tools/blob/master/elasticsearch/clean-elasticsearch.sh
2. Optimize indexes:
curl -XPOST 'http://127.0.0.1:9200/logstash-2013.12.19/_optimize'
curl -XPOST 'http://127.0.0.1:9200/_optimize' # optimize all indexes
curl 'http://127.0.0.1:9200/logstash-2013.12.19/_stats?clear=true&store=true&pretty=true' # check the index size; an index that is too large will lengthen the optimization time
3. Default index data directory: /var/lib/elasticsearch/logstash
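The manual deletion above can be scripted into a simple retention policy. A minimal dry-run sketch, assuming GNU date and the default logstash-YYYY.MM.DD index naming (DAYS_KEPT and ES_URL are illustrative values):

```shell
#!/bin/sh
# Print the DELETE command for the daily index that just fell outside the
# retention window (dry run -- pipe the output to sh to actually execute).
# Assumes GNU date and the default logstash-%Y.%m.%d index naming.
DAYS_KEPT=30
ES_URL="http://127.0.0.1:9200"
OLD_INDEX="logstash-$(date -d "${DAYS_KEPT} days ago" +%Y.%m.%d)"
echo "curl -XDELETE ${ES_URL}/${OLD_INDEX}"
```

Run daily from cron, this deletes exactly one expired index per day; the linked clean-elasticsearch.sh script handles backlogs of older indexes as well.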

References
LogStash Official Website: http://www.logstash.net/
Official Elasticsearch Website: http://www.elasticsearch.org/
Kibana query syntax: http://lucene.apache.org/core/3_6_1/queryparsersyntax.html
