Logstash + Kibana log system deployment configuration

Source: Internet
Author: User
Tags: kibana, logstash


Logstash is a tool for receiving, processing, and forwarding logs. It supports system logs, web server logs, error logs, and application logs; in short, any type of log that can be written out.

Typical use case (ELK):

Elasticsearch serves as the back-end data store and Kibana handles the front-end report presentation, while Logstash acts as the porter in between. Together they form a powerful pipeline for data storage, report querying, and log parsing. Logstash provides a wide variety of input, filter, codec, and output plugins, so users can easily build powerful processing chains.
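As a minimal sketch of this three-stage structure (an illustration only, separate from the deployment below), a pipeline that reads from stdin, applies no filter, and prints each event would look like this:

input  { stdin { } }
filter { }
output { stdout { codec => rubydebug } }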

The best material for learning Logstash is the official documentation. Three useful addresses:

1. ELK help documentation
https://www.elastic.co/guide/en/logstash/5.1/plugins-outputs-stdout.html

2. Logstash pattern matching help document
http://grokdebug.herokuapp.com/patterns#

3. Grok online regular expression matcher
http://grokdebug.herokuapp.com/
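As a quick illustration of how grok extracts fields (this example pattern and log line are the classic ones from the Logstash documentation, not from this deployment):

# Sample log line:
55.3.244.1 GET /index.html 15824 0.043
# Grok pattern:
%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}
# Extracted fields: client=55.3.244.1, method=GET, request=/index.html, bytes=15824, duration=0.043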

Now, on to the main topic ~~

Logstash deployment configuration

1. Basic environment support (Java)
yum -y install java-1.8*
java -version    # note: Java 8 takes a single dash; --version only exists on Java 9+
2. Download and decompress logstash
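If the tarball is not already on the machine, it can be fetched from Elastic's artifact repository (URL assumed from the standard 5.x download layout):

wget https://artifacts.elastic.co/downloads/logstash/logstash-5.0.1.tar.gz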
tar -zxvf logstash-5.0.1.tar.gz
cd logstash-5.0.1
mkdir conf    # create a conf folder to store configuration files
cd conf
3. Configuration File

Create a test file (using the Elasticsearch cluster built earlier):

[root@logstash1 conf]# cat test.conf
input {
    stdin {
    }
}
output {
    elasticsearch {
        hosts => ["172.16.81.133:9200","172.16.81.134:9200"]
        index => "test-%{+YYYY.MM.dd}"
    }
    stdout {
        codec => rubydebug
    }
}

# Check the configuration file syntax
[root@logstash1 conf]# /opt/logstash-5.0.1/bin/logstash -f /opt/logstash-5.0.1/conf/test.conf --config.test_and_exit
Sending Logstash's logs to /opt/logstash-5.0.1/logs which is now configured via log4j2.properties
Configuration OK
[2017-12-26T11:42:12,816][INFO ][logstash.runner] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
# Run Logstash
/opt/logstash-5.0.1/bin/logstash -f /opt/logstash-5.0.1/conf/test.conf
Manually enter a line of input:
2017.12.26 admin 172.16.81.82 200
Result:
{
    "@timestamp" => 2017-12-26T03:45:48.926Z,
      "@version" => "1",
          "host" => "0.0.0.0",
       "message" => "2017.12.26 admin 172.16.81.82 200",
          "tags" => []
}
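To confirm the event actually reached the cluster, list the indices on one of the ES nodes; a test-YYYY.MM.dd index should appear (a quick sanity check using the cluster addresses from above):

curl -XGET '172.16.81.133:9200/_cat/indices?v'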

Next, configure Logstash to work with the Kafka cluster.

The client-side Logstash pushes logs to Kafka:

# Configure the logstash configuration file
[root@www conf]# cat nginx_kafka.conf
input {
    file {
        type => "access.log"
        path => "/var/log/nginx/imlogin.log"
        start_position => "beginning"
    }
}
output {
    kafka {
        bootstrap_servers => "172.16.81.131:9092,172.16.81.132:9092"
        topic_id => "summer"
    }
}
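Note that the summer topic must already exist on the Kafka cluster (or topic auto-creation must be enabled). A sketch of creating it manually with the stock Kafka tooling, assuming ZooKeeper runs at 172.16.81.131:2181 (an address not given in the original; partition and replica counts are illustrative):

# Create the topic the client pipeline writes to
bin/kafka-topics.sh --create --zookeeper 172.16.81.131:2181 --replication-factor 2 --partitions 3 --topic summer
# Verify it exists
bin/kafka-topics.sh --list --zookeeper 172.16.81.131:2181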

Configure logstash on the server to filter and split logs

[root@logstash1 conf]# cat kafka.conf
input {
    kafka {
        bootstrap_servers => "172.16.81.131:9092,172.16.81.132:9092"
        group_id => "logstash"
        topics => ["summer"]
        consumer_threads => 50
        decorate_events => true
    }
}
filter {
    grok {
        match => {
            "message" => "%{NOTSPACE:accessip} \- \- \[%{HTTPDATE:time}\] %{NOTSPACE:auth} %{NOTSPACE:uri_stem} %{NOTSPACE:agent} %{WORD:status} %{NUMBER:bytes} %{NOTSPACE:request_url} %{NOTSPACE:browser} %{NOTSPACE:system} %{NOTSPACE:system_type} %{NOTSPACE:tag} %{NOTSPACE:system}"
        }
    }
    date {
        # parse the HTTPDATE captured above into @timestamp
        match => [ "time", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
}
output {
    elasticsearch {
        hosts => ["172.16.81.133:9200","172.16.81.134:9200"]
        index => "logstash-%{+YYYY.MM.dd}"
    }
}
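As with test.conf, the syntax can be checked before starting the service:

/opt/logstash-5.0.1/bin/logstash -f /opt/logstash-5.0.1/conf/kafka.conf --config.test_and_exit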

Then check on the ES cluster that the data is being consumed and indexed:

[root@es1 ~]# curl -XGET '172.16.81.134:9200/_cat/indices?v&pretty'
health status index                  uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   logstash-2017.12.29.03 waZfJChvSY2vcREQgyW7zA   5   1    1080175            0    622.2mb        311.1mb
green  open   logstash-2017.12.29.06 Zm5Jcb3DSK2Ws3D2rYdp2g   5   1        183            0    744.2kb        372.1kb
green  open   logstash-2017.12.29.04 NFimjo_sSnekHVoISp2DQg   5   1       1530            0      2.7mb          1.3mb
green  open   .kibana                YN93vVWQTESA-cZycYHI6g   1   1          2            0     22.9kb         11.4kb
green  open   logstash-2017.12.29.05 kPQAlVkGQL-izw8tt2FRaQ   5   1       1289            0        2mb            1mb
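To see how many documents a particular index holds, the _count API also works (index name taken from the listing above):

curl -XGET '172.16.81.134:9200/logstash-2017.12.29.04/_count?pretty'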

Use this together with the Elasticsearch cluster's head plugin to watch the logs come in!


4. Install and deploy kibana

Download rpm package

kibana-5.0.1-x86_64.rpm

Install kibana Software

rpm -ivh kibana-5.0.1-x86_64.rpm

Configuration File

[root@es1 opt]# cat /etc/kibana/kibana.yml | grep -v "^#" | grep -v "^$"
server.port: 5601
server.host: "172.16.81.133"
elasticsearch.url: "http://172.16.81.133:9200"

Start kibana

systemctl start kibana
systemctl enable kibana
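To confirm Kibana came up, check that it is listening on port 5601 (a quick sanity check; assumes the ss and curl utilities are installed):

ss -lntp | grep 5601
curl -I http://172.16.81.133:5601/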

Open it in a browser:

http://172.16.81.133:5601/


The data is displayed normally!

If you spot any problems, please point them out! The many IP addresses used above belong to clusters built in earlier articles, which you can find on this site.

Elasticsearch Tutorials:

Full record of installation and deployment of ElasticSearch on Linux

Install and configure Elasticsearch 1.7.0 in Linux

Install the Elasticsearch 5.4 analysis engine in Ubuntu 16.04

Elasticsearch 1.7 to 2.3 upgrade practice summary

Elasticsearch cluster configuration in Ubuntu 14.04

Elasticsearch-5.0.0 porting to Ubuntu 16.04

ElasticSearch 5.2.2 Cluster Environment Construction

Install the search engine Elasticsearch in Linux

How to install ElasticSearch on CentOS

Install the head plug-in in Elasticsearch 5.3

