Logstash + Elasticsearch + Kibana Log Collection


I. Environment Preparation

Role              Server IP
----------------  -----------
Logstash Agent    10.1.11.31
Logstash Agent    10.1.11.35
Logstash Agent    10.1.11.36
Logstash Central  10.1.11.13
Elasticsearch     10.1.11.13
Redis             10.1.11.13
Kibana            10.1.11.13

The architecture diagram is as follows:

[Architecture diagram: elk.png]

The entire process is as follows:

1) The Logstash agent on each remote node collects the local logs and pushes them to a Redis list queue on the central server.

2) Redis acts as middleware for log collection: it stages the log data from the remote nodes, buffers bursts, and improves concurrency.

3) The central Logstash reads data from Redis and from local log files and ships it to Elasticsearch for storage and indexing.

4) Kibana reads data from Elasticsearch and presents it to the user through a web GUI.
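Steps 2 and 3 can be watched directly on the Redis host. As a quick sanity check (not part of the original write-up; it assumes the list key tomcat_api and port 6377 that are configured later in this article), you can inspect the queue with redis-cli:

# Queue depth: grows while agents push faster than the central Logstash consumes
redis-cli -p 6377 llen tomcat_api

# Peek at the newest entry without consuming it; each entry is one JSON-encoded event
redis-cli -p 6377 lrange tomcat_api -1 -1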


II. Installation

Installing ELK is simple: just download the binary packages and unpack them. The required packages are:

elasticsearch-1.7.1.tar.gz

kibana-4.1.1-linux-x64.tar.gz

logstash-1.5.3.tar.gz
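At the time these versions were current, the packages were served from Elastic's download site. The exact URLs below are an assumption based on the historical download layout, so adjust them if the files have moved:

wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.tar.gz
wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
wget https://download.elastic.co/logstash/logstash/logstash-1.5.3.tar.gz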

1) Start Redis (10.1.11.13)

Download the Redis source from the official site, compile and install it, then apply the following configuration and start it:

# Tune kernel parameters:
echo 1 > /proc/sys/vm/overcommit_memory
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo 524288 > /proc/sys/net/core/somaxconn

# The Redis configuration file looks like this:
# cat /etc/redis-logstash.conf
daemonize yes
pidfile /data/redis-logstash/run/redis.pid
port 6377
tcp-backlog 511
timeout 0
tcp-keepalive 0
loglevel notice
logfile "/data/redis-logstash/log/redis.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data/redis-logstash/db
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
maxmemory 32212254720
maxmemory-policy allkeys-lru

# Start Redis:
/usr/local/bin/redis-server /etc/redis-logstash.conf
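Note that the three echo commands above do not survive a reboot. A minimal sketch for making the two sysctl values permanent (the transparent_hugepage toggle is not a sysctl, so it still needs its echo line, for example in /etc/rc.local):

cat >> /etc/sysctl.conf <<'EOF'
vm.overcommit_memory = 1
net.core.somaxconn = 524288
EOF
# Apply the settings without rebooting
sysctl -p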

2) Install Logstash agent (10.1.11.31/35/36)

Unpack logstash-1.5.3.tar.gz into /usr/local:

cd /usr/local
ln -s logstash-1.5.3 logstash

# Create the /etc/logstash directory, which holds the agent-side rule files
mkdir /etc/logstash

# Configure the agent-side rules for collecting the Tomcat log:
vim /etc/logstash/tomcat_api.conf

# Log input source
input {
    file {
        type => "tomcat_api"                 # name of this log category
        path => "/data/logs/bd_api/api"      # path of the log file
        start_position => "beginning"        # collect from the beginning of the file
    }
}

# Filter rules
filter {
    if [type] == "tomcat_api" {
        # multiline merges several physical lines into one event: a Java
        # exception spans many lines but should be treated as a single record
        multiline {
            patterns_dir => "/usr/local/logstash/patterns"   # directory of pattern files holding the regular expressions used to match log fields
            pattern => "^%{TIMESTAMP_ISO8601}"               # the matching pattern
            negate => true         # true: every line that does NOT match pattern is merged; default is false
            what => "previous"     # merged lines are appended to the preceding matching line
        }

        # grok parses the log line into fields
        grok {
            patterns_dir => "/usr/local/logstash/patterns"
            match => { "message" => "%{LOG4JLOG}" }
            # LOG4JLOG is defined in /usr/local/logstash/patterns as:
            # LOG4JLOG %{TIMESTAMP_ISO8601:datetime}\s+(?<thread>\S+)\s+(?<line>\d+)\s+(?<level>\S+)\s+(?<class>\S+)\s+-\s+(?<msg>.*)
        }

        # mutate can replace the content of a field
        mutate {
            replace => [ "host", "10.1.11.31" ]
        }
    }
}

# Log output
output {
    # enable while debugging the rules
    #stdout { codec => "rubydebug" }

    # push the log data onto a list in the remote Redis
    redis {
        host => "10.1.11.13"
        port => 6377
        data_type => "list"
        key => "tomcat_api"
    }
}
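The grok filter refers to a LOG4JLOG pattern in /usr/local/logstash/patterns, so that file must exist before the agent starts. A sketch of creating it from the pattern quoted in the comment above (the file name log4j is arbitrary, since Logstash reads every file in patterns_dir), followed by a syntax check with the --configtest flag of Logstash 1.5:

# Create the patterns directory and the pattern file (name "log4j" is our choice)
mkdir -p /usr/local/logstash/patterns
cat > /usr/local/logstash/patterns/log4j <<'EOF'
LOG4JLOG %{TIMESTAMP_ISO8601:datetime}\s+(?<thread>\S+)\s+(?<line>\d+)\s+(?<level>\S+)\s+(?<class>\S+)\s+-\s+(?<msg>.*)
EOF

# Validate the rule file before starting the agent
/usr/local/logstash/bin/logstash agent -f /etc/logstash/tomcat_api.conf --configtest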

3) Install Central Logstash (10.1.11.13)

Unpack logstash-1.5.3.tar.gz into /usr/local:

cd /usr/local
ln -s logstash-1.5.3 logstash

# Create the /etc/logstash directory, which holds the rule files for the
# central instance and for the local agent
mkdir /etc/logstash

# Two rule files are created here:
# /etc/logstash/
# ├── central.conf        # rules for the central Logstash
# └── tomcat_uat.conf     # rules for the local agent

vim central.conf

input {
    ## product
    # fetch logs of category tomcat_api from Redis
    redis {
        host => "127.0.0.1"
        port => 6377
        type => "redis-input"
        data_type => "list"
        key => "tomcat_api"
    }
    # fetch logs of category tomcat_editor from Redis
    redis {
        host => "127.0.0.1"
        port => 6377
        type => "redis-input"
        data_type => "list"
        key => "tomcat_editor"
    }
}

output {
    #stdout { codec => "rubydebug" }

    # send the logs to Elasticsearch for indexing
    elasticsearch {
        flush_size => 50000
        idle_flush_time => 10
        cluster => "logstash-1113"
        host => ["127.0.0.1:9300"]
        workers => 2
    }
}

#-----------------------------------------------------------------

vim tomcat_uat.conf

input {
    file {
        type => "tomcat_api_ab"
        path => "/data/logs/bd_api/errors/api_error"
        start_position => "beginning"
    }
    file {
        path => "/data/logs/bd_admin/admin"
        type => "tomcat_9083"
        start_position => "beginning"
    }
}

filter {
    if [type] in ["tomcat_api_ab", "tomcat_9083"] {
        multiline {
            patterns_dir => "/usr/local/logstash/patterns"
            pattern => "^%{TIMESTAMP_ISO8601}"
            negate => true
            what => "previous"
        }

        grok {
            patterns_dir => "/usr/local/logstash/patterns"
            match => { "message" => "%{LOG4JLOG}" }
        }

        mutate {
            replace => [ "host", "10.1.11.13" ]
        }
    }
}

output {
    #stdout { codec => "rubydebug" }

    elasticsearch {
        flush_size => 50000
        idle_flush_time => 10
        cluster => "logstash-1113"
        host => ["127.0.0.1:9300"]
        workers => 2
    }
}
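The same syntax check shown for the agent applies to both rule files on the central node before they are started:

/usr/local/logstash/bin/logstash agent -f /etc/logstash/central.conf --configtest
/usr/local/logstash/bin/logstash agent -f /etc/logstash/tomcat_uat.conf --configtest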

4) Install Elasticsearch

# Unpack elasticsearch-1.7.1.tar.gz into /usr/local
tar xf elasticsearch-1.7.1.tar.gz -C /usr/local
cd /usr/local
ln -s elasticsearch-1.7.1 elasticsearch

# The non-default settings in elasticsearch.yml:
# egrep -v '^#|^$' config/elasticsearch.yml

# cluster name
cluster.name: logstash-1113
# data index directory
path.data: /data/logstash/els/data
# working/temp directory
path.work: /data/logstash/els/work
# log directory
path.logs: /data/logstash/els/logs
# fixes the "unable to connect to Elasticsearch" error when opening Kibana (seen with Kibana 3)
http.cors.enabled: true

# Adjust the JVM heap size:
vim /usr/local/elasticsearch/bin/elasticsearch.in.sh

if [ "x$ES_MIN_MEM" = "x" ]; then
    ES_MIN_MEM=4g
fi
if [ "x$ES_MAX_MEM" = "x" ]; then
    ES_MAX_MEM=16g
fi
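Once Elasticsearch has been started (section III below), a quick way to confirm the node is up and that the cluster name matches the one in central.conf, assuming the default HTTP port 9200:

curl 'http://127.0.0.1:9200/?pretty'                 # node and cluster info
curl 'http://127.0.0.1:9200/_cluster/health?pretty'  # cluster health: green/yellow/red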

5) Install Kibana

# Unpack kibana-4.1.1-linux-x64.tar.gz into /usr/local
tar xf kibana-4.1.1-linux-x64.tar.gz -C /usr/local
cd /usr/local
ln -s kibana-4.1.1-linux-x64 kibana
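Kibana 4 reads its settings from config/kibana.yml. In this setup the defaults should already be correct, since Elasticsearch runs on the same host; the two settings worth checking are shown below (setting names as of Kibana 4.1, stated here as an assumption):

grep -E '^(port|elasticsearch_url)' /usr/local/kibana/config/kibana.yml
# Expected (default) values:
# port: 5601
# elasticsearch_url: "http://localhost:9200"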


III. Starting ELK

Start the central node:

### starting Elasticsearch, Logstash, and Kibana ###
/usr/local/elasticsearch/bin/elasticsearch -d || /bin/true
nohup /usr/local/logstash/bin/logstash agent -f /etc/logstash/central.conf \
    -l /data/logstash/log/logstash-central.log \
    &> /data/logstash/log/logstash-central.out || /bin/true &
sleep 3
nohup /usr/local/logstash/bin/logstash agent -f /etc/logstash/tomcat_uat.conf \
    -l /data/logstash/log/logstash-uat.log \
    &> /data/logstash/log/logstash-uat.out || /bin/true &
sleep 1
nohup /usr/local/kibana/bin/kibana &> /dev/null || /bin/true &
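After the central node has been running for a few minutes, it is worth confirming that events actually reach Elasticsearch. By default, Logstash 1.5 writes to one logstash-YYYY.MM.DD index per day; a sketch of checking this from the central host:

# List all indices; a logstash-YYYY.MM.DD index should appear and grow
curl 'http://127.0.0.1:9200/_cat/indices?v'

# Fetch one recent tomcat_api event to verify the grok fields were parsed
curl 'http://127.0.0.1:9200/logstash-*/_search?q=type:tomcat_api&size=1&pretty'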

Start the agent nodes:

### starting Logstash api-agent ###
/usr/bin/nohup /usr/local/logstash/bin/logstash agent -f /etc/logstash/tomcat_api.conf \
    -l /data/logstash/log/logstash-api.log &> /dev/null || /bin/true &
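On an agent node you can then confirm the process is alive and that the central Redis list is being drained (the second command assumes redis-cli is installed on the agent):

ps -ef | grep '[l]ogstash'                        # the agent process should be running
redis-cli -h 10.1.11.13 -p 6377 llen tomcat_api   # should stay near 0 when the central node keeps up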

Copy the commands above into /etc/rc.local so that everything starts automatically at boot.
