Building a real-time log collection system with Elasticsearch, Logstash, and Kibana
Introduction
- In this system, Logstash is responsible for collecting and processing log file contents, which are stored in the Elasticsearch search engine. Kibana is responsible for querying Elasticsearch and presenting the results on the web.
- After the Logstash collection process harvests the log file contents, it outputs them to a Redis cache; a second Logstash process reads from Redis and writes into Elasticsearch, decoupling the fast log producers from the slower indexing side.
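Summarizing the flow described above in one line (host placement follows the sections below):

log files -> Logstash shipper -> Redis list "logstash:redis" -> Logstash indexer -> Elasticsearch <- Kibana (queried from the browser)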
- Official documentation: https://www.elastic.co/guide/index.html
First, install JDK 7
Elasticsearch and Logstash are Java programs, so you need a JDK environment.
Download jdk-7u71-linux-x64.rpm from:
http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html
rpm -ivh jdk-7u71-linux-x64.rpm
Configuring the JDK
Edit the /etc/profile file
In the file, locate the line beginning with export PATH USER LOGNAME, and add the following content:
JAVA_HOME=/usr/java/jdk1.7.0_71
JRE_HOME=/usr/java/jdk1.7.0_71/jre
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL JAVA_HOME JRE_HOME CLASSPATH
Check the JDK environment
Use the source /etc/profile command to make the environment variables take effect immediately.
To view the currently installed JDK version: java -version
Check the environment variables: echo $PATH
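If the JDK is installed correctly, java -version prints something like the following (the exact build numbers are illustrative and may differ):

java version "1.7.0_71"
Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
Java HotSpot(TM) 64-Bit Server VM (build 24.71-b01, mixed mode)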
Second, install Elasticsearch
- Elasticsearch is a search engine that is responsible for storing the log content.
- Download and install
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.4.4.tar.gz
tar zxvf elasticsearch-1.4.4.tar.gz
- Modify the config/elasticsearch.yml configuration file
bootstrap.mlockall: true
index.number_of_shards: 1
index.number_of_replicas: 0
#index.translog.flush_threshold_ops: 100000
#index.refresh_interval: -1
index.translog.flush_threshold_ops: 5000
index.refresh_interval: 1
# Security: allow all HTTP requests
http.cors.enabled: true
http.cors.allow-origin: "/.*/"
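Note: for bootstrap.mlockall to actually lock the process memory, the memlock limit usually has to be raised before starting Elasticsearch. A common step (an assumption about your environment, run as the starting user):

ulimit -l unlimited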
- Modify the bin/elasticsearch file
# Make the JVM use the OS max-open-files setting
ES_PARMS="-Delasticsearch -Des.max-open-files=true"
# Start up the service; raise the OS max open file count to 1000000
ulimit -n 1000000
... -l "$pidfile" "$daemonized" "$properties"
Run
./bin/elasticsearch -d
Logs are written to the ./logs directory.
Check node status
curl -XGET 'http://localhost:9200/_nodes?os=true&process=true&pretty=true'
{"Cluster_Name":"Elasticsearch","Nodes": {"7peazbvxtocl2o2kumgryq": {"Name":"Gertrude yorkes","Transport_address":"inet[/172.16.18.116:9300]","Host":"Casimbak","IP":"172.16.18.116","Version":"1.4.4","Build":"c88f77f","Http_address":"inet[/172.16.18.116:9200]","Settings": {"Index": {"Number_of_replicas":"0","Translog": {"Flush_threshold_ops":" the"},"Number_of_shards":"1","Refresh_interval":"1"},"Path": {"Logs":"/home/jfy/soft/elasticsearch-1.4.4/logs","Home":"/home/jfy/soft/elasticsearch-1.4.4"},"Cluster": {"Name":"Elasticsearch"},"Bootstrap": {"Mlockall":"true"},"Client": {"Type":"Node"},"http": {"Cors": {"Enabled":"true","Allow-origin":"/.*/"} },"Foreground":"Yes","Name":"Gertrude yorkes","Max-open-files":"Ture"},"Process": {"Refresh_interval_in_millis": +,"id":13896,"Max_file_descriptors":1000000,"Mlockall": true},...} }}
This indicates that Elasticsearch is running, and that its status is consistent with the configuration:
"Index": {"Number_of_replicas":"0","Translog": {"Flush_threshold_ops":" the"},"Number_of_shards":"1","Refresh_interval":"1"},"Process": {"Refresh_interval_in_millis": +,"id":13896,"Max_file_descriptors":1000000,"Mlockall":true},
Install the head plugin to monitor Elasticsearch status
elasticsearch/bin/plugin -install mobz/elasticsearch-head
http://172.16.18.116:9200/_plugin/head/
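Besides the head plugin, cluster health can also be checked from the command line with the standard Elasticsearch API:

curl -XGET 'http://172.16.18.116:9200/_cluster/health?pretty=true'

A "green" status means the node is serving requests normally.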
Third, install Logstash
Logstash is a log collection, processing, and filtering program.
Logstash is divided into a log collection (shipper) process and a log processing (indexer) process. The shipper collects multiple log files in real time and outputs their contents to a Redis queue cache; the indexer reads the contents of the Redis queue cache and outputs them to the Elasticsearch store. The shipper runs on the servers that generate the log files, and the indexer runs on the same server as Redis and Elasticsearch.
Download
wget https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz
The Redis installation and configuration are omitted here, but be careful to monitor the Redis queue length; if the queue keeps piling up for a long time, it indicates a problem on the Elasticsearch side.
Check the length of the data list in Redis every 2 seconds, 100 times:
redis-cli -r 100 -i 2 llen logstash:redis
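To see what a queued event looks like (the key name comes from the shipper configuration below), peek at the head of the list:

redis-cli lrange logstash:redis 0 0

Each entry is a JSON-encoded event produced by the shipper.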
Configuring the Logstash log collection process
vi ./lib/logstash/config/shipper.conf
input {
    #file {
    #    type => "mysql_log"
    #    path => "/usr/local/mysql/data/localhost.log"
    #    codec => plain {
    #        charset => "GBK"
    #    }
    #}
    file {
        type => "hostapd_log"
        path => "/root/hostapd/hostapd.log"
        sincedb_path => "/home/jfy/soft/logstash-1.4.2/sincedb_hostapd.access"
        #start_position => "beginning"
        #http://logstash.net/docs/1.4.2/codecs/plain
        codec => plain {
            charset => "GBK"
        }
    }
    file {
        type => "hkt_log"
        path => "/usr1/app/log/bsapp.tr"
        sincedb_path => "/home/jfy/soft/logstash-1.4.2/sincedb_hkt.access"
        start_position => "beginning"
        codec => plain {
            charset => "GBK"
        }
    }
    #stdin {
    #    type => "hostapd_log"
    #}
}

#filter {
#    grep {
#        match => ["@message", "mysql|GET|error"]
#    }
#}

output {
    redis {
        host => '172.16.18.116'
        data_type => 'list'
        key => 'logstash:redis'
        #codec => plain {
        #    charset => "UTF-8"
        #}
    }
    #elasticsearch {
    #    #embedded => true
    #    host => "172.16.18.116"
    #}
}
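Before pointing the output at Redis, it can help to check that events flow at all. A minimal debug configuration (a sketch for testing only, not part of the original setup):

input { stdin { type => "test_log" } }
output { stdout { codec => rubydebug } }

Run it with ./bin/logstash agent -f <config file>, type a line on the console, and the parsed event is printed back.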
Run the collection-side process
./bin/logstash agent -f ./lib/logstash/config/shipper.conf
Configuring the Logstash log processing process
vi ./lib/logstash/config/indexer.conf
input {
    redis {
        host => '127.0.0.1'
        data_type => 'list'
        key => 'logstash:redis'
        #threads => 10
        #batch_count => 1000
    }
}

output {
    elasticsearch {
        #embedded => true
        host => localhost
        #workers => 10
    }
}
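For reference, the elasticsearch output in Logstash 1.4 writes to daily indices named logstash-YYYY.MM.dd by default. To make that explicit (optional; stated as an assumption about the defaults, not something this setup requires), add inside the elasticsearch block:

index => "logstash-%{+YYYY.MM.dd}"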
Run the processing-side process
./bin/logstash agent -f ./lib/logstash/config/indexer.conf
The processing side reads the cached log content from Redis and outputs it to the Elasticsearch store.
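To confirm that events are reaching Elasticsearch, run a quick search against all indices (the type value matches the shipper configuration above):

curl -XGET 'http://localhost:9200/_search?q=type:hostapd_log&size=1&pretty=true'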
Fourth, install Kibana
Kibana is the web display interface for the Elasticsearch search engine. It is a set of JavaScript files running under a web server; it lets you define complex query and filter criteria to retrieve from Elasticsearch and display the results in many ways (tables, charts).
Download
wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.2.tar.gz
After decompressing, put the kibana directory somewhere the web server can serve it.
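For a quick test (hypothetical; any web server will do, and the port is chosen only to match the URL used later), the directory containing kibana/ can be served with Python's built-in server:

cd /path/to/webroot && python -m SimpleHTTPServer 6090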
Configuration
Modify kibana/config.js:
"http://192.168.91.128:9200",#这里实际上是浏览器直接访问该地址连接elasticsearch否则默认,一定不要修改
If "connection failed" appears, modify elasticsearch/config/elasticsearch.yml and add:
http.cors.enabled: true
http.cors.allow-origin: "/.*/"
For the specific meanings, see:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-http.html
Visit Kibana
http://172.16.18.114:6090/kibana/index.html#/dashboard/file/logstash.json
Configuring the Kibana Interface
In the filtering section you can configure the log type to query, such as _type=voip_log, which corresponds to the type field set in the Logstash shipper.conf above.
You can also save the current interface configuration to Elasticsearch by clicking Save in the upper-right corner; by default it is saved in the kibana-int index.
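To verify the saved dashboards from the command line (index name as noted above; the host is an assumption, substitute your Elasticsearch address):

curl -XGET 'http://localhost:9200/kibana-int/_search?pretty=true'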