Elasticsearch + Logstash + Kibana + Redis Log Analysis System




I. Introduction



1. Composition



ELK consists of three components: Elasticsearch, Logstash, and Kibana.



Elasticsearch is an open-source distributed search engine. Its features include: distributed architecture, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful interface, multiple data sources, and automatic search load balancing.



Logstash is a fully open-source tool that collects, parses, and stores your logs for later use.



Kibana is a free, open-source tool that provides a friendly web interface for log analytics on top of Logstash and Elasticsearch, helping you summarize, analyze, and search important log data.



2. Components



Logstash: the Logstash server side, which collects the logs;



Elasticsearch: stores all kinds of logs;



Kibana: the web interface used to search and visualize the logs;



Logstash Forwarder: the Logstash client, which sends logs to the Logstash server via the Lumberjack network protocol;



3. Work Flow



Deploy Logstash on every service whose logs need to be collected. There it runs as a Logstash agent (Logstash shipper) that monitors, filters, and collects the logs and sends the filtered content to Redis. The Logstash server then collects the logs from Redis and indexes them into the full-text search service Elasticsearch, and Kibana runs custom searches against Elasticsearch to present the results as web pages.
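In short, the data flows through the stack like this (a simplified sketch of the pipeline described above):

Logstash agent / shipper (Host B) -> Redis broker (Host A) -> Logstash server / indexer (Host A) -> Elasticsearch (Host A) -> Kibana (Host A)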






4. Service distribution


Host A 192.168.0.100 Elasticsearch + Logstash-server + Kibana + Redis
Host B 192.168.0.101 Logstash-agent





II. Deploying the Services






On Host B (192.168.0.101)



1. Deploying the Java Environment



# Download the package, unzip it, and set the environment variables


wget http://download.oracle.com/otn-pub/java/jdk/8u111-b14/jdk-8u111-linux-x64.tar.gz
tar -xf jdk-8u111-linux-x64.tar.gz -C /usr/local
mv /usr/local/jdk1.8.0_111 /usr/local/java    # the tarball extracts to jdk1.8.0_111
echo "export PATH=\$PATH:/usr/local/java/bin" > /etc/profile.d/java.sh
. /etc/profile
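To confirm that the JDK is picked up from the new PATH, an optional quick check:

# java -version    # should report version 1.8.0_111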


2. Deploying Logstash-agent


wget https://download.elastic.co/logstash/logstash/logstash-2.2.0.tar.gz
tar -xf logstash-2.2.0.tar.gz -C /usr/local
echo "export PATH=\$PATH:/usr/local/logstash-2.2.0/bin" > /etc/profile.d/logstash.sh
. /etc/profile
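It may likewise help to confirm that the Logstash binary is on the PATH before continuing (an optional check; assumes the --version flag of this Logstash release):

# logstash --version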


3. Common Logstash parameters


-e: specify the Logstash configuration on the command line; useful for quick tests;
-f: specify a Logstash configuration file; used in production environments;


4. Start Logstash



4.1 Specify the Logstash configuration with the -e parameter for a quick test; the output goes directly to the screen.


# logstash -e "input { stdin { } } output { stdout { } }"
Logstash startup completed                                    # starting up takes about 10 seconds
my name is zhengyansheng.                                     # typed manually, then press Enter
2015-10-08T13:55:50.660Z 0.0.0.0 my name is zhengyansheng.    # the input is returned intact


4.2 Specify the Logstash configuration with the -e parameter for a quick test, with output to the screen in JSON format (see the sketch below).
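The original does not show the command for this variant; a minimal sketch following the same pattern as 4.1, using the standard json codec on the stdout output (the codec choice is an assumption, not taken from the original):

# logstash -e "input { stdin { } } output { stdout { codec => json } }"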



5. Store the Logstash output in the Redis database






Save the output of Logstash to the Redis database as follows.



This assumes that a Redis database is running on 192.168.0.100; if it is not installed yet, the Redis installation is covered in the Host A section below.


# cat logstash_agent.conf
input { stdin { } }
output {
    stdout { codec => rubydebug }
    redis {
        host => "192.168.0.100"
        port => "6379"
        password => "12345678"
        data_type => "list"
        key => "logstash:redis"
    }
}
 
If "Failed to send event to Redis" is displayed, the connection to Redis failed or Redis is not installed; please check ...
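One quick way to verify whether events are actually reaching Redis (an optional check using the host, password, and list key from the configuration above; run it from Host A once Redis is installed) is to ask Redis for the length of the list:

# redis-cli -h 192.168.0.100 -p 6379 -a 12345678 llen logstash:redis    # a non-zero integer means events are queuing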


6. View Logstash Process


# logstash agent -f logstash_agent.conf --verbose
#  ps -ef|grep logstash





On Host A (192.168.0.100)






1. Deploying the Java Environment (same as above)



2. Deploying Redis


wget http://download.redis.io/releases/redis-2.8.19.tar.gz
yum install tcl -y
tar zxf redis-2.8.19.tar.gz
cd redis-2.8.19
make MALLOC=libc
make test       # this step takes a while ...
make install

cd utils/
./install_server.sh     # accept the default for every option when prompted
Welcome to the redis service installer
This script will help you easily set up a running redis server
 
Please select the redis port for this instance: [6379]
Selecting default: 6379
Please select the redis config file name [/etc/redis/6379.conf]
Selected default - /etc/redis/6379.conf
Please select the redis log file name [/var/log/redis_6379.log]
Selected default - /var/log/redis_6379.log
Please select the data directory for this instance [/var/lib/redis/6379]
Selected default - /var/lib/redis/6379
Please select the redis executable path [/usr/local/bin/redis-server]
Selected config:
Port: 6379
Config file: /etc/redis/6379.conf
Log file: /var/log/redis_6379.log
Data dir: /var/lib/redis/6379
Executable: /usr/local/bin/redis-server
Cli Executable: /usr/local/bin/redis-cli
Is this ok? Then press ENTER to go on or Ctrl-C to abort.
Copied /tmp/6379.conf => /etc/init.d/redis_6379
Installing service ...
Successfully added to chkconfig!
Successfully added to runlevels 345!
Starting Redis server ...
Installation successful!


Check the ports Redis is listening on





# netstat -tnlp |grep redis
tcp        0      0 0.0.0.0:6379                0.0.0.0:*                   LISTEN      3843/redis-server * 
tcp        0      0 127.0.0.1:21365             0.0.0.0:*                   LISTEN      2290/src/redis-serv 
tcp        0      0 :::6379                     :::*                        LISTEN      3843/redis-server *





Test if Redis works





# cd redis-2.8.19/src/
# ./redis-cli -h 192.168.0.100 -p 6379    # connect to Redis
192.168.0.100:6379> ping
PONG
192.168.0.100:6379> set name zhengyansheng
OK
192.168.0.100:6379> get name
"zhengyansheng"
192.168.0.100:6379> quit

Start Redis:
/usr/local/redis/bin/redis-server /usr/local/redis/conf/redis.conf





Start Logstash writing into Redis (start logstash-agent on Host B)





# cat logstash_agent.conf
input { stdin { } }
output {
    stdout { codec => rubydebug }
    redis {
        host => "192.168.0.100"
        data_type => "list"
        key => "logstash:redis"
    }
}
# logstash agent -f logstash_agent.conf --verbose
Pipeline started {:level=>:info}
Logstash startup completed
dajihao linux
{
       "message" => "dajihao linux",
      "@version" => "1",
    "@timestamp" => "2015-10-08T14:42:07.550Z",
          "host" => "0.0.0.0"
}





Install Elasticsearch on Host A



1. Install Elasticsearch





# wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-2.2.0.tar.gz
# tar zxf elasticsearch-2.2.0.tar.gz -C /usr/local/





2. Modify the Elasticsearch configuration file elasticsearch.yml and make the following modifications.





# vim /usr/local/elasticsearch-2.2.0/config/elasticsearch.yml
discovery.zen.ping.multicast.enabled: false    # Disable multicast; if another machine on the LAN is already using port 9300, the service will not start.
network.host: 192.168.0.100    # Specifying the host address is optional, but it is best to set it; otherwise Kibana may report an HTTP connection error, because Elasticsearch appears to listen on :::9200 instead of 0.0.0.0:9200.
http.port: 9200


3. Start Elasticsearch Service


nohup /usr/local/elasticsearch-2.2.0/bin/elasticsearch > /usr/local/elasticsearch-2.2.0/nohup.out &





If Elasticsearch fails to start this way (for example, it may refuse to run as root), create a regular user es and start it as that user:





groupadd elk
useradd es -g elk
chown -R es:elk /usr/local/elasticsearch-2.2.0
su - es
nohup /usr/local/elasticsearch-2.2.0/bin/elasticsearch > /usr/local/elasticsearch-2.2.0/nohup.out &





4. Check Elasticsearch's listening ports


# netstat -tnlp |grep java
tcp        0      0 :::9200                     :::*                        LISTEN      7407/java           
tcp        0      0 :::9300                     :::*                        LISTEN      7407/java
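As an extra sanity check (not part of the original steps), Elasticsearch's REST interface can be queried directly; it should answer with a small JSON document containing the cluster name and version:

# curl http://192.168.0.100:9200/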





Install Logstash-server on Host A (same as the Logstash installation above); note that the configuration file is different.



# cat logstash_server.conf
input {
    redis {
        host => "192.168.0.100"
        port => "6379"
        password => "12345678"
        data_type => "list"
        key => "logstash:redis"
        type => "redis-input"
    }
}
output {
    elasticsearch {
        hosts => "192.168.0.100"
        index => "logstash-%{+YYYY.MM.dd}"
    }
}

Start logstash-server (it reads data from Redis and forwards it to Elasticsearch):
logstash agent -f logstash_server.conf --verbose
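To confirm that logstash-server is really writing to Elasticsearch (an optional check, not in the original walkthrough), list the indices and look for one named logstash-YYYY.MM.dd:

# curl 'http://192.168.0.100:9200/_cat/indices?v'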





Installing the Elasticsearch Plugin






# The elasticsearch-kopf plugin can browse the data in Elasticsearch. To install it, simply run the following command in the directory where Elasticsearch is installed:


# cd /usr/local/elasticsearch-2.2.0/bin/
# ./plugin install lmenezes/elasticsearch-kopf
-> Installing lmenezes/elasticsearch-kopf...
Trying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip ...
Downloading .........................................................
Installed lmenezes/elasticsearch-kopf into /usr/local/elasticsearch-2.2.0/plugins/kopf

If the plugin installation fails, it is most likely a network problem:
-> Installing lmenezes/elasticsearch-kopf...
Trying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip ...
Failed to install lmenezes/elasticsearch-kopf, reason: failed to download out of all possible locations ..., use --verbose to get detailed information

In that case, download the archive manually instead of using the plugin command:
cd /usr/local/elasticsearch-2.2.0/plugins
wget https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip
unzip master.zip
mv elasticsearch-kopf-master kopf
This is exactly equivalent to what the plugin install command does.





Access the kopf page in a browser to view the data stored in Elasticsearch.
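For Elasticsearch 2.x site plugins, the kopf UI is normally served by Elasticsearch itself, so the page should be reachable at a URL of the following form (an assumption based on how site plugins are exposed, not stated in the original):

http://192.168.0.100:9200/_plugin/kopf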






Install Kibana on Host A



1. Install Kibana


# wget https://download.elastic.co/kibana/kibana/kibana-4.4.0-linux-x64.tar.gz
# tar zxf kibana-4.4.0-linux-x64.tar.gz -C /usr/local





2. Modify the Kibana configuration file kibana.yml





# vim /usr/local/kibana-4.4.0-linux-x64/config/kibana.yml
elasticsearch_url: "http://192.168.0.100:9200"


3. Start Kibana





nohup /usr/local/kibana-4.4.0-linux-x64/bin/kibana > /usr/local/kibana-4.4.0-linux-x64/nohup.out &





The following output indicates that Kibana started successfully.



{"Name": "Kibana", "hostname": "Localhost.localdomain", "pid": 1943, "level": +, "msg": "No existing Kibana index found", " Time ":" 2015-10-08t00:39:21.617z "," V ": 0}



{"Name": "Kibana", "hostname": "Localhost.localdomain", "pid": 1943, "level": +, "msg": "Listening on 0.0.0.0:5601", " Time ":" 2015-10-08t00:39:21.637z "," V ": 0}



Kibana listens on local port 5601 by default.
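This can be double-checked from the shell (optional):

# netstat -tnlp | grep 5601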









4. Access Kibana in a browser at http://192.168.0.100:5601






4.1 Use the default logstash-* index name, keep it time-based, and click "Create".



[Screenshot: Kibana index pattern settings page]






4.2 The following interface indicates that the index pattern was created successfully.



[Screenshot: the index pattern has been created]



4.3 Click "Discover" to search and browse the data in Elasticsearch.



[Screenshot: the Kibana Discover page showing the indexed log data]









>>> End <<<








1. ELK default ports: Elasticsearch: 9200, 9300; Logstash: 9301; Kibana: 5601
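A quick one-liner (not in the original) to confirm the Elasticsearch, Redis, and Kibana ports on Host A:

# netstat -tnlp | egrep '9200|9300|6379|5601'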


2. Summary of Errors



(1) Java version is too low



[2015-10-07 18:39:18.071] WARN -- Concurrent: [DEPRECATED] Java 7 is deprecated, please use Java 8.






(2) Kibana reports that the Elasticsearch version is too low ...



This version of Kibana requires Elasticsearch 2.0.0 or higher on all nodes. I found the following incompatible nodes in your cluster:



Elasticsearch v1.7.2 @ inet[/192.168.1.104:9200] (127.0.0.1)



Workaround: upgrade Elasticsearch on all nodes to 2.0.0 or higher (this walkthrough uses Elasticsearch 2.2.0), then restart Kibana.









Reference: http://467754239.blog.51cto.com/4878013/1700828








