Logstash + kibana + elasticsearch + redis

Source: Internet
Author: User
Tags: kibana, logstash

This guide is written so that beginners can easily follow the installation of logstash + kibana + elasticsearch + redis. The installation was completed by following the steps below.

There are two servers:
192.168.148.201: logstash indexer, redis, elasticsearch, kibana, JDK
192.168.148.129: logstash agent, JDK

1. System Applications

Logstash: a fully open-source tool for log collection, analysis, and storage. It can be used to collect and forward system logs, and it integrates a variety of log plug-ins that make log query and analysis convenient. In general, the shipper is used for log collection and the indexer for log forwarding.

  • Logstash shipper collects logs and forwards the logs to redis for storage.
  • Logstash indexer reads data from redis and forwards it to elasticsearch.

Redis: used here as a cache/broker. The logstash shipper pushes logs to redis, which only queues them and does not store them long term. The logstash indexer then reads the data from redis and forwards it to elasticsearch. Adding redis smooths the rate at which the shipper hands logs to the indexer and reduces the risk of data loss from a sudden outage.
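As a rough illustration of this queue pattern (not commands from the original setup), the shipper conceptually performs a list push and the indexer a list pop; the key name key_count matches the logstash configuration used later in this guide, and the sample value is made up:

redis-cli -h 192.168.148.201 -p 6379 LPUSH key_count '{"message":"test log line"}'   # roughly what the shipper does
redis-cli -h 192.168.148.201 -p 6379 LLEN key_count                                  # number of events waiting in the queue
redis-cli -h 192.168.148.201 -p 6379 RPOP key_count                                  # roughly what the indexer does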

Elasticsearch: an open-source search engine framework that provides a distributed, multi-user full-text search engine over a RESTful web interface. Multiple nodes can also be clustered to improve efficiency. Here it indexes the data that the logstash indexer pulls from redis, and kibana reads from it.

Kibana: presents the indexed data as charts and other visualizations on a friendly web interface.

The overall workflow is as follows:

(Figure: logstash shipper → redis → logstash indexer → elasticsearch → kibana)

2. Server Installation Procedure (192.168.148.201)

  2.1 JDK Installation

1. Download JDK: jdk-8u25-linux-x64.tar.gz

2. Unzip and install:

Install the JDK under a path of your choosing, here /opt:

cd /opt
tar -zxvf jdk-8u25-linux-x64.tar.gz

This produces the jdk1.8.0_25 folder.

3. Configure the environment variables: vim ~/.bashrc

Add the following at the end of the file:

export JAVA_HOME=/opt/jdk1.8.0_25
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH

Save and exit, and then enter the following command to make it take effect

source ~/.bashrc
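As a quick sanity check (not part of the original steps), confirm that the variables took effect; the expected values assume the /opt/jdk1.8.0_25 path used above:

echo $JAVA_HOME    # should print /opt/jdk1.8.0_25
which java         # should print /opt/jdk1.8.0_25/bin/java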

 

4. Configure the default JDK

sudo update-alternatives --install /usr/bin/java java /opt/jdk1.8.0_25/bin/java 300
sudo update-alternatives --install /usr/bin/javac javac /opt/jdk1.8.0_25/bin/javac 300

Note: If the above two commands fail to find the path, restart the computer and repeat the above two lines of code.

Run the following code to view the current JDK versions and configurations:

sudo update-alternatives --config java

5. Test

Open a terminal and enter the following command:

java -version

Check whether the Java command can be run.

2.2 redis

The main configuration parameters in redis.conf are as follows:

  • daemonize: whether to run as a daemon
  • pidfile: location of the PID file
  • port: the port number to listen on
  • timeout: request timeout
  • loglevel: log level
  • logfile: location of the log file
  • databases: number of databases to enable
  • save <seconds> <changes>: how often snapshots are saved; the first value is a time window in seconds and the second is a number of write operations. A snapshot is saved automatically when that many writes occur within the window, and multiple save conditions can be set (see the example snippet after this list).
  • rdbcompression: whether to compress snapshots
  • dbfilename: name of the data snapshot file (file name only, no directory)
  • dir: directory in which data snapshots are stored
  • appendonly: whether to enable the append-only log. If enabled, every write operation is logged, which improves durability at some cost in efficiency.
  • appendfsync: how the append-only log is synced to disk (three options: call fsync on every write, fsync once per second, or never call fsync and let the system flush on its own)
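For illustration only (the values below are assumptions, not the settings used later in this guide), a redis.conf fragment exercising these parameters could be written like this:

cat > redis.conf.example <<'EOF'
daemonize yes
pidfile /var/run/redis.pid
port 6379
loglevel notice
logfile /var/log/redis.log
# snapshot if 1 write occurs within 900s, or 10 writes within 300s
save 900 1
save 300 10
rdbcompression yes
dbfilename dump.rdb
dir /var/lib/redis
appendonly yes
appendfsync everysec
EOF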

Now you can open a terminal for testing. The default listening port in the configuration file is 6379.

 

1. Deploy a single redis instance:

wget https://github.com/antirez/redis/archive/3.0.0-rc1.tar.gz

 

2. Unpack:

tar zxvf 3.0.0-rc1.tar.gz

 

3. Compile

Installing redis is very simple; a ready-made Makefile is provided, so just run make:

make
make install

4. The redis.conf configuration file:

 

daemonize yes
port 6379
appendonly yes

5. Start:

redis-server redis.conf

6. Test

redis-cli
127.0.0.1:6379> quit

After make install the binaries are on the PATH, so redis can also be started in the background:

redis-server redis.conf &
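A quick way to confirm the background instance is up (a sanity check, not an original step):

redis-cli ping                 # should answer PONG
netstat -atln | grep 6379      # redis should be listening on port 6379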

2.3 logstash

Download and unzip:

$ wget https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz

$ tar zxvf logstash-1.4.2.tar.gz -C /usr/local/

$ cd /usr/local/logstash-1.4.2

$ mkdir conf logs

 

Configuration file conf/central.conf:

 
input {
    file {
        path => "/var/log"
        type => "syslog"
        exclude => "*.gz"
    }

    redis {
        host => "127.0.0.1"
        port => 6379
        type => "redis-input"
        data_type => "list"
        key => "key_count"
    }
}

output {
    elasticsearch {
        host => "192.168.148.201"
        port => "9300"
    }
}
Start:


bin/logstash agent --verbose --config conf/central.conf --log logs/stdout.log
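Once elasticsearch (next section) is also running, a rough way to check the pipeline end to end (these commands are assumptions about the setup above, not original steps) is to watch the redis queue drain and query elasticsearch for indexed events:

redis-cli llen key_count                                        # should shrink as the indexer consumes events
curl -s 'http://192.168.148.201:9200/_search?pretty&size=1'     # hits.total should grow as events are indexed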

 

 

2.4 elasticsearch
$ wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.3.4.tar.gz
$ tar zxvf elasticsearch-1.3.4.tar.gz

 

Elasticsearch uses the default configuration; the default cluster name is elasticsearch.

Start:

bin/elasticsearch -d

 

Elasticsearch is usable right after decompression. To see the effect, first start the ES service: switch to the elasticsearch directory and run elasticsearch under bin.

cd /search/elasticsearch/elasticsearch-0.90.5/bin
./elasticsearch

 

Access the default port 9200

curl -X GET http://localhost:9200
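To illustrate the RESTful interface mentioned earlier (an illustrative sketch only; the index name test-index and the document are made up, not part of the original guide):

curl -s 'http://localhost:9200/_cluster/health?pretty'                              # cluster status
curl -s -XPUT 'http://localhost:9200/test-index/doc/1' -d '{"message":"hello"}'     # index a document
curl -s 'http://localhost:9200/test-index/_search?q=message:hello&pretty'           # search it back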

3. Start the service

# elasticsearch-1.1.1/bin/elasticsearch &
# logstash-1.4.2/bin/logstash -f logstash-1.4.2/conf/logstash-apache.conf &

 

2.5 deploy kibana
$ wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.1.tar.gz
$ tar zxvf kibana-3.1.1.tar.gz

 

In the configuration file config.js, you only need to set the elasticsearch address:

elasticsearch: "http://192.168.148.201:9200"
 cp -r kibana-3.1.0 /var/www/html/kibana3
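If you prefer to script or verify that edit (a sketch; the sed pattern assumes the stock kibana 3.1 config.js, where the elasticsearch setting sits on its own line):

sed -i 's|^\s*elasticsearch:.*|    elasticsearch: "http://192.168.148.201:9200",|' /var/www/html/kibana3/config.js
grep 'elasticsearch:' /var/www/html/kibana3/config.js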

 

Problem: the kibana web interface cannot be reached because port 80 is already in use.

cd /etc/apache2/sites-available

cp 000-default.conf kibana3.conf

vim kibana3.conf

 


<VirtualHost *: 8080> # Modify the port

 

     ServerAdmin 192.168.148.201/kibana3# It doesn't matter if you have it

     DocumentRoot / var / www / html / kibana3 # kibana's root directory

     <Directory / var / www / html / kibana3>

         Options None

         AllowOverride None

         Allow from all

     </ Directory>

CustomLog / var / www / html / kibana combined # Where to put logs

</ VirtualHost>

 
 

cd /etc/apache2

vim ports.conf

Listen 80
# newly added
Listen 8080

<IfModule ssl_module>
    Listen 443
</IfModule>

<IfModule mod_gnutls.c>
    Listen 443
</IfModule>
 

  Various restarts:

cd /var/www/html/kibana3/app/dashboards
mv logstash.json default.json
/etc/init.d/apache2 restart
cd /opt
elasticsearch-1.1.1/bin/elasticsearch &
logstash-1.4.2/bin/logstash -f logstash-1.4.2/conf/logstash.conf &
 

3. Client installation

  Install the JDK and logstash on the client in the same way as on the server, then run a quick test of logstash:

bin/logstash -e 'input { stdin {} } output { stdout { codec => rubydebug } }'
Enter some characters, for example "goodnight moon":

goodnight moon
{
       "message" => "goodnight moon",
    "@timestamp" => "2013-11-20T23:48:05.335Z",
      "@version" => "1",
          "host" => "my-laptop"
}
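The test above only echoes stdin back to stdout. For the agent to actually feed the server, it needs a shipper configuration whose redis output matches the server's central.conf; a minimal sketch (the file name shipper.conf and the monitored log path are assumptions):

cat > conf/shipper.conf <<'EOF'
input {
    file {
        path => "/var/log/syslog"
        type => "syslog"
    }
}
output {
    redis {
        host => "192.168.148.201"
        port => 6379
        data_type => "list"
        key => "key_count"
    }
}
EOF
bin/logstash agent -f conf/shipper.conf &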

Use netstat -atln to view the listening ports.
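For example (assuming the ports used in this guide):

netstat -atln | grep -E '6379|9200|9300|8080'    # 6379 = redis, 9200/9300 = elasticsearch, 8080 = the kibana virtual host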

4. Access: http://192.168.148.201:8080

The kibana welcome page appears.


 

 

