Linux open-source real-time log analysis: detailed ELK deployment

Source: Internet
Author: User
Tags: gpg, kibana, logstash, rsyslog

Objective:

ELK is a combination of three pieces of software: Elasticsearch (the search engine), Logstash (log collection), and Kibana (real-time analysis and display).

[About log collection software: options include Scribe, Flume, Heka, Logstash, Chukwa, and Fluentd; of course rsyslog and syslog-ng can also collect logs.

About log storage software (after collection): HDFS, Cassandra, MongoDB, Redis, and Elasticsearch.

About log analysis software: with HDFS you can write MapReduce jobs for analysis; if real-time analysis is needed, Kibana is used for display.]

112.74.76.115 # install Logstash agent, Nginx

115.29.150.217 # install Logstash indexer, Elasticsearch, Redis, Nginx

One. Redis installation and configuration (115.29.150.217)

1, download and install:

# wget https://github.com/antirez/redis/archive/2.8.20.tar.gz

# tar xf 2.8.20.tar.gz

# cd redis-2.8.20/ && make

After make, the corresponding executables are generated in the /usr/local/redis-2.8.20/src directory.

2, then create directories for the Redis data store, configuration files, and so on.

# mkdir /usr/local/redis/{conf,run,db} -pv

# cp redis.conf /usr/local/redis/conf/

# cd src

# cp redis-benchmark redis-check-aof redis-check-dump redis-cli redis-server mkreleasehdr.sh /usr/local/redis

3, Start Redis

# /usr/local/redis/redis-server /usr/local/redis/conf/redis.conf &   # runs Redis in the background on port 6379

Two. Elasticsearch installation and configuration (115.29.150.217)

1, download and install

# wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.3.2/elasticsearch-2.3.2.tar.gz

# tar xf elasticsearch-2.3.2.tar.gz

# mv elasticsearch-2.3.2 /usr/local/elk/

# ln -s /usr/local/elk/elasticsearch-2.3.2/bin/elasticsearch /usr/bin/

2, start in the background

# elasticsearch -d

3, the test is successful

# curl 115.29.150.217:9200

{"status": 200, "name": "Gorgeous George", "cluster_name": "elasticsearch", "version": {"number": "1.4.1", "build_hash": "89d3241d670db65f994242c8e8383b169779e2d4", "build_timestamp": "2014-11-26T15:49:29Z", "build_snapshot": false, "lucene_version": "4.10.2"}, "tagline": "You Know, for Search"}

4, yum installation [small extension]

# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch   # import the GPG key

# vim /etc/yum.repos.d/centos-base.repo   # add the yum repo

[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

# yum makecache   # update the yum cache

# yum install elasticsearch -y

# chkconfig --add elasticsearch   # register as a system service

# service elasticsearch start

5, install plug-ins [small extension]

# cd /usr/share/elasticsearch/

# bin/plugin install mobz/elasticsearch-head && bin/plugin install lukas-vlcek/bigdesk/2.5.0

For more details about these two plugins, see: https://github.com/lukas-vlcek/bigdesk

http://115.29.150.217:9200/_plugin/bigdesk/#nodes

http://115.29.150.217:9200/_plugin/head/   # view Elasticsearch cluster information and monitoring status

[For Kibana installation, refer to the official document: http://kibana.logstash.es/content/kibana/v4/setup.html]

Three. Logstash installation (112.74.76.115)

1, download unzip

# wget https://download.elastic.co/logstash/logstash/logstash-1.5.3.tar.gz

# tar xf logstash-1.5.3.tar.gz -C /usr/local/

# mkdir /usr/local/logstash-1.5.3/etc

Four. yum installation of Logstash (115.29.150.217)

# rpm --import https://packages.elasticsearch.org/GPG-KEY-elasticsearch

# vi /etc/yum.repos.d/centos-base.repo

[logstash-1.5]
name=Logstash repository for 1.5.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.5/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

# yum install logstash -y   # installs to /opt/logstash

Test Logstash on 115.29.150.217:

# cd /opt/logstash/bin

# ./logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'

Hello
Logstash startup completed
{
       "message" => "Hello",
      "@version" => "1",
    "@timestamp" => "2016-05-26T11:01:44.039Z",
          "host" => "iZ947d960cbZ"
}


You can also test by curl:

# curl 'http://115.29.150.217:9200/_search?pretty'

{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  }
}


Five. Logstash configuration


1, set the Nginx log format; modify it on both servers

log_format logstash '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

access_log logs/test.access.log logstash;   # access-log entries are written to this file automatically on each request

# /usr/local/nginx/sbin/nginx -s reload   # reload Nginx
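The format above produces one line per request. As a rough illustration, here is a Python sketch that renders a single entry the way this log_format would; the field values are made up, and the names mirror the Nginx variables:

```python
# Hypothetical sketch: render one access-log entry per the log_format above.
# The values are invented; keys mirror the Nginx variable names.
entry = {
    "remote_addr": "1.2.3.4",
    "remote_user": "-",
    "time_local": "26/May/2016:11:01:44 +0800",
    "request": "GET /index.html HTTP/1.1",
    "status": 200,
    "body_bytes_sent": 612,
    "http_referer": "-",
    "http_user_agent": "Mozilla/5.0",
    "http_x_forwarded_for": "-",
}

line = ('{remote_addr} - {remote_user} [{time_local}] "{request}" '
        '{status} {body_bytes_sent} "{http_referer}" '
        '"{http_user_agent}" "{http_x_forwarded_for}"').format(**entry)
print(line)
```

Each such line lands in test.access.log, which is exactly what the Logstash agent below tails.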

2, start the Logstash agent

The Logstash agent is responsible for shipping the collected log entries to the Redis queue.

# vim /usr/local/logstash-1.5.3/etc/logstash_agent.conf

input {
    file {
        type => "nginx_access"
        path => ["/usr/local/nginx/logs/test.access.log"]
    }
}
output {
    redis {
        host => "115.29.150.217"   # redis server
        data_type => "list"
        key => "logstash:redis"
    }
}


# nohup /usr/local/logstash-1.5.3/bin/logstash -f /usr/local/logstash-1.5.3/etc/logstash_agent.conf &   # push the logs to the 217 server
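In this pipeline Redis serves only as a broker: with data_type => "list", the agent appends each event to the end of the "logstash:redis" list and the indexer later removes events from the opposite end, so the list behaves as a FIFO queue. A minimal in-memory sketch of that semantics (plain Python standing in for the Redis list, not the real client):

```python
from collections import deque

# Illustrative sketch only (not the real Redis client): the deque stands in
# for the Redis list "logstash:redis".
queue = deque()

def agent_push(event):
    """Conceptually what the agent's redis output does (append to list tail)."""
    queue.append(event)

def indexer_pop():
    """Conceptually what the indexer's redis input does (pop from list head)."""
    return queue.popleft()

agent_push('1.2.3.4 - - [26/May/2016:11:01:44 +0800] "GET / HTTP/1.1" 200 612')
agent_push('1.2.3.4 - - [26/May/2016:11:01:45 +0800] "GET /a HTTP/1.1" 404 0')

first = indexer_pop()
print(first)  # events leave the queue in the order they were pushed
```

Because the queue is ordered and durable in Redis, the agent and the indexer never need to run at the same speed: Redis simply buffers events until the indexer catches up.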

# vim /etc/logstash/logstash_agent.conf

input {
    file {
        type => "nginx_access"
        path => ["/usr/local/nginx/logs/test.access.log"]
    }
}
output {
    redis {
        host => "115.29.150.217"   # redis server
        data_type => "list"
        key => "logstash:redis"
    }
}


# /opt/logstash/bin/logstash -f /etc/logstash/logstash_agent.conf &   # also pushes logs to the queue on 217. To add more servers, do the same: install Logstash first, then use it to push the collected logs.

# ps -ef | grep logstash   # confirms the process is running in the background. Make sure Redis is running; otherwise Logstash will report: Failed to send event to Redis

The following Redis output indicates a successful push:

[1460] 19:53:01.066 * 10 changes in 300 seconds. Saving ...

[1460] 19:53:01.067 * Background saving started by PID 1577

[1577] 19:53:01.104 * DB saved on disk

[1577] 19:53:01.104 * rdb:0 MB of memory used by Copy-on-write

[1460] 19:53:01.167 * Background saving terminated with success

3, start the Logstash indexer (115.29.150.217)

# vim /etc/logstash/logstash_indexer.conf

input {
    redis {
        host => "115.29.150.217"
        data_type => "list"
        key => "logstash:redis"
        type => "redis-input"
    }
}
filter {
    grok {
        type => "nginx_access"
        match => [
            "message", "%{IPORHOST:http_host} %{IPORHOST:client_ip} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:http_status_code} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{NUMBER:time_duration:float} %{NUMBER:time_backend_response:float}",
            "message", "%{IPORHOST:http_host} %{IPORHOST:client_ip} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:http_status_code} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{NUMBER:time_duration:float}"
        ]
    }
}
output {
    elasticsearch {
        embedded => false
        protocol => "http"
        host => "localhost"
        port => "9200"
    }
}


# nohup /opt/logstash/bin/logstash -f /etc/logstash/logstash_indexer.conf &
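The grok pattern in the indexer config can be approximated in plain Python. The sketch below uses a simplified regex and a made-up sample line; note that the pattern expects the line to start with an http_host field, which the log_format shown earlier does not emit, so in practice the grok pattern and the Nginx format must be kept in agreement:

```python
import re

# Simplified Python equivalent of the grok pattern above. The sample line is
# hypothetical; group names mirror the grok capture names.
LOG_RE = re.compile(
    r'(?P<http_host>\S+) (?P<client_ip>\S+) \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<http_verb>\S+) (?P<http_request>\S+)(?: HTTP/(?P<http_version>[\d.]+))?" '
    r'(?P<http_status_code>\d+) (?P<bytes_read>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
    r'(?: (?P<time_duration>[\d.]+))?(?: (?P<time_backend_response>[\d.]+))?'
)

sample = ('www.example.com 1.2.3.4 [26/May/2016:11:01:44 +0800] '
          '"GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0" 0.005 0.004')

m = LOG_RE.match(sample)
fields = m.groupdict()
print(fields["client_ip"], fields["http_status_code"], fields["time_duration"])
```

Grok does essentially this for every event popped from Redis: unstructured text in, named fields out, which is what makes the Kibana queries and dashboards below possible.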

Six. Kibana installation

Introduction to the new features of Kibana 4:

1) Highlighted tabs and easier-to-use key links, with a style that supports higher data density and a more consistent UI

2) Consistent query and filter layout

3) A completely new time-range selector

4) A filterable field list

5) Dynamic dashboards and URL parameters, etc.

1, download unzip

# wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz

# tar xf kibana-4.1.1-linux-x64.tar.gz

# mv kibana-4.1.1-linux-x64 /usr/local/elk/kibana

2, Start Kibana

# pwd

/usr/local/elk/kibana/bin

#./kibana &

Open a browser and go to http://115.29.150.217:5601


[Small extension]

Installing the Kibana 3.x version:

# wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.2.zip

# unzip kibana-3.1.2.zip && mv kibana-3.1.2 kibana

# mv kibana /usr/local/nginx/html/

Configure Kibana in Nginx:

location /kibana/ {
    alias /usr/local/nginx/html/kibana/;
    index index.php index.html index.htm;
}


Access http://115.29.150.217/kibana/index.html


Seven. Add Kibana login authentication

Kibana is built with Node.js and has no security restrictions of its own: anyone who can reach the URL can access it, which is unsafe in a public-network environment. You can add authentication by proxying requests through Nginx, as follows:

Kibana has no restart command; use ps -ef | grep node to find the Node.js process and kill it, then start Kibana again.

1, modify Nginx configuration file to add authentication

# vim nginx.conf

location /kibana/ {
    #alias /usr/local/nginx/html/kibana/;
    proxy_pass http://115.29.150.216:5602/;
    index index.php index.html index.htm;
    auth_basic "secret";
    auth_basic_user_file /usr/local/nginx/db/passwd.db;
}

[Security hardening]

For security, I changed Kibana's default port from 5601 to 5602. It is not enough to modify the Kibana configuration file; the reverse-proxy directive in the Nginx configuration must also point at that port, otherwise Kibana will be inaccessible.

# pwd

/usr/local/elk/kibana/config

# vim kibana.yml

port: 5602   # modify this line

host: "0.0.0.0"   # to improve security on the public network, change this to 127.0.0.1 (localhost), then update the address in the Nginx configuration

# mkdir -p /usr/local/nginx/db/

2. Configure login User Password

# yum install -y httpd-tools   # installs the htpasswd tool

# htpasswd -c /usr/local/nginx/db/passwd.db elkuser

New password: <enter password>

Re-type new password: <re-enter password>

Adding password for user elkuser
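For reference, here is roughly how a passwd.db entry is built under htpasswd's {SHA} scheme (the one selected with htpasswd -s). The -c invocation above uses Apache's default MD5-based scheme instead, so this is only an illustration of one format the file can contain, not of the exact line the command produces:

```python
import base64
import hashlib

# Sketch of the "{SHA}" htpasswd scheme: user:{SHA}base64(sha1(password)).
# Illustrative only; `htpasswd -c` above uses Apache's MD5-based scheme.
def sha_entry(user, password):
    digest = hashlib.sha1(password.encode()).digest()
    return "%s:{SHA}%s" % (user, base64.b64encode(digest).decode())

print(sha_entry("elkuser", "secret"))
```

Whatever scheme is used, Nginx's auth_basic_user_file simply compares the hash of the submitted password against the stored entry; the plaintext is never written to disk.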

Restart Nginx.

Access test (this test used Kibana 3):


Starting and stopping the Elasticsearch / Logstash / Kibana / Redis services:

[Redis]

# /usr/local/redis/redis-server /usr/local/redis/conf/redis.conf &   # start

# killall redis-server

[Elasticsearch]

# elasticsearch -d   # start

# ps -ef | grep elasticsearch   # find the PID and kill it

[Logstash]

# nohup /usr/local/logstash-1.5.3/bin/logstash -f /usr/local/logstash-1.5.3/etc/logstash_agent.conf &   # start pushing logs

# ps -ef | grep logstash   # find the PID and kill it

[Kibana]

The 3.0 version is directly unpacked and placed in the Web directory for access.

Version 4.0:

# /usr/local/elk/kibana/bin/kibana &   # start in the background

# ps -ef | grep node   # find the node process and kill it

This article is from the "Control Penguin" blog; permanent link: http://www.mrliangqi.com/1194.html

This article is from the "Internet&linux" blog; please contact the author before reproducing it.

