Building an ELK Open-Source Real-Time Log Analysis System on CentOS 7


Elasticsearch is an open-source distributed search engine. Its features include a distributed architecture, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful interface, multiple data sources, and automatic search load balancing.

Logstash is a fully open-source tool that can collect, analyze, and store your logs for later use, such as searching.

Kibana is also an open-source, free tool. It provides a friendly web interface over the logs that Logstash stores in Elasticsearch, helping you summarize, analyze, and search important log data.

The flow of a log, from the client through the server and back to the user's browser, is as follows:

Logstash-forwarder ---> Logstash ---> Elasticsearch ---> Kibana ---> Nginx ---> client browser

Logstash-forwarder is the client-side log collection tool; it sends logs to the Logstash server. Logstash uses grok matching rules to parse and cut the logs, then saves them in Elasticsearch. Kibana reads the data from Elasticsearch, and Nginx proxies it back to the user.

Okay, here's the installation process for the ELK system.

The following screenshot shows the JVM versions required by Elasticsearch and Logstash:

[Screenshot: JVM version compatibility table for Elasticsearch/Logstash]

Install the Java environment first

wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u65-b17/jdk-8u65-linux-x64.rpm"
rpm -Uvh jdk-8u65-linux-x64.rpm

Alternatively, you can install the JDK directly with yum, but make sure you install an appropriate version.
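For instance, a minimal sketch using the OpenJDK package from the CentOS repositories (the package name here is an assumption; the original uses the Oracle JDK):

yum -y install java-1.8.0-openjdk    # OpenJDK 8; check it against the compatibility table above
java -version                        # confirm the installed version before proceeding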

Of course, you can also install from a source tarball, but then you need to set the environment variables yourself:

wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u65-b17/jdk-8u65-linux-x64.tar.gz"
tar zxvf jdk-8u65-linux-x64.tar.gz
mv jdk1.8.0_65 /usr/local/java
vi /etc/profile
JAVA_HOME="/usr/local/java"
PATH=$JAVA_HOME/bin:$PATH
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME PATH CLASSPATH
source /etc/profile

With the JDK in place, install Elasticsearch.

rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
wget -c https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.2.noarch.rpm
rpm -ivh elasticsearch-1.7.2.noarch.rpm

Modify the configuration file as follows

cd /usr/local/elasticsearch/
vim config/elasticsearch.yml
path.data: /data/db
network.host: 192.168.100.233

Install the Elasticsearch plugins as follows

cd /usr/share/elasticsearch/ && ./bin/plugin -install mobz/elasticsearch-head && ./bin/plugin -install lukas-vlcek/bigdesk/2.5.0

Then start Elasticsearch

systemctl start elasticsearch
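To verify that Elasticsearch came up, you can query its HTTP port (address and port from the configuration above); the head plugin installed earlier should then be reachable at http://192.168.100.233:9200/_plugin/head/:

curl http://192.168.100.233:9200/    # should return a small JSON banner with the node name and version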


Then start installing Kibana

Go to https://www.elastic.co/downloads/kibana and find the right version. Each version has a compatibility note below it; be sure to check it, for example "Compatible with Elasticsearch 1.4.4 - 1.7".

My choice here is kibana-4.1.3-linux-x64.tar.gz.

wget https://download.elastic.co/kibana/kibana/kibana-4.1.3-linux-x64.tar.gz
tar xf kibana-4.1.3-linux-x64.tar.gz
mv kibana-4.1.3-linux-x64 /usr/local/kibana
cd !$
vim config/kibana.yml
port: 5601
host: "192.168.100.233"
elasticsearch_url: "http://192.168.100.233:9200"


The configuration file indicates that Kibana listens on port 5601 and obtains data from Elasticsearch through port 9200.

Nginx can also be installed from source; here, for convenience, it is installed with yum.
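One note: on CentOS 7 the nginx package comes from the EPEL repository, so if EPEL is not yet enabled on your system (an assumption about your setup), add it first:

yum -y install epel-release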

yum -y install nginx

vim /etc/nginx/nginx.conf

Change the server block to the following:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    location / {
        proxy_pass http://192.168.100.233:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Modify the log format to the following:

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $upstream_response_time $request_time $body_bytes_sent '
                '"$http_referer" "$http_user_agent" "$http_x_forwarded_for" "$request_body" '
                '$scheme $upstream_addr';

The log format is modified here so that it matches the grok rules used later in Logstash.

Start Nginx and Kibana

systemctl start nginx
nohup /usr/local/kibana/bin/kibana -l /var/log/kibana.log &

Or take a look at the following two scripts, which run Kibana as a service:

cd /etc/init.d && curl -o kibana https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/fc5025c3fc499ad8262aff34ba7fde8c87ead7c0/kibana-4.x-init
cd /etc/default && curl -o kibana https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/fc5025c3fc499ad8262aff34ba7fde8c87ead7c0/kibana-4.x-default

These scripts are for starting Kibana at boot.
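A sketch of wiring the downloaded init script in with the usual SysV tools (assuming the script works unmodified):

chmod +x /etc/init.d/kibana    # the curl download is not executable by default
chkconfig --add kibana         # register the service
chkconfig kibana on            # start at boot
service kibana start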

After that, we need to install Logstash.

rpm --import https://packages.elasticsearch.org/GPG-KEY-elasticsearch
vi /etc/yum.repos.d/logstash.repo
[logstash-1.5]
name=Logstash repository for 1.5.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.5/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
yum -y install logstash

This package downloads slowly from inside China; you can fetch it faster from the official website with a download manager such as Thunder (Xunlei).

Create a TLS certificate

Communication between Logstash and Logstash-forwarder requires TLS certificate authentication. Logstash-forwarder only needs the public certificate, while Logstash needs both the certificate and the private key. Generate the SSL certificate on the Logstash server.

There are two ways to create the SSL certificate: one specifies an IP address, the other specifies an FQDN (DNS name).

1. Specifying an IP address

vi /etc/pki/tls/openssl.cnf

Configure subjectAltName = IP:192.168.100.233 under the [ v3_ca ] section. Remember, this is important: there is another place in the file with a subjectAltName setting, and putting it there will break the certificate authentication.
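For reference, the relevant section should end up looking like this (only the subjectAltName line is added; the rest of [ v3_ca ] stays as shipped):

[ v3_ca ]
subjectAltName = IP:192.168.100.233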

cd /etc/pki/tls
openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

Note: set -days to a large value so that the certificate does not expire too soon.

2. Using the FQDN method

You do not need to modify the openssl.cnf file.

cd /etc/pki/tls
openssl req -subj '/CN=logstash.abcde.com/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

Replace logstash.abcde.com with your own domain name, and add an A record for logstash.abcde.com to your DNS.

This method works the same way, but note that with the IP-address method, if the Logstash server's IP address changes, the certificate becomes unusable.


Configure Logstash

Logstash configuration files are JSON-style files that live in the /etc/logstash/conf.d directory. A configuration consists of three parts: inputs, filters, and outputs.
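Schematically, every pipeline has this three-part shape (a skeleton, not a runnable config):

input  { ... }    # where log events come from
filter { ... }    # how they are parsed and enriched
output { ... }    # where the results are sent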

First create a 01-lumberjack-input.conf file to set up the lumberjack input, the protocol that Logstash-forwarder uses.

vi /etc/logstash/conf.d/01-lumberjack-input.conf
input {
  lumberjack {
    port => 5043
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

Then create a 02-nginx.conf to filter the Nginx logs.

vi /etc/logstash/conf.d/02-nginx.conf
filter {
  if [type] == "nginx" {
    grok {
      match => { "message" => "%{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:method} %{NOTSPACE:request}(?: %{URIPROTO:proto}/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:status} (?:%{NUMBER:upstime}|-) %{NUMBER:reqtime} (?:%{NUMBER:size}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{QS:reqbody} %{WORD:scheme} (?:%{IPV4:upstream}(:%{POSINT:port})?|-)" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
    geoip {
      source => "clientip"
      add_tag => [ "geoip" ]
      fields => [ "country_name", "country_code2", "region_name", "city_name", "real_region_name", "latitude", "longitude" ]
      remove_field => [ "[geoip][longitude]", "[geoip][latitude]" ]
    }
  }
}

This filter looks for logs labeled with type "nginx" (the label defined in Logstash-forwarder) and tries to parse the incoming Nginx log lines with grok, making them structured and queryable.

The type must match the one defined in Logstash-forwarder.

Also note that the Nginx log format must be set as shown above.

If your log format is different, the grok matching rule has to be rewritten accordingly.

You can debug grok patterns with the online tool at http://grokdebug.herokuapp.com/. Most "ELK shows no data" problems come from errors here.

If grok fails to match your logs, do not keep going; fix the pattern first.

At the same time, looking through the grok pattern reference at http://grokdebug.herokuapp.com/patterns# is very helpful for writing your own matching rules later.

Finally, create a file to define the output.

vi /etc/logstash/conf.d/03-lumberjack-output.conf
output {
  if "_grokparsefailure" in [tags] {
    file {
      path => "/var/log/logstash/grokparsefailure-%{type}-%{+YYYY.MM.dd}.log"
    }
  }
  elasticsearch {
    host => "10.1.19.18"
    protocol => "http"
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
    document_type => "%{type}"
    workers => 5
    template_overwrite => true
  }
  #stdout { codec => rubydebug }
}

This sends structured logs to Elasticsearch and writes any log that grok failed to parse to a separate file.

Note that filter files added later should be numbered between 01 and 99, because Logstash loads the configuration files in order.

When debugging, don't send logs to Elasticsearch; print them to standard output instead, which makes troubleshooting much easier.
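One way to do that, assuming the file layout above: comment out the elasticsearch block in 03-lumberjack-output.conf, uncomment the stdout line, and run Logstash in the foreground:

/opt/logstash/bin/logstash -f /etc/logstash/conf.d/    # each event is printed in rubydebug format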

Also check the Logstash log itself; many errors are easy to locate from it.

Before starting the Logstash service, it is best to test the configuration files:

/opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/*
Configuration OK

You can also test the files one by one until each reports OK; otherwise the Logstash service will not start.
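For instance, using one of the files created above:

/opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/02-nginx.conf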

Finally, start the Logstash service.

systemctl start logstash

The Logstash-forwarder client is then configured.

Installing Logstash-forwarder

wget https://download.elastic.co/logstash-forwarder/binaries/logstash-forwarder-0.4.0-1.x86_64.rpm
rpm -ivh logstash-forwarder-0.4.0-1.x86_64.rpm

Copy the public certificate of the SSL pair created during the Logstash installation to each Logstash-forwarder server.

scp 192.168.100.233:/etc/pki/tls/certs/logstash-forwarder.crt /etc/pki/tls/certs/

Configure Logstash-forwarder

vi /etc/logstash-forwarder.conf
{
  "network": {
    "servers": [ "10.1.19.18:5043" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [ "/alidata/logs/nginx/*-access.log" ],
      "fields": { "type": "nginx" }
    }
  ]
}

This is also a JSON configuration file; if the JSON is malformed, the Logstash-forwarder service will not start.
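You can sanity-check the JSON before starting the service; python ships with CentOS 7, so one quick option is:

python -m json.tool /etc/logstash-forwarder.conf    # prints the parsed config, or an error pointing at the broken line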

Then start the Logstash-forwarder service.
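The 0.4.0 RPM installs a SysV init script, so presumably:

service logstash-forwarder start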

Once everything above is configured correctly, you can open Kibana and view the data.

The result looks as follows:

[Screenshot: Kibana dashboard showing the collected Nginx log data]

This article is from the "Lemon" blog; please keep this source: http://xianglinhu.blog.51cto.com/5787032/1716274
