ELK Log System Installation and Deployment

What ELK Is

ELK is an abbreviation for three applications: Elasticsearch, Logstash, and Kibana. Elasticsearch (abbreviated ES) is mainly used to store and retrieve data. Logstash is mainly used to write data into ES. Kibana is mainly used to display the data.

[Figure: ELK system architecture diagram]

Elasticsearch

Elasticsearch is a distributed, real-time, full-text search engine. All operations are exposed through a RESTful interface; its underlying implementation is based on the Lucene full-text search engine. Data is stored as JSON documents, and no schema needs to be defined in advance.

[Table: comparison of Elasticsearch and traditional database terminology]
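
As a quick illustration of the RESTful, schema-free model (a minimal sketch; the index, type, and field names here are invented for the example):

curl -XPUT 'http://localhost:9200/logs/nginx/1' -d '{"clientip": "1.2.3.4", "status": 200}'    # index a JSON document, no schema defined beforehand
curl -XGET 'http://localhost:9200/logs/nginx/1?pretty'                                         # fetch it back
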
A node is a running Elasticsearch instance. A cluster is a group of nodes with the same cluster.name that work together to share data and to provide failover and scaling; a single node can also form a cluster by itself. One node in the cluster is elected as the primary node (master), which manages cluster-level changes such as creating or deleting indexes and adding or removing nodes. The master node does not participate in document-level changes or searches, so it will not become a bottleneck for the cluster as traffic grows.
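
The cluster state described above can be inspected through the same REST interface (the host here is an assumption; any node of the cluster will answer):

curl 'http://192.168.1.16:9200/_cluster/health?pretty'    # cluster name, node count, status
curl 'http://192.168.1.16:9200/_cat/master?v'             # which node is currently master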

Logstash

Logstash is a very flexible log collection tool. It is not limited to importing data into Elasticsearch, and it supports a variety of customizable input, output, and filter rules.

Redis Transmission

Redis servers are commonly used as NoSQL databases, but here Logstash uses Redis only as a message queue.

Kibana

Kibana is a real-time data analysis and visualization tool.

ELK Installation and Configuration

JDK installation: version 1.8.0 or above is required, otherwise Logstash will fail with an error.

Install using yum:

yum -y install java-1.8.0-openjdk

vim /etc/profile
JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.91-1.b14.el6.x86_64/jre
export JAVA_HOME

source /etc/profile
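
To verify the JDK (the exact build string will vary):

java -version    # should report openjdk version "1.8.0_..."
echo $JAVA_HOME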

Elasticsearch Installation and Configuration

wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.noarch.rpm
rpm -ivh elasticsearch-1.7.1.noarch.rpm

Start: /etc/init.d/elasticsearch start


Install plugins:
1. /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
2. /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
Error: failed: SSLException[java.security.ProviderException: java.security.KeyException]; nested: ProviderException[java.security.KeyException]; nested: KeyException;
Solution: yum upgrade nss
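
Once installed, both site plugins are served by Elasticsearch itself at the standard plugin paths:

http://192.168.1.16:9200/_plugin/head/
http://192.168.1.16:9200/_plugin/kopf/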

Configure elasticsearch.yml, and modify the LOG_DIR and DATA_DIR paths in /etc/init.d/elasticsearch:

cluster.name: elk-local
node.name: node-1
path.data: /file2/elasticsearch/data
path.logs: /file2/elasticsearch/logs
bootstrap.mlockall: true
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.16"]
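
After restarting Elasticsearch, a quick sanity check; the response should be a JSON banner containing the version and the cluster_name configured above:

curl 'http://192.168.1.16:9200/?pretty'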

Logstash Installation and Configuration

RPM installation; download address:

wget https://download.elastic.co/logstash/logstash/packages/centos/logstash-2.3.4-1.noarch.rpm
Install: rpm -ivh logstash-2.3.4-1.noarch.rpm
Start: /etc/init.d/logstash start
ln -s /opt/logstash/bin/logstash /usr/bin/logstash

Logstash configuration. The core of an ELK log system is collecting and structuring the logs; that is Logstash's job, and it is also where most of the configuration effort goes. In particular, the grok matching in the filter{} section must be written to match your own log format and the fields you need to extract.

These two links will help you write grok patterns: the grok pattern syntax tutorial and Grok Debugger.

Path: /etc/logstash/conf.d/, in files ending with .conf. A configuration is divided into three parts: input, filter, and output; a minimal skeleton is sketched after the note below.
(PS: if %{type} is used as the index name, type must not contain special characters.)
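
As a minimal sketch of the three-part layout, here is a throwaway config that reads lines from stdin and prints the parsed event to stdout; the grok pattern is only an example, and this is handy for testing patterns before wiring in Redis and ES:

input {
    stdin { }
}
filter {
    grok {
        match => { "message" => "%{IPORHOST:clientip} %{GREEDYDATA:rest}" }
    }
}
output {
    stdout { codec => rubydebug }
}

Run it with /opt/logstash/bin/logstash -f test.conf and paste in a log line to see the extracted fields.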

Logstash configuration instance. Roles: the broker, indexer, and search & storage run on 192.168.1.16 (the Redis and ES host in the configs below), and the shipper runs on the web server, 192.168.1.13. For the broker, first install Redis and start it, as sketched below.
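
A quick way to get the broker running (assuming Redis is available from your yum repositories, e.g. EPEL on CentOS; the bind change lets the shipper connect remotely):

yum -y install redis
sed -i 's/^bind 127.0.0.1/bind 0.0.0.0/' /etc/redis.conf    # accept remote connections
service redis start
redis-cli ping    # should answer PONG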

Shipper: only responsible for collecting data, not for processing, so its configuration is simple; the other roles' configurations follow the same structure.

input {
    file {
        path => "/web/nginx/logs/www.log"
        type => "nginx-log"
        start_position => "beginning"
    }
}
output {
    if [type] == "nginx-log" {
        redis {
            host => "192.168.1.16"
            port => "6379"
            data_type => "list"
            key => "nginx:log"
        }
    }
}
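
To confirm the shipper is pushing events to the broker, check the length of the Redis list; before the indexer starts draining it, the number should grow as new log lines arrive:

redis-cli -h 192.168.1.16 llen nginx:log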

Indexer, search & storage: collects the logs from the shipper (via the Redis broker), parses the format, and outputs to ES.

input {
    redis {
        host => "192.168.1.16"
        port => 6379
        data_type => "list"
        key => "nginx:log"
        type => "nginx-log"
    }
}
filter {
    grok {
        match => { "message" => "%{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:method} %{NOTSPACE:request}(?: %{URIPROTO:proto}/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:status} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} (%{WORD:x_forword}|-) - (%{NUMBER:request_time}) - (%{NUMBER:upstream_response_time}) - %{IPORHOST:domain} - (%{WORD:upstream_cache_status}|-)" }
    }
}
output {
    if [type] == "nginx-log" {
        elasticsearch {
            hosts => ["192.168.1.16:9200"]
            index => "nginx-%{+YYYY.MM.dd}"
        }
    }
}
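
Once the indexer has run for a while, the daily index should appear in ES:

curl 'http://192.168.1.16:9200/_cat/indices/nginx-*?v'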

Test Configuration Correctness / Startup

Test: /opt/logstash/bin/logstash -f /etc/logstash/conf.d/xx.conf -t

Start: service logstash start

Kibana Installation

wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
tar zxvf kibana-4.1.1-linux-x64.tar.gz
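
Before starting, point Kibana at Elasticsearch in config/kibana.yml inside the extracted directory (these are the Kibana 4.x setting names; the values shown are assumptions for this setup):

port: 5601
host: "0.0.0.0"
elasticsearch_url: "http://192.168.1.16:9200"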

Configure the startup script (/etc/init.d/kibana):

#!/bin/bash
### BEGIN INIT INFO
# Provides:          kibana
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Runs kibana daemon
# Description:       Runs kibana daemon as a non-root user
### END INIT INFO

# Process name
NAME=kibana
DESC="Kibana4"
PROG="/etc/init.d/kibana"

# Configure location of Kibana bin
KIBANA_BIN=/vagrant/elk/kibana-4.1.1-linux-x64/bin    # note: adjust this path

# PID info
PID_FOLDER=/var/run/kibana/
PID_FILE=/var/run/kibana/$NAME.pid
LOCK_FILE=/var/lock/subsys/$NAME
PATH=/bin:/usr/bin:/sbin:/usr/sbin:$KIBANA_BIN
DAEMON=$KIBANA_BIN/$NAME

# Configure user to run daemon process
DAEMON_USER=root

# Configure logging location
KIBANA_LOG=/var/log/kibana.log

# Begin script
RETVAL=0

if [ `id -u` -ne 0 ]; then
        echo "You need root privileges to run this script"
        exit 1
fi

# Function library
. /etc/init.d/functions

start() {
        echo -n "Starting $DESC: "
        pid=`pidofproc -p $PID_FILE kibana`
        if [ -n "$pid" ]; then
                echo "Already running."
                exit 0
        else
                # Start daemon
                if [ ! -d "$PID_FOLDER" ]; then
                        mkdir $PID_FOLDER
                fi
                daemon --user=$DAEMON_USER --pidfile=$PID_FILE $DAEMON 1>"$KIBANA_LOG" 2>&1 &
                sleep 2
                pidofproc node > $PID_FILE
                RETVAL=$?
                [ $RETVAL -eq 0 ] && success || failure
                echo
                [ $RETVAL = 0 ] && touch $LOCK_FILE
                return $RETVAL
        fi
}

reload() {
        echo "Reload command is not implemented for this service."
        return $RETVAL
}

stop() {
        echo -n "Stopping $DESC: "
        killproc -p $PID_FILE $DAEMON
        RETVAL=$?
        echo
        [ $RETVAL = 0 ] && rm -f $PID_FILE $LOCK_FILE
}

case "$1" in
start)
        start
        ;;
stop)
        stop
        ;;
status)
        status -p $PID_FILE $DAEMON
        RETVAL=$?
        ;;
restart)
        stop
        start
        ;;
reload)
        reload
        ;;
*)
        # Invalid arguments, print the following message.
        echo "Usage: $0 {start|stop|status|restart}" >&2
        exit 2
        ;;
esac

Since Kibana itself provides no access control, add authentication in front of Kibana (implemented with Nginx):

1. yum install -y httpd    # skip this step if httpd is already installed (htpasswd ships with the httpd tools)
2. Determine the htpasswd location (whereis htpasswd): htpasswd: /usr/bin/htpasswd /usr/share/man/man1/htpasswd.1.gz
3. Generate the password file: /usr/bin/htpasswd -c /web/nginx/conf/elk/authdb elk
   New password: enter the password twice as prompted; the password is stored in authdb.
4. Add the ELK configuration to Nginx: /web/nginx/conf/elk/elk.conf

server {
        listen 80;
        server_name www.elk.com;

        charset utf8;

        location / {
                proxy_pass http://192.168.1.16$request_uri;    # Kibana's default port is 5601; add :5601 here unless Kibana was configured to listen on 80
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                auth_basic "Authorized users only";
                auth_basic_user_file /web/nginx/conf/elk/authdb;
        }
}
server {
        listen 80;
        server_name www.es.com;

        charset utf8;

        location / {
                proxy_pass http://192.168.1.16:9200$request_uri;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                auth_basic "Authorized users only";
                auth_basic_user_file /web/nginx/conf/elk/authdb;
        }
}
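
After saving the config (and making sure it is included from nginx.conf), validate and reload Nginx, then check that the proxy enforces authentication. The hostnames assume www.elk.com and www.es.com resolve to the Nginx host (e.g. via /etc/hosts), and "yourpassword" stands for whatever was stored in authdb:

nginx -t && nginx -s reload
curl -I http://www.elk.com/                         # expect 401 Unauthorized
curl -I -u elk:yourpassword http://www.elk.com/     # expect 200 after Basic auth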
