ELK Stack
The ELK stack is a combination of three open-source tools that together form a powerful real-time log collection, analysis, and display system.
Logstash: a log collection tool. It gathers logs from local disk, network services (listening on its own port and accepting logs from clients), or message queues, then filters and parses them and ships them into Elasticsearch.
Elasticsearch: a distributed log storage and search engine with native clustering support. It can create an index per specified time period, which speeds up log queries and access.
Kibana: a web tool for visualizing the logs stored in Elasticsearch; it can also generate stunning dashboards.
Topology
[figure: topology diagram]
Nginx reverse-proxies the two-node Elasticsearch/Kibana cluster. Clients use Logstash to ship their logs to Redis, and a central Logstash instance reads from Redis and passes the data on to ES.
Environment
[root@localhost logs]# cat /etc/redhat-release
CentOS release 6.6 (Final)
[root@localhost logs]# uname -rm
2.6.32-504.el6.x86_64 x86_64
[root@localhost logs]#
Software used
elasticsearch-1.7.4.tar.gz
kibana-4.1.1-linux-x64.tar.gz
logstash-1.5.5.tar.gz
Time synchronization
ntpdate time.nist.gov
Elasticsearch cluster installation and configuration
One, 192.168.1.8: download and install Elasticsearch
yum -y install java-1.8.0 lrzsz git
wget -P /usr/local https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.4.tar.gz
cd /usr/local
tar xf elasticsearch-1.7.4.tar.gz
ln -s elasticsearch-1.7.4 elasticsearch
Modify the configuration file: vim /usr/local/elasticsearch/config/elasticsearch.yml
cluster.name: LinuxEA                                # cluster name
node.name: "linuxea-es1"                             # node name
node.master: true                                    # eligible to be master
node.data: true                                      # stores data
index.number_of_shards: 5                            # number of shards
index.number_of_replicas: 1                          # number of replicas
path.conf: /usr/local/elasticsearch/config           # configuration file path
path.data: /data/es-data                             # data path
path.work: /data/es-worker                           # work path
path.logs: /usr/local/elasticsearch/logs             # log path
path.plugins: /usr/local/elasticsearch/plugins       # plugins path
bootstrap.mlockall: true                             # lock memory so it is not swapped out
network.host: 192.168.1.8
http.port: 9200
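For bootstrap.mlockall to take effect, the ES process must be allowed to lock memory; on CentOS 6 this usually means raising the memlock limit before starting ES. A minimal sketch, assuming the system is still on its default limits (the _nodes API is how ES 1.x reports the setting):
ulimit -l unlimited    # in the shell that starts ES, or permanently via /etc/security/limits.conf
curl http://192.168.1.8:9200/_nodes/process?pretty | grep mlockall    # after startup; should print "mlockall" : true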
Create the directories
mkdir -p /data/es-data
mkdir -p /data/es-worker
mkdir /usr/local/elasticsearch/logs
mkdir /usr/local/elasticsearch/plugins
Download the service wrapper (startup script)
git clone https://github.com/elastic/elasticsearch-servicewrapper.git
mv elasticsearch-servicewrapper/service /usr/local/elasticsearch/bin/
/usr/local/elasticsearch/bin/service/elasticsearch install
Modify the wrapper configuration
vim /usr/local/elasticsearch/bin/service/elasticsearch.conf
set.default.ES_HOME=/usr/local/elasticsearch    # ES home; must match the actual installation path
set.default.ES_HEAP_SIZE=1024
Start
[root@elk1 local]# /etc/init.d/elasticsearch start
Starting Elasticsearch...
Waiting for Elasticsearch......
running: PID:4355
[root@elk1 local]# netstat -tlntp | grep -E "9200|9300"
tcp        0      0 ::ffff:192.168.1.8:9300    :::*    LISTEN    4357/java
tcp        0      0 ::ffff:192.168.1.8:9200    :::*    LISTEN    4357/java
[root@elk1 local]#
Curl
[root@elk1 local]# curl http://192.168.1.8:9200
{
  "status" : 200,
  "name" : "linuxea-es1",
  "cluster_name" : "LinuxEA",
  "version" : {
    "number" : "1.7.4",
    "build_hash" : "0d3159b9fc8bc8e367c5c40c09c2a57c0032b32e",
    "build_timestamp" : "2015-12-15T11:25:18Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
[root@elk1 local]#
Elasticsearch2
Two, 192.168.1.7: Elasticsearch node 2
[root@elk2 local]# vim elasticsearch/config/elasticsearch.yml
cluster.name: LinuxEA
node.name: "linuxea-es2"
node.master: true
node.data: true
index.number_of_shards: 5
index.number_of_replicas: 1
path.conf: /usr/local/elasticsearch/config
path.data: /data/es-data
path.work: /data/es-worker
path.logs: /usr/local/elasticsearch/logs
path.plugins: /usr/local/elasticsearch/plugins
bootstrap.mlockall: true
network.host: 192.168.1.7
http.port: 9200
Create the directories
mkdir -p /data/es-data
mkdir -p /data/es-worker
mkdir /usr/local/elasticsearch/logs
mkdir /usr/local/elasticsearch/plugins
Download the service wrapper (startup script)
git clone https://github.com/elastic/elasticsearch-servicewrapper.git
mv elasticsearch-servicewrapper/service /usr/local/elasticsearch/bin/
/usr/local/elasticsearch/bin/service/elasticsearch install
Modify the wrapper configuration
vim /usr/local/elasticsearch/bin/service/elasticsearch.conf
set.default.ES_HOME=/usr/local/elasticsearch    # ES home; must match the actual installation path
set.default.ES_HEAP_SIZE=1024
Start
[root@elk2 local]# /etc/init.d/elasticsearch start
Starting Elasticsearch...
Waiting for Elasticsearch......
running: PID:4568
[root@elk2 ~]# netstat -tlntp | grep -E "9200|9300"
tcp        0      0 ::ffff:192.168.1.7:9300    :::*    LISTEN    4568/java
tcp        0      0 ::ffff:192.168.1.7:9200    :::*    LISTEN    4568/java
[root@elk2 ~]#
Curl
[root@elk2 ~]# curl http://192.168.1.7:9200
{
  "status" : 200,
  "name" : "linuxea-es2",
  "cluster_name" : "LinuxEA",
  "version" : {
    "number" : "1.7.4",
    "build_hash" : "0d3159b9fc8bc8e367c5c40c09c2a57c0032b32e",
    "build_timestamp" : "2015-12-15T11:25:18Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
[root@elk2 ~]#
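With both nodes up, it is worth confirming that they joined one cluster rather than forming two single-node clusters. A quick check with the standard cluster health API (the counters will vary with your indices):
curl http://192.168.1.8:9200/_cluster/health?pretty
Expect "number_of_nodes" : 2; "status" turns green once all replicas are allocated.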
Cluster plug-in: elasticsearch-head
Three, 192.168.1.7: install elasticsearch-head. In its cluster view, a five-pointed star marks the master node and a dot marks a data node.
[root@elk2 ~]# /usr/local/elasticsearch/bin/plugin -i mobz/elasticsearch-head
[figure: elasticsearch-head cluster overview]
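Once the plugin is installed, the cluster overview shown above is served by ES itself at the standard plugin path (also referenced in the Kibana section below):
http://192.168.1.7:9200/_plugin/head/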
Redis + Logstash
Four, 192.168.1.6: install Redis and Logstash; this node is mainly used to move data from Redis into ES.
Install the Java dependency packages
yum -y install java-1.8.0 lrzsz git
wget -P /usr/local https://download.elastic.co/logstash/logstash/logstash-1.5.5.tar.gz
cd /usr/local
tar xf logstash-1.5.5.tar.gz
ln -s logstash-1.5.5 logstash
Startup script
[root@localhost local]# vim /etc/init.d/logstash
#!/bin/sh
# Init script for logstash
# Maintained by Elasticsearch
# Generated by pleaserun.
# Implemented based on LSB Core 3.1:
#   * Sections: 20.2, 20.3
#
### BEGIN INIT INFO
# Provides:          logstash
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description:
# Description:       Starts logstash as a daemon.
### END INIT INFO

PATH=/sbin:/usr/sbin:/bin:/usr/bin
export PATH

if [ `id -u` -ne 0 ]; then
   echo "You need root privileges to run this script"
   exit 1
fi

name=logstash
pidfile="/var/run/$name.pid"

export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin

LS_USER=logstash
LS_GROUP=logstash
LS_HOME=/usr/local/logstash
LS_HEAP_SIZE="500m"
LS_JAVA_OPTS="-Djava.io.tmpdir=${LS_HOME}"
LS_LOG_DIR=/usr/local/logstash
LS_LOG_FILE="${LS_LOG_DIR}/$name.log"
LS_CONF_FILE=/etc/logstash.conf
LS_OPEN_FILES=16384
LS_NICE=19
LS_OPTS=""

[ -r /etc/default/$name ] && . /etc/default/$name
[ -r /etc/sysconfig/$name ] && . /etc/sysconfig/$name

program=/usr/local/logstash/bin/logstash
args="agent -f ${LS_CONF_FILE} -l ${LS_LOG_FILE} ${LS_OPTS}"

start() {
  JAVA_OPTS=${LS_JAVA_OPTS}
  HOME=${LS_HOME}
  export PATH HOME JAVA_OPTS LS_HEAP_SIZE LS_JAVA_OPTS LS_USE_GC_LOGGING

  # set ulimit as (root, presumably), before we drop privileges
  ulimit -n ${LS_OPEN_FILES}

  # Run the program!
  nice -n ${LS_NICE} sh -c "
    cd $LS_HOME
    ulimit -n ${LS_OPEN_FILES}
    exec \"$program\" $args
  " > "${LS_LOG_DIR}/$name.stdout" 2> "${LS_LOG_DIR}/$name.err" &

  # Generate the pidfile from here. If we instead made the forked process
  # generate it there would be a race condition between the pidfile writing
  # and a process possibly asking for status.
  echo $! > $pidfile

  echo "$name started."
  return 0
}

stop() {
  # Try a few times to kill TERM the program
  if status; then
    pid=`cat "$pidfile"`
    echo "Killing $name (pid $pid) with SIGTERM"
    kill -TERM $pid
    # Wait for it to exit.
    for i in 1 2 3 4 5; do
      echo "Waiting $name (pid $pid) to die..."
      status || break
      sleep 1
    done
    if status; then
      echo "$name stop failed; still running."
    else
      echo "$name stopped."
    fi
  fi
}

status() {
  if [ -f "$pidfile" ]; then
    pid=`cat "$pidfile"`
    if kill -0 $pid > /dev/null 2> /dev/null; then
      # process by this pid is running.
      # It may not be our pid, but that's what you get with just pidfiles.
      # TODO(sissel): Check if this process seems to be the same as the one we
      # expect. It'd be nice to use flock here, but flock uses fork, not exec,
      # so it makes it quite awkward to use in this case.
      return 0
    else
      return 2 # program is dead but pid file exists
    fi
  else
    return 3 # program is not running
  fi
}

force_stop() {
  if status; then
    stop
    status && kill -KILL `cat "$pidfile"`
  fi
}

case "$1" in
  start)
    status
    code=$?
    if [ $code -eq 0 ]; then
      echo "$name is already running"
    else
      start
      code=$?
    fi
    exit $code
    ;;
  stop) stop ;;
  force-stop) force_stop ;;
  status)
    status
    code=$?
    if [ $code -eq 0 ]; then
      echo "$name is running"
    else
      echo "$name is not running"
    fi
    exit $code
    ;;
  restart)
    stop && start
    ;;
  reload)
    stop && start
    ;;
  *)
    echo "Usage: $SCRIPTNAME {start|stop|force-stop|status|restart}" >&2
    exit 3
    ;;
esac

exit $?
Enable start on boot
[root@localhost local]# chmod +x /etc/init.d/logstash
chkconfig --add logstash
chkconfig logstash on
1. Edit the Logstash configuration file
[root@localhost local]# vim /etc/logstash.conf
input {                        # collect logs from standard input
  stdin {}
}
output {
  elasticsearch {              # write the logs into ES
    host => ["192.168.1.7:9200","192.168.1.8:9200"]    # several hosts can be listed, or a single host in the cluster
    protocol => "http"
  }
}
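Before starting the service, the configuration can be syntax-checked first; the --configtest flag is available in Logstash 1.5:
/usr/local/logstash/bin/logstash -f /etc/logstash.conf --configtest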
2. Manually write data
[root@localhost local]# /usr/local/logstash/bin/logstash -f /etc/logstash.conf
Logstash startup completed
hello world!
3. After writing, check ES: the data has been written and an index has been created automatically
[figure: the test event visible in ES, with an automatically created index]
4. Redis
1. Install Redis
yum -y install redis
vim /etc/redis.conf
bind 192.168.1.6
/etc/init.d/redis start
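A quick sanity check that Redis is reachable on the bound address (redis-cli ships with the redis package):
redis-cli -h 192.168.1.6 ping    # should answer PONG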
2. Install Logstash as above.
3. Logstash + Redis
Logstash reads the Redis contents and writes them to ES:
cat /etc/logstash.conf
input {
  redis {
    host => "192.168.1.6"
    data_type => "list"
    key => "nginx-access.log"
    port => "6379"
    db => "2"
  }
}
output {
  elasticsearch {
    host => ["192.168.1.7:9200","192.168.1.8:9200"]
    index => "nginx-access-log-%{+YYYY.MM.dd}"
    protocol => "http"
    workers => 5
    template_overwrite => true
  }
}
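The index setting creates one index per day. Once events start flowing, the daily indices can be listed with the _cat API (available since ES 1.0):
curl http://192.168.1.8:9200/_cat/indices?v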
Nginx + Logstash example
Five, 192.168.1.4: install Logstash and Nginx; Logstash ships the Nginx data to Redis.
Install Logstash the same way as in step four.
yum -y install pcre pcre-devel openssl openssl-devel
wget http://nginx.org/download/nginx-1.6.3.tar.gz
tar xf nginx-1.6.3.tar.gz
groupadd -r nginx
useradd -g nginx -r nginx
ln -s /usr/local/nginx-1.6.3 /usr/local/nginx
Compile and install
./configure \
  --prefix=/usr/local/nginx \
  --conf-path=/etc/nginx/nginx.conf \
  --user=nginx --group=nginx \
  --error-log-path=/var/log/nginx/error.log \
  --http-log-path=/var/log/nginx/access.log \
  --pid-path=/var/run/nginx/nginx.pid \
  --lock-path=/var/lock/nginx.lock \
  --with-http_ssl_module \
  --with-http_stub_status_module \
  --with-http_gzip_static_module \
  --with-http_flv_module \
  --with-http_mp4_module \
  --http-client-body-temp-path=/var/tmp/nginx/client \
  --http-proxy-temp-path=/var/tmp/nginx/proxy \
  --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi \
  --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi
make && make install
mkdir -pv /var/tmp/nginx/{client,fastcgi,proxy,uwsgi}
mkdir /usr/local/nginx/logs/
/usr/local/nginx/sbin/nginx
Modify the log format: vim /etc/nginx/nginx.conf
log_format logstash_json '{"@timestamp":"$time_iso8601",'
    '"host":"$server_addr",'
    '"client":"$remote_addr",'
    '"size":$body_bytes_sent,'
    '"responsetime":$request_time,'
    '"domain":"$host",'
    '"url":"$uri",'
    '"referer":"$http_referer",'
    '"agent":"$http_user_agent",'
    '"status":"$status"}';
access_log logs/access_json.access.log logstash_json;
The log is being generated
[root@localhost nginx]# ll logs/
total 8
-rw-r--r--. 1 root root 6974 Mar 31 08:44 access_json.access.log
The log format has been changed:
[root@localhost nginx]# cat /usr/local/nginx/logs/access_json.access.log
{"@timestamp":"2016-03-31T08:44:48-07:00","host":"192.168.1.4","client":"192.168.1.200","size":0,"responsetime":0.000,"domain":"192.168.1.4","url":"/index.html","referer":"-","agent":"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36","status":"304"}
{"@timestamp":"2016-03-31T08:44:48-07:00","host":"192.168.1.4","client":"192.168.1.200","size":0,"responsetime":0.000,"domain":"192.168.1.4","url":"/index.html","referer":"-","agent":"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36","status":"304"}
{"@timestamp":"2016-03-31T08:44:48-07:00","host":"192.168.1.4","client":"192.168.1.200","size":0,"responsetime":0.000,"domain":"192.168.1.4","url":"/index.html","referer":"-","agent":"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36","status":"304"}
Ship the Nginx log to Redis
[root@elk1 logs]# cat /etc/logstash.conf
input {
  file {
    path => "/usr/local/nginx/logs/access_json.access.log"
    codec => "json"
  }
}
output {
  redis {
    host => "192.168.1.6"
    data_type => "list"
    key => "nginx-access.log"
    port => "6379"
    db => "2"
  }
}
[root@elk1 logs]#
Start Logstash on both the Redis node and the Nginx node
nohup /usr/local/logstash/bin/logstash -f /etc/logstash.conf &
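To confirm events are actually queuing up in Redis, check the list length (db 2 and the key name match the configs above):
redis-cli -h 192.168.1.6 -n 2 llen nginx-access.log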
ES + Kibana
Six, 192.168.1.7: ES + Kibana
wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
tar xf kibana-4.1.1-linux-x64.tar.gz
ln -sv kibana-4.1.1-linux-x64 kibana
vim /usr/local/kibana/config/kibana.yml
elasticsearch_url: "http://192.168.1.7:9200"
pid_file: /var/run/kibana.pid
log_file: /usr/local/kibana/kibana.log
nohup ./kibana/bin/kibana &
192.168.1.8: ES + Kibana
wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
tar xf kibana-4.1.1-linux-x64.tar.gz
ln -sv kibana-4.1.1-linux-x64 kibana
vim /usr/local/kibana/config/kibana.yml
elasticsearch_url: "http://192.168.1.8:9200"
pid_file: /var/run/kibana.pid
log_file: /usr/local/kibana/kibana.log
nohup ./kibana/bin/kibana &
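Kibana 4 listens on port 5601 by default; confirm it is up on each node:
netstat -tlntp | grep 5601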
[figure: Kibana web interface]
Nginx proxy
Seven, 192.168.1.200: Nginx reverse proxy in front of ES + Kibana (192.168.1.7 and 192.168.1.8)
Access control based on account and IP:
auth_basic "only for VIPs";                        # defines the realm name
auth_basic_user_file /etc/nginx/users/.htpasswd;   # path to the file holding the user names; here a hidden file
deny 172.16.0.1;    # denies access from 172.16.0.1; the counterpart directive is allow
# For example, to allow only the 172.16.0.0/16 network and reject everyone else:
allow 172.16.0.0/16; deny all;
The full configuration:
[root@localhost nginx]# vim nginx.conf
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    log_format logstash_json '{"@timestamp":"$time_iso8601",'
        '"host":"$server_addr",'
        '"client":"$remote_addr",'
        '"size":$body_bytes_sent,'
        '"responsetime":$request_time,'
        '"domain":"$host",'
        '"url":"$uri",'
        '"referer":"$http_referer",'
        '"agent":"$http_user_agent",'
        '"status":"$status"}';
    access_log logs/access_json.access.log logstash_json;
    sendfile on;
    keepalive_timeout 65;
    upstream kibana {                                  # define the backend server group
        server 192.168.1.8:5601 weight=1 max_fails=2 fail_timeout=2;
        server 192.168.1.7:5601 weight=1 max_fails=2 fail_timeout=2;
    }
    server {
        listen 80;
        server_name localhost;
        auth_basic "only for ELK Stack VIPs";          # basic authentication
        auth_basic_user_file /etc/nginx/.htpasswd;     # location of the user/password file
        allow 192.168.1.200;                           # allow 192.168.1.200
        allow 192.168.1.0/24;                          # allow the 192.168.1.0 segment
        allow 10.0.0.1;                                # allow 10.0.0.1
        allow 10.0.0.254;                              # allow 10.0.0.254
        deny all;                                      # deny everyone else
        location / {                                   # reverse proxy: forward incoming requests to the Kibana servers
            proxy_pass http://kibana/;
            index index.html index.htm;
        }
    }
}
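After editing, test the configuration and reload with the standard nginx flags:
/usr/local/nginx/sbin/nginx -t
/usr/local/nginx/sbin/nginx -s reload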
Modify permissions
[root@localhost nginx]# chmod 400 /etc/nginx/.htpasswd
[root@localhost nginx]# chown nginx. /etc/nginx/.htpasswd
[root@localhost nginx]# cat /etc/nginx/.htpasswd
linuxea:$apr1$egcdq5wx$bd2cwxgww3y/xccjvbccd0
[root@localhost nginx]#
Add the user and password
[root@localhost ~]# htpasswd -c -m /etc/nginx/.htpasswd linuxea
New password:
Re-type new password:
Adding password for user linuxea
[root@localhost ~]#
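If the htpasswd command is missing, it is provided by the httpd-tools package on CentOS 6:
yum -y install httpd-tools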
Now Kibana can be accessed through the proxy; note that what is collected in this example is the proxy Nginx's own access log, in the same JSON format.
Kibana
After opening Kibana, click Settings → Add. The index name here must follow the fixed pattern ending in YYYY.MM.DD; the index names can be looked up at http://IP:9200/_plugin/head/
For example, searching by status and IP:
status:200 AND host:192.168.1.200
status:200 OR status:400
status:[400 TO 499]
If several indices match the pattern you type, they are picked up automatically; then click Create.
If there is more than one log, use +Add New.
Then open Discover and select a suitable time range.
You can search on the relevant fields to narrow down to the results you want.
Click Visualize, select the corresponding content, and plot it.
You can also make a selection in the Discover view and click Visualize from there.
For more Kibana charts, see kibana.logstash.es
Collecting multiple logs on one machine, distinguished with if conditionals and different key/db values:
input {
  file {
    type => "apache"
    path => "/data/logs/access.log"
  }
  file {
    type => "php-error.log"
    path => "/data/logs/php-error.log"
  }
}
output {
  if [type] == "apache" {
    redis {
      host => "192.168.1.6"
      port => "6379"
      db => "1"
      data_type => "list"
      key => "access.log"
    }
  }
  if [type] == "php-error.log" {
    redis {
      host => "192.168.1.6"
      port => "6379"
      db => "2"
      data_type => "list"
      key => "php-error.log"
    }
  }
}
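On the indexer side, a matching reader would pull both Redis keys and route each type to its own ES index. This is only a sketch mirroring the shipper settings above (host/port/db/key values are the ones assumed there; the index names are illustrative). The type field set by the shipper travels with each event, so it can be reused for routing:
input {
  redis { host => "192.168.1.6" port => "6379" db => "1" data_type => "list" key => "access.log" }
  redis { host => "192.168.1.6" port => "6379" db => "2" data_type => "list" key => "php-error.log" }
}
output {
  if [type] == "apache" {
    elasticsearch { host => ["192.168.1.7:9200","192.168.1.8:9200"] index => "apache-access-log-%{+YYYY.MM.dd}" protocol => "http" }
  }
  if [type] == "php-error.log" {
    elasticsearch { host => ["192.168.1.7:9200","192.168.1.8:9200"] index => "php-error-log-%{+YYYY.MM.dd}" protocol => "http" }
  }
}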
Download: http://pan.baidu.com/share/init?shareid=570693003&uk=1074693321 (password: pe2n)