Nginx + Logstash + Elasticsearch + Kibana: Building a Website Log Analysis System

Source: Internet
Author: User
Tags: auth, gpg, json, centos, iptables, kibana, logstash, firewall

Objective

Format Nginx logs as JSON, ship them with Logstash directly to Elasticsearch, and then display and analyze them through the Kibana web interface.

Key points:
Have Nginx write its logs in JSON. The default space-separated Nginx log format would need regex matching in Logstash, which costs too much CPU.
Configure a firewall on the Elasticsearch machine so that only the designated Logstash machine can reach it.
Kibana listens only on 127.0.0.1; Nginx sits in front of it as a reverse proxy, with HTTP Basic auth for account/password login.

These are fairly rough notes, kept as a memo.


Install Java


yum install java-1.8.0-openjdk*

Nginx Configuration

To keep the load of running Logstash on the Nginx machine as low as possible, it is recommended to have Nginx write JSON directly, so that Logstash can read it and ship it to Elasticsearch without any parsing.

Define the JSON log format inside the http{} block:

log_format logstash_json '{"@timestamp":"$time_iso8601",'
    '"host":"$server_addr",'
    '"clientip":"$remote_addr",'
    '"http_x_forwarded_for":"$http_x_forwarded_for",'
    '"size":$body_bytes_sent,'
    '"responsetime":$request_time,'
    '"upstreamtime":"$upstream_response_time",'
    '"upstreamhost":"$upstream_addr",'
    '"http_host":"$host",'
    '"request":"$request",'
    '"url":"$uri",'
    '"xff":"$http_x_forwarded_for",'
    '"referer":"$http_referer",'
    '"agent":"$http_user_agent",'
    '"status":"$status"}';
In the server block, access_log can be configured with multiple simultaneous outputs, so you can keep your previous log alongside the JSON one:

access_log /data/wwwlogs/www.iamle.log iamle.com;
access_log /data/wwwlogs/www.iamle.com.logstash_json.log logstash_json;
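A quick way to sanity-check the format is to substitute sample values for the nginx variables and parse the result (the values below are made up for illustration). One caveat worth knowing: nginx does not JSON-escape variable values by default, so a request containing quotes or backslashes can still produce an invalid line; nginx 1.11.8+ supports escape=json on log_format for this.

```python
import json

# A log line as the format above would emit it, with made-up sample values
log_line = (
    '{"@timestamp":"2015-09-04T15:37:08+08:00",'
    '"host":"10.8.8.2",'
    '"clientip":"203.0.113.7",'
    '"size":1024,'
    '"responsetime":0.123,'
    '"upstreamtime":"0.120",'
    '"http_host":"www.iamle.com",'
    '"url":"/index.html",'
    '"status":"200"}'
)

record = json.loads(log_line)            # parses cleanly: the format is valid JSON
print(record["status"], record["size"])  # -> 200 1024
```

Note that size and responsetime are deliberately unquoted in the log_format, so they arrive as numbers rather than strings.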

Install Logstash 1.5.x on the Nginx machine

rpm --import http://packages.elasticsearch.org/GPG-KEY-elasticsearch
cat >/etc/yum.repos.d/logstash.repo <<EOF
[logstash-1.5]
name=Logstash repository for 1.5.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.5/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
EOF
yum clean all
yum install logstash

In the directory /etc/logstash/conf.d/, create a configuration file nginx_json.conf:

input {
    file {
        path => "/data/wwwlogs/www.iamle.com.logstash_json.log"
        codec => json
    }
}
filter {
    mutate {
        split => ["upstreamtime", ","]
    }
    mutate {
        convert => ["upstreamtime", "float"]
    }
}
output {
    elasticsearch {
        host => "elk.server.iamle.com"
        protocol => "http"
        index => "logstash-%{type}-%{+YYYY.MM.dd}"
        index_type => "%{type}"
        workers => 5
        template_overwrite => true
    }
}
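The two mutate filters exist because a request that passes through several upstreams gives $upstream_response_time a comma-separated value like "0.010, 0.020"; splitting it and converting to float lets Kibana aggregate the field numerically. The equivalent transformation, sketched in Python for illustration:

```python
# What the mutate split + convert filters do to the upstreamtime field
raw = "0.010, 0.020"                      # $upstream_response_time with two upstreams

parts = raw.split(",")                    # mutate { split => ["upstreamtime", ","] }
upstreamtime = [float(p) for p in parts]  # mutate { convert => ["upstreamtime", "float"] }

print(upstreamtime)  # -> [0.01, 0.02]
```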
service logstash start

Install Elasticsearch 1.7.x on the log storage machine to provide the underlying data store

rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
cat >/etc/yum.repos.d/elasticsearch.repo <<EOF
[elasticsearch-1.7]
name=Elasticsearch repository for 1.7.x packages
baseurl=http://packages.elastic.co/elasticsearch/1.7/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF
yum clean all
yum install elasticsearch
Configuration file: set the data storage location

vim /etc/elasticsearch/elasticsearch.yml
# Can optionally include more than one location, causing data to be striped across
# the locations (a la RAID 0) on a file level, favouring locations with most free
# space on creation. For example:
#
path.data: /data

The directory will be populated automatically; just point it at an empty directory.

service elasticsearch start

On CentOS 7:
systemctl start elasticsearch
systemctl status elasticsearch
elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled)
   Active: active (running) since Fri 2015-09-04 CST; 1s ago
     Docs: http://www.elastic.co
 Main PID: 19376 (java)
   CGroup: /system.slice/elasticsearch.service
           └─19376 /bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -X...
Sep 04 15:37:08 elk systemd[1]: Starting Elasticsearch...
Sep 04 15:37:08 elk systemd[1]: Started Elasticsearch.

Check whether it started successfully:
ss -ltnp | grep 9200

Configure firewalld on CentOS 7 so that only a fixed IP can access Elasticsearch
systemctl start firewalld.service
systemctl status firewalld.service

Allow only the Nginx machine to access Elasticsearch ports 9200 and 9300:

firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" \
source address="10.8.8.2" \
port protocol="tcp" port="9200" accept'

firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" \
source address="10.8.8.2" \
port protocol="tcp" port="9300" accept'
firewall-cmd --reload

Verify:
iptables -L -n | grep 9200
ACCEPT     tcp  --  10.8.8.2     0.0.0.0/0     tcp dpt:9200 ctstate NEW

Install Kibana 4 to display the data in Elasticsearch

wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
tar zxvf kibana-4.1.1-linux-x64.tar.gz
cd kibana-4.1.1-linux-x64

Modify the configuration file:
vim /usr/local/kibana-4.1.1-linux-x64/config/kibana.yml
# Kibana is served from a back end server. This controls which port to use.
port: 5601

# The host to bind the server to.
# Listen on the local address only; Nginx will reverse proxy to it
host: "127.0.0.1"

nohup ./bin/kibana &

Check whether it started successfully:
ss -ltnp | grep 5601

Use Nginx as a reverse proxy for Kibana
Configure HTTP Basic auth in Nginx for account/password login
http://trac.edgewall.org/export/10770/trunk/contrib/htpasswd.py (recommended in the Nginx wiki)
Example run:
chmod 777 htpasswd.py
./htpasswd.py -c -b htpasswd username password
# -c creates the htpasswd file; -b reads the password from the command line
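For context on why this must stay behind a restricted listener (or TLS): HTTP Basic auth merely base64-encodes user:password into an Authorization header; it is encoding, not encryption. A sketch of what the browser sends, using the placeholder credentials from the example above:

```python
import base64

user, password = "username", "password"  # placeholders from the htpasswd example
token = base64.b64encode(f"{user}:{password}".encode()).decode()
header = f"Authorization: Basic {token}"

# Any intermediary can decode the token and recover the credentials
assert base64.b64decode(token).decode() == "username:password"
print(header)
```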

server
{
    listen 80;
    #listen [::]:80;
    server_name elk.server.iamle.com;

    location / {
        auth_basic "Password please";
        auth_basic_user_file /usr/local/nginx/conf/htpasswd;
        proxy_pass http://127.0.0.1:5601/;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}


Appendix: using the CentOS 7 firewall

Check firewall status:
firewall-cmd --state

Temporarily open the FTP service:
firewall-cmd --add-service=ftp
Permanently open the FTP service:
firewall-cmd --add-service=ftp --permanent
Close the FTP service:
firewall-cmd --remove-service=ftp --permanent
Permanently open the HTTP service in the public zone:
firewall-cmd --permanent --zone=public --add-service=http
Open a specific port:
firewall-cmd --add-port=1324/tcp

For permanent settings to take effect, restart the service:
systemctl restart firewalld
or reload without restarting the service (reload after changing the firewall policy):
firewall-cmd --reload
firewall-cmd --complete-reload (unlike --reload, this also resets runtime state information and interrupts existing connections)

Check whether port 21 is open for the FTP service:
iptables -L -n | grep 21
ACCEPT     tcp  --  0.0.0.0/0     0.0.0.0/0     tcp dpt:21 ctstate NEW

Query whether the FTP service is enabled:
firewall-cmd --query-service ftp

View current rules:
firewall-cmd --list-all

Allow only certain IPs to access a service on this machine:
firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" \
source address="192.168.0.4/24" service name="http" accept'

Allow only certain IPs to access a port on this machine:
firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" \
source address="192.168.0.4/24" \
port protocol="tcp" port="8080" accept'
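The nested quoting in rich rules is the part that is easiest to get wrong: the whole rule is a single shell argument (single-quoted), while the family/address/port values are double-quoted inside it. A small helper (hypothetical, for illustration only) that builds the exact string passed to --add-rich-rule:

```python
def port_rich_rule(source: str, port: int, protocol: str = "tcp") -> str:
    """Build a firewalld rich rule allowing one source IP/net to reach one port."""
    return (f'rule family="ipv4" source address="{source}" '
            f'port protocol="{protocol}" port="{port}" accept')

rule = port_rich_rule("10.8.8.2", 9200)
# Single-quote the rule so the shell passes the inner double quotes through intact
print(f"firewall-cmd --permanent --zone=public --add-rich-rule='{rule}'")
```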
