CentOS 6.5: Using ELK (Elasticsearch + Logstash + Kibana) to Build a Centralized Log Analysis Platform


Overview:

Logs mainly fall into system logs, application logs, and security logs. Operations staff and developers use logs to understand a server's hardware and software state, to locate configuration errors and their causes, and, through regular analysis, to track server load, performance, and security so that problems can be corrected in time.
Typically, logs are scattered across many different devices. If you manage dozens or hundreds of servers and still log in to each machine in turn the traditional way, the process is cumbersome and inefficient. Centralized log management becomes imperative, for example using open source syslog to aggregate log collection from all servers.
Once logs are centralized, statistics and retrieval become the next problem. We generally use Linux commands such as grep, awk, and wc for searching and counting, but for more demanding queries, sorting, and statistics across a large number of machines, this approach is too laborious.
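As a concrete illustration of the manual approach just described, here is a minimal sketch; the sample log file and its format are invented for the example, and on a real server these commands would target files under /var/log/ instead:

```shell
#!/bin/sh
# Hypothetical sample log for demonstration purposes only.
tmp=$(mktemp -d)
cat > "$tmp/app.log" <<'EOF'
2016-10-12 14:05:01 INFO service started
2016-10-12 14:05:46 ERROR connection refused
2016-10-12 14:06:02 WARN slow response
2016-10-12 14:06:30 ERROR connection refused
EOF

total=$(wc -l < "$tmp/app.log")           # total number of log lines
errors=$(grep -c 'ERROR' "$tmp/app.log")  # lines containing ERROR
echo "total=$total errors=$errors"

# Tally lines per log level (third whitespace-separated field)
awk '{levels[$3]++} END {for (l in levels) print l, levels[l]}' "$tmp/app.log" | sort
```

This works for one file on one host; repeating it across dozens of machines is exactly the pain ELK removes.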
The open source real-time log analysis platform ELK solves the problems above. ELK consists of three open source tools: Elasticsearch, Logstash, and Kibana. Official website: https://www.elastic.co/products
Elasticsearch is an open source distributed search engine. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful interface, multiple data sources, and automatic search load balancing. ELK itself is not a single piece of software but a collection of the open source tools Elasticsearch, Logstash, and Kibana, packaged as an open source log management solution. It can collect logs from any source, in any format, analyze and search the data, and display it in real time. Companion products such as Shield (security), Watcher (alerting), and Marvel (monitoring) open up further possibilities.


Elasticsearch: search; provides a distributed full-text search engine
Logstash: log collection, management, and storage
Kibana: web front end for filtering and displaying logs
Filebeat: monitors log files and forwards them


The operating principle is as follows:


I. Test environment planning

Operating system: CentOS 6.5 x86_64
ELK server: 192.168.3.17
Rsyslog client: 192.168.3.18
Nginx client: 192.168.3.13


To avoid interference, turn off the firewall and SELinux:
service iptables stop
setenforce 0

All three machines need the following entries in the hosts file:
cat /etc/hosts

192.168.3.17 elk.chinasoft.com
192.168.3.18 rsyslog.chinasoft.com
192.168.3.13 nginx.chinasoft.com


Modify the host name and create a working directory:
hostname elk.chinasoft.com
mkdir -p /data/elk


Download the ELK software packages:
cd /data/elk
wget -c https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/rpm/elasticsearch/2.3.3/elasticsearch-2.3.3.rpm
wget -c https://download.elastic.co/logstash/logstash/packages/centos/logstash-2.3.2-1.noarch.rpm
wget -c https://download.elastic.co/kibana/kibana/kibana-4.5.1-1.x86_64.rpm
wget -c https://download.elastic.co/beats/filebeat/filebeat-1.2.3-x86_64.rpm


The server needs Elasticsearch, Logstash, and Kibana; the clients only need Filebeat.
Install the JDK before Elasticsearch: the ELK server requires a Java environment, whereas the clients run only Filebeat, which does not depend on Java, so no JDK is needed there.
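A quick convenience check (not part of the original procedure) for whether a JDK is already present before installing; its output naturally varies by host:

```shell
#!/bin/sh
# Report whether a java binary is on the PATH, and its version if so.
if command -v java >/dev/null 2>&1; then
  java_status="present: $(java -version 2>&1 | head -n 1)"
else
  java_status="missing"
fi
echo "$java_status"
```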


II. ELK server-side operation

1. Install JDK 8 and Elasticsearch
rpm -ivh jdk-8u102-linux-x64.rpm
yum localinstall elasticsearch-2.3.3.rpm -y


Start the service:
service elasticsearch start
chkconfig elasticsearch on


Check the service's configuration files:
rpm -qc elasticsearch
/etc/elasticsearch/elasticsearch.yml
/etc/elasticsearch/logging.yml
/etc/init.d/elasticsearch
/etc/sysconfig/elasticsearch
/usr/lib/sysctl.d/elasticsearch.conf
/usr/lib/systemd/system/elasticsearch.service
/usr/lib/tmpfiles.d/elasticsearch.conf


Verify that it started normally:
netstat -nltp | grep java
tcp 0 0 ::ffff:127.0.0.1:9200 :::* LISTEN 1927/java
tcp 0 0 ::1:9200 :::* LISTEN 1927/java
tcp 0 0 ::ffff:127.0.0.1:9300 :::* LISTEN 1927/java
tcp 0 0 ::1:9300 :::* LISTEN 1927/java
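If netstat is not installed, a bash-only probe using the /dev/tcp pseudo-device can check the same thing; this is an optional sketch, and the host and port simply mirror the setup above:

```shell
#!/bin/bash
# Returns success if something accepts a TCP connection on host:port.
port_open() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

if port_open 127.0.0.1 9200; then
  es_status="elasticsearch is listening on 9200"
else
  es_status="nothing listening on 9200"
fi
echo "$es_status"
```

The probe needs bash (not plain sh); /dev/tcp is a bash feature, and the connection is opened and closed inside a subshell.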


2. Install Kibana

yum localinstall kibana-4.5.1-1.x86_64.rpm -y
service kibana start
chkconfig kibana on


Check that the Kibana service is running (Kibana's default process name is node, port 5601):
ss -tunlp | grep 5601
tcp LISTEN 0 *:5601 *:* users:(("node",2042,11))

Watch the log:
tail -f /var/log/kibana/kibana.stdout

At this point, open a browser and access the Kibana server at http://192.168.3.17:5601/ to confirm that it is working.

3. Install Logstash and add its configuration file
yum localinstall logstash-2.3.2-1.noarch.rpm -y

Generate the certificate:
cd /etc/pki/tls/
openssl req -subj '/CN=elk.chinasoft.com/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

Generating a 2048 bit RSA private key
........................+++
writing new private key to 'private/logstash-forwarder.key'
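To confirm the certificate carries the expected CN, you can inspect it with openssl x509. The sketch below regenerates a throwaway copy in a temporary directory so the real files under /etc/pki/tls are untouched; it assumes the openssl CLI is available:

```shell
#!/bin/sh
# Regenerate the same kind of self-signed cert in a temp dir and read back its subject.
tmp=$(mktemp -d)
openssl req -subj '/CN=elk.chinasoft.com/' -x509 -days 3650 -batch -nodes \
  -newkey rsa:2048 \
  -keyout "$tmp/logstash-forwarder.key" \
  -out "$tmp/logstash-forwarder.crt" 2>/dev/null

subject=$(openssl x509 -noout -subject -in "$tmp/logstash-forwarder.crt")
echo "$subject"
```

On the real server, run the same `openssl x509 -noout -subject` against /etc/pki/tls/certs/logstash-forwarder.crt.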

Then create the Logstash configuration file, as follows:
vim /etc/logstash/conf.d/01-logstash-initial.conf


input {
  beats {
    port => 5000
    type => "logs"
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}


filter {
  if [type] == "syslog-beat" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    geoip {
      source => "clientip"
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}


output {
  elasticsearch { }
  stdout { codec => rubydebug }
}
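To see roughly what the grok pattern in the filter extracts, here is a shell approximation on an invented sample syslog line; the real parsing is done by Logstash's grok, not by these commands:

```shell
#!/bin/sh
# Assumed sample line in classic syslog format.
line='Oct 12 14:05:46 rsyslog sshd[1234]: Accepted password for root'

syslog_timestamp=$(echo "$line" | cut -d' ' -f1-3)                       # SYSLOGTIMESTAMP
syslog_hostname=$(echo "$line" | cut -d' ' -f4)                          # SYSLOGHOST
syslog_program=$(echo "$line" | awk '{print $5}' | sed 's/\[.*$//;s/:$//') # DATA before [pid]
syslog_pid=$(echo "$line" | sed -n 's/.*\[\([0-9][0-9]*\)\].*/\1/p')     # POSINT in brackets
syslog_message=$(echo "$line" | sed 's/^[^]]*]: //')                     # GREEDYDATA after ": "

echo "$syslog_timestamp | $syslog_hostname | $syslog_program | $syslog_pid | $syslog_message"
```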


Start Logstash and check the port (5000, as configured above):
service logstash start


ss -tunlp | grep 5000
tcp LISTEN 0 :::5000 :::* users:(("java",2238,14))


Modify the Elasticsearch configuration
In the configuration directory, create the folder es-01 (the name is arbitrary). logging.yml ships with the package; the elasticsearch.yml under es-01 is created by hand, as follows:
cd /etc/elasticsearch/
ll
total 12
-rwxr-x--- 1 root elasticsearch 3189 May 21:24 elasticsearch.yml
-rwxr-x--- 1 root elasticsearch 2571 May 21:24 logging.yml
drwxr-x--- 2 root elasticsearch 4096 May 23:49 scripts


[root@centossz008 elasticsearch]# tree
.
├── elasticsearch.yml
├── es-01
│   ├── elasticsearch.yml
│   └── logging.yml
├── logging.yml
└── scripts


mkdir es-01
vim es-01/elasticsearch.yml


http.port: 9200
network.host: elk.chinasoft.com
node.name: elk.chinasoft.com
path.data: /etc/elasticsearch/data/es-01


Restart the Elasticsearch and Logstash services.


service elasticsearch restart
Stopping elasticsearch: [ OK ]
Starting elasticsearch: [ OK ]


service logstash restart
Killing logstash (pid 2238) with SIGTERM
Waiting logstash (pid 2238) to die...
Waiting logstash (pid 2238) to die...
logstash stopped.
logstash started.


Copy the Filebeat installation package and the certificate to the rsyslog and Nginx clients:


scp filebeat-1.2.3-x86_64.rpm root@rsyslog.chinasoft.com:/root
scp filebeat-1.2.3-x86_64.rpm root@nginx.chinasoft.com:/root
scp /etc/pki/tls/certs/logstash-forwarder.crt rsyslog.chinasoft.com:/root
scp /etc/pki/tls/certs/logstash-forwarder.crt nginx.chinasoft.com:/root


III. Client deployment: Filebeat (on the rsyslog and Nginx clients)
The Filebeat client is a lightweight tool that collects logs from files on the server and forwards them to the Logstash server for processing. Filebeat communicates with the Logstash instance using the secure Beats protocol; the underlying Lumberjack protocol is designed for reliability and low latency. Filebeat uses the compute resources of the machine that hosts the source data, and the Beats input plugin minimizes the resource demands on Logstash.


(Node 1, rsyslog.chinasoft.com) Install Filebeat, copy the certificate, and create the log collection configuration files:


yum localinstall filebeat-1.2.3-x86_64.rpm -y
# Copy the certificate into the designated local directory
[root@rsyslog elk]# cp logstash-forwarder.crt /etc/pki/tls/certs/
[root@rsyslog elk]# cd /etc/filebeat/
tree
.
├── conf.d
│   ├── authlogs.yml
│   └── syslogs.yml
├── filebeat.template.json
├── filebeat.yml
└── filebeat.yml.bak


Three files are modified: filebeat.yml defines the connection to the Logstash server, and the two files under the conf.d directory define the custom monitored logs. Their contents are as follows:
mkdir conf.d


vim filebeat.yml
------------------------------------------
filebeat:
  spool_size: 1024
  idle_timeout: 5s
  registry_file: .filebeat
  config_dir: /etc/filebeat/conf.d
output:
  logstash:
    hosts:
      - elk.chinasoft.com:5000
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
    enabled: true
shipper: {}
logging: {}
runoptions: {}
------------------------------------------

vim conf.d/authlogs.yml
------------------------------------------
filebeat:
  prospectors:
    - paths:
        - /var/log/secure
      encoding: plain
      fields_under_root: false
      input_type: log
      ignore_older: 24h
      document_type: syslog-beat
      scan_frequency: 10s
      harvester_buffer_size: 16384
      tail_files: false
      force_close_files: false
      backoff: 1s
      max_backoff: 1s
      backoff_factor: 2
      partial_line_waiting: 5s
      max_bytes: 10485760
------------------------------------------


vim conf.d/syslogs.yml
------------------------------------------
filebeat:
  prospectors:
    - paths:
        - /var/log/messages
      encoding: plain
      fields_under_root: false
      input_type: log
      ignore_older: 24h
      document_type: syslog-beat
      scan_frequency: 10s
      harvester_buffer_size: 16384
      tail_files: false
      force_close_files: false
      backoff: 1s
      max_backoff: 1s
      backoff_factor: 2
      partial_line_waiting: 5s
      max_bytes: 10485760
------------------------------------------


vim conf.d/flowsdk.yml
------------------------------------------
filebeat:
  prospectors:
    - paths:
        - /data/logs/flowsdk.log
      encoding: plain
      fields_under_root: false
      input_type: log
      ignore_older: 24h
      document_type: syslog-beat
      scan_frequency: 10s
      harvester_buffer_size: 16384
      tail_files: false
      force_close_files: false
      backoff: 1s
      max_backoff: 1s
      backoff_factor: 2
      partial_line_waiting: 5s
      max_bytes: 10485760
------------------------------------------


After the modifications are complete, start the Filebeat service:
service filebeat start
chkconfig filebeat on


Possible error:
service filebeat start
Starting filebeat: 2016/10/12 14:05:46 No paths given. What files do you want me to watch?

Analysis:
None of the monitored files were modified within the last 24 hours (ignore_older: 24h), so Filebeat found nothing to watch. Add a file that is updated regularly, such as /var/log/messages.
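The ignore_older: 24h cutoff behaves like find's mtime filter, which makes it easy to illustrate; the file names below are invented for the sketch, and touch -d requires GNU coreutils:

```shell
#!/bin/sh
# Simulate one stale file (outside the 24h window) and one fresh file.
tmp=$(mktemp -d)
touch -d '2 days ago' "$tmp/stale.log"   # not modified within 24h: ignored
touch "$tmp/fresh.log"                    # modified just now: collected

# find -mtime -1 keeps only files modified in the last 24 hours,
# mirroring what ignore_older: 24h lets Filebeat see.
recent=$(find "$tmp" -name '*.log' -mtime -1)
echo "$recent"
```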
(Node 2, nginx.chinasoft.com) Install Filebeat, copy the certificate, and create the log collection configuration files:

yum localinstall filebeat-1.2.3-x86_64.rpm -y
cp logstash-forwarder.crt /etc/pki/tls/certs/
cd /etc/filebeat/
tree
.
├── conf.d
│   ├── nginx.yml
│   └── syslogs.yml
├── filebeat.template.json
└── filebeat.yml


Modify filebeat.yml as follows:
vim filebeat.yml
------------------------------------------
filebeat:
  spool_size: 1024
  idle_timeout: 5s
  registry_file: .filebeat
  config_dir: /etc/filebeat/conf.d
output:
  logstash:
    hosts:
      - elk.chinasoft.com:5000
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
    enabled: true
shipper: {}
logging: {}
runoptions: {}
------------------------------------------


syslogs.yml and nginx.yml:


vim conf.d/syslogs.yml
------------------------------------------
filebeat:
  prospectors:
    - paths:
        - /var/log/messages
      encoding: plain
      fields_under_root: false
      input_type: log
      ignore_older: 24h
      document_type: syslog-beat
      scan_frequency: 10s
      harvester_buffer_size: 16384
      tail_files: false
      force_close_files: false
      backoff: 1s
      max_backoff: 1s
      backoff_factor: 2
      partial_line_waiting: 5s
      max_bytes: 10485760
------------------------------------------
vim conf.d/nginx.yml


------------------------------------------
filebeat:
  prospectors:
    - paths:
        - /var/log/nginx/access.log
      encoding: plain
      fields_under_root: false
      input_type: log
      ignore_older: 24h
      document_type: syslog-beat
      scan_frequency: 10s
      harvester_buffer_size: 16384
      tail_files: false
      force_close_files: false
      backoff: 1s
      max_backoff: 1s
      backoff_factor: 2
      partial_line_waiting: 5s
      max_bytes: 10485760
------------------------------------------


After the modifications are complete, start the Filebeat service and check the process.
Reload systemd to pick up new or changed units, then start the service and enable it at boot:
systemctl daemon-reload
systemctl start filebeat
systemctl enable filebeat


systemctl status filebeat
filebeat.service - filebeat
   Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2016-10-11 17:23:03 CST; 2min 56s ago
     Docs: https://www.elastic.co/guide/en/beats/filebeat/current/index.html
 Main PID: 22452 (filebeat)
   CGroup: /system.slice/filebeat.service
           └─22452 /usr/bin/filebeat -c /etc/filebeat/filebeat.yml


Oct 11 17:23:03 localhost.localdomain systemd[1]: Started filebeat.
Oct 11 17:23:03 localhost.localdomain systemd[1]: Starting filebeat...
Oct 11 17:23:31 localhost.localdomain systemd[1]: Started filebeat.


As shown above, the client Filebeat process is running and connected to the ELK server. Verify below.

IV. Verification: access Kibana at http://192.168.3.17:5601
Before Kibana can be used, at least one index name or pattern must be configured; it is used to locate the index in Elasticsearch at analysis time. Using the system default, Kibana automatically loads the document fields under the index and picks an appropriate time field for the charts.


Nginx access log from node 2 (192.168.3.13)

At this point, the basic construction and use of ELK is complete.
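The kind of per-status summary you can now build in Kibana for the Nginx access log can also be done by hand with awk, shown here on an invented sample in the combined log format (status code is the ninth field):

```shell
#!/bin/sh
# Invented sample access log entries for the demonstration.
tmp=$(mktemp -d)
cat > "$tmp/access.log" <<'EOF'
192.168.3.1 - - [11/Oct/2016:17:23:03 +0800] "GET / HTTP/1.1" 200 612
192.168.3.1 - - [11/Oct/2016:17:23:05 +0800] "GET /missing HTTP/1.1" 404 169
192.168.3.1 - - [11/Oct/2016:17:23:09 +0800] "GET / HTTP/1.1" 200 612
EOF

# Count requests per HTTP status code.
awk '{codes[$9]++} END {for (c in codes) print c, codes[c]}' "$tmp/access.log" | sort
```

In Kibana the same aggregation is a couple of clicks and updates in real time, which is the point of the whole setup.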


Modifying the log and data storage configuration:


vim /etc/init.d/elasticsearch
Modify the following two lines:
LOG_DIR="/data/elasticsearch/log"
DATA_DIR="/data/elasticsearch/data"


Create the data and log directories:
mkdir -p /data/elasticsearch/data
mkdir -p /data/elasticsearch/log

chown -R elasticsearch:elasticsearch /data/elasticsearch/
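The directory preparation can be rehearsed safely in a temporary location first; on the real host you would run it as root and chown to elasticsearch:elasticsearch, whereas the sketch uses the current user as a stand-in:

```shell
#!/bin/sh
# Rehearse the layout in a temp dir (no root needed).
tmp=$(mktemp -d)
mkdir -p "$tmp/elasticsearch/data" "$tmp/elasticsearch/log"
chown -R "$(id -u):$(id -g)" "$tmp/elasticsearch"  # stand-in for elasticsearch:elasticsearch
ls -d "$tmp/elasticsearch/data" "$tmp/elasticsearch/log"
```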

Restart the service for the change to take effect:
/etc/init.d/elasticsearch restart

If Elasticsearch was compiled and installed, the storage directories are configured in elasticsearch.yml as follows:
cat /usr/local/elasticsearch/config/elasticsearch.yml | egrep -v "^$|^#"

path.data: /tmp/elasticsearch/data
path.logs: /tmp/elasticsearch/logs
network.host: x.x.x.x
network.port: 9200

Modifying the Logstash log directory:
vim /etc/init.d/logstash
LS_LOG_DIR=/data/logstash
