Centralized Logging on CentOS 7 Using Logstash and Kibana
Centralized logging is useful when trying to identify a problem with a server or application because it allows you to search all of your logs in a single place. It is also useful because it lets you identify issues that span multiple servers by correlating their logs within a specific time frame. This series of tutorials will teach you how to install Logstash and Kibana on CentOS, and then how to add more filters to structure your log data.
Reference: http://www.ibm.com/developerworks/cn/opensource/os-cn-elk/
Installation Introduction
In this tutorial, we will install the Elastic (ELK) Stack on CentOS 7: Elasticsearch 5.x, Logstash 5.x, and Kibana 5.x. We'll also show you how to configure Filebeat to collect and visualize your systems' logs in a centralized location. Logstash is an open source tool for collecting, parsing, and storing logs for future use. Kibana is a web interface that you can use to search and view the logs that Logstash has indexed. Both tools are backed by Elasticsearch, which is used to store the logs.
You can use Logstash to collect logs of all types, but we limit the scope of this tutorial to syslog collection.
Experimental Purpose
The goal of this tutorial is to set up Logstash to collect syslogs from multiple servers, and to set up Kibana to visualize the collected logs.
The ELK stack setup has four main components:
Logstash: the server component of Logstash that processes incoming logs
Elasticsearch: stores all of the logs
Kibana: the web interface for searching and visualizing logs, proxied through Nginx
Filebeat: installed on the client servers that send their logs to Logstash; Filebeat acts as a log shipping agent and communicates with Logstash using the lumberjack networking protocol
We will install the first three components on a single server, which we will call our ELK server. Filebeat will be installed on all of the client servers whose logs we want to collect, which we will refer to collectively as our client servers.
Prerequisites
The amount of CPU, RAM, and storage your ELK server will need depends on the volume of logs you intend to collect. In this tutorial, we will use a VPS with the following specifications for our ELK server:
OS: CentOS 7
RAM: 4 GB
CPU: 2
Note: allocate resources to each node according to your own server's capacity.
Install Java 8
Elasticsearch and Logstash require Java, so let's install that now. We will install the latest version of Oracle Java 8, as this is the version recommended for Elasticsearch.
Note: It is recommended that you download the latest version of the JDK locally and upload it to the server's /usr/local/src directory.
# JDK Download Address:
http://www.oracle.com/technetwork/java/javase/downloads
Then use this yum command to install the RPM (if you downloaded a different version, replace the file name here):
yum -y localinstall jdk-8u111-linux-x64.rpm
# or
rpm -ivh jdk-8u111-linux-x64.rpm
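To confirm that the JDK installed correctly, you can check the version the JVM reports (the output will vary with the exact build you downloaded):
java -version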
Java should now be installed at /usr/java/jdk1.8.0_111/jre/bin/java and linked from /usr/bin/java.
Installing Elasticsearch
Elasticsearch can be installed with the package manager by adding Elastic's package repository.
Run the following command to import the Elasticsearch public GPG key into RPM:
# https://www.elastic.co/guide/en/elasticsearch/reference/current/rpm.html
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Create a file named elasticsearch.repo in the /etc/yum.repos.d/ directory for RedHat-based distributions, containing the following:
echo '[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
' | sudo tee /etc/yum.repos.d/elasticsearch.repo
After the Elasticsearch repository is created, check that it is available with makecache, then install Elasticsearch via yum:
yum makecache
yum install elasticsearch -y
To configure Elasticsearch to start automatically when the system boots, run the following command:
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service
Elasticsearch can be started and stopped as follows:
sudo systemctl start elasticsearch.service
sudo systemctl stop elasticsearch.service
These commands do not provide feedback on whether Elasticsearch started successfully. Instead, this information is written to the log files in /var/log/elasticsearch/.
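For example, to follow the main log file while starting the service (the file name matches the cluster name, which defaults to elasticsearch):
sudo tail -f /var/log/elasticsearch/elasticsearch.log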
By default, the Elasticsearch service does not log information to the systemd journal. To enable journalctl logging, the --quiet option must be removed from the ExecStart command line in the elasticsearch.service file.
# Remove the --quiet \ option (around line 24)
vim /etc/systemd/system/multi-user.target.wants/elasticsearch.service
Once systemd logging is enabled, you can use the journalctl command to read the logging information. To tail the journal:
sudo journalctl -f
To list the journal entries for the Elasticsearch service:
sudo journalctl --unit elasticsearch
To list the journal entries for the Elasticsearch service starting at a given time:
sudo journalctl --unit elasticsearch --since "2017-01-04 10:17:16"
# --since shows entries logged on or after the specified time
See man journalctl for more ways to use journalctl.
Check that Elasticsearch is running
You can test that the Elasticsearch node is running by sending an HTTP request to port 9200 on localhost:
curl -XGET 'localhost:9200/?pretty'
You should see a response similar to the following:
{
  "name" : "De-lrno",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "Dejzplwhqqk5ugitxr8jja",
  "version" : {
    "number" : "5.1.1",
    "build_hash" : "5395e21",
    "build_date" : "2016-12-06T12:36:15.409Z",
    "build_snapshot" : false,
    "lucene_version" : "6.3.0"
  },
  "tagline" : "You Know, for Search"
}
Configure Elasticsearch
Elasticsearch loads its configuration from /etc/elasticsearch/elasticsearch.yml by default.
The format of the configuration file is documented here:
Https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html
[root@linuxprobe ~]# egrep -v "^#|^$" /etc/elasticsearch/elasticsearch.yml
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 10.1.1.53        # default is localhost; set a custom IP here
http.port: 9200
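After changing these settings, restart Elasticsearch and check that it answers on the configured address (the IP below mirrors the example above; substitute your own):
sudo systemctl restart elasticsearch.service
curl -XGET 'http://10.1.1.53:9200/?pretty'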
The RPM package also installs a system configuration file (/etc/sysconfig/elasticsearch) that allows you to set the following parameters:
[root@linuxprobe elasticsearch]# egrep -v "^#|^$" /etc/sysconfig/elasticsearch
ES_HOME=/usr/share/elasticsearch
JAVA_HOME=/usr/java/jdk1.8.0_111
CONF_DIR=/etc/elasticsearch
DATA_DIR=/var/lib/elasticsearch
LOG_DIR=/var/log/elasticsearch
PID_DIR=/var/run/elasticsearch
Log Configuration
Elasticsearch uses Log4j 2 for logging. Log4j 2 can be configured using the log4j2.properties file. Elasticsearch exposes a single property, ${sys:es.logs}, which can be referenced in the configuration file to determine the location of the log files; it resolves to the prefix of the Elasticsearch log file names at runtime.
For example, if your log directory is /var/log/elasticsearch and your cluster name is production, then ${sys:es.logs} will resolve to /var/log/elasticsearch/production.
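As an illustration, the default log4j2.properties references this property when naming the rolling log files; the exact lines may differ between versions, but they look roughly like this:
appender.rolling.fileName = ${sys:es.logs}.log
appender.rolling.filePattern = ${sys:es.logs}-%d{yyyy-MM-dd}.log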
The default log configuration lives at /etc/elasticsearch/log4j2.properties.
Installing Kibana
The Kibana RPM can be downloaded from the Elastic website or from the RPM repository. It can be used to install Kibana on any RPM-based system, such as openSUSE, SLES, CentOS, Red Hat, and Oracle Enterprise Linux.
Import the Elastic PGP Key
All of Elastic's packages are signed with the Elastic signing key (PGP key D88E42B4, available from https://pgp.mit.edu). Import it with:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Create a Kibana source
echo '[kibana-5.x]
name=Kibana repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
' | sudo tee /etc/yum.repos.d/kibana.repo
After the Kibana repository has been created, refresh the cache and install Kibana with yum:
yum makecache && yum install kibana -y
Run Kibana with systemd
To configure Kibana to start automatically when the system boots, run the following command:
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service
Kibana can be started and stopped as follows
sudo systemctl start kibana.service
sudo systemctl stop kibana.service
Configure Kibana
Kibana loads its configuration from the/etc/kibana/kibana.yml file by default.
Reference: https://www.elastic.co/guide/en/kibana/current/settings.html
Note: In this tutorial, the listen address is changed from localhost to the server's IP; if you leave it as localhost, you will need to set up a reverse proxy to access Kibana from outside.
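As a sketch, the relevant settings in /etc/kibana/kibana.yml might look like this (the IP is only an example; use your own server's address):
server.port: 5601
server.host: "10.1.1.53"
elasticsearch.url: "http://10.1.1.53:9200"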
We install an Nginx reverse proxy on the same server to allow external access.
Installing Nginx
If you configured Kibana to listen on localhost, you must set up a reverse proxy to allow external access to it. This article uses Nginx as the reverse proxy. Create the official Nginx repository to install Nginx:
# https://www.nginx.com/resources/wiki/start/topics/tutorials/install/
echo '[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1
' | sudo tee /etc/yum.repos.d/nginx.repo
Install Nginx and httpd-tools using yum:
yum install nginx httpd-tools -y
Use htpasswd to create an administrator user, here called "kibanaadmin" (use a different name yourself), that can access the Kibana web interface:
[root@linuxprobe ~]# htpasswd -c /etc/nginx/htpasswd.users kibanaadmin
New password:              # enter a custom password
Re-type new password:
Adding password for user kibanaadmin
Edit the Nginx configuration file with vim:
[root@linuxprobe ~]# egrep -v "#|^$" /etc/nginx/conf.d/kibana.conf
server {
    listen 80;
    server_name kibana.aniu.co;
    access_log /var/log/nginx/kibana.aniu.co.access.log main;
    error_log /var/log/nginx/kibana.aniu.co.access.log;
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Save and exit. This configures Nginx to direct your server's HTTP traffic to the Kibana application, which is listening on localhost:5601. Nginx will also use the htpasswd.users file we created earlier and require basic authentication.
# Start Nginx and verify the configuration
sudo systemctl start nginx
sudo systemctl enable nginx
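Before relying on the proxy, you can check the configuration syntax and confirm that basic authentication is enforced (kibanaadmin is the user created above):
sudo nginx -t
curl -I http://localhost/                 # should return 401 without credentials
curl -I -u kibanaadmin http://localhost/   # prompts for the password; a 2xx/3xx response means Kibana is reachable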
In this setup SELinux is disabled. If that is not the case, you may need to run the following command for Kibana to work correctly:
sudo setsebool -P httpd_can_network_connect 1
Now open Kibana in a browser and enter the kibanaadmin username and the password you set above.
The figure above shows that Kibana has been installed successfully; it still needs an index pattern to be configured.
Installing Logstash
Create a Logstash source
# Import the public signing key
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
# Add the following to a file with a .repo suffix in the /etc/yum.repos.d/ directory, for example logstash.repo
echo '[logstash-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
' | sudo tee /etc/yum.repos.d/logstash.repo
Install Logstash with yum:
yum makecache && yum install logstash -y
Generate an SSL certificate
Since we will use Filebeat to send logs from our client servers to our ELK server, we need to create an SSL certificate and key pair. Filebeat uses this certificate to verify the identity of the ELK server. The certificate and private key will be stored in the existing directories under /etc/pki/tls/.
Use the following command in the appropriate location (/etc/pki/tls/...), replacing ELK_server_fqdn with the FQDN of the ELK server, to generate the SSL certificate and private key:
cd /etc/pki/tls
sudo openssl req -subj '/CN=ELK_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
# Note: ELK_server_fqdn is user-defined; an example follows:
[root@linuxprobe ~]# cd /etc/pki/tls
[root@linuxprobe tls]# sudo openssl req -subj '/CN=kibana.aniu.co/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
Generating a 2048 bit RSA private key
......+++
...........................................................................................................+++
writing new private key to 'private/logstash-forwarder.key'
-----
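You can optionally inspect the generated certificate to confirm its subject and validity period:
openssl x509 -in certs/logstash-forwarder.crt -noout -subject -dates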
The logstash-forwarder.crt file will be copied to every server that sends logs to Logstash.
Configure Logstash
Logstash configuration files are in JSON format and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.
Create a configuration file named 01-beats-input.conf and set up our Filebeat input:
sudo vi /etc/logstash/conf.d/01-beats-input.conf
Insert the following input configuration
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
Save and exit. This listens for beats input on TCP port 5044, encrypted with the SSL certificate created above.
Create a configuration file named 10-syslog-filter.conf to add a filter for syslog messages:
sudo vim /etc/logstash/conf.d/10-syslog-filter.conf
Insert the following filter configuration:
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
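As a rough illustration (the log line below is hypothetical), the grok pattern above would break a syslog entry into separate fields:
# Input line:
#   Jan  4 10:17:16 linux-node1 sshd[1234]: Failed password for root from 10.1.1.10
# Extracted fields (approximately):
#   syslog_timestamp = "Jan  4 10:17:16"
#   syslog_hostname  = "linux-node1"
#   syslog_program   = "sshd"
#   syslog_pid       = "1234"
#   syslog_message   = "Failed password for root from 10.1.1.10"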
Save and quit. This filter looks for logs that are labeled as "syslog" type (by Filebeat) and tries to use grok to parse incoming syslog logs to make them structured and queryable. Next, create a sample configuration file named logstash-simple.conf:
vim /etc/logstash/conf.d/logstash-simple.conf
Insert the following configuration:
input { stdin { } }
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
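To try this pipeline interactively before running Logstash as a service, you can start it in the foreground against this file (the binary path assumes the default RPM layout) and type a line of text; the rubydebug codec prints the resulting event and it is also indexed into Elasticsearch:
sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-simple.conf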
This output basically configures Logstash to store the input data in Elasticsearch, which is running at localhost:9200.
Run Logstash using systemd
sudo systemctl start logstash.service
sudo systemctl enable logstash.service
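You can also ask Logstash to validate the pipeline configuration before (or after) starting the service; this again assumes the default RPM install path:
sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/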
Note: Logstash may fail to start or restart; if there are problems, check the logs to identify the specific issue.
Loading the Kibana dashboards
Elastic provides several sample Kibana dashboards and Beats index patterns that can help us get started with Kibana. Although we won't use the dashboards in this tutorial, we will still load them so that we can use the Filebeat index pattern they include.
First, download the sample dashboards archive to /usr/local/src:
cd /usr/local/src
curl -L -O https://download.elastic.co/beats/dashboards/beats-dashboards-1.1.0.zip
Install the unzip package, extract the archive, and load the dashboards:
sudo yum -y install unzip
unzip beats-dashboards-*.zip
cd beats-dashboards-*
./load.sh
These are the index patterns we just loaded:
[packetbeat-]YYYY.MM.DD
[topbeat-]YYYY.MM.DD
[filebeat-]YYYY.MM.DD
[winlogbeat-]YYYY.MM.DD
When we start using Kibana, we will select the Filebeat index pattern as the default.
Loading the Filebeat index template in Elasticsearch
Because we plan to use Filebeat to ship logs to Elasticsearch, we should load the Filebeat index template. The index template configures Elasticsearch to parse incoming Filebeat fields intelligently.
First, download the Filebeat index template to /usr/local/src:
cd /usr/local/src
curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json
Then use this command to load the template:
# Note: run this command in the same directory as the JSON template
[root@linuxprobe src]# curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json
{
  "acknowledged" : true
}
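To confirm that the template was stored, you can ask Elasticsearch for it by name:
curl -XGET 'http://localhost:9200/_template/filebeat?pretty'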
Now our ELK server is ready to receive Filebeat data, so let's move on to setting up Filebeat on each client server.
Set Up Filebeat (Add Client Servers)
Follow these steps for each CentOS or RHEL 7 server whose logs you want to send to the ELK server.
Copy the SSL certificate
On the ELK server, copy the SSL certificate you created in the prerequisites section to the client server:
# Use scp to copy the certificate to the remote host
yum -y install openssh-clients
#
scp /etc/pki/tls/certs/logstash-forwarder.crt root@linux-node1:/tmp
# Note: if you are using hostnames rather than IPs, remember to configure /etc/hosts on the ELK server
After providing your login credentials, make sure the certificate copy succeeded; it is required for communication between the client servers and the ELK server. On the client server, copy the ELK server's SSL certificate into the appropriate location (/etc/pki/tls/certs):
[root@linux-node1 ~]# sudo mkdir -p /etc/pki/tls/certs
[root@linux-node1 ~]# sudo cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/
Installing the Filebeat package
On the client server, run the following command to import the Elasticsearch public GPG key into RPM, then create the repository as above:
sudo rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
#
echo '[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
' | sudo tee /etc/yum.repos.d/elasticsearch.repo
Use yum to install Filebeat after the repository has been created:
yum makecache && yum install filebeat -y
sudo chkconfig --add filebeat
Configure Filebeat
[root@linux-node1 ~]# egrep -v "#|^$" /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/secure            # new
    - /var/log/messages          # new
    - /var/log/*.log
#output.elasticsearch:           # disabled: only one output may be enabled
#  hosts: ["localhost:9200"]
output.logstash:
  hosts: ["kibana.aniu.co:5044"]    # changed to connect to Logstash on the ELK server
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]    # new
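Because YAML is indentation-sensitive (see the note below), it can help to test the configuration before starting the service; Filebeat 5.x provides a -configtest flag for this (if the filebeat command is not on your PATH, use the full path under /usr/share/filebeat/bin/):
sudo filebeat -configtest -e -c /etc/filebeat/filebeat.yml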
The Filebeat configuration file is in YAML format, so pay attention to indentation.
Start Filebeat
sudo systemctl start filebeat
sudo systemctl enable filebeat
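Once Filebeat is running, you can verify on the ELK server that events are being indexed by querying the Filebeat indices, assuming your Logstash output writes to the default filebeat-* indices (adjust the host if Elasticsearch is not listening on localhost):
curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty&size=1'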
Note: the prerequisite on the client side is that the Elasticsearch service has been fully configured and that domain name resolution is set up.
After Filebeat has started, you can watch journalctl -f and the Logstash logs on the ELK server, as well as the Filebeat log on the client, to confirm that Filebeat is taking effect.
Connect to Kibana
Refer to the official documentation for the settings:
https://www.elastic.co/guide/en/kibana/5.x/index.html
This completes the construction of the ELK infrastructure. The following tutorials will cover the specific features of each service, as well as the use of related plugins.