Distributed Backend Log Architecture Based on Heka, ElasticSearch, and Kibana
Currently, the mainstream approach to backend logging is the standard ELK stack (Elasticsearch, Logstash, and Kibana), which collects, stores, and visualizes logs.
However, our log files are diverse, scattered across different servers, and often use custom formats for the convenience of further development. We therefore use Heka, an open-source tool from Mozilla implemented in Go that fills a role similar to Logstash.
Overall Architecture
The figure below shows the overall architecture after introducing Heka, ElasticSearch, and Kibana.
Heka Introduction
Heka's log processing pipeline consists of input, splitting, decoding, filtering, encoding, and output. Within a single Heka service, data flows between these stages as instances of the Message data model that Heka defines for all of its modules.
Heka has built-in plug-ins for most common scenarios. For example, the Logstreamer Input plug-in can use log files as an input source, and the Nginx Access Log Decoder can decode an nginx access log into standard key/value fields and hand them to downstream plug-ins for processing, as sketched below.
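As a minimal sketch of how these two plug-ins fit together, assuming Heka's bundled Lua decoders are in their default location and that the section names, log path, and log_format below are illustrative (the log_format must mirror the log_format directive in your nginx.conf):

```toml
# Tail the nginx access log and hand each line to the decoder below.
[nginx-access-input]
type = "LogstreamerInput"
log_directory = "/var/log/nginx"      # hypothetical log location
file_match = 'access\.log'
decoder = "nginx-access-decoder"

# Parse each line into standard key/value message fields.
[nginx-access-decoder]
type = "SandboxDecoder"
filename = "lua_decoders/nginx_access.lua"

[nginx-access-decoder.config]
# Must match the log_format directive configured in nginx.
log_format = '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent'
```

With this in place, each access-log line enters the pipeline as a Message whose fields (status, request, and so on) are available to downstream filters and encoders.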
Because inputs and outputs are so flexible to configure, Heka instances deployed in different regions can collect and pre-process log data locally, forward it to the Heka in the log center for unified encoding, and finally deliver it to ElasticSearch for storage.
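As a rough sketch of this two-tier topology, assuming an edge Heka in each region and a central Heka at the log center (the address, port, and section names are illustrative assumptions, not values from this article), the edge instance can forward everything over TCP in Heka's native protobuf framing:

```toml
# --- Edge Heka: ship all collected messages to the log center ---
[protobuf-encoder]
type = "ProtobufEncoder"

[forward-to-center]
type = "TcpOutput"
address = "logcenter.example.com:5565"   # hypothetical log-center address
encoder = "protobuf-encoder"
message_matcher = "TRUE"                 # forward every message
```

The central Heka then receives the stream, encodes it uniformly, and delivers it to ElasticSearch:

```toml
# --- Central Heka: receive, encode, and store ---
[tcp-listener]
type = "TcpInput"           # uses the protobuf decoder by default
address = ":5565"

[es-json-encoder]
type = "ESJsonEncoder"
index = "logs-%{%Y.%m.%d}"  # one index per day; name is illustrative

[elasticsearch-output]
type = "ElasticSearchOutput"
server = "http://localhost:9200"
message_matcher = "TRUE"
encoder = "es-json-encoder"
flush_interval = 5000       # flush to ElasticSearch every 5 seconds
```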
Install
Installing from source is cumbersome and is not covered here; for details, see the official documentation: http://hekad.readthedocs.io/en/v0.10.0/installing.html
Our servers run CentOS, so we install Heka from the rpm package.
Download the rpm installation package and install it:

```
wget https://github.com/mozilla-services/heka/releases/download/v0.10.0/heka-0_10_0-linux-amd64.rpm
rpm -i heka-0_10_0-linux-amd64.rpm
```
Run hekad -version. If it prints the version number, the installation succeeded.
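Once installed, the daemon is started by pointing it at a TOML configuration file; the path below is a hypothetical example, not one from this article:

```
# Start the Heka daemon with the given configuration file.
hekad -config=/etc/hekad.toml
```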
For usage instructions, see wuguiyunwei.com.