Log Management

Log management tools: collect, parse, visualize
Elasticsearch - a Lucene-based document store used primarily for log indexing, storage, and analysis.
Fluentd - log collection and forwarding
Flume - distributed log collection and aggregation system
Graylog2 - pluggable log and event analysis server with alerting options
Heka - stream processing system that can be used for log aggregation
Kibana - visualizes logs and time-series data
Logstash - tool for managing events and logs
Octopussy - log management solution (visualization/alerting/reporting)
Comparison of the Graylog and ELK stacks:
-ELK
--Elasticsearch + Logstash + Kibana
-Graylog
--Elasticsearch + Graylog Server + Graylog Web
We previously tried a fluentd + Elasticsearch + Kibana setup and found a few disadvantages:
1. It cannot handle multi-line logs, such as MySQL slow query logs or Java exception stack traces from Tomcat/Jetty applications.
2. It cannot keep the original log line; only the fields parsed out of it are saved, so search results come back as a pile of JSON-formatted text that is hard to read.
3. Log lines that do not match the parsing regular expression are discarded.
With the goal of fixing these three shortcomings, we went looking for an alternative.
We first found the commercial log tool Splunk, billed as the Google of logs for its full-text log search capability. It not only solves the three shortcomings above but also offers attractive features such as highlighted search terms and color-coding by error level. However, the free version has a 500 MB limit and the paid version is said to cost around US$30,000, so we gave up and kept looking.
We finally found Graylog. At first glance it looked like just another syslog collection tool and did not attract me at all. But after digging deeper, I realized that Graylog is essentially an open-source version of Splunk.
My own summary of what makes Graylog attractive:
1. It is an integrated package that is easy to install, unlike ELK, which requires integrating three independent systems.
2. It collects the original log line and lets you add fields to it afterwards, such as http_status_code, response_time, and so on.
3. You can write your own script to collect logs and send them to the Graylog server with curl/nc in the custom GELF format; fluentd and Logstash both have plugins for outputting GELF messages. Rolling your own collector gives a great degree of freedom: all you really need is inotifywait to watch the log file for modify events and curl/nc to send each new line to the Graylog server (see the first sketch after this list).
4. Search results are highlighted, just like Google.
5. The search syntax is simple, for example: source:mongo AND response_time_ms:>5000, which avoids having to type raw Elasticsearch JSON query syntax directly.
6. Search criteria can be exported as Elasticsearch query JSON, which makes it easy to write search scripts that call the Elasticsearch REST API directly (see the second sketch after this list).
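
As a rough illustration of point 3, here is a minimal Python sketch that follows a log file and ships each new line to Graylog as a GELF message over HTTP, instead of the inotifywait + curl/nc shell approach described above. The Graylog hostname, the GELF HTTP input on port 12201, the log file path, and the extra field names are assumptions for the example.

```python
# Minimal sketch: tail a log file and ship each new line to Graylog as a GELF
# message over HTTP. Assumes a GELF HTTP input listening on port 12201; the
# hostname, file path, and custom field below are made-up examples.
import json
import socket
import time
import urllib.request

GELF_URL = "http://graylog.example.com:12201/gelf"  # assumed GELF HTTP input
LOG_FILE = "/var/log/nginx/access.log"              # hypothetical log file

def send_gelf(line: str) -> None:
    payload = {
        "version": "1.1",
        "host": socket.gethostname(),
        "short_message": line,          # keep the original log line intact
        "_source_file": LOG_FILE,       # custom GELF fields are prefixed with "_"
    }
    req = urllib.request.Request(
        GELF_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5):
        pass  # ignore the response body

def follow(path: str):
    """Yield new lines appended to the file (a simple polling 'tail -f')."""
    with open(path, "r") as f:
        f.seek(0, 2)                    # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line.rstrip("\n")

if __name__ == "__main__":
    for line in follow(LOG_FILE):
        send_gelf(line)
```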
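
And for point 6, a minimal sketch of querying Elasticsearch directly from a script. The host, index pattern, and query body here are assumptions; in practice you would paste in the JSON that Graylog exports for your own search (for example, one equivalent to source:mongo AND response_time_ms:>5000).

```python
# Minimal sketch: run an exported Graylog search against Elasticsearch via its
# REST API. Host, index pattern, and query body are assumptions for the example.
import json
import urllib.request

ES_URL = "http://elasticsearch.example.com:9200/graylog_*/_search"  # assumed

# Roughly equivalent to the Graylog search "source:mongo AND response_time_ms:>5000".
query = {
    "query": {
        "bool": {
            "must": [
                {"term": {"source": "mongo"}},
                {"range": {"response_time_ms": {"gt": 5000}}},
            ]
        }
    },
    "size": 20,
}

req = urllib.request.Request(
    ES_URL,
    data=json.dumps(query).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=10) as resp:
    hits = json.loads(resp.read())["hits"]["hits"]
    for hit in hits:
        print(hit["_source"].get("message"))
```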
Graylog Open Source version: https://www.graylog.org/
A few screenshots from the official website:
1. Architecture diagram