A General Application Log Collection Scheme for the ELK Log System

Tags: java, web, kibana, logstash, filebeat

Two earlier articles in this ELK series covered MySQL slow log collection and Nginx access log collection. So how can logs from many different types of applications be collected easily? Let's see how we handle this problem efficiently.

Log specification

Standardizing the log storage path and output format brings great convenience to subsequent collection and analysis: there is no need to handle a variety of different paths or format compatibility issues, and we only need to adapt to a fixed number of log types. The specifics are as follows:

Log Storage Path specification

    1. Project logs may only be written to a fixed location, such as the /data/logs/ directory
    2. Projects of the same type (for example, Java Web) use a uniform log file name, such as application.log
    3. A project of a given type may record several different log files, for example exception.log and business.log

Log Output Format specification

    1. Logs must be output in JSON format; this is important
    2. Projects of the same type should share a unified log output standard; make the log output modular as far as possible, with all projects referencing the same module
    3. The output must contain at least a standard time (timestamp), the application name (appname), and the level field, and the log content must be clear and understandable
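As an illustration of the output specification above, here is a minimal sketch in Java of producing one conforming JSON log line. This is a hypothetical helper, not the article's actual logging module; a real project would use a logging framework's JSON encoder, which also handles escaping of quotes inside the message. The field names follow the Kafka sample shown later in the article.

```java
import java.time.Instant;

public class JsonLogEvent {
    // Build one JSON log line containing the mandatory fields:
    // timestamp (epoch milliseconds), appname, level, plus the message.
    // Note: no escaping is done here; a real JSON encoder is required in practice.
    public static String format(String appname, String level, String message, long timestampMs) {
        return String.format(
            "{\"timestamp\":%d,\"appname\":\"%s\",\"level\":\"%s\",\"message\":\"%s\"}",
            timestampMs, appname, level, message);
    }

    public static void main(String[] args) {
        System.out.println(format("app01", "INFO", "service started", Instant.now().toEpochMilli()));
    }
}
```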

Log Information Level specification

Level     Description                                                   Value
Debug     Debugging logs; the largest volume of log information         7
Info      General information; the most commonly used level             6
Notice    Normal but significant conditions                             5
Warning   Warning conditions                                            4
Error     Error conditions; a feature does not work correctly           3
Critical  Critical conditions; the whole system does not work properly  2
Alert     Conditions that must be acted on immediately                  1
Emerg     System unusable, such as a kernel crash                       0

From top to bottom the severity runs from low to high while the log volume runs from high to low; choosing the correct log level helps when troubleshooting problems later.
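The level table above follows the syslog severity convention, where a lower number means a more severe condition. A hypothetical Java helper (not part of the article's code) encoding that table:

```java
public enum LogLevel {
    // Ordered from most severe (0) to least severe (7), per the table above
    EMERG(0), ALERT(1), CRITICAL(2), ERROR(3),
    WARNING(4), NOTICE(5), INFO(6), DEBUG(7);

    private final int value;

    LogLevel(int value) { this.value = value; }

    /** Numeric syslog severity value, as listed in the table. */
    public int value() { return value; }

    public static void main(String[] args) {
        // Print each level with its numeric value, from most to least severe
        for (LogLevel l : values()) {
            System.out.println(l + " = " + l.value());
        }
    }
}
```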

Why should we make such a specification?

    1. Our projects all run in Docker, and each Docker image consists of a base image plus the project code.
    2. The base image packages the runtime environment for the project; for a Spring Cloud microservices project, for example, it packages the JRE
    3. Once log storage and output are standardized, we can package Filebeat into the base image as the log collection agent; because the log path and format are the same for every project of a type, the Filebeat configuration file can be generic
    4. This way we no longer need to care about logging during deployment: any project image built on this base image automatically plugs into our log service, achieving collection, processing, storage, and display of its logs
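As a sketch of the base-image idea described above, such an image might be built along these lines. This Dockerfile is illustrative only; the file names, paths, and entrypoint script are assumptions, not taken from the article (the Filebeat version matches the 5.4.0 shown in the Kafka sample below):

```dockerfile
# Illustrative base image for Java Web projects: JRE plus Filebeat as log agent
FROM openjdk:8-jre

# Package Filebeat into the base image as the log collection agent
ADD filebeat-5.4.0-linux-x86_64.tar.gz /opt/filebeat

# The generic Filebeat configuration shared by all projects of this type
COPY filebeat.yml /opt/filebeat/filebeat.yml

# Hypothetical entrypoint script that starts Filebeat alongside the application
COPY start.sh /start.sh
ENTRYPOINT ["/start.sh"]
```

Project images then only need `FROM` this base image; their logs are picked up with no per-project logging configuration.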

Log Collection

Our general-purpose log collection scheme works as follows:

    1. The program runs in a container, and the container ships with a Filebeat process that collects its logs
    2. Filebeat forwards the collected logs to a Kafka cluster; Logstash reads from Kafka and writes the data into an Elasticsearch cluster
    3. Kibana reads from the Elasticsearch cluster and displays the data on the web; developers, operators, and anyone else who needs the logs log in to Kibana to view them

Client Side Filebeat Configuration

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /home/logs/app/business.log
    - /home/logs/app/exception.log
  json.message_key: log
  json.keys_under_root: true
output.kafka:
  hosts: ["10.82.9.202:9092","10.82.9.203:9092","10.82.9.204:9092"]
  topic: filebeat_docker_java
```

Kafka received data format

```json
{
  "@timestamp": "2018-09-05T13:17:46.051Z",
  "appname": "app01",
  "beat": {
    "hostname": "52fc9bef4575",
    "name": "52fc9bef4575",
    "version": "5.4.0"
  },
  "classname": "com.domain.pay.service.ApiService",
  "date": "2018-09-05 21:17:45.953+0800",
  "filename": "ApiService.java",
  "hostname": "172.17.0.2",
  "level": "INFO",
  "linenumber": 285,
  "message": "param[{\"email\":\"TEST@163.COM\",\"claimeeIP\":\"123.191.2.75\",\"AccountName\":\"\"}]",
  "source": "/home/logs/business.log",
  "thread": "Thread-11",
  "timestamp": 1536153465953,
  "type": "log"
}
```

Server-side Logstash configuration

```conf
input {
    kafka {
        bootstrap_servers => "10.82.9.202:9092,10.82.9.203:9092,10.82.9.204:9092"
        topics => ["filebeat_docker_java"]
    }
}
filter {
    json {
        source => "message"
    }
    date {
        match => ["timestamp", "UNIX_MS"]
        target => "@timestamp"
    }
}
output {
    elasticsearch {
        hosts => ["10.82.9.205", "10.82.9.206", "10.82.9.207"]
        index => "filebeat-docker-java-%{+YYYY.MM.dd}"
    }
}
```

The basic configuration is simple and needs little explanation; with the configuration above, log collection can be implemented for any application.

Log Display

Once the logs are collected into Elasticsearch, application log views can be configured through Kibana, making it easy for developers to detect problems early and locate them online.

Final Words

    1. The basis and premise of any general-purpose scheme is standardization; the better the standards, the better the scheme works
    2. Isn't JSON-formatted log output inconvenient for local viewing? The log output format can be written as configuration in a configuration file, with different environments loading different configurations, just as the development environment loads the development database
    3. The log system has now been running stably for nearly two years; apart from some initial adjustment, it has been very useful, we can no longer do without the ELK log system, and it has greatly improved our work efficiency
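Point 2 above, environment-dependent output formats, can be handled with profile-specific logging configuration. An illustrative Logback sketch for a Spring Boot project (not the author's actual configuration; the file layout, profile names, and the third-party logstash-logback-encoder are assumptions):

```xml
<!-- logback-spring.xml: plain pattern for local development, JSON elsewhere -->
<configuration>
  <springProfile name="dev">
    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
      <file>/data/logs/business.log</file>
      <encoder>
        <!-- Human-readable output for local viewing -->
        <pattern>%d %level %logger - %msg%n</pattern>
      </encoder>
    </appender>
  </springProfile>
  <springProfile name="!dev">
    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
      <file>/data/logs/business.log</file>
      <!-- JSON output per the log output specification
           (requires the logstash-logback-encoder dependency) -->
      <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
  </springProfile>
  <root level="INFO">
    <appender-ref ref="FILE"/>
  </root>
</configuration>
```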

If you found this article helpful, please share it with more people. If you want to read more, see the following articles:

    • ELK: building a MySQL slow log collection platform in detail
    • ELK log system: using Rsyslog to collect Nginx logs quickly and easily
