OpenStack log collection and analysis with ELK

Tags: kibana, rabbitmq, logstash

ELK itself is simple to install and configure; there are two points to be aware of when using it to manage OpenStack logs:

    • Writing the Logstash configuration file
    • Capacity planning for Elasticsearch log storage space (a sizing sketch follows this list)
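To make the second point concrete, a back-of-envelope estimate can size the Elasticsearch storage before deployment. The sketch below is in Python; every number in it is an illustrative assumption, not a measurement, so substitute rates observed in your own cluster:

    # Back-of-envelope sizing for Elasticsearch log storage.
    # All inputs are illustrative assumptions -- measure your own cluster.
    nodes = 20            # hosts shipping logs
    lines_per_sec = 50    # average log lines per node per second
    avg_line_bytes = 300  # average raw log line size in bytes
    index_overhead = 2.0  # indexed size vs. raw size (fields, _source, replicas)
    retention_days = 30   # how long indices are kept

    daily_gb = nodes * lines_per_sec * avg_line_bytes * 86400 / 1024**3
    total_gb = daily_gb * index_overhead * retention_days
    print(f"~{daily_gb:.1f} GB/day raw, ~{total_gb:.0f} GB for {retention_days} days")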

The ELKstack Chinese guide is also recommended.

ELK Introduction

ELK is an excellent open-source suite for log collection, storage, and querying, and is widely used in log systems. Once an OpenStack cluster reaches a certain scale, log management and analysis become increasingly important, and a well-integrated log management and analysis platform helps locate problems quickly. Mirantis Fuel and HPE Helion both ship with ELK integrated.

    • Logstash: collects and forwards logs
    • Elasticsearch: stores and retrieves logs
    • Kibana: visualizes the logs; a stunning portal

Using ELK to manage OpenStack logs has the following advantages (a query sketch follows this list):

    • Quickly query ERROR-level logs across the whole cluster
    • Measure the request frequency of a given API
    • Filter out the logs of an API call's entire processing flow by its request ID
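As an example of the first and third points, once the fields described below (log level, request ID, and so on) are indexed, both become one-line queries. This is a minimal sketch using the elasticsearch Python client; the logstash-* index pattern, the loglevel/logmessage field names, and the request ID shown are all assumptions that must match what your Logstash filters actually emit:

    # Minimal query sketch; index pattern, field names and the request ID
    # are assumptions that must match what Logstash actually indexes.
    from elasticsearch import Elasticsearch

    es = Elasticsearch(["http://controller:9200"])

    # All ERROR-level logs across the whole cluster, newest first.
    errors = es.search(index="logstash-*", body={
        "query": {"match": {"loglevel": "ERROR"}},
        "sort": [{"@timestamp": {"order": "desc"}}],
        "size": 20,
    })

    # Every log line belonging to one API call, via its (hypothetical) request ID.
    trace = es.search(index="logstash-*", body={
        "query": {"match": {"logmessage": "req-3d4f0aa7-0f3c-4f01-a7a8-64e0c1e5a8be"}},
        "sort": [{"@timestamp": {"order": "asc"}}],
    })
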
Planning and designing a deployment architecture

The control node acts as the log server and stores all OpenStack and related logs. Logstash is deployed on every node; it collects the logs required on that node and ships them over the network to Elasticsearch on the control node, where Kibana presents the log information as a web portal:

Log format

To provide fast and intuitive retrieval, we want every OpenStack log entry to carry the following properties for retrieval and filtering:

    • Host: e.g. controller01, compute01
    • Service name: e.g. nova-api, neutron-server
    • Module: e.g. nova.filters
    • Log level: e.g. DEBUG, INFO, ERROR
    • Log date
    • Request ID: the ID of the request being processed

All of the above attributes can be obtained with Logstash: it extracts the key fields from each log line and indexes them in Elasticsearch, as the sketch below illustrates.
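To make the extraction concrete, here is the same idea in plain Python: a regular expression equivalent in spirit to the grok pattern used in the configuration further down pulls the date, PID, log level, module, and message out of an oslo-formatted line. The sample line is made up for illustration:

    import re

    # Regex mirroring the oslo-format grok pattern used in the Logstash filter:
    # timestamp, optional pid, log level, module, free-form message.
    OSLO_LINE = re.compile(
        r"^(?P<logdate>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\s+"
        r"(?P<pid>\d+)?\s*"
        r"(?P<loglevel>AUDIT|CRITICAL|DEBUG|INFO|TRACE|WARNING|ERROR)\s+"
        r"\[?(?P<module>\S+?)\]?\s+"
        r"(?P<logmessage>.*)$"
    )

    # A made-up nova-api line in oslo log format.
    sample = ('2015-03-11 10:15:02.978 5102 INFO nova.osapi_compute.wsgi.server '
              '[req-3d4f0aa7-0f3c-4f01-a7a8-64e0c1e5a8be] 10.0.0.1 '
              '"GET /v2/servers HTTP/1.1" status: 200 len: 1893')

    print(OSLO_LINE.match(sample).groupdict())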

Installation and Configuration

Installation

The ELK installation procedure is very simple; refer to the Logstash-ES-Kibana installation guide, and Google any anomalies you run into.

Configuration

The Logstash configuration file has its own syntax, which carries a relatively high learning cost. The OpenStack Logstash config below is a good reference; rewrite it according to your own needs:

input {
  file {
    path => ['/var/log/nova/nova-api.log']
    tags => ['nova', 'oslofmt']
    type => "nova-api"
  }
  file {
    path => ['/var/log/nova/nova-conductor.log']
    tags => ['nova-conductor', 'oslofmt']
    type => "nova"
  }
  file {
    path => ['/var/log/nova/nova-manage.log']
    tags => ['nova-manage', 'oslofmt']
    type => "nova"
  }
  file {
    path => ['/var/log/nova/nova-scheduler.log']
    tags => ['nova-scheduler', 'oslofmt']
    type => "nova"
  }
  file {
    path => ['/var/log/nova/nova-spicehtml5proxy.log']
    tags => ['nova-spice', 'oslofmt']
    type => "nova"
  }
  file {
    path => ['/var/log/keystone/keystone-all.log']
    tags => ['keystone', 'keystonefmt']
    type => "keystone"
  }
  file {
    path => ['/var/log/keystone/keystone-manage.log']
    tags => ['keystone', 'keystonefmt']
    type => "keystone"
  }
  file {
    path => ['/var/log/glance/api.log']
    tags => ['glance', 'oslofmt']
    type => "glance-api"
  }
  file {
    path => ['/var/log/glance/registry.log']
    tags => ['glance', 'oslofmt']
    type => "glance-registry"
  }
  file {
    path => ['/var/log/glance/scrubber.log']
    tags => ['glance', 'oslofmt']
    type => "glance-scrubber"
  }
  file {
    path => ['/var/log/ceilometer/ceilometer-agent-central.log']
    tags => ['ceilometer', 'oslofmt']
    type => "ceilometer-agent-central"
  }
  file {
    path => ['/var/log/ceilometer/ceilometer-alarm-notifier.log']
    tags => ['ceilometer', 'oslofmt']
    type => "ceilometer-alarm-notifier"
  }
  file {
    path => ['/var/log/ceilometer/ceilometer-api.log']
    tags => ['ceilometer', 'oslofmt']
    type => "ceilometer-api"
  }
  file {
    path => ['/var/log/ceilometer/ceilometer-alarm-evaluator.log']
    tags => ['ceilometer', 'oslofmt']
    type => "ceilometer-alarm-evaluator"
  }
  file {
    path => ['/var/log/ceilometer/ceilometer-collector.log']
    tags => ['ceilometer', 'oslofmt']
    type => "ceilometer-collector"
  }
  file {
    path => ['/var/log/heat/heat.log']
    tags => ['heat', 'oslofmt']
    type => "heat"
  }
  file {
    path => ['/var/log/neutron/neutron-server.log']
    tags => ['neutron', 'oslofmt']
    type => "neutron-server"
  }
# Not collecting RabbitMQ logs for the moment
#  file {
#    path => ['/var/log/rabbitmq/rabbit@<%= @hostname %>.log']
#    tags => ['rabbitmq', 'oslofmt']
#    type => "rabbitmq"
#  }
  file {
    path => ['/var/log/httpd/access_log']
    tags => ['horizon']
    type => "horizon"
  }
  file {
    path => ['/var/log/httpd/error_log']
    tags => ['horizon']
    type => "horizon"
  }
  file {
    path => ['/var/log/httpd/horizon_access_log']
    tags => ['horizon']
    type => "horizon"
  }
  file {
    path => ['/var/log/httpd/horizon_error_log']
    tags => ['horizon']
    type => "horizon"
  }
}

filter {
  if "oslofmt" in [tags] {
    multiline {
      negate => true
      pattern => "^%{TIMESTAMP_ISO8601} "
      what => "previous"
    }
    multiline {
      negate => false
      pattern => "^%{TIMESTAMP_ISO8601}%{SPACE}%{NUMBER}?%{SPACE}?TRACE"
      what => "previous"
    }
    grok {
      # Do multiline matching as the above multiline filter may add newlines
      # to the log messages.
      # TODO: Move the loglevels into a proper grok pattern.
      match => { "message" => "(?m)^%{TIMESTAMP_ISO8601:logdate}%{SPACE}%{NUMBER:pid}?%{SPACE}?(?<loglevel>AUDIT|CRITICAL|DEBUG|INFO|TRACE|WARNING|ERROR) \[?\b%{NOTSPACE:module}\b\]?%{SPACE}?%{GREEDYDATA:logmessage}?" }
      add_field => { "received_at" => "%{@timestamp}" }
    }
  } else if "keystonefmt" in [tags] {
    grok {
      # Do multiline matching as the above multiline filter may add newlines
      # to the log messages.
      # TODO: Move the loglevels into a proper grok pattern.
      match => { "message" => "(?m)^%{TIMESTAMP_ISO8601:logdate}%{SPACE}%{NUMBER:pid}?%{SPACE}?(?<loglevel>AUDIT|CRITICAL|DEBUG|INFO|TRACE|WARNING|ERROR) \[?\b%{NOTSPACE:module}\b\]?%{SPACE}?%{GREEDYDATA:logmessage}?" }
      add_field => { "received_at" => "%{@timestamp}" }
    }
    if [module] == "iso8601.iso8601" {
      # Log message for each part of the date? Really?
      drop {}
    }
  } else if "libvirt" in [tags] {
    grok {
      match => { "message" => "(?m)^%{TIMESTAMP_ISO8601:logdate}:%{SPACE}%{NUMBER:code}:?%{SPACE}\[?\b%{NOTSPACE:loglevel}\b\]?%{SPACE}?:?%{SPACE}\[?\b%{NOTSPACE:module}\b\]?%{SPACE}?%{GREEDYDATA:logmessage}?" }
      add_field => { "received_at" => "%{@timestamp}" }
    }
    mutate {
      uppercase => [ "loglevel" ]
    }
  } else if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:logmessage}" }
      add_field => [ "received_at", "%{@timestamp}" ]
    }
    syslog_pri {
      severity_labels => ["ERROR", "ERROR", "ERROR", "ERROR", "WARNING", "INFO", "INFO", "DEBUG"]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
    if !("_grokparsefailure" in [tags]) {
      mutate {
        replace => [ "@source_host", "%{syslog_hostname}" ]
      }
    }
    mutate {
      remove_field => [ "syslog_hostname", "syslog_timestamp" ]
      add_field => {
        "loglevel" => "%{syslog_severity}"
        "module" => "%{syslog_program}"
      }
    }
  }
}

output {
  elasticsearch {
    host => "controller"
  }
}
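Once Logstash runs with this configuration, a quick aggregation can confirm that documents arrive with the expected fields. Another minimal Python sketch, again assuming the logstash-* index pattern and the loglevel field produced by the filters above:

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["http://controller:9200"])

    # Count indexed log lines per log level to sanity-check the pipeline.
    resp = es.search(index="logstash-*", body={
        "size": 0,
        "aggs": {"levels": {"terms": {"field": "loglevel"}}},
    })
    for bucket in resp["aggregations"]["levels"]["buckets"]:
        print(bucket["key"], bucket["doc_count"])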

  
