Use Elastic Stack to monitor and tune Golang applications



Golang is increasingly popular with developers thanks to its simple syntax and quick, easy deployment. Once you have written a Golang program, you will naturally care about how it runs. This article introduces how to use the Elastic Stack to analyze a Golang program's memory usage, making it convenient to monitor the program over the long term, tune and diagnose it, and even discover potential problems such as memory leaks.

The Elastic Stack is a collection of open source software that includes Elasticsearch, Logstash, Kibana, and the Beats family: Filebeat, Metricbeat, Packetbeat, Winlogbeat, and the new Heartbeat. Each beat collects a different kind of data, but it doesn't matter; today we mainly need Elasticsearch, Metricbeat, and Kibana.

Metricbeat is a collector designed specifically to gather runtime metrics from servers or application services, and it is written in Golang: the deployment package is under 10 MB, it has no dependencies on the target server, and its memory and CPU overhead are both small. Besides monitoring a server's own resource usage, it also supports common application servers and services; the current support list is as follows:


    • Apache Module
    • Couchbase Module
    • Docker Module
    • HAProxy Module
    • Kafka Module
    • MongoDB Module
    • MySQL Module
    • Nginx Module
    • PostgreSQL Module
    • Prometheus Module
    • Redis Module
    • System Module
    • ZooKeeper Module


Of course, your application may not be in the list above. That's fine: Metricbeat is extensible, and you can implement a module for it fairly easily. The Golang module used in this article is an extension module I recently added to Metricbeat; it has already been merged into Metricbeat's master branch and is expected to be released in 6.0. If you want to know how to extend a module, you can look at the code path and the PR address.

A feature list alone may not be attractive enough, so let's look at how Kibana visualizes the data that Metricbeat's Golang module collects:







 
A quick interpretation of the dashboard above:
The top row summarizes the Golang heap, giving an overview of memory usage and GC activity. "system" is the memory the Golang program has requested from the operating system, which you can think of as the memory occupied by the process (note: not the process's virtual memory). "Bytes allocated" is the memory currently allocated on the heap, i.e. memory directly usable by the Golang program. "GC limit" is the threshold at which GC starts once the heap allocation reaches it; this value changes after every GC. "GC cycles" is the number of GC runs during the monitoring period.
 
The three rows in the middle show statistics for heap memory, process memory, and objects. "Heap allocated" is the size of objects that are either in use or no longer used but not yet reclaimed; "heap inuse" is, obviously, the size of active objects; "heap idle" is memory that has been allocated but is sitting idle.

The bottom two rows show GC time and GC count. "cpufraction" is the percentage of the process's CPU time spent on GC: the larger the value, the more frequent the GC and the more time wasted on it. Although the trend looks steep, the range is only 0.41%–0.52%, which seems acceptable; if GC took a single-digit percentage of CPU time or more, further optimization would definitely be needed.
 
With this information we can understand the Golang program's memory usage and distribution as well as its GC performance. To analyze whether there is a memory leak, check whether memory usage and heap allocation trend flat over time: if GC limit and bytes allocated keep rising instead, there is almost certainly a leak. Combined with historical data, you can also analyze how different versions/commits affect the program's memory usage and GC behavior.
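The "keeps rising instead of flattening" pattern is easy to reproduce. In this contrived sketch, a package-level slice retains every allocation, so `HeapAlloc` stays elevated even after forced GCs, exactly the signature the dashboard would show as climbing "Bytes allocated" and "GC limit" (the helper name and sizes here are made up for illustration):

```go
package main

import (
	"fmt"
	"runtime"
)

var leaked [][]byte // package-level reference: nothing appended here is ever freed

// leakAndMeasure returns HeapAlloc after a forced GC, before and after
// retaining ~64 MB of allocations that can never be collected.
func leakAndMeasure() (before, after uint64) {
	var m runtime.MemStats
	runtime.GC()
	runtime.ReadMemStats(&m)
	before = m.HeapAlloc

	for i := 0; i < 1000; i++ {
		leaked = append(leaked, make([]byte, 64*1024)) // retained forever
	}

	runtime.GC()
	runtime.ReadMemStats(&m)
	after = m.HeapAlloc
	return before, after
}

func main() {
	before, after := leakAndMeasure()
	fmt.Printf("HeapAlloc before: %d, after: %d, still elevated: %v\n",
		before, after, after > before)
}
```

A healthy program would see `HeapAlloc` return close to its baseline after GC; here it cannot, because the references are still reachable.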

The next step is to show how to use it. First, enable the Golang expvar service. Expvar (https://golang.org/pkg/expvar/) is a standard package provided by Golang that exposes internal variables or statistics. Using it is simple: just import it in your Golang program, and it automatically registers itself with the existing HTTP service, as follows:


import _ "expvar"


If your Golang program does not already start an HTTP service, start one with the following code, here listening on port 6060:


package main

import (
	"expvar"
	"fmt"
	"net/http"
)

func metricsHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json; charset=utf-8")
	first := true
	report := func(key string, value interface{}) {
		if !first {
			fmt.Fprintf(w, ",\n")
		}
		first = false
		if str, ok := value.(string); ok {
			fmt.Fprintf(w, "%q: %q", key, str)
		} else {
			fmt.Fprintf(w, "%q: %v", key, value)
		}
	}
	fmt.Fprintf(w, "{\n")
	// monitoring.Do(monitoring.Full, report) // libbeat-specific; omit this line in a standalone program
	expvar.Do(func(kv expvar.KeyValue) {
		report(kv.Key, kv.Value)
	})
	fmt.Fprintf(w, "\n}\n")
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/debug/vars", metricsHandler)
	if err := http.ListenAndServe("localhost:6060", mux); err != nil {
		fmt.Println(err)
	}
}


The default registration path is /debug/vars. After compiling and starting the program, you can access the internal variables exposed by expvar in JSON format at http://localhost:6060/debug/vars. This includes Golang's runtime.MemStats information, i.e. the data source analyzed above. You can also register your own variables, which we will come back to later.

OK, now our Golang program is running and exposing its runtime memory usage through expvar. Next we use Metricbeat to collect this information and ship it into Elasticsearch.

Installing Metricbeat is simple: download the package for your platform (download address: https://www.elastic.co/downloads/beats/metricbeat) and unpack it. Before starting Metricbeat, modify its configuration file, metricbeat.yml:


metricbeat.modules:
- module: golang
  metricsets: ["heap"]
  enabled: true
  period: 10s
  hosts: ["localhost:6060"]
  heap.path: "/debug/vars"


The parameters above enable the Golang monitoring module and fetch the memory data returned by the configured path every 10 seconds. In the same configuration file, we also set the output to a local Elasticsearch:


output.elasticsearch:
  hosts: ["localhost:9200"]



Start Metricbeat Now:


./metricbeat -e -v


Now Elasticsearch should be receiving data (of course, make sure both Elasticsearch and Kibana are available). In Kibana you can flexibly build custom visualizations on the data; Timelion is recommended for this kind of analysis. For convenience, you can also directly import the provided sample dashboard, which produces the effect shown in the first screenshot above.
For how to import the sample dashboard, see this document: https://www.elastic.co/guide/e .... Html

Besides monitoring the built-in memory information, you can also expose internal business metrics the same way, through expvar. A simple example is as follows:


var innerInt int64 = 1024
pubInt := expvar.NewInt("your_metric_key")
pubInt.Set(innerInt)
pubInt.Add(2)

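Put into a complete program, that looks like the sketch below. The metric names (`myapp.requests_total`, `myapp.latency_ms`) are hypothetical examples, not anything Metricbeat requires:

```go
package main

import (
	"expvar"
	"fmt"
)

// Registering at package level avoids expvar's panic on duplicate names.
var (
	requests = expvar.NewInt("myapp.requests_total") // hypothetical metric name
	latency  = expvar.NewFloat("myapp.latency_ms")   // hypothetical metric name
)

func main() {
	requests.Set(1024)
	requests.Add(2)
	latency.Set(12.5)

	// Every published var appears on /debug/vars once an HTTP server
	// is running; expvar.Get retrieves it programmatically.
	fmt.Println(expvar.Get("myapp.requests_total").String()) // prints 1026
}
```

Anything published this way shows up in the /debug/vars JSON alongside memstats, so Metricbeat's expvar metricset can pick it up with no extra work.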

Metricbeat itself also exposes a lot of internal runtime information, so Metricbeat can monitor itself...
First, set the pprof monitoring address with a startup parameter, as follows:


./metricbeat -httpprof="127.0.0.1:6060" -e -v


This lets us inspect its internal state via http://127.0.0.1:6060/debug/vars, as follows:


{
  "output.events.acked": 1088,
  "output.write.bytes": 1027455,
  "output.write.errors": 0,
  "output.messages.dropped": 0,
  "output.elasticsearch.publishEvents.call.count": ...,
  "output.elasticsearch.read.bytes": 12215,
  "output.elasticsearch.read.errors": 0,
  "output.elasticsearch.write.bytes": 1027455,
  "output.elasticsearch.write.errors": 0,
  "output.elasticsearch.events.acked": 1088,
  "output.elasticsearch.events.not_acked": 0,
  "output.kafka.events.acked": 0,
  "output.kafka.events.not_acked": 0,
  "output.kafka.publishEvents.call.count": 0,
  "output.logstash.write.errors": 0,
  "output.logstash.write.bytes": 0,
  "output.logstash.events.acked": 0,
  "output.logstash.events.not_acked": 0,
  "output.logstash.publishEvents.call.count": 0,
  "output.logstash.read.bytes": 0,
  "output.logstash.read.errors": 0,
  "output.redis.events.acked": 0,
  "output.redis.events.not_acked": 0,
  "output.redis.read.bytes": 0,
  "output.redis.read.errors": 0,
  "output.redis.write.bytes": 0,
  "output.redis.write.errors": 0,
  "beat.memstats.memory_total": 155721720,
  "beat.memstats.memory_alloc": 3632728,
  "beat.memstats.gc_next": 6052800,
  "cmdline": ["./metricbeat", "-httpprof=127.0.0.1:6060", "-e", "-v"],
  "fetches": {
    "system-cpu": {"events": 4, "failures": 0, "success": 4},
    "system-filesystem": {"events": ..., "failures": 0, "success": 4},
    "system-fsstat": {"events": 4, "failures": 0, "success": 4},
    "system-load": {"events": 4, "failures": 0, "success": 4},
    "system-memory": {"events": 4, "failures": 0, "success": 4},
    "system-network": {"events": ..., "failures": 0, "success": 4},
    "system-process": {"events": 1008, "failures": 0, "success": 4}
  },
  "libbeat.config.module.running": 0,
  "libbeat.config.module.starts": 0,
  "libbeat.config.module.stops": 0,
  "libbeat.config.reloads": 0,
  "memstats": {"Alloc": 3637704, "TotalAlloc": 155 ...


From the output above you can see how the Elasticsearch output module is doing; for example, output.elasticsearch.events.acked counts the messages sent to Elasticsearch that were acknowledged.

Now let's modify the Metricbeat configuration file. The Golang module has two metricsets, which you can think of as two types of monitoring metrics. We now add the new expvar type, which covers the other, custom metrics; the configuration changes to:


- module: golang
  metricsets: ["heap", "expvar"]
  enabled: true
  period: 1s
  hosts: ["localhost:6060"]
  heap.path: "/debug/vars"
  expvar:
    namespace: "metricbeat"
    path: "/debug/vars"


The namespace parameter above defines a namespace for the custom metrics, mainly for ease of management. Here the data is Metricbeat's own information, so the namespace is metricbeat.

Restart Metricbeat and it should start receiving the new data; let's head over to Kibana.

Suppose we care about the two metrics output.elasticsearch.events.acked and output.elasticsearch.events.not_acked. By defining a simple graph in Kibana, we can see the success and failure trend of Metricbeat's messages to Elasticsearch.
Timelion expression:


.es("metricbeat*",metric="max:golang.metricbeat.output.elasticsearch.events.acked").derivative().label("Elasticsearch Success"),
.es("metricbeat*",metric="max:golang.metricbeat.output.elasticsearch.events.not_acked").derivative().label("Elasticsearch Failed")


The effect is as follows:







As you can see, message delivery to Elasticsearch is stable and no messages are being lost. As for Metricbeat's own memory situation, we can open the imported dashboard to take a look:








That's basically all there is to monitoring a Golang application with Metricbeat. We covered how to monitor Golang's memory situation and how to expose custom business metrics via expvar, both of which can be analyzed quickly with the Elastic Stack. I hope this is useful to you.

Finally, this Golang module has not been released yet; it is expected in the Beats 6.0 release. Interested early adopters can build it from source themselves.




