Building an ELK Log Collection System for a Docker Cluster with Docker


When we set up a Docker cluster, we need to solve the problem of how to collect logs. ELK provides a complete solution. This article introduces how to use Docker to build an ELK stack that collects the logs of a Docker cluster.

ELK Introduction

ELK is made up of three open source tools: Elasticsearch, Logstash, and Kibana.

Elasticsearch is an open source distributed search engine. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replication mechanism, a RESTful interface, multiple data sources, and automatic search load balancing.
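As a quick illustration of the RESTful interface, a single HTTP call is enough to query the cluster. This is a minimal sketch, assuming an Elasticsearch instance is reachable on localhost:9200:

# Ask Elasticsearch for its cluster health over plain HTTP.
curl http://localhost:9200/_cluster/health?pretty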

Logstash is a fully open source tool that can collect, filter, and store your logs for later use.

Kibana is also an open source, free tool. It provides a log-analytics-friendly web interface for Logstash and Elasticsearch, helping you summarize, analyze, and search important log data.

Using Docker to Build the ELK Platform

First, let's write the Logstash configuration file logstash.conf:

input {
  udp {
    port => 5000
    type => json
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"  # send Logstash output to Elasticsearch; change this to your own host
  }
}
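Once Logstash is running, the pipeline can be smoke-tested by hand. This is a minimal sketch, assuming the UDP input above is reachable on localhost:5000; the JSON payload is an arbitrary example:

# Send one hand-crafted JSON log line to the Logstash UDP input;
# the json filter above will parse it out of the message field.
echo '{"level":"info","msg":"hello elk"}' | nc -u -w1 localhost 5000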

Next, we need to start Kibana.

We write a startup script, entrypoint.sh, that waits for Elasticsearch to come up before starting Kibana:

#!/usr/bin/env bash

# Wait for the Elasticsearch container to be ready before starting Kibana.
echo "Waiting for Elasticsearch"
while true; do
  nc -q 1 elasticsearch 9200 2>/dev/null && break
  sleep 1
done

echo "Starting Kibana"
exec kibana

Then we write a Dockerfile to build a custom Kibana image:

FROM kibana:latest

RUN apt-get update && apt-get install -y netcat

COPY entrypoint.sh /tmp/entrypoint.sh
RUN chmod +x /tmp/entrypoint.sh

RUN kibana plugin --install elastic/sense

CMD ["/tmp/entrypoint.sh"]
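If you want to build the image by hand before wiring it into docker-compose, something like the following works, assuming the Dockerfile and entrypoint.sh live in a kibana/ directory (the tag name my-kibana is just an example):

# Build the custom Kibana image from the kibana/ directory.
docker build -t my-kibana kibana/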

You can also modify the Kibana configuration file to enable only the plugins you need:

# Kibana is served from a back end server. This controls which port to use.
port: 5601

# The host to bind the server to.
host: "0.0.0.0"

# The Elasticsearch instance to use for all your queries.
elasticsearch_url: "http://elasticsearch:9200"

# When elasticsearch_preserve_host is true, Kibana will send the hostname
# specified in elasticsearch_url. If you set it to false, then the host the
# browser uses to connect to *this* Kibana instance will be sent.
elasticsearch_preserve_host: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
kibana_index: ".kibana"

# If your Elasticsearch is protected with basic auth, these are the credentials
# used by the Kibana server to perform maintenance on the kibana_index at
# startup. Your Kibana users still need to authenticate with Elasticsearch
# (which is proxied through the Kibana server).
# kibana_elasticsearch_username: user
# kibana_elasticsearch_password: pass

# If your Elasticsearch requires a client certificate and key:
# kibana_elasticsearch_client_crt: /path/to/your/client.crt
# kibana_elasticsearch_client_key: /path/to/your/client.key

# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the PEM file here.
# ca: /path/to/your/ca.pem

# The default application to load.
default_app_id: "discover"

# Time in milliseconds to wait for Elasticsearch to respond to pings; defaults
# to the request_timeout setting.
# ping_timeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch.
# This must be > 0.
request_timeout: 300000

# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
shard_timeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
# startup_timeout: 5000

# Set to false to have a complete disregard for the validity of the SSL
# certificate.
verify_ssl: true

# SSL for outgoing requests from the Kibana server (PEM formatted).
# ssl_key_file: /path/to/your/server.key
# ssl_cert_file: /path/to/your/server.crt

# Set the path to where you would like the process ID file to be created.
# pid_file: /var/run/kibana.pid

# If you would like to send the log output to a file you can set the path below.
# This will also turn off the stdout log output.
# log_file: ./kibana.log

# Plugins that are included in the build, and no longer found in the plugins/ folder.
bundled_plugin_ids:
  - plugins/dashboard/index
  - plugins/discover/index
  - plugins/doc/index
  - plugins/kibana/index
  - plugins/markdown_vis/index
  - plugins/metric_vis/index
  - plugins/settings/index
  - plugins/table_vis/index
  - plugins/vis_types/index
  - plugins/visualize/index

Okay, now let's write a docker-compose.yml so that the whole stack is easy to bring up.

You can modify the ports, configuration file paths, and so on according to your own requirements. The overall resource requirements of the system are fairly high, so choose a machine with a good hardware configuration.

elasticsearch:
  image: elasticsearch:latest
  command: elasticsearch -Des.network.host=0.0.0.0
  ports:
    - "9200:9200"
    - "9300:9300"
logstash:
  image: logstash:latest
  command: logstash -f /etc/logstash/conf.d/logstash.conf
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
  ports:
    - "5001:5000/udp"
  links:
    - elasticsearch
kibana:
  build: kibana/
  volumes:
    - ./kibana/config/:/opt/kibana/config/
  ports:
    - "5601:5601"
  links:
    - elasticsearch
Now ELK can be started directly with a single command:

docker-compose up -d
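To check that everything came up, you can list the services and ping Elasticsearch. This is a minimal sketch, assuming you run the commands on the Docker host itself:

# All three services should show a State of "Up".
docker-compose ps

# Elasticsearch should answer with a small JSON document describing itself.
curl http://localhost:9200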

Visit port 5601, which we configured for Kibana above, to see whether the launch succeeded.

Collecting Docker logs using Logspout

The next step is to use Logspout to collect the Docker logs. We modify the Logspout image according to our needs.

First, write the configuration file modules.go:

package main

// Blank imports register the Logstash adapter and the UDP transport
// with Logspout via their init() functions.
import (
	_ "github.com/looplab/logspout-logstash"
	_ "github.com/gliderlabs/logspout/transports/udp"
)

Then write the Dockerfile:

FROM gliderlabs/logspout:latest
COPY ./modules.go /src/modules.go
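Rebuild the image from the directory containing the Dockerfile and modules.go. The tag below matches the run command in the next step; substitute your own repository name as needed:

# Build the customized Logspout image.
docker build -t jayqqaa12/logspout .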

Then run the container on each node, pointing it at the Logstash UDP endpoint (port 5001 in the docker-compose.yml above):

docker run -d --name="logspout" \
    --volume=/var/run/docker.sock:/var/run/docker.sock \
    jayqqaa12/logspout logstash://<your-logstash-address>:5001

Now you can open Kibana and see the Docker logs you've collected.

Note that your Docker containers must write their logs to the console (stdout/stderr) for them to be collected.
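As a sketch of the difference (busybox is just an example image): output written to the container's stdout is collected, while output written only to a file inside the container is not:

# Collected: the message goes to the container's stdout.
docker run --rm busybox sh -c 'echo "visible to logspout"'

# NOT collected: the message only lands in a file inside the container.
docker run --rm busybox sh -c 'echo "invisible to logspout" > /tmp/app.log'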

All right, the ELK log collection system for our Docker cluster is now complete.

For a large cluster, you will also need to cluster Logstash and Elasticsearch themselves; we will leave that for later.

That is the entire content of this article. I hope it helps you in your learning, and I hope you will continue to support the community.
