Using Docker to Build an ELK Log System


0, Preface

This article is mainly based on the ELK log system article from dockerinfo, and the Docker configuration files are largely taken from that blog post. I have only removed the parts that are not needed here and noted some of the problems encountered during the build process.

This article does not introduce ELK in depth; see the official website for details. First, here is the general architecture diagram of our ELK log system:
[Figure: general ELK log system architecture]
Elasticsearch is a real-time distributed search and analytics engine that can be used for full-text search, structured search, and analytics. It is a search engine built on top of the full-text search library Apache Lucene.

Logstash is a data collection engine with real-time pipelining capability. It is mainly used to collect, filter, and parse logs, and to store them in Elasticsearch.

Kibana is an open source (Apache-licensed) web platform that provides analytics and visualization for Elasticsearch. It can search and interact with the data in Elasticsearch indexes and generate charts and tables across various dimensions.

In practice, using Logstash itself as the log collector often carries too much overhead, so Filebeat is commonly used as the log collector instead, with Logstash responsible for filtering and parsing. The architecture diagram then looks as follows:

[Figure: ELK architecture with Filebeat as the log collector]
The environment required for the log system in this article: the operating system is Ubuntu 16.04 (or another Linux distribution), with the following software installed:

Docker and docker-compose. All configuration files for this experiment can be downloaded from GitHub.
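A quick way to confirm that both prerequisites are installed is to check their versions:

# Both commands should print a version string
docker --version
docker-compose --version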
One, Build Elasticsearch

First, let's look at the contents of the docker-compose.yml file needed for this part:

version: '2'
services:
  elasticsearch:
    image: elasticsearch:2.2.0
    container_name: elasticsearch
    restart: always
    network_mode: "bridge"
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - ./data:/usr/share/elasticsearch/data

Here the ports and volumes deserve attention. Port 9200 is the main externally exposed port, providing the HTTP service; port 9300 is the TCP port used for node-to-node transport. The volume mapping mounts the directory where the index files are stored onto a host directory, which prevents historical log data from being lost when the container crashes or restarts.
After saving the file above, run docker-compose up -d to start Elasticsearch in the background; the command line will show whether it started successfully.
To verify visually that Elasticsearch was created successfully, open ip:9200; if the deployment is working, you will see a block of JSON data. You can also open ip:9200/_search?pretty, which uses Elasticsearch's search feature to show the data it currently stores; at this point, of course, there is none.
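For example, a quick check from the shell might look like this (192.168.0.102 stands in for your own host IP):

# The root endpoint returns a JSON block with the node name and version
curl http://192.168.0.102:9200
# _search lists the stored documents; on a fresh install hits.total is 0
curl 'http://192.168.0.102:9200/_search?pretty'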

Two, Logstash Configuration

First of all, look at the docker-compose.yml configuration:
version: '2'
services:
  logstash:
    image: logstash:2.2.0-1
    container_name: logstash
    restart: always
    network_mode: "bridge"
    ports:
      - "5044:5044"
      # - "4560:4560"
      - "8080:8080"
    volumes:
      - ./conf:/config-dir
      - ./patterns:/opt/logstash/patterns
    external_links:
      - elasticsearch:elasticsearch
    command: logstash -f /config-dir


The exposed port 5044 receives log data collected by Filebeat, while port 8080 receives log data through the logstash-input-http plugin. The conf directory is mounted so that we can add our own configuration files, and the patterns directory is mounted so that we can add custom grok rule files. The external link connects this container to the Elasticsearch container so that data can be passed to it.
The relevant input and output configuration files are as follows. 01-input.conf:
input {
  beats {
    port => 5044
    type => "log"
  }

  http {
    port => 8080
  }
}
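Once Logstash is up, one quick way to exercise the http input is to post a line to port 8080 with curl (a sketch; 192.168.0.102 stands in for your own host IP):

# The logstash-input-http plugin turns any POST body into an event
curl -X POST http://192.168.0.102:8080 -d 'hello from the http input'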

02-output.conf
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}


At the same time, to make it easier to test later whether the containers are connected, we add the following configuration file, logstash-es-simple.conf:
input {
  stdin {}
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
  stdout {
    codec => rubydebug
  }
}
After docker-compose.yml is configured, run docker-compose up -d to start Logstash in the background.


Three, Test Logstash and Elasticsearch Connectivity

To test whether the two started containers are connected, we need to get inside the Logstash container. First look up its container ID with docker ps; with the ID in hand, run sudo docker exec -it <container ID> /bin/bash to enter the container environment. Once inside, cd /config-dir to reach the configuration directory we mounted from outside, and run logstash agent -f logstash-es-simple.conf. When it reports that it started successfully, you can type any characters and see whether a value is echoed back; then open ip:9200/_search?pretty directly to check whether new data has been added.
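Put together, the commands look roughly like this (the container ID will differ on your machine):

# Look up the Logstash container ID
docker ps
# Enter the container, substituting the ID from docker ps
sudo docker exec -it <container ID> /bin/bash
# Inside the container, run the test pipeline against the mounted config
cd /config-dir
logstash agent -f logstash-es-simple.conf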

After running through the process above, open ip:9200/_search?pretty again; you will see that Elasticsearch now contains the two records we just added.

Test completed.


Four, Build Filebeat

Let's first take a look at Filebeat's docker-compose.yml configuration file:

version: '2'
services:
  filebeat:
    image: olinicola/filebeat:1.0.1
    container_name: filebeat
    restart: always
    network_mode: "bridge"
    extra_hosts:
      - "logstash:192.168.0.102"  # must be the real host IP; it cannot be 127.0.0.1
    volumes:
      - ./conf/filebeat.yml:/etc/filebeat/filebeat.yml
      - ./registry:/etc/registry
      - /tmp:/tmp



The mounted /tmp directory is used for testing.

In addition to the docker-compose.yml file above, we also need a filebeat.yml file, which reads as follows:

filebeat:
  prospectors:
    -
      paths:
        - /tmp/test.log
      input_type: log
      tail_files: true
  registry_file: /etc/registry/mark
output:
  logstash:
    hosts: ["logstash:5044"]
shipper:
  name: N31
logging:
  files:
    rotateeverybytes: 10485760  # = 10MB
The main purpose of this file is to define which log files to collect and where to ship them.
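As an optional check, Filebeat 1.x has a -configtest flag, so you can validate the mounted file from inside the container (assuming the container name from the docker-compose.yml above):

# Exits non-zero and prints an error if filebeat.yml is invalid
docker exec -it filebeat filebeat -configtest -c /etc/filebeat/filebeat.yml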

After saving, simply run docker-compose up -d.


Five, Test the Connectivity of Filebeat, Logstash, and Elasticsearch

To test this connectivity, we write simulated log data to the file under the /tmp directory and check whether our Logstash receives it properly.

The commands are as follows:

# 1. Create the log file
touch /tmp/test.log

# 2. Write an nginx access log line to the log file
echo '127.0.0.1 - - [13/Mar/2017:22:57:14 +0800] "GET / HTTP/1.1" 200 3700 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36" "-"' >> /tmp/test.log


If everything is normal, then after a short while ip:9200/_search?pretty should contain one more record: the log line we just appended.
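One way to check from the shell (192.168.0.102 stands in for your own host IP; type:log matches the type set in 01-input.conf):

# Query for events that arrived through the beats input
curl 'http://192.168.0.102:9200/_search?q=type:log&pretty'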


Six, Kibana Configuration

Its docker-compose.yml configuration file is as follows:

version: '2'
services:
  kibana:
    image: kibana:4.4.0
    container_name: kibana
    restart: always
    network_mode: "bridge"
    ports:
      - "5601:5601"
    external_links:
      - elasticsearch:elasticsearch



After saving, simply run docker-compose up -d.

Open http://IP:5601 (where IP is the IP of the deployment host), click Create Index, and then open Discover to see all the data stored in Elasticsearch. (Kibana displays the last 15 minutes of log data by default; you can adjust the time range in the upper-right corner of the page.)
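Kibana 4's default index pattern is logstash-*; before clicking Create Index, you can confirm from the shell that such indexes exist (192.168.0.102 stands in for your own host IP):

# Logstash creates one index per day, e.g. logstash-2017.03.13
curl 'http://192.168.0.102:9200/_cat/indices?v'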


Reference:

https://github.com/jasonGeng88/blog/blob/master/201703/elk.md

http://www.dockerinfo.net/3683.html


