Spring Boot application log processing with Docker and EFK



1. Overview



In a distributed cluster environment, each node tends to store its log output locally, which causes many problems: logs are hard to search, correlate, and retain. We need a unified log processing center that collects logs, stores them centrally, and supports viewing and analysis. The Twelve-Factor App methodology also makes recommendations on log handling (treat logs as event streams).



The tooling for this is now very mature; the usual choice is the Elasticsearch + Logstash + Kibana stack (ELK). In this article we use a stack that is easier to deploy, Elasticsearch + Fluentd + Kibana (EFK), and deploy it through Docker.



A complete example project is available on GitHub: fluentd-boot.



2. Install Docker



2.1. Configure the Yum Mirror



The official mirrors are slow to access from China, so we use the Tsinghua University TUNA mirror source instead.



As the root user, create a new /etc/yum.repos.d/docker.repo file with the following content:


[dockerrepo]
name=Docker Repository
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker/yum/repo/centos7
enabled=1
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker/yum/gpg


2.2. Installation



Execute the following commands:


sudo yum makecache
sudo yum install docker-engine


2.3. Start Docker Service



Execute the following command:


systemctl start docker.service


2.4. Test Docker Service



Execute the following command:


docker run hello-world


If the screen shows output similar to the following, Docker is installed correctly.


Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c04b14da8d14: Pull complete
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:
 https://hub.docker.com

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/


2.5. Install Docker-compose



Execute the following commands:


sudo curl -L https://github.com/docker/compose/releases/download/1.8.1/docker-compose-$(uname -s)-$(uname -m) > /usr/local/bin/docker-compose

chmod +x /usr/local/bin/docker-compose
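To confirm the installation, you can check the version (1.8.1 was the release downloaded above):

```shell
# Should print the version of the binary just installed,
# e.g. "docker-compose version 1.8.1, build ..."
docker-compose --version
```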


3. Start the container



Download the sample project and enter the project directory with the following command:


git clone https://github.com/qihaiyan/fluentd-boot.git
cd fluentd-boot


Execute the following command in the project directory to start the Docker containers:


docker-compose up -d


The container configuration is in the project's docker-compose.yml file, and it is very simple:


es:
  image: elasticsearch
  volumes:
    - ./es:/usr/share/elasticsearch/data
  ports:
    - "9200:9200"
    - "9300:9300"

kibana:
  image: kibana
  ports:
    - "5601:5601"
  links:
    - es:elasticsearch

fluentd:
  build: fluent-es/
  ports:
    - "24224:24224"
  links:
    - es:es


Three containers are defined in the configuration file: Elasticsearch, Kibana, and Fluentd. The Elasticsearch and Kibana images are pulled directly from the registry, while the Fluentd container is built from our own Dockerfile. Note this line:


- ./es:/usr/share/elasticsearch/data


This line persists Elasticsearch data in the ./es subdirectory of the directory containing docker-compose.yml. You can change ./es to any other path, as long as the corresponding directory is readable and writable.
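After `docker-compose up -d`, you can verify that the three containers are running and that Elasticsearch answers on its mapped port (these commands assume they are run on the Docker host):

```shell
# The es, kibana, and fluentd containers should all show State "Up"
docker-compose ps

# Elasticsearch should answer with a small JSON banner on its HTTP port
curl http://localhost:9200/
```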



The Fluentd container is built from the Dockerfile in the project's fluent-es directory, which reads as follows:


FROM fluent/fluentd:latest

WORKDIR /home/fluent
ENV PATH /home/fluent/.gem/ruby/2.2.0/bin:$PATH
RUN gem install fluent-plugin-elasticsearch

USER root
COPY fluent.conf /fluentd/etc

EXPOSE 24224

USER fluent
VOLUME /fluentd/log
CMD fluentd -c /fluentd/etc/$FLUENTD_CONF -p /fluentd/plugins $FLUENTD_OPT


From this Dockerfile we can see that our own Fluentd container is built on top of the official image, with two main changes: install the fluent-plugin-elasticsearch plugin, and copy the configuration file fluent.conf into the container.



These two steps enable Fluentd to forward log content to Elasticsearch.
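The fluent.conf file itself is not reproduced in this article. A minimal sketch of a configuration that accomplishes those two steps might look like the following (a hypothetical example, assuming the classic Fluentd v0.12 syntax and fluent-plugin-elasticsearch's standard options; `es` is the link alias defined in docker-compose.yml):

```
# Accept log events over Fluentd's forward protocol (the port the
# Logback appender connects to)
<source>
  type forward
  port 24224
</source>

# Ship every received event to the linked Elasticsearch container,
# using logstash-style daily indices so Kibana can discover them
<match **>
  type elasticsearch
  host es
  port 9200
  logstash_format true
  flush_interval 10s
</match>
```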



4. Configure the Spring Boot application to send logs to Fluentd



Add these two lines to the project's build.gradle file:


compile 'org.fluentd:fluent-logger:0.3.2'
compile 'com.sndyuk:logback-more-appenders:1.1.1'


The project uses logback-more-appenders to forward Logback log events to Fluentd.



The Logback configuration file is logback.xml:


<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <include resource="org/springframework/boot/logging/logback/base.xml"/>
  <property name="FLUENTD_HOST" value="${FLUENTD_HOST:-${DOCKER_HOST:-localhost}}"/>
  <property name="FLUENTD_PORT" value="${FLUENTD_PORT:-24224}"/>
  <appender name="FLUENT" class="ch.qos.logback.more.appenders.DataFluentAppender">
    <tag>dab</tag>
    <label>normal</label>
    <remoteHost>${FLUENTD_HOST}</remoteHost>
    <port>${FLUENTD_PORT}</port>
    <maxQueueSize>20</maxQueueSize>
  </appender>
  <logger name="fluentd" level="debug" additivity="false">
    <appender-ref ref="CONSOLE"/>
    <appender-ref ref="FILE"/>
    <appender-ref ref="FLUENT"/>
  </logger>
</configuration>


The Fluentd address and port are specified in the configuration file by the FLUENTD_HOST and FLUENTD_PORT environment variables. If these two variables are not set, logs are sent to localhost by default.
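The default chain in logback.xml follows standard shell-style parameter expansion, so its behavior is easy to sketch in the shell itself (variable names taken from logback.xml):

```shell
# Emulate logback.xml's fallback chain: FLUENTD_HOST, then DOCKER_HOST,
# then localhost; the port falls back to Fluentd's default 24224.
unset FLUENTD_HOST DOCKER_HOST FLUENTD_PORT   # simulate none of them being set
host="${FLUENTD_HOST:-${DOCKER_HOST:-localhost}}"
port="${FLUENTD_PORT:-24224}"
echo "logs will be sent to $host:$port"   # prints: logs will be sent to localhost:24224
```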



5. Run the application and view the results
Enter the fluent-es directory and execute ./gradlew bootRun



This step starts the Spring Boot application, which randomly generates log messages and sends them to Elasticsearch.
Open http://localhost:5601 in a browser to see the Kibana dashboard page.
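You can also confirm that log events reached Elasticsearch by querying it directly. The index pattern below assumes the Fluentd Elasticsearch plugin is writing logstash-style daily indices, which is its common configuration:

```shell
# List indices; with logstash_format enabled you should see
# entries named logstash-YYYY.MM.DD
curl 'http://localhost:9200/_cat/indices?v'

# Fetch a few recent documents from those indices
curl 'http://localhost:9200/logstash-*/_search?size=3&pretty'
```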



By setting FLUENTD_HOST and FLUENTD_PORT in the environment, you can point the application at the Docker container's address and port. If they are not set, logs are sent to localhost by default, in which case the Spring Boot application and the Docker containers must run on the same machine.



6. Summary



Modern system architecture places ever more emphasis on cloud computing, microservices, and cluster deployment, and centralized log processing is a key factor to consider. For demonstration purposes only a single node is deployed here; cluster deployment can be implemented with Kubernetes or Docker Swarm.

