Spring Boot + Kafka for Microservice Log Collection

Source: Internet
Author: User
Tags: ack, logstash
Objective

Following up on the previous article (the .NET Core microservice logs use NLog to implement log collection through Kafka: www.cnblogs.com/maxzhang1985/p/9522017.html), our goal is to have both the dotnet and Java services in the microservices environment collect their logs in a unified way.
On the Java side, a Spring Boot + Logback application can easily hook into Kafka to implement log collection.

Spring Boot Maven dependency management
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-core</artifactId>
            <version>1.2.3</version>
        </dependency>
    </dependencies>
</dependencyManagement>

Package dependency references:

<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0-RC1</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.0</version>
</dependency>
logback-spring.xml

In the Spring Boot project's resources directory, add the logback-spring.xml configuration file. Note: be sure to modify {"appname": "Webdemo"}; this value can also be set as a variable in the configuration. Add the following configuration. STDOUT is the appender used as a fallback when the Kafka connection fails, so each project has to define it according to its own situation (a sketch follows the Kafka appender below). Normal log output uses an asynchronous delivery strategy to improve performance, as follows:

 <appender name= "Kafkaappender" class= "Com.github.danielwegener.logback.kafka.KafkaAppender" > <encoder cha rset= "UTF-8" class= "Net.logstash.logback.encoder.LogstashEncoder" > <customfields>{"appname": "Webdemo" }</customfields> <includeMdc>true</includeMdc> <includecontext>true</incl                udecontext> <throwableconverter class= "Net.logstash.logback.stacktrace.ShortenedThrowableConverter" > <maxDepthPerThrowable>30</maxDepthPerThrowable> <rootcausefirst>true</ro        otcausefirst> </throwableConverter> </encoder> <topic>loges</topic> <keyingstrategy class= "Com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy"/> <deli Verystrategy class= "Com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy"/> < Producerconfig>bootstrap.serVers=127.0.0.1:9092</producerconfig> <!--don ' t wait for a broker to ack the reception of a batch. -<producerConfig>acks=0</producerConfig> <!--wait up to 1000ms and collect log message s before sending them as a batch-<producerConfig>linger.ms=1000</producerConfig> <!-- Even if the producer buffer runs full, don't block the application but start to drop messages-<!--&LT;PR Oducerconfig>max.block.ms=0</producerconfig>--> <producerconfig>block.on.buffer.full=false </producerConfig> <!--Kafka connection fails, log output is performed using the following configuration--<appender-ref ref= "STDOUT"/> </a Ppender>

Note: be sure to modify {"appname": "Webdemo"}; this value can also be set as a variable in the configuration. For errors and exceptions from third-party frameworks or libraries that need to be written to the log, the error appender is configured as follows:

<appender name= "Kafkaappendererror" class= "Com.github.danielwegener.logback.kafka.KafkaAppender" > < Encoder charset= "UTF-8" class= "Net.logstash.logback.encoder.LogstashEncoder" > <customfields>{"appname ":" Webdemo "}</customfields> <includeMdc>true</includeMdc> <includecontext>tr ue</includecontext> <throwableconverter class= "Net.logstash.logback.stacktrace.ShortenedThrowableConve Rter "> <maxDepthPerThrowable>30</maxDepthPerThrowable> <rootCauseFirst> true</rootcausefirst> </throwableConverter> </encoder> <topic>ep_componen t_log</topic> <keyingstrategy class= " Com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy "/> <deliverystrategy class=" Com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy "/> <deliverystrategy class= "Com.github.danielwegener.logback.kafka.delivery.BlockingDeliveryStrategy" > <!--wait indefinitely Until the Kafka producer is able to send the message-<timeout>0</timeout> </deli  Verystrategy> <producerConfig>bootstrap.servers=127.0.0.1:9020</producerConfig> <!--don ' t  Wait for a broker to ack the reception of a batch. -<producerConfig>acks=0</producerConfig> <!--wait up to 1000ms and collect log message s before sending them as a batch-<producerConfig>linger.ms=1000</producerConfig> <!-- Even if the producer buffer runs full, don't block the application but start to drop messages--&LT;PRODUCERC onfig>max.block.ms=0</producerconfig> <appender-ref ref= "STDOUT"/> <filter class= "Ch.qos. Logback.classic.filter.LevelFilter "><!--only print error log-&LT;LEVEL&GT;ERROR&LT;/level> <onMatch>ACCEPT</onMatch> <onMismatch>DENY</onMismatch> </filter> </appender>

The exception log uses a synchronous (blocking) delivery strategy to guarantee that error logs are effectively collected; of course, this can be adjusted according to the actual situation of the project.
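As noted earlier, the appname value does not have to be hard-coded. A sketch using Spring Boot's springProperty extension for logback-spring.xml is shown below; it assumes spring.application.name is set in your application.properties/yml.

<!-- Sketch: read the application name from the Spring environment instead of hard-coding it. -->
<!-- Place this near the top of logback-spring.xml; assumes spring.application.name is defined. -->
<springProperty scope="context" name="appName" source="spring.application.name" defaultValue="Webdemo"/>

<!-- Then reference it inside the encoder instead of the literal value: -->
<!-- <customFields>{"appname":"${appName}"}</customFields> -->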

Log Configuration recommendations:

Have the root logger output only errors, so that exception logs from third-party frameworks are still collected:

 <root level="INFO">        <appender-ref ref="kafkaAppenderERROR" /> </root>

For your own application code, it is recommended to configure the logger as follows (for reference only):

<logger name="项目所在包" additivity="false">    <appender-ref ref="STDOUT" />    <appender-ref ref="kafkaAppender" /></logger>
Finally

GitHub: github.com/maxzhang1985/YOYOFx. If you find it useful, a star would be appreciated, and you are welcome to get in touch and exchange ideas.

.NET Core Open Source Learning Group: 214741894
