Logback-flume-appender Plug-in
*logback.xml configuration*
In the configuration below I route TRACE-level logs to Flume, because INFO, ERROR, and DEBUG are usually already in use for a project's ordinary logging.
<property resource="properties/config.properties" />
<!-- The Hadoop directory format is /%Y%m%d/%{application}/%{dir}: %{application} corresponds to the <application> setting below, and %{dir} to the dir header in <additionalAvroHeaders>; dir may be left unconfigured -->
<appender name="flume" class="com.gilt.logback.flume.FlumeLogstashV1Appender">
  <!-- The IP addresses and ports of the Flume agents -->
  <flumeAgents>${flume.agents}</flumeAgents>
  <flumeProperties>connect-timeout=4000;request-timeout=8000</flumeProperties>
  <batchSize>100</batchSize>
  <reportingWindow>1000</reportingWindow>
  <!-- Additional Avro header information; dir identifies the target directory -->
  <additionalAvroHeaders>dir=logs</additionalAvroHeaders>
  <!-- The name of the current application -->
  <application>${domain}</application>
  <filter class="ch.qos.logback.classic.filter.LevelFilter">
    <level>TRACE</level>
    <onMatch>ACCEPT</onMatch>
    <onMismatch>DENY</onMismatch>
  </filter>
  <layout class="ch.qos.logback.classic.PatternLayout">
    <pattern>%message%n%ex</pattern>
  </layout>
</appender>
<logger name="com.dzmsoft.framework.log.service.impl.LogServiceImpl" level="TRACE">
  <appender-ref ref="flume" />
</logger>
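On the application side nothing Flume-specific is needed; logging through SLF4J as usual is enough. A minimal sketch follows (the record method and its message are hypothetical; the class name matches the logger configured above, and properties/config.properties is assumed to define flume.agents, e.g. a host:port pair, and domain):

package com.dzmsoft.framework.log.service.impl;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Matches the <logger name="..."> element above, so TRACE events from this
// class pass the LevelFilter and are handed to the "flume" appender.
public class LogServiceImpl {

    private static final Logger log = LoggerFactory.getLogger(LogServiceImpl.class);

    // Hypothetical entry point; any TRACE-level call behaves the same way.
    public void record(String message) {
        // Only the raw message (plus any exception) is shipped,
        // because the pattern is "%message%n%ex".
        log.trace(message);
    }
}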
With this in place the logs are written directly into Hadoop, and you can see the corresponding files appear in the console.
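The directory layout itself comes from the Flume agent's HDFS sink. A hedged sketch of what that agent configuration might look like (the agent and component names a1/r1/c1/k1 and the port are assumptions; the hdfs.path mirrors the format from the comment above):

# Hypothetical flume.conf for the receiving agent
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Avro source listening where the logback appender's flume.agents points
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 4141
a1.sources.r1.channels = c1

a1.channels.c1.type = memory

# HDFS sink: %{application} and %{dir} are filled in from the Avro headers
# set by <application> and <additionalAvroHeaders> in logback.xml
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /%Y%m%d/%{application}/%{dir}
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.channel = c1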

After downloading one of the files you can inspect the data inside; thanks to the level filter and the minimal pattern, it records only the data we need.
