IntelliJ IDEA: configuring Scala with Logback to send logs to Kafka (solved)

Source: Internet
Author: User
Tags: zookeeper

1) Install ZooKeeper

cp conf/zoo_sample.cfg conf/zoo.cfg

2) Start Zookeeper

bin/zkServer.sh start
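Before moving on, it may be worth confirming that ZooKeeper actually came up. A quick sanity check, assuming the standard zkServer.sh from the ZooKeeper distribution and the default client port 2181:

```shell
# Ask the ZooKeeper scripts for the process status
bin/zkServer.sh status

# Four-letter-word health check against the default client port;
# a healthy server answers "imok"
echo ruok | nc 127.0.0.1 2181
```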

3) Install kafka_2.11-0.9.0.0

Modify the configuration file config/server.properties

Note: host.name and advertised.host.name

If you are connecting to Kafka from Windows, configure both of these parameters and avoid using localhost

Remember to shut down the Linux firewall
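On CentOS 7 (the broker host in this article is named centos7), shutting the firewall down for a test setup might look like the following sketch; alternatively, just open the broker port. The firewalld commands are standard, and 9092 is the default Kafka port assumed here:

```shell
# Stop the firewall for the current session and keep it off across reboots
systemctl stop firewalld
systemctl disable firewalld

# Or, less drastically, open only the Kafka broker port
firewall-cmd --permanent --add-port=9092/tcp
firewall-cmd --reload
```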

bin/kafka-server-start.sh config/server.properties


Then start a Kafka console consumer.

The topic here is named logs; the name is arbitrary, but it must match the topic configured in logback.xml.

bin/kafka-console-consumer.sh --zookeeper 127.0.0.1:2181 --topic logs --from-beginning
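If automatic topic creation is disabled on the broker, the logs topic may need to be created explicitly first. A sketch using the kafka-topics.sh script that ships with Kafka 0.9, with single-broker settings assumed:

```shell
# Create the "logs" topic on the local ZooKeeper-managed cluster
bin/kafka-topics.sh --create --zookeeper 127.0.0.1:2181 \
  --replication-factor 1 --partitions 1 --topic logs

# Verify the topic exists
bin/kafka-topics.sh --list --zookeeper 127.0.0.1:2181
```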

The three steps above are all performed in the Linux environment.


4) Development environment on Windows

Create a Scala project using IntelliJ IDEA.

IDEA needs the SBT plugin and the Scala plugin installed.

After installing them, restart IDEA.

5) Create a project

Create a Scala SBT project in IDEA.

After creation, the project and the src/main/scala, src/main/java and src/main/resources folders are generated automatically, which indicates that the environment is configured correctly.

Create a new build.scala file under the project directory (project/build.scala); its main job is managing the jar dependencies.

Scala 2.11.7 is used.

import sbt._
import sbt.Keys._

object Build extends Build {

  lazy val defaultSettings = Defaults.coreDefaultSettings ++ Seq(
    version := "1.0",
    scalaVersion := "2.11.7",
    scalacOptions := Seq(
      "-feature",
      "-language:implicitConversions",
      "-language:postfixOps",
      "-unchecked",
      "-deprecation",
      "-encoding", "utf8",
      "-Ywarn-adapted-args"
    ),
    libraryDependencies ++= Seq()
  )

  lazy val root = Project("test_log", file("."), settings = defaultSettings ++ Seq(
    libraryDependencies ++= Seq(
      "org.slf4j" % "slf4j-api" % "1.7.6",
      "ch.qos.logback" % "logback-classic" % "1.1.2", // % "provided"
      "com.github.danielwegener" % "logback-kafka-appender" % "0.1.0"
    ),
    resolvers += "oschina" at "http://maven.oschina.net/content/groups/public/",
    // Resolve only from this mirror; otherwise sbt goes to the overseas
    // repositories and downloads are painfully slow.
    externalResolvers := Resolver.withDefaultResolvers(resolvers.value, mavenCentral = false)
  ))
}
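With build.scala in place, the project can be compiled and run from the command line as well as from IDEA. A sketch, assuming the sbt launcher is on the PATH and the main class is test.Test as in the next section:

```shell
# From the project root: compile, then run the test object
sbt compile
sbt "runMain test.Test"
```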

6) Create a Test.scala object in the src/main/scala directory

package test

import org.slf4j.LoggerFactory

object Test {

  def main(args: Array[String]): Unit = {

    // Careful with this name: it must match the <logger name="..."> entry
    // in logback.xml, or you'll have a problem.
    val logger = LoggerFactory.getLogger("LogbackIntegrationIT")

    System.out.println("----  " + logger.isDebugEnabled)
    for (i <- 0 to 846) {
      System.out.println(i)
      logger.debug("----debug message")
      logger.error("----error message " + i)
      logger.info("----info message " + i)
    }
  }
}

Along the way you may encounter errors such as:

1) o.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 0: {logs=l...

2) [kafka-producer-network-thread | producer-1] DEBUG o.apache.kafka.clients.NetworkClient - Initialize connection to node -1 for sending metadata request

followed by a 6000 ms timeout, and then

3) org.apache.kafka.common.errors.TimeoutException: Failed to upda...

4) Info-level messages make it into Kafka but debug messages do not, e.g. with

<root level="info">
    <appender-ref ref="kafkaAppender"/>
</root>

changing the level to debug has no visible effect. It all seemed rather erratic; the cause turned out to be the logger name, explained below.


After the code above is written, configure the logback.xml file (place it under src/main/resources):

<configuration scan="true" scanPeriod="1 seconds" debug="true">

    <appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder class="com.github.danielwegener.logback.kafka.encoding.LayoutKafkaMessageEncoder">
            <layout class="ch.qos.logback.classic.PatternLayout">
                <pattern>%msg</pattern>
            </layout>
        </encoder>
        <topic>logs</topic>
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy"/>
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy"/>
        <producerConfig>bootstrap.servers=10.57.137.131:9092</producerConfig>
    </appender>

    <logger name="LogbackIntegrationIT" additivity="false" level="debug">
        <appender-ref ref="kafkaAppender"/>
    </logger>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="debug">
        <appender-ref ref="STDOUT"/>
    </root>

</configuration>


Please note this line in the code:

val logger = LoggerFactory.getLogger("LogbackIntegrationIT")

The name passed here cannot be chosen arbitrarily, at least as far as I can tell (perhaps I do not understand it correctly): it must match the name attribute of a <logger> element in logback.xml, otherwise debug messages will not be sent to Kafka.

With that in place, debug-level messages are received as well. Note that it is the configuration inside <logger name="...">, not <root>, that matters here; <root> appears to have no influence on this. If there is documentation covering this behaviour, please leave a message to remind me, thank you.

For more information, please see: https://github.com/danielwegener/logback-kafka-appender


Addendum: Kafka's config/server.properties



listeners=PLAINTEXT://:9092


# The port the socket server listens on
#port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces.
# Set this so that other machines can connect.
host.name=centos7

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value of "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
# Also set this so that other machines can connect, and so the logs show which broker they came from.
advertised.host.name=centos7

Both host.name and advertised.host.name point at the Kafka server; set them to the same value and make sure the network between the machines is reachable.
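Before blaming logback, it may be worth confirming that the broker is actually reachable from the Windows side under the advertised name. A quick check; centos7 and 9092 are the values from the configuration above, and nc availability is assumed:

```shell
# Does the advertised hostname resolve from the client machine?
# (on Windows, use "ping -n 1 centos7" instead)
ping -c 1 centos7

# Is the broker port reachable?
# (on Windows, "telnet centos7 9092" works as a substitute)
nc -vz centos7 9092
```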


Additional Demo:

Link

