Logback Connecting to Kafka: A Normal Log

Source: Internet
Author: User
Tags: serialization, ticket, kinit

With bootstrap.servers=10.57.137.131:9092 configured in logback.xml, the normal log is as follows:
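A minimal logback.xml along these lines could produce the startup log below. This is a sketch reconstructed from the log output (appender names, logger name, topic, and bootstrap.servers all appear in the log); the layout patterns are illustrative assumptions:

```xml
<!-- Sketch only: patterns are assumed; the rest mirrors the startup log. -->
<configuration scan="true">
  <appender name="KafkaAppender"
            class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <encoder class="com.github.danielwegener.logback.kafka.encoding.LayoutKafkaMessageEncoder">
      <layout class="ch.qos.logback.classic.PatternLayout">
        <pattern>%d %-5level %logger{36} - %msg%n</pattern>
      </layout>
    </encoder>
    <topic>logs</topic>
    <producerConfig>bootstrap.servers=10.57.137.131:9092</producerConfig>
  </appender>

  <logger name="LOGBACKINTEGRATIONITXXX" level="INFO" additivity="false">
    <appender-ref ref="KafkaAppender"/>
  </logger>

  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d %-5level %logger - %msg%n</pattern>
    </encoder>
  </appender>

  <root level="DEBUG">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```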

09:46:59,953 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
09:46:59,953 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
09:46:59,954 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/F:/study_src/test_log/target/scala-2.11/classes/logback.xml]
09:47:00,106 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - Setting ReconfigureOnChangeFilter scanning period to seconds
09:47:00,106 |-INFO in ReconfigureOnChangeFilter{invocationCounter=0} - Will scan for changes in [[F:\study_src\test_log\target\scala-2.11\classes\logback.xml]] every seconds.
09:47:00,106 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - Adding ReconfigureOnChangeFilter as a turbo filter
09:47:00,119 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [com.github.danielwegener.logback.kafka.KafkaAppender]
09:47:00,127 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [KafkaAppender]
09:47:00,265 |-INFO in com.github.danielwegener.logback.kafka.encoding.LayoutKafkaMessageEncoder@396f6598 - No charset specified for PatternLayoutKafkaEncoder. Using default UTF8 encoding.
09:47:00,277 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [LOGBACKINTEGRATIONITXXX] to INFO
09:47:00,277 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting additivity of logger [LOGBACKINTEGRATIONITXXX] to false
09:47:00,278 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [KafkaAppender] to Logger[LOGBACKINTEGRATIONITXXX]
09:47:00,278 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
09:47:00,281 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDOUT]
09:47:00,286 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
09:47:00,321 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to DEBUG
09:47:00,322 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to Logger[ROOT]
09:47:00,322 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
09:47:00,323 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@7a765367 - Registering current configuration as safe fallback point
09:47:00.414 [main] INFO  o.a.k.c.producer.ProducerConfig - ProducerConfig values:
	compression.type = none
	metric.reporters = []
	metadata.max.age.ms = 300000
	metadata.fetch.timeout.ms = 60000
	reconnect.backoff.ms = 50
	sasl.kerberos.ticket.renew.window.factor = 0.8
	bootstrap.servers = [10.57.137.131:9092]
	retry.backoff.ms = 100
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	buffer.memory = 33554432
	timeout.ms = 30000
	key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	ssl.keystore.type = JKS
	ssl.trustmanager.algorithm = PKIX
	block.on.buffer.full = false
	ssl.key.password = null
	max.block.ms = 60000
	sasl.kerberos.min.time.before.relogin = 60000
	connections.max.idle.ms = 540000
	ssl.truststore.password = null
	max.in.flight.requests.per.connection = 5
	metrics.num.samples = 2
	client.id =
	ssl.endpoint.identification.algorithm = null
	ssl.protocol = TLS
	request.timeout.ms = 30000
	ssl.provider = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	acks = 1
	batch.size = 16384
	ssl.keystore.location = null
	receive.buffer.bytes = 32768
	ssl.cipher.suites = null
	ssl.truststore.type = JKS
	security.protocol = PLAINTEXT
	retries = 0
	max.request.size = 1048576
	value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
	ssl.truststore.location = null
	ssl.keystore.password = null
	ssl.keymanager.algorithm = SunX509
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	send.buffer.bytes = 131072
	linger.ms = 0


09:47:00.438 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name bufferpool-wait-time
09:47:00.569 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name buffer-exhausted-records
09:47:00.573 [main] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(nodes = [Node(-1, Centos7, 9092)], partitions = [])
09:47:00.687 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name connections-closed:client-id-producer-1
09:47:00.688 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name connections-created:client-id-producer-1
09:47:00.688 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:client-id-producer-1
09:47:00.689 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:client-id-producer-1
09:47:00.691 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name bytes-received:client-id-producer-1
09:47:00.692 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name select-time:client-id-producer-1
09:47:00.692 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name io-time:client-id-producer-1
09:47:00.699 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name batch-size
09:47:00.699 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name compression-rate
09:47:00.701 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name queue-time
09:47:00.702 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name request-time
09:47:00.702 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name produce-throttle-time
09:47:00.702 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name records-per-request
09:47:00.703 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name record-retries
09:47:00.703 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name errors
09:47:00.703 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name record-size-max
09:47:00.706 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.producer.internals.Sender - Starting Kafka producer I/O thread.
09:47:00.713 [main] INFO  o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.9.0.0
09:47:00.713 [main] INFO  o.a.kafka.common.utils.AppInfoParser - Kafka commitId : fc7243c2af4b2b4a
09:47:00.715 [main] DEBUG o.a.k.clients.producer.KafkaProducer - Kafka producer started
09:47:00.717 [kafka-producer-network-thread | producer-1] DEBUG o.apache.kafka.clients.NetworkClient - Initialize connection to node -1 for sending metadata request
09:47:00.718 [kafka-producer-network-thread | producer-1] DEBUG o.apache.kafka.clients.NetworkClient - Initiating connection to node -1 at Centos7:9092.
09:47:00.724 [kafka-producer-network-thread | producer-1] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-sent
09:47:00.725 [kafka-producer-network-thread | producer-1] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-received
09:47:00.726 [kafka-producer-network-thread | producer-1] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name node--1.latency
09:47:00.726 [kafka-producer-network-thread | producer-1] DEBUG o.apache.kafka.clients.NetworkClient - Completed connection to node -1
09:47:00.858 [kafka-producer-network-thread | producer-1] DEBUG o.apache.kafka.clients.NetworkClient - Sending metadata request ClientRequest(expectResponse=true, callback=null, request=RequestSend(header={api_key=3,api_version=0,correlation_id=0,client_id=producer-1}, body={topics=[logs]}), isInitiatedByNetworkClient, createdTimeMs=1459216020829, sendTimeMs=0) to node -1
09:47:00.875 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 2 to Cluster(nodes = [Node(0, centos77, 9092)], partitions = [Partition(topic = logs, partition = 0, leader = 0, replicas = [0,], isr = [0,])])
09:47:00.892 [kafka-producer-network-thread | producer-1] DEBUG o.apache.kafka.clients.NetworkClient - Adding node 0 to nodes ever seen
09:47:00.921 [kafka-producer-network-thread | producer-1] DEBUG o.apache.kafka.clients.NetworkClient - Initiating connection to node 0 at centos77:9092.


The client connects to 10.57.137.131:9092 first,

then reconnects using the returned host.name = Centos7,

and finally connects to advertised.host.name = centos77.

In fact, all of the above resolve to the same IP. If you are debugging from a different machine, be careful to adjust the Kafka server configuration accordingly, and do not use localhost.
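The relevant broker settings can be sketched as follows. This is a hypothetical Kafka 0.9 server.properties fragment for the scenario above; the host names and IP come from this log and will differ in your environment:

```properties
# Sketch only: values taken from the log above.
port=9092
host.name=Centos7                # name returned on the first metadata response
advertised.host.name=centos77    # name clients use for subsequent connections
# Do not leave these as localhost when clients run on other machines,
# or the metadata response will point clients back at themselves.
```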


As the official documentation says, the producer connects to any live node and requests metadata about the topic's partition leaders, then sends messages directly to the leader broker:

The producer connects to any of the alive nodes and requests metadata about the leaders for the partitions of a topic. This allows the producer to put the message directly to the lead broker for the partition.
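This bootstrap flow can be sketched with the plain producer API. The snippet below only assembles the configuration that the log above dumps (names and values taken from the log); constructing the KafkaProducer itself needs the kafka-clients jar and a reachable broker, so it is shown as a comment:

```java
import java.util.Properties;

public class ProducerBootstrap {
    public static void main(String[] args) {
        // Minimal sketch of the producer configuration shown in the log
        // above (Kafka 0.9.0.0 defaults, except bootstrap.servers).
        Properties props = new Properties();
        props.put("bootstrap.servers", "10.57.137.131:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("acks", "1");   // wait for the leader's ack only
        props.put("retries", "0");

        // With kafka-clients on the classpath and a reachable broker, the
        // producer fetches metadata from bootstrap.servers and then writes
        // directly to the partition leader:
        //   Producer<byte[], byte[]> p = new KafkaProducer<>(props);
        //   p.send(new ProducerRecord<>("logs", "hello".getBytes()));

        System.out.println(props.getProperty("bootstrap.servers"));
    }
}
```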

