# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# See kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

############################# Socket Server Settings #############################

listeners=PLAINTEXT://:9092

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=192.168.14.105

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
advertised.host.name=192.168.14.105

# The port to publish to ZooKeeper for clients to use.
# If this is not set, it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>

# The number of threads handling network requests
num.network.threads=4

# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=8

# The number of threads per data directory to be used for log recovery at startup
# and flushing at shutdown. Increasing this value is recommended for installations
# with data dirs located in a RAID array.
num.recovery.threads.per.data.dir=1

auto.create.topics.enable=true
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem, but by default we only fsync()
# to sync the OS cache lazily. The following configurations control the flush of data
# to disk. There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush
#       does occur, as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small
#       flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a
# period of time or every N messages (or both).
# This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
log.flush.interval.messages=20000

# The maximum amount of time a message can sit in a log before we force a flush
log.flush.interval.ms=10000

log.flush.scheduler.interval.ms=2000
log.retention.check.interval.ms=300000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria is met. Deletion always
# happens from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as
# the remaining segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file.
# When this size is reached, a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted
# according to the retention policies
log.retention.check.interval.ms=300000

# By default the log cleaner is disabled and the log retention policy will default to
# just deleting segments after their retention expires. If log.cleaner.enable=true is
# set, the cleaner will be enabled and individual logs can then be marked for log
# compaction.
log.cleaner.enable=false

############################# Partition Replicas #############################

num.replica.fetchers=4
replica.fetch.max.bytes=1048576
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
controller.socket.timeout.ms=30000
controller.message.queue.size=10
replica.lag.time.max.ms=10000
replica.lag.max.messages=4000
replica.socket.timeout.ms=30000
replica.socket.receive.buffer.bytes=65536

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated list of host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=192.168.14.100:2181,192.168.14.105:2181,192.168.14.102:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

zookeeper.sync.time.ms=2000
zookeeper.session.timeout.ms=6000
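Several of the sizes and intervals above are easier to read once converted into familiar units. The following sketch (an illustration, not part of the config file) redoes that arithmetic using the values copied from the properties above:

```python
# Unit conversions for values taken verbatim from the config above.
log_retention_hours = 168            # log.retention.hours
log_segment_bytes = 1073741824       # log.segment.bytes
log_flush_interval_ms = 10000        # log.flush.interval.ms
socket_request_max_bytes = 104857600 # socket.request.max.bytes

print(log_retention_hours / 24)          # 7.0   -> segments are kept about 7 days
print(log_segment_bytes / 1024**3)       # 1.0   -> roll a new segment every 1 GiB
print(log_flush_interval_ms / 1000)      # 10.0  -> force a flush at least every 10 s
print(socket_request_max_bytes / 1024**2)  # 100.0 -> largest accepted request is 100 MiB
```

So with these settings a broker keeps roughly a week of data, rolls segments at 1 GiB, and flushes at least every 10 seconds or every 20000 messages, whichever comes first.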
Kafka Learning: Configuration details
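Kafka reads this file as a Java-style properties file: blank lines and `#` comments are ignored, and everything else is a `key=value` pair (keys are case-sensitive, which is why `broker.id` must not be written `Broker.id`). As a rough sketch of that format, here is a minimal hand-rolled parser (not Kafka's actual loader, which uses `java.util.Properties`):

```python
def parse_properties(text):
    """Parse 'key=value' lines, skipping blank lines and '#' comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if sep:  # keep only well-formed key=value lines
            props[key.strip()] = value.strip()
    return props

# A small excerpt of the config above, used as sample input.
sample = """
# Server Basics
broker.id=1
num.partitions=8
zookeeper.connect=192.168.14.100:2181,192.168.14.105:2181,192.168.14.102:2181
"""

config = parse_properties(sample)
print(config["broker.id"])       # -> 1
print(config["num.partitions"])  # -> 8
```

Note that all values come back as strings; Kafka itself converts them to ints, booleans, or lists (e.g. splitting `zookeeper.connect` on commas) according to each setting's declared type.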