Kafka Notes: Broker Configuration (version 0.8.1)

broker.id Default value: none

Each broker has a unique, non-negative integer ID that serves as the broker's "name". Because the ID identifies the broker rather than the host, a broker can be migrated to another machine without affecting consumers. Any number may be used as long as it is unique within the cluster.
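
For example, a broker's server.properties might contain (the value below is only illustrative):

broker.id=0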

log.dirs Default value: /tmp/kafka-logs

A comma-separated list of one or more directories in which Kafka stores its log data. Whenever a new partition must be assigned to a directory, the directory currently holding the fewest partitions is chosen.
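
For example, to spread partitions across two disks (the paths below are only illustrative):

log.dirs=/data1/kafka-logs,/data2/kafka-logs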

port Default value: 6667

The port that the server uses to accept client requests.

zookeeper.connect Default value: null

Specifies the ZooKeeper connection string in the form hostname:port, where hostname and port are the host name and port of a node in the ZooKeeper cluster. Since any single ZooKeeper node may go down, you can list several nodes in the connection string, for example:

hostname1:port1,hostname2:port2,hostname3:port3.

ZooKeeper also lets you specify a "chroot" path, which makes the Kafka cluster store its ZooKeeper data under that path. This allows multiple Kafka clusters, or other applications, to share the same ZooKeeper ensemble. The following connection string could be used:

hostname1:port1,hostname2:port2,hostname3:port3/chroot/path

With this setting, all of the cluster's data is stored under the /chroot/path path. Note that you must create this path yourself before starting the cluster, and consumers must use the same connection string.

message.max.bytes Default value: 1000000

The maximum size of a message the server will accept. Keeping this consistent with the maximum fetch size used by consumers is important; otherwise an unruly producer could send messages too large for consumers to fetch.
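
As an illustration, if the broker should accept messages up to 2 MB, the replica fetch size and the consumer's maximum fetch size should be at least as large (values below are illustrative; fetch.message.max.bytes is the 0.8.x consumer-side property and belongs in the consumer configuration, not the broker's):

message.max.bytes=2097152
replica.fetch.max.bytes=2097152
fetch.message.max.bytes=2097152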

num.network.threads Default value: 3

The number of threads the server uses to handle network requests; this normally does not need to be changed.

num.io.threads Default value: 8

The number of I/O threads the server uses to process requests; this should be no less than the number of disks backing the log directories.

background.threads Default value: 4

The number of threads used for miscellaneous background tasks, such as deleting files; this normally does not need to be changed.

queued.max.requests Default value: 500

The maximum number of requests that may be queued waiting for the I/O threads. Once the queue is full, the network threads stop accepting new requests.

host.name Default value: null

The broker's host name. If set, the broker binds only to this address; if not set, it binds to all network interfaces and publishes one of them to ZooKeeper.

advertised.host.name Default value: null

If set, this host name is handed out to all producers, consumers and other brokers as the address to use when connecting to this broker.

advertised.port Default value: null

The port handed out to producers, consumers and other brokers for establishing connections. It only needs to be set if it differs from the port the server actually binds to.
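
For example, a broker reached through address translation or a proxy might bind to its internal address but advertise an external one (the host names and ports below are only illustrative):

host.name=10.0.0.5
port=9092
advertised.host.name=kafka1.example.com
advertised.port=19092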

socket.send.buffer.bytes Default value: 100 * 1024

The SO_SNDBUF buffer size the server uses for socket connections.

socket.receive.buffer.bytes Default value: 100 * 1024

The SO_RCVBUF buffer size the server uses for socket connections.

socket.request.max.bytes Default value: 100 * 1024 * 1024

The maximum request size the server will accept. This protects the server from running out of memory, and it must not be larger than the Java heap size.

num.partitions Default value: 1

The number of partitions used when a topic is created without an explicit partition count.

log.segment.bytes Default value: 1024 * 1024 * 1024

All of the segment files for one partition of a topic together form its log. This setting controls the maximum size of a single segment file; once it is exceeded, a new segment file is rolled. This configuration can be overridden per topic; see the per-topic configuration section.

log.roll.hours Default value: 24 * 7

Forces Kafka to roll a new log segment after this many hours, even if the current segment has not reached log.segment.bytes. This configuration can be overridden per topic; see the per-topic configuration section.

log.cleanup.policy Default value: delete

May be set to delete or compact. With delete, log segments are deleted when they reach the maximum size or the roll time limit. With compact, old segments are compacted so that obsolete records are cleaned out, as described under log compaction. This configuration can be overridden per topic; see the per-topic configuration section.
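
A minimal sketch of making compaction the broker-wide default; note that log.cleaner.enable (described below) must also be true for compaction to actually run:

log.cleaner.enable=true
log.cleanup.policy=compact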

log.retention.minutes Default value: 7 days

The time, in minutes, that a log file is kept on disk before it is deleted; this is the default for all topics. Note that if both log.retention.minutes and log.retention.bytes are set, a segment is deleted as soon as either condition is met. This configuration can be overridden per topic; see the per-topic configuration section.

log.retention.bytes Default value: -1

The maximum log size per partition of a topic, so the size limit for one topic is number of partitions * log.retention.bytes. A value of -1 means no size limit. If both log.retention.bytes and log.retention.minutes are set, deletion is performed as soon as either condition is met. This configuration can be overridden per topic; see the per-topic configuration section.
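
For example, to keep at most three days of data and at most 1 GB per partition, whichever limit is reached first (values below are only illustrative):

log.retention.minutes=4320
log.retention.bytes=1073741824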

log.retention.check.interval.ms Default value: 5 minutes

The interval at which log segments are checked to see whether any of them need to be deleted according to the retention policy.

log.cleaner.enable Default value: False

Set to True to enable the log compaction feature.

log.cleaner.threads Default value: 1

The number of threads used to clean (compact) logs when log compaction is enabled.

log.cleaner.io.max.bytes.per.second Default value: None

Limits the amount of I/O, in bytes per second, that the cleaner may perform during log compaction, so that the cleaner does not interfere with requests being served.

log.cleaner.dedupe.buffer.size Default value: 500 * 1024 * 1024

The amount of memory used for deduplication during log compaction; the larger, the better, space permitting.

log.cleaner.io.buffer.size Default value: 512 * 1024

The size of the I/O chunks used during log cleaning; this normally does not need to be changed.

log.cleaner.io.buffer.load.factor Default value: 0.9

The load factor of the hash table used during log cleaning; this normally does not need to be changed.

log.cleaner.backoff.ms Default value: 15000

The interval at which logs are checked to see whether they need cleaning.

log.cleaner.min.cleanable.ratio Default value: 0.5

Controls how frequently the log compactor attempts to clean a log. By default, a log in which more than 50% of the data has already been compacted is not cleaned again. This configuration can be overridden per topic; see the per-topic configuration section.

log.cleaner.delete.retention.ms Default value: 1 day

How long records in a compacted log are retained after being marked for deletion; this is also the maximum time a consumer has to read the message. Unlike log.retention.minutes, which controls uncompacted data, this setting controls compacted data. This configuration can be overridden per topic; see the per-topic configuration section.

log.index.size.max.bytes Default value: 10 * 1024 * 1024

The maximum size of the offset index file for each log segment. Note that a sparse file of this size is always pre-allocated and then shrunk when the segment is rolled. If the index file fills up, a new log segment is rolled even if the log.segment.bytes limit has not been reached. This configuration can be overridden per topic; see the per-topic configuration section.

log.index.interval.bytes Default value: 4096

The byte interval at which an entry is added to the offset index. When serving a fetch request, the server must do a linear scan of up to this many bytes to find the exact position of the requested offset: a larger value means a smaller index but a longer scan, while a smaller value makes lookups faster at the cost of more memory. This parameter normally does not need to be changed.

log.flush.interval.messages Default value: None

The number of messages accumulated on a partition's log before an fsync is forced. Lowering this value syncs data to disk more frequently but hurts performance. It is generally recommended to rely on replication for durability rather than on fsync on a single machine, as replication usually provides better reliability.

log.flush.scheduler.interval.ms Default value: 3000

The interval, in milliseconds, at which the log flusher checks whether any log needs to be flushed to disk.

log.flush.interval.ms Default value: None

The maximum time interval, in milliseconds, between two fsync calls. Even if log.flush.interval.messages has not been reached, an fsync is forced once this interval has elapsed.
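
For example, to force an fsync after every 10000 messages or after one second, whichever comes first (values below are only illustrative):

log.flush.interval.messages=10000
log.flush.interval.ms=1000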

log.delete.delay.ms Default value: 60000

How long a log segment file is kept on disk after it has been removed from the index. Any in-progress reads that began during this window can complete without being interrupted. This normally does not need to be changed.

log.flush.offset.checkpoint.interval.ms Default value: 60000

How frequently the broker records the point up to which each log has been flushed to disk, to serve as a recovery point after a crash. This normally does not need to be changed.

auto.create.topics.enable Default value: True

Whether topics may be created automatically. If set to true, producing to, consuming from, or fetching metadata for a nonexistent topic automatically creates it with the default replication factor and the default number of partitions.
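
For example, to have auto-created topics use three partitions and two replicas each (values below are only illustrative):

auto.create.topics.enable=true
num.partitions=3
default.replication.factor=2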

controller.socket.timeout.ms Default value: 30000

The socket timeout for commands that the partition management controller sends to replicas.

controller.message.queue.size Default value: 10

The buffer size of the message queue used for channels between the controller and the brokers.

default.replication.factor Default value: 1

The default replication factor used when topics are created automatically.

replica.lag.time.max.ms Default value: 10000

If a follower has not sent any fetch request within this time window, the leader removes the follower from the ISR (in-sync replicas) and considers it dead.

replica.lag.max.messages Default value: 4000

If a replica falls behind the leader by more than this number of messages, the leader removes it from the ISR and considers it dead.
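
For example, to tolerate followers that lag by up to 30 seconds or 10000 messages before they are dropped from the ISR (values below are only illustrative):

replica.lag.time.max.ms=30000
replica.lag.max.messages=10000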

replica.socket.timeout.ms Default value: 300 * 1000

The socket timeout for network requests that a replica sends to the leader while replicating data.

replica.socket.receive.buffer.bytes Default value: 64 * 1024

The size of the socket receive buffer used by a replica for network requests to the leader while replicating data.

replica.fetch.max.bytes Default value: 1024 * 1024

The maximum number of bytes of data a replica attempts to get in each FETCH request it sends to the leader while replicating data.

replica.fetch.wait.max.ms Default value: 500

The maximum time a replica's fetch request to the leader will wait for data while replicating.

replica.fetch.min.bytes Default value: 1

The minimum number of bytes a replica expects in each fetch response while replicating data. If not enough bytes are available, the request waits for more data, up to replica.fetch.wait.max.ms.

num.replica.fetchers Default value: 1

The number of threads used to replicate messages from leaders; increasing this value increases the I/O parallelism of the follower broker.

replica.high.watermark.checkpoint.interval.ms Default value: 5000

How frequently each replica saves its high watermark to disk for recovery.

fetch.purgatory.purge.interval.requests Default value: 10000

The purge interval, in number of requests, of the fetch request purgatory. (The exact behavior is unclear; to be studied further.)

producer.purgatory.purge.interval.requests Default value: 10000

The purge interval, in number of requests, of the producer request purgatory. (The exact behavior is unclear; to be studied further.)

zookeeper.session.timeout.ms Default value: 6000

The ZooKeeper session timeout. If the broker sends no heartbeat to ZooKeeper within this period, it is considered dead. If this value is set too low, the broker may be falsely declared dead; if set too high, a broker that really has died will take a long time to be detected.

zookeeper.connection.timeout.ms Default value: 6000

The maximum time the client will wait when establishing a connection to the ZooKeeper server.

zookeeper.sync.time.ms Default value: 2000

How far a ZooKeeper follower may lag behind the ZooKeeper leader.

controlled.shutdown.enable Default value: False

If true, before a broker is shut down, leadership of every partition it leads is first moved to the corresponding partition replicas on other brokers. This reduces the window of unavailability during shutdown.

controlled.shutdown.max.retries Default value: 3

The maximum number of times the controlled shutdown is retried before falling back to an unclean (forced) shutdown.

controlled.shutdown.retry.backoff.ms Default value: 5000

The backoff time between controlled shutdown retries.

auto.leader.rebalance.enable Default value: False

If set to True, the replication controller periodically tries to balance leadership across all brokers, moving the leadership of each partition back to its preferred (higher-priority) replica.
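
For example, to enable both graceful shutdown and automatic leadership rebalancing, keeping the default imbalance thresholds (values below are only illustrative):

controlled.shutdown.enable=true
auto.leader.rebalance.enable=true
leader.imbalance.per.broker.percentage=10
leader.imbalance.check.interval.seconds=300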

leader.imbalance.per.broker.percentage Default value: 10

The percentage of leader imbalance allowed per broker. If a broker exceeds this percentage, the replication controller rebalances its leadership.

leader.imbalance.check.interval.seconds Default value: 300

The interval at which leader imbalance is checked.

offset.metadata.max.bytes Default value: 1024

The maximum amount of metadata a client (consumer) may store along with each offset it commits.
