Kafka producer startup parameters explained


First, start a producer:

    import java.util.Properties;
    import java.util.UUID;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // Basic producer configuration.
    final String kafkaZK = "localhost:9092";
    String topic = "testapi";
    Properties properties = new Properties() {{
        put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaZK);
        put(ProducerConfig.ACKS_CONFIG, "all");
        put(ProducerConfig.RETRIES_CONFIG, 0);
        put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
        put(ProducerConfig.LINGER_MS_CONFIG, 1);
        put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
        put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
    }};

    // Send ten million records with random UUID keys and the loop index as the value.
    KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
    for (int i = 0; i < 10000000; i++) {
        producer.send(new ProducerRecord<>(topic, UUID.randomUUID().toString(), String.valueOf(i)));
    }
    producer.close();   // flush buffered records and release resources

The startup configuration parameters printed in the log are as follows:

2018-06-19 17:34:23,636 - ProducerConfig values:
    acks = all
    batch.size = 16384
    block.on.buffer.full = false
    bootstrap.servers = [localhost:9092]
    buffer.memory = 33554432
    client.id =
    compression.type = none
    connections.max.idle.ms = 540000
    interceptor.classes = null
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    linger.ms = 1
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.fetch.timeout.ms = 60000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 0
    retry.backoff.ms = 100
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    timeout.ms = 30000
    value.serializer = class org.apache.kafka.common.serialization.StringSerializer

acks

The number of acknowledgements the producer requires the leader to have received before considering a request complete. This controls the durability of the records that are sent.

acks=0: the producer does not wait for any acknowledgement from the server. The record is added to the socket buffer immediately and considered sent. In this case there is no guarantee that the server has received the record, and the retries configuration has no effect (because the client generally will not know about any failure). The offset returned for each record is always set to -1.

acks=1: the leader writes the record to its local log and responds without waiting for acknowledgement from all followers. In this case, if the leader fails immediately after acknowledging the record but before the followers have replicated it, the record will be lost.

acks=all: the leader waits for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee and is equivalent to acks=-1.
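
As a small sketch of observing the difference (reusing the imports, placeholder broker address, and topic from the example above), the RecordMetadata handed to the send callback shows the effect of acks: with acks=0 the offset always comes back as -1, while with acks=1 or acks=all it is the offset assigned by the leader.

    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ProducerConfig.ACKS_CONFIG, "all");   // try "0", "1" or "all"/"-1"
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
    try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
        producer.send(new ProducerRecord<>("testapi", "key", "value"), (metadata, exception) -> {
            if (exception != null) {
                exception.printStackTrace();
            } else {
                // With acks=0 this offset is always -1; otherwise it is the offset in the partition log.
                System.out.println("partition=" + metadata.partition() + ", offset=" + metadata.offset());
            }
        });
    }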

batch.size

Whenever multiple records are sent to the same partition, the producer attempts to batch them together into fewer requests. This helps performance on both the client and the server. This configuration controls the default batch size in bytes. No attempt will be made to batch records larger than this size. Requests sent to brokers contain multiple batches, one for each partition that has data available to send.

A small batch size makes batching less common and may reduce throughput (a batch size of zero disables batching entirely). A very large batch size may use memory somewhat more wastefully, because a buffer of the configured batch size is always allocated in anticipation of additional records.

block.on.buffer.full

When the memory buffer is exhausted, the producer must either stop accepting new records (block) or throw an error. By default this setting is false: the producer no longer throws a BufferExhaustedException, but instead blocks for up to max.block.ms and then throws a TimeoutException. Setting this property to true sets max.block.ms to Long.MAX_VALUE. In addition, if this property is set to true, the parameter metadata.fetch.timeout.ms is no longer honored.

This parameter is deprecated and will be removed in a future release; max.block.ms should be used instead.

bootstrap.servers

A list of host/port pairs used to establish the initial connection to the Kafka cluster. The client will make use of all servers, regardless of which ones are listed here for bootstrapping; this list only affects the initial hosts used to discover the full set of servers. The list should be in the form host1:port1,host2:port2,.... Because these servers are only used for the initial connection to discover the full cluster membership (which may change dynamically), the list does not need to contain the full set of servers (though you may want more than one in case a server is down).
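
For example (the host names below are hypothetical), several bootstrap servers can be listed so that the initial connection still succeeds if one of them is unreachable:

    // Hypothetical host names; they are only used to bootstrap discovery of the full cluster.
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka1.example.com:9092,kafka2.example.com:9092");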

buffer.memory

The total number of bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are produced faster than they can be delivered to the server, the producer blocks for up to max.block.ms and then throws an exception.

This setting should correspond roughly to the total memory the producer will use, but it is not a hard bound, since not all of the memory the producer uses goes to buffering. Some additional memory is used for compression (if compression is enabled) and for maintaining in-flight requests.

client.id

An id string passed to the server when making requests. Its purpose is to allow a logical application name to be included in server-side request logging, so that the source of a request can be traced beyond just the ip/port. If it is not set manually, an id is generated automatically.

compression.type

Specifies the final compression type for a given topic. The configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4'). It also accepts 'uncompressed', which is equivalent to no compression, and 'producer', which means retaining the original compression codec set by the producer; a custom compression type would require modifying the source code.
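
A minimal sketch of enabling compression on the producer (added to the Properties object built in the first example):

    props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");   // one of none, gzip, snappy, lz4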

connections.max.idle.ms

Close idle connections after the number of milliseconds specified by this configuration.

interceptor.classes

A list of classes to use as interceptors. By implementing the ProducerInterceptor interface you can intercept (and possibly mutate) the records received by the producer before they are published to the Kafka cluster. By default there are no interceptors; you can implement your own.
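
A minimal sketch of a custom interceptor (the class name LoggingProducerInterceptor is hypothetical); it is registered through interceptor.classes:

    import java.util.Map;
    import org.apache.kafka.clients.producer.ProducerInterceptor;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    // Hypothetical example: logs every record before it is sent and every failed acknowledgement.
    public class LoggingProducerInterceptor implements ProducerInterceptor<String, String> {
        @Override
        public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
            System.out.println("about to send key=" + record.key());
            return record;   // a modified record could be returned here instead
        }

        @Override
        public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
            if (exception != null) {
                System.err.println("send failed: " + exception.getMessage());
            }
        }

        @Override
        public void close() { }

        @Override
        public void configure(Map<String, ?> configs) { }
    }

    // Registration, assuming the class is on the producer's classpath:
    // props.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, LoggingProducerInterceptor.class.getName());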

key.serializer

The serializer class for keys, implementing the Serializer interface.
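
Besides the built-in StringSerializer used above, any class implementing org.apache.kafka.common.serialization.Serializer can be used. A minimal hypothetical sketch that serializes Integer keys as four big-endian bytes:

    import java.nio.ByteBuffer;
    import java.util.Map;
    import org.apache.kafka.common.serialization.Serializer;

    // Hypothetical example: serializes Integer keys as 4 big-endian bytes.
    public class IntKeySerializer implements Serializer<Integer> {
        @Override
        public void configure(Map<String, ?> configs, boolean isKey) { }

        @Override
        public byte[] serialize(String topic, Integer data) {
            if (data == null) {
                return null;
            }
            return ByteBuffer.allocate(4).putInt(data).array();
        }

        @Override
        public void close() { }
    }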

linger.ms

The producer groups any records that arrive between request transmissions into a single batched request. Normally this only happens under load, when records arrive faster than they can be sent out. In some circumstances, however, the client may want to reduce the number of requests even under moderate load. This setting accomplishes that by adding a small amount of artificial delay: rather than sending a record immediately, the producer waits up to the given delay so that other records can be sent along with it, allowing sends to be batched. This can be thought of as analogous to Nagle's algorithm in TCP. The setting gives an upper bound on the delay for batching: once batch.size worth of records has accumulated for a partition, it is sent immediately regardless of this setting; but if fewer bytes than that have accumulated for the partition, the producer "lingers" for the specified time waiting for more records to arrive. The default is 0 (no delay). Setting linger.ms=5, for example, reduces the number of requests sent but adds up to 5 milliseconds of latency to records sent in the absence of load.
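
A small sketch of tuning batching (the values are illustrative, not recommendations), added to the Properties object built in the first example:

    // Illustrative values: wait up to 5 ms to fill batches of up to 32 KB per partition.
    props.put(ProducerConfig.BATCH_SIZE_CONFIG, 32768);
    props.put(ProducerConfig.LINGER_MS_CONFIG, 5);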

max.block.ms

This configuration controls how long KafkaProducer.send() and KafkaProducer.partitionsFor() may block. These methods can block either because the buffer is full or because metadata is unavailable. Blocking in user-supplied serializers or partitioners does not count toward this timeout.
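
A minimal sketch of what hitting this limit looks like from the caller's side (the 10-second bound is arbitrary, and props is the Properties object from the example above):

    // Illustrative: cap blocking in send()/partitionsFor() at 10 seconds.
    props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 10000);
    KafkaProducer<String, String> producer = new KafkaProducer<>(props);
    try {
        producer.send(new ProducerRecord<>("testapi", "key", "value"));
    } catch (org.apache.kafka.common.errors.TimeoutException e) {
        // Thrown when the buffer stays full or metadata cannot be fetched within max.block.ms.
        System.err.println("send blocked for too long: " + e.getMessage());
    } finally {
        producer.close();
    }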

max.in.flight.requests.per.connection

The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this setting is greater than 1 and a send fails, messages may be reordered because of retries (that is, if retries are enabled).
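
For example, if ordering matters and retries are enabled, one common precaution (sketched here; not a universal recommendation) is to allow only a single in-flight request per connection:

    // Prevent retries from reordering messages by allowing only one in-flight request per connection.
    props.put(ProducerConfig.RETRIES_CONFIG, 3);
    props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1);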

max.request.size

The maximum size of a request in bytes. This is effectively also an upper bound on the maximum record size. Note that the server has its own cap on record size, which may differ from this one. This setting limits the number of record batches the producer will send in a single request, to avoid sending huge requests.

metadata.fetch.timeout.ms

The first time data is sent to a topic, the producer must fetch metadata about the topic to learn which servers host its partitions. This configuration specifies the maximum time, in milliseconds, that this fetch may take before the attempt fails and an exception is thrown back to the client.

metadata.max.age.ms

The period of time, in milliseconds, after which a metadata refresh is forced even if no partition leadership changes have been seen, so that new brokers or partitions are discovered proactively.

metric*

The settings whose names start with metric are metrics-related and are discussed later.

partitioner.class

The partitioner class, which implements the Partitioner interface. The default is DefaultPartitioner.
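
A minimal sketch of a custom partitioner (the class name SimpleRoundRobinPartitioner is hypothetical); it is registered through partitioner.class:

    import java.util.Map;
    import java.util.concurrent.atomic.AtomicInteger;
    import org.apache.kafka.clients.producer.Partitioner;
    import org.apache.kafka.common.Cluster;

    // Hypothetical example: spreads records round-robin across partitions, ignoring the key.
    public class SimpleRoundRobinPartitioner implements Partitioner {
        private final AtomicInteger counter = new AtomicInteger(0);

        @Override
        public int partition(String topic, Object key, byte[] keyBytes,
                             Object value, byte[] valueBytes, Cluster cluster) {
            int numPartitions = cluster.partitionsForTopic(topic).size();
            return (counter.getAndIncrement() & Integer.MAX_VALUE) % numPartitions;
        }

        @Override
        public void close() { }

        @Override
        public void configure(Map<String, ?> configs) { }
    }

    // Registration, assuming the class is on the producer's classpath:
    // props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, SimpleRoundRobinPartitioner.class.getName());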

receive.buffer.bytes

The size of the TCP receive buffer (SO_RCVBUF) used when reading data. If the value is -1, the operating system default is used.

reconnect.backoff.ms

The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly reconnecting to a host in a tight loop. The backoff applies to all requests sent by the client to the broker.

request.timeout.ms

This configuration controls the maximum amount of time the client will wait for the response to a request. If the response is not received before the timeout elapses, the client will resend the request if necessary, or fail the request once retries are exhausted.

retries

The number of times the producer retries a send after a failure. The default is 0.

retry.backoff.ms

The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.
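
A small sketch of the backoff setting (the value is illustrative), added to the Properties object built in the first example:

    // Illustrative: with retries enabled (see the max.in.flight example above),
    // wait 500 ms between retry attempts for a given partition.
    props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 500);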

SASL and SSL

These settings relate to Kafka security (authentication and encryption).
