When writing data into Kafka, the exception org.apache.kafka.common.errors.RecordTooLargeException is thrown.
Two related parameters from the official documentation are described below:
message.max.bytes (broker configuration)
  Description: The maximum size of message that the server can receive
  Type: int
  Default: 1000012
  Valid values: [0,...]
  Importance: high

fetch.message.max.bytes (consumer configuration)
  Default: 1024 * 1024
  Description: The number of bytes of messages to attempt to fetch for each topic-partition in each fetch request. These bytes will be read into memory for each partition, so this helps control the memory used by the consumer. The fetch request size must be at least as large as the maximum message size the server allows, or else it is possible for the producer to send messages larger than the consumer can fetch.
message.max.bytes: the maximum size of a message body the server (broker) can accept.
fetch.message.max.bytes: the number of bytes of messages the consumer reads from each partition into memory; it controls the amount of memory used by the consumer. If message.max.bytes is greater than fetch.message.max.bytes, a single message can be larger than the consumer's fetch buffer, and the consumer will be unable to fetch that message.
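The constraint described above can be sketched in a few lines of Python. This is not Kafka code, just a hypothetical helper illustrating the rule: every fetch buffer along the path (replica fetcher and consumer) must be at least as large as the biggest message the broker accepts, or a large message can never be read.

```python
def fetch_settings_compatible(message_max_bytes: int,
                              replica_fetch_max_bytes: int,
                              fetch_message_max_bytes: int) -> bool:
    """Return True when both the replica fetcher and the consumer
    fetch buffers can hold the largest message the broker accepts."""
    return (replica_fetch_max_bytes >= message_max_bytes
            and fetch_message_max_bytes >= message_max_bytes)

# With the defaults from the table above, the settings are consistent:
# the broker accepts at most ~1 MB and the consumer fetches 1 MiB.
print(fetch_settings_compatible(1000012, 1024 * 1024, 1024 * 1024))   # True

# Raising message.max.bytes on the broker without also raising the
# consumer's fetch size reproduces the stuck-consumer situation:
print(fetch_settings_compatible(20000000, 20485760, 1024 * 1024))     # False
```

This is why the next step changes the broker-side settings together rather than message.max.bytes alone.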
Therefore, add two configuration items to server.properties:

# maximum number of bytes of a message the broker can receive
message.max.bytes=20000000
# maximum number of bytes of a message the broker can replicate
replica.fetch.max.bytes=20485760
If you do not want to modify the configuration file, you can instead modify the topic configuration. The topic configuration item corresponding to the server configuration item message.max.bytes is max.message.bytes:

bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic my-topic --config max.message.bytes=128000
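After setting the override, you can check that it took effect. A sketch, assuming an older ZooKeeper-based Kafka deployment like the one in this article and a topic named my-topic:

```shell
# Describe the per-topic configuration overrides; the output should
# list max.message.bytes=128000 for my-topic.
bin/kafka-configs.sh --zookeeper localhost:2181 \
  --entity-type topics --entity-name my-topic --describe
```

Note that a per-topic max.message.bytes override only raises the broker-side limit for that topic; the consumer fetch size still has to be large enough, as explained above.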
This article is from the "Chocolate Black" blog; be sure to keep this source: http://10120275.blog.51cto.com/10110275/1844461