Brief notes on librdkafka problems
This article describes some problems encountered when using Kafka's C++ client, librdkafka, together with their workarounds.
1. The producer reports that the message is too large, causing the produce call to fail with an error similar to the following:
Produce failed: Broker: Message size too large
Solution:
Printing librdkafka's default configuration shows:
message.max.bytes=1000000        // producer: maximum size of a single message, about 1 MB
fetch.message.max.bytes=1048576  // consumer: maximum size of a single fetched message, 1 MB
So when a single message from the producer exceeds the default limit, the error above occurs. To resolve it, the producer, broker, and consumer configurations must all be adjusted together.
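To make the relationship between the three limits concrete, here is a small self-contained sketch. It is not part of librdkafka; the struct and function names are invented for illustration, and only the ordering constraints come from the discussion above and below (the broker must not accept messages consumers cannot fetch or replicas cannot copy):

```cpp
#include <cstdint>

// Hypothetical helper: names are made up for illustration.
struct KafkaSizeLimits {
    int64_t producer_message_max_bytes;  // producer: message.max.bytes
    int64_t broker_message_max_bytes;    // broker:   message.max.bytes
    int64_t replica_fetch_max_bytes;     // broker:   replica.fetch.max.bytes
    int64_t consumer_fetch_max_bytes;    // consumer: fetch.message.max.bytes
};

// A message flows producer -> broker -> replicas -> consumer, so every
// limit along that path must be large enough for the largest message.
bool limits_consistent(const KafkaSizeLimits &l) {
    return l.producer_message_max_bytes <= l.broker_message_max_bytes &&  // broker accepts what the producer sends
           l.replica_fetch_max_bytes    >= l.broker_message_max_bytes &&  // replicas can copy anything accepted
           l.consumer_fetch_max_bytes   >= l.broker_message_max_bytes;    // consumers can fetch anything accepted
}
```

For example, raising only the producer's limit to 10 MB while leaving the broker and consumer at their ~1 MB defaults fails this check, which is exactly the misconfiguration behind the error above.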
On the producer side, you can override the default in code with the following configuration:
RdKafka::Conf *conf = RdKafka::Conf::create(RdKafka::Conf::CONF_GLOBAL);
std::string errstr;
// Raise the producer's maximum single message size to 10 MB
if (conf->set("message.max.bytes", "10485760", errstr) != RdKafka::Conf::CONF_OK)
{
    std::cout << errstr << std::endl;
}
On the broker side, you need to modify the broker configuration and restart the Kafka cluster:
message.max.bytes (default: 1000000): the maximum message size, in bytes, that the broker will accept. This value should be no larger than the consumer's fetch.message.max.bytes; otherwise consumers cannot fetch such messages and will get stuck.
replica.fetch.max.bytes (default: 1 MB): the maximum message size, in bytes, that can be replicated between brokers. This value should be no smaller than message.max.bytes; otherwise the broker accepts a message that can never be replicated, risking data loss.
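Put together, the broker changes for a 10 MB message cap might look like the following server.properties fragment (a sketch matching the client examples in this article; check your broker version's documentation for exact names and defaults):

```
# Broker side (server.properties); restart the cluster after changing.
message.max.bytes=10485760
replica.fetch.max.bytes=10485760
```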
On the consumer side, adjust the maximum size of fetched messages:
RdKafka::Conf *conf = RdKafka::Conf::create(RdKafka::Conf::CONF_GLOBAL);
std::string errstr;
// Raise the consumer's maximum single message size to 10 MB
if (conf->set("fetch.message.max.bytes", "10485760", errstr) != RdKafka::Conf::CONF_OK)
{
    std::cout << errstr << std::endl;
}
Of course, message size affects Kafka's performance; for details, refer to this blog post: http://www.cnblogs.com/doubletree/p/4264969.html
2. Failing to call poll() periodically causes the local queue to fill up and sends to fail. If the producer has set a delivery report callback, the error may look like this:
Produce failed: Local: Queue full
The prototype of poll() in librdkafka is as follows:
/**
 * Polls the provided kafka handle for events.
 * Returns the number of events served.
 */
virtual int poll(int timeout_ms) = 0;
The header file emphasizes that poll() must be called periodically so that callbacks triggered on the client can run. If poll() is never called, events such as delivery reports for successfully produced messages are never dispatched; they pile up in the local queue until it is full and subsequent produce calls fail.
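The effect can be simulated without librdkafka. The sketch below (every name is invented; this is a toy model, not the real API) mimics a producer whose delivery-report events accumulate in a bounded local queue and are only drained when poll() is called, reproducing the "Local: Queue full" failure mode:

```cpp
#include <cstddef>
#include <queue>
#include <string>

// Toy model of librdkafka's local event queue; names are invented.
class ToyProducer {
public:
    explicit ToyProducer(std::size_t capacity) : capacity_(capacity) {}

    // Fails (like "Local: Queue full") once pending delivery-report
    // events are no longer being drained by poll().
    bool produce(const std::string &msg) {
        if (events_.size() >= capacity_)
            return false;
        events_.push("delivered: " + msg);  // pending delivery report
        return true;
    }

    // Serves queued callback events; returns how many were served.
    int poll() {
        int served = 0;
        while (!events_.empty()) {
            events_.pop();  // the real client would invoke dr_cb here
            ++served;
        }
        return served;
    }

private:
    std::size_t capacity_;
    std::queue<std::string> events_;
};
```

Producing capacity + 1 messages without ever calling poll() makes the last produce() fail, while interleaving poll() keeps the queue drained, which is why the real client must poll periodically from the send loop.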
3. About librdkafka's default configuration
When a problem occurs, check the configuration of both the client and the broker. Various defaults can be found online, but it is more reliable to print the actual defaults from code. The librdkafka examples already include the relevant code, which you can refer to:
RdKafka::Conf *conf = RdKafka::Conf::create(RdKafka::Conf::CONF_GLOBAL);
RdKafka::Conf *tconf = RdKafka::Conf::create(RdKafka::Conf::CONF_TOPIC);

for (int pass = 0; pass < 2; pass++) {
    std::list<std::string> *dump;
    if (pass == 0) {
        dump = conf->dump();
        std::cout << "# Global config" << std::endl;
    } else {
        dump = tconf->dump();
        std::cout << "# Topic config" << std::endl;
    }
    // dump() returns a flat list of alternating name/value strings
    for (std::list<std::string>::iterator it = dump->begin(); it != dump->end();) {
        std::cout << *it << " = ";
        it++;
        std::cout << *it << std::endl;
        it++;
    }
    std::cout << std::endl;
}