Kafka in Detail, Part 5: Kafka's Low-Level Consumer API (SimpleConsumer)

Source: Internet
Author: User

Kafka provides two consumer APIs:
    1. The high-level Consumer API
    2. The SimpleConsumer API
The first is a highly abstracted consumer API that is simple and convenient to use, but for some special requirements we may need the second, lower-level API. Let's start with what the second API lets us do:
    • Read a message multiple times
    • Consume only a subset of the messages in a partition within one process
    • Add transaction management to ensure a message is processed once and only once
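The "once and only once" point rests on a simple idea: because SimpleConsumer leaves offset tracking to the application, you can commit the processing result and the next offset together in one atomic step, so a restart resumes from the stored offset instead of reprocessing. A minimal sketch of that idea (plain Java, not the Kafka API; the class and field names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class OffsetTrackingSketch {
    // Stands in for an external store (e.g. a DB row) that holds
    // both the processing results and the committed offset.
    static List<String> results = new ArrayList<String>();
    static long committedOffset = 0;

    static void processBatch(String[] messages, long startOffset) {
        for (long offset = startOffset; offset < messages.length; offset++) {
            String out = messages[(int) offset].toUpperCase();
            // "Transaction": result and new offset are committed together,
            // so a crash between them cannot leave a half-processed message.
            results.add(out);
            committedOffset = offset + 1;
        }
    }

    public static void main(String[] args) {
        String[] partition = { "a", "b", "c", "d" };
        processBatch(partition, committedOffset); // first run processes all 4
        processBatch(partition, committedOffset); // "restart" reprocesses nothing
        System.out.println(results.size() + " " + committedOffset);
    }
}
```

In a real SimpleConsumer application the same pattern applies: persist `readOffset` alongside the output of your processing, and feed the persisted value back into `addFetch` on restart.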
What are the drawbacks of using SimpleConsumer?
    • Offsets must be tracked in the application
    • The lead broker for the target topic partition must be found
    • Broker leader changes must be handled
Steps to use SimpleConsumer:
    1. Find the leader broker for the target topic partition among the active brokers
    2. Find the replica brokers for the target topic partition
    3. Construct the fetch request
    4. Send the request and read the data
    5. Handle leader broker changes
Code example:

package bonree.consumer;

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.ErrorMapping;
import kafka.common.TopicAndPartition;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.OffsetResponse;
import kafka.javaapi.PartitionMetadata;
import kafka.javaapi.TopicMetadata;
import kafka.javaapi.TopicMetadataRequest;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.message.MessageAndOffset;

public class SimpleExample {
    private List<String> m_replicaBrokers = new ArrayList<String>();

    public SimpleExample() {
        m_replicaBrokers = new ArrayList<String>();
    }

    public static void main(String args[]) {
        SimpleExample example = new SimpleExample();
        // Maximum number of messages to read
        long maxReads = Long.parseLong("3");
        // Topic to subscribe to
        String topic = "mytopic";
        // Partition to read from
        int partition = Integer.parseInt("0");
        // Broker seed nodes
        List<String> seeds = new ArrayList<String>();
        seeds.add("192.168.4.30");
        seeds.add("192.168.4.31");
        seeds.add("192.168.4.32");
        // Port
        int port = Integer.parseInt("9092");
        try {
            example.run(maxReads, topic, partition, seeds, port);
        } catch (Exception e) {
            System.out.println("Oops:" + e);
            e.printStackTrace();
        }
    }

    public void run(long a_maxReads, String a_topic, int a_partition,
            List<String> a_seedBrokers, int a_port) throws Exception {
        // Get metadata for the specified topic partition
        PartitionMetadata metadata = findLeader(a_seedBrokers, a_port, a_topic, a_partition);
        if (metadata == null) {
            System.out.println("Can't find metadata for Topic and Partition. Exiting");
            return;
        }
        if (metadata.leader() == null) {
            System.out.println("Can't find Leader for Topic and Partition. Exiting");
            return;
        }
        String leadBroker = metadata.leader().host();
        String clientName = "Client_" + a_topic + "_" + a_partition;

        SimpleConsumer consumer = new SimpleConsumer(leadBroker, a_port, 100000, 64 * 1024, clientName);
        long readOffset = getLastOffset(consumer, a_topic, a_partition,
                kafka.api.OffsetRequest.EarliestTime(), clientName);

        int numErrors = 0;
        while (a_maxReads > 0) {
            if (consumer == null) {
                consumer = new SimpleConsumer(leadBroker, a_port, 100000, 64 * 1024, clientName);
            }
            FetchRequest req = new FetchRequestBuilder().clientId(clientName)
                    .addFetch(a_topic, a_partition, readOffset, 100000).build();
            FetchResponse fetchResponse = consumer.fetch(req);

            if (fetchResponse.hasError()) {
                numErrors++;
                // Something went wrong!
                short code = fetchResponse.errorCode(a_topic, a_partition);
                System.out.println("Error fetching data from the Broker:" + leadBroker
                        + " Reason: " + code);
                if (numErrors > 5)
                    break;
                if (code == ErrorMapping.OffsetOutOfRangeCode()) {
                    // We asked for an invalid offset. For the simple case,
                    // ask for the last element to reset
                    readOffset = getLastOffset(consumer, a_topic, a_partition,
                            kafka.api.OffsetRequest.LatestTime(), clientName);
                    continue;
                }
                consumer.close();
                consumer = null;
                leadBroker = findNewLeader(leadBroker, a_topic, a_partition, a_port);
                continue;
            }
            numErrors = 0;

            long numRead = 0;
            for (MessageAndOffset messageAndOffset : fetchResponse.messageSet(a_topic, a_partition)) {
                long currentOffset = messageAndOffset.offset();
                if (currentOffset < readOffset) {
                    System.out.println("Found an old offset: " + currentOffset
                            + " Expecting: " + readOffset);
                    continue;
                }
                readOffset = messageAndOffset.nextOffset();
                ByteBuffer payload = messageAndOffset.message().payload();

                byte[] bytes = new byte[payload.limit()];
                payload.get(bytes);
                System.out.println(String.valueOf(messageAndOffset.offset()) + ": "
                        + new String(bytes, "UTF-8"));
                numRead++;
                a_maxReads--;
            }

            if (numRead == 0) {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException ie) {
                }
            }
        }
        if (consumer != null)
            consumer.close();
    }

    public static long getLastOffset(SimpleConsumer consumer, String topic, int partition,
            long whichTime, String clientName) {
        TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition);
        Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo =
                new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
        requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(whichTime, 1));
        kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
                requestInfo, kafka.api.OffsetRequest.CurrentVersion(), clientName);
        OffsetResponse response = consumer.getOffsetsBefore(request);

        if (response.hasError()) {
            System.out.println("Error fetching data Offset Data the Broker. Reason: "
                    + response.errorCode(topic, partition));
            return 0;
        }
        long[] offsets = response.offsets(topic, partition);
        return offsets[0];
    }

    /**
     * Find a new leader broker after the old leader fails.
     *
     * @param a_oldLeader
     * @param a_topic
     * @param a_partition
     * @param a_port
     * @return the host of the new leader broker
     * @throws Exception if no new leader can be found
     */
    private String findNewLeader(String a_oldLeader, String a_topic, int a_partition, int a_port)
            throws Exception {
        for (int i = 0; i < 3; i++) {
            boolean goToSleep = false;
            PartitionMetadata metadata = findLeader(m_replicaBrokers, a_port, a_topic, a_partition);
            if (metadata == null) {
                goToSleep = true;
            } else if (metadata.leader() == null) {
                goToSleep = true;
            } else if (a_oldLeader.equalsIgnoreCase(metadata.leader().host()) && i == 0) {
                // first time through, if the leader hasn't changed give
                // ZooKeeper a second to recover;
                // second time, assume the broker did recover before failover,
                // or it was a non-broker issue
                goToSleep = true;
            } else {
                return metadata.leader().host();
            }
            if (goToSleep) {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException ie) {
                }
            }
        }
        System.out.println("Unable to find new leader after Broker failure. Exiting");
        throw new Exception("Unable to find new leader after Broker failure. Exiting");
    }

    private PartitionMetadata findLeader(List<String> a_seedBrokers, int a_port,
            String a_topic, int a_partition) {
        PartitionMetadata returnMetaData = null;
        loop:
        for (String seed : a_seedBrokers) {
            SimpleConsumer consumer = null;
            try {
                consumer = new SimpleConsumer(seed, a_port, 100000, 64 * 1024, "leaderLookup");
                List<String> topics = Collections.singletonList(a_topic);
                TopicMetadataRequest req = new TopicMetadataRequest(topics);
                kafka.javaapi.TopicMetadataResponse resp = consumer.send(req);

                List<TopicMetadata> metaData = resp.topicsMetadata();
                for (TopicMetadata item : metaData) {
                    for (PartitionMetadata part : item.partitionsMetadata()) {
                        if (part.partitionId() == a_partition) {
                            returnMetaData = part;
                            break loop;
                        }
                    }
                }
            } catch (Exception e) {
                System.out.println("Error communicating with Broker [" + seed
                        + "] to find Leader for [" + a_topic + ", " + a_partition
                        + "] Reason: " + e);
            } finally {
                if (consumer != null)
                    consumer.close();
            }
        }
        if (returnMetaData != null) {
            m_replicaBrokers.clear();
            for (kafka.cluster.Broker replica : returnMetaData.replicas()) {
                m_replicaBrokers.add(replica.host());
            }
        }
        return returnMetaData;
    }
}
