(iii) Storm-kafka source code: how to build a KafkaSpout


The previous section described the config-related classes. This section explains what these parameters mean and where they are stored in ZooKeeper. In the QQ group, many people have asked how to pass the correct parameters when constructing a new KafkaSpout; the key is simply getting the parameters right.


Look at the SpoutConfig constructor:

```java
public SpoutConfig(BrokerHosts hosts, String topic, String zkRoot, String id) {
    super(hosts, topic);
    this.zkRoot = zkRoot;
    this.id = id;
}
```

It needs a BrokerHosts; look at the ZkHosts implementation:

```java
public class ZkHosts implements BrokerHosts {
    private static final String DEFAULT_ZK_PATH = "/brokers";

    public String brokerZkStr = null;
    public String brokerZkPath = null; // e.g. /kafka/brokers
    public int refreshFreqSecs = 60;

    public ZkHosts(String brokerZkStr, String brokerZkPath) {
        this.brokerZkStr = brokerZkStr;
        this.brokerZkPath = brokerZkPath;
    }

    public ZkHosts(String brokerZkStr) {
        this(brokerZkStr, DEFAULT_ZK_PATH);
    }
}
```
brokerZkStr is required; it is the ZooKeeper hosts list, with multiple hosts separated by commas, because ZooKeeper parses the connect string by splitting on commas. The ZooKeeper parsing code is attached here:

```java
public ZooKeeper(String connectString, int sessionTimeout, Watcher watcher,
        boolean canBeReadOnly) throws IOException {
    LOG.info("Initiating client connection, connectString=" + connectString
            + " sessionTimeout=" + sessionTimeout + " watcher=" + watcher);

    watchManager.defaultWatcher = watcher;

    ConnectStringParser connectStringParser = new ConnectStringParser(
            connectString);
    HostProvider hostProvider = new StaticHostProvider(
            connectStringParser.getServerAddresses());
    cnxn = new ClientCnxn(connectStringParser.getChrootPath(),
            hostProvider, sessionTimeout, this, watchManager,
            getClientCnxnSocket(), canBeReadOnly);
    cnxn.start();
}
```

The main parsing is done by ConnectStringParser; see how it parses the string:

```java
public ConnectStringParser(String connectString) {
    // parse out chroot, if any
    int off = connectString.indexOf('/');
    if (off >= 0) {
        String chrootPath = connectString.substring(off);
        // ignore "/" chroot spec, same as null
        if (chrootPath.length() == 1) {
            this.chrootPath = null;
        } else {
            PathUtils.validatePath(chrootPath);
            this.chrootPath = chrootPath;
        }
        connectString = connectString.substring(0, off);
    } else {
        this.chrootPath = null;
    }

    String hostsList[] = connectString.split(",");
    for (String host : hostsList) {
        int port = DEFAULT_PORT;
        int pidx = host.lastIndexOf(':');
        if (pidx >= 0) {
            // otherwise : is at the end of the string, ignore
            if (pidx < host.length() - 1) {
                port = Integer.parseInt(host.substring(pidx + 1));
            }
            host = host.substring(0, pidx);
        }
        serverAddresses.add(InetSocketAddress.createUnresolved(host, port));
    }
}
```
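To make the parsing above concrete, here is a minimal, self-contained sketch (not the actual ZooKeeper class; the class and method names are made up for illustration) that mimics how ConnectStringParser splits a connect string into host:port pairs and an optional chroot path:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative re-implementation of the connect-string parsing logic shown above.
public class ConnectStringDemo {
    static final int DEFAULT_PORT = 2181;

    // Returns the "host:port" entries, plus the chroot (or "null") as the last element.
    static List<String> parse(String connectString) {
        List<String> result = new ArrayList<>();
        String chroot = null;

        // Everything from the first '/' onward is the chroot, if present.
        int off = connectString.indexOf('/');
        if (off >= 0) {
            String chrootPath = connectString.substring(off);
            if (chrootPath.length() > 1) {   // a bare "/" is treated as no chroot
                chroot = chrootPath;
            }
            connectString = connectString.substring(0, off);
        }

        // Hosts are comma-separated; a missing port falls back to the default.
        for (String host : connectString.split(",")) {
            int port = DEFAULT_PORT;
            int pidx = host.lastIndexOf(':');
            if (pidx >= 0) {
                if (pidx < host.length() - 1) {
                    port = Integer.parseInt(host.substring(pidx + 1));
                }
                host = host.substring(0, pidx);
            }
            result.add(host + ":" + port);
        }
        result.add(String.valueOf(chroot));
        return result;
    }

    public static void main(String[] args) {
        // Two hosts (one without an explicit port) plus a /kafka chroot.
        System.out.println(parse("10.1.110.24:2181,10.1.110.22/kafka"));
        // → [10.1.110.24:2181, 10.1.110.22:2181, /kafka]
    }
}
```

This is why a host list such as "10.1.110.24:2181,10.1.110.22:2181" must be comma-separated, and why a path suffix like "/kafka" becomes the chroot rather than part of a hostname.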

Okay, here's the point.

As just mentioned, brokerZkStr is required. There is another parameter, brokerZkPath, which you can set yourself; it has a default value of "/brokers".

SpoutConfig also has a zkRoot. This zkRoot is where the consumer side stores its consumption state. Here is an example:

```java
String topic = "test";
String zkRoot = "/kafkastorm";
String spoutId = "id"; // the read state is stored under /kafkastorm/id, so the id is similar to a consumer group

BrokerHosts brokerHosts = new ZkHosts("10.1.110.24:2181,10.1.110.22:2181"); // uses the default /brokers path

SpoutConfig spoutConfig = new SpoutConfig(brokerHosts, topic, zkRoot, spoutId);
spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme()); // the next section describes Scheme

/* Only in local mode, if you need to record the read state, do you need to set these:
spoutConfig.zkServers = new ArrayList<String>() {{
    add("10.118.136.107");
}};
spoutConfig.zkPort = 2181;
*/

spoutConfig.forceFromStart = true;
spoutConfig.startOffsetTime = -1; // start consuming from the latest offset
// spoutConfig.metricsTimeBucketSizeInSecs = 6;

builder.setSpout(SqlCollectorTopologyDef.KAFKA_SPOUT_NAME, new KafkaSpout(spoutConfig), 1);
```
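To see why the read state ends up under /kafkastorm/id, here is a minimal sketch (an assumed helper, not the actual storm-kafka PartitionManager code) of how zkRoot, the spout id, and the partition number combine into the ZooKeeper path where offsets are committed:

```java
// Illustrative helper showing the commit-path convention described above.
public class CommitPathDemo {
    // zkRoot + "/" + spoutId + "/partition_N" is where the offset state lives.
    static String committedPath(String zkRoot, String spoutId, int partition) {
        return zkRoot + "/" + spoutId + "/partition_" + partition;
    }

    public static void main(String[] args) {
        System.out.println(committedPath("/kafkastorm", "id", 0));
        // → /kafkastorm/id/partition_0
    }
}
```

Two topologies using the same zkRoot but different spout ids therefore keep independent offsets, which is why the id behaves like a consumer group.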

A KafkaSpout has now been built. If the topology runs successfully, you can connect to the ZooKeeper server and check the stored information:

```shell
./bin/zkCli.sh -server 10.1.110.24:2181
```
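Inside the zkCli shell you can then inspect the spout's state. The paths below assume the zkRoot "/kafkastorm" and spout id "id" from the example above, and the "partition_N" node naming convention; adjust them to your own configuration (these commands only run inside a live zkCli session):

```shell
# list the spout's offset nodes under zkRoot/id
ls /kafkastorm/id
# dump the stored state (offset, broker, topic, topology) for partition 0
get /kafkastorm/id/partition_0
```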



As for StaticHosts, look at the official explanation:

StaticHosts

This is an alternative implementation where the broker and partition information is static. To construct an instance of this class, you first need to construct an instance of GlobalPartitionInformation.

```java
Broker brokerForPartition0 = new Broker("localhost");        // localhost:9092
Broker brokerForPartition1 = new Broker("localhost", 9092);  // localhost:9092, but we specified the port explicitly
Broker brokerForPartition2 = new Broker("localhost:9092");   // localhost:9092 specified as one string

GlobalPartitionInformation partitionInfo = new GlobalPartitionInformation();
partitionInfo.addPartition(0, brokerForPartition0); // mapping from partition 0 to brokerForPartition0
partitionInfo.addPartition(1, brokerForPartition1); // mapping from partition 1 to brokerForPartition1
partitionInfo.addPartition(2, brokerForPartition2); // mapping from partition 2 to brokerForPartition2

StaticHosts hosts = new StaticHosts(partitionInfo);
```
Personally, I think this means the developer needs to know the correspondence between partitions and brokers and wire it up correctly. In storm-kafka 0.9.0.1 this did not need to be specified: you only passed in the ZkServer list and the total partition count, and KafkaUtil used two nested for loops (iterating over all brokers and all partitions), temporarily creating a consumer to connect and try each pair; if data came back, the partitionId-to-brokerHost relationship was put into a map. One wonders why 0.9.3-rc1 changed to the static approach. If a broker does not have the partition information, what happens? The author has not tested this; if you have, please leave a comment describing the result.
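The brute-force discovery just described can be sketched as follows. This is an illustrative simulation, not the actual storm-kafka 0.9.0.1 KafkaUtil code: the probe predicate stands in for "create a temporary consumer and try to fetch", which in the real code would be a SimpleConsumer fetch request.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiPredicate;

// Sketch of the two-loop broker/partition discovery described above.
public class PartitionDiscoverySketch {

    // Try every (broker, partition) pair; record the pairs that return data.
    static Map<Integer, String> discover(List<String> brokers, int partitionCount,
                                         BiPredicate<String, Integer> probe) {
        Map<Integer, String> partitionToBroker = new HashMap<>();
        for (String broker : brokers) {                 // outer loop: all brokers
            for (int p = 0; p < partitionCount; p++) {  // inner loop: all partitions
                if (!partitionToBroker.containsKey(p) && probe.test(broker, p)) {
                    partitionToBroker.put(p, broker);   // partition p lives on this broker
                }
            }
        }
        return partitionToBroker;
    }

    public static void main(String[] args) {
        // Pretend partitions 0 and 2 live on broker A and partition 1 on broker B.
        Map<Integer, String> truth = Map.of(0, "A:9092", 1, "B:9092", 2, "A:9092");
        Map<Integer, String> found = discover(List.of("A:9092", "B:9092"), 3,
                (broker, p) -> broker.equals(truth.get(p)));
        System.out.println(found);
    }
}
```

The cost of this approach is one throwaway connection attempt per (broker, partition) pair, which may be why later versions moved to reading the mapping from ZooKeeper or having the user declare it statically.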

Reference: http://www.cnblogs.com/fxjwind/p/3808346.html

