Storm integrates Kafka: KafkaSpout as a Kafka consumer

Source: Internet
Author: User
Tags: arrays, message queue, zookeeper
The previous blog covered how to send each record as a message to the Kafka message queue in this Storm project. This post shows how to consume those messages from the Kafka queue in Storm. In this project, staging data in a Kafka message queue is still needed between the two topologies, file checksum and preprocessing.

The project directly uses the KafkaSpout provided by storm-kafka as the consumer of the message queue: the spout obtains data from the Kafka message queue and serves as the data source for the topology.

package com.lancy.topology;

import java.util.Arrays;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.StormSubmitter;
import org.apache.storm.generated.AlreadyAliveException;
import org.apache.storm.generated.AuthorizationException;
import org.apache.storm.generated.InvalidTopologyException;
import org.apache.storm.kafka.BrokerHosts;
import org.apache.storm.kafka.KafkaSpout;
import org.apache.storm.kafka.SpoutConfig;
import org.apache.storm.kafka.StringScheme;
import org.apache.storm.kafka.ZkHosts;
import org.apache.storm.spout.SchemeAsMultiScheme;
import org.apache.storm.topology.TopologyBuilder;

import com.lancy.common.ConfigCommon;
import com.lancy.common.pre.TopoStaticName;
import com.lancy.spout.GetDataFromKafkaSpoutBolt;

public class LntPreHandleTopology implements Runnable {

    // ZooKeeper connect string, with the /kafka chroot the brokers register under
    private static final String CONFIG_ZOOKEEPER_HOST = ConfigCommon.getInstance().ZOOKEEPER_HOST_PORT + "/kafka";
    // Topic name
    private static final String CONFIG_TOPIC = ConfigCommon.getInstance().KAFKA_LNT_VALID_DATA_TOPIC;
    // Root directory of the consumer offsets in ZooKeeper
    private static final String CONFIG_OFFSET_ZK_PATH = "/kafka/storm_offset" + "/" + CONFIG_TOPIC;
    // Consumer group id (the ConfigCommon field name was truncated in the original post)
    private static final String CONFIG_OFFSET_ZK_CUSTOMER_GROUP_ID = ConfigCommon.getInstance().KAFKA_CUSTOMER_GROUP_ID;

    @Override
    public void run() {
        exe(new String[] { "LNT" });
    }

    public static void exe(String[] args) {
        // Register the ZooKeeper hosts; broker metadata lives under /brokers
        BrokerHosts brokerHosts = new ZkHosts(CONFIG_ZOOKEEPER_HOST, "/brokers");
        // Configure the spout
        SpoutConfig spoutConfig = new SpoutConfig(brokerHosts, CONFIG_TOPIC,
                CONFIG_OFFSET_ZK_PATH, CONFIG_OFFSET_ZK_CUSTOMER_GROUP_ID);

        if (args == null || args.length == 0) {
            // Empty args means local mode. When KafkaSpout initializes, it reads the
            // spoutConfig.zkServers and spoutConfig.zkPort variables; they are not set by
            // default, so it falls back to the ZooKeeper address and port configured for the
            // currently running Storm. A locally running Storm uses a temporary ZooKeeper
            // instance that does not really persist, so the offset data is gone after every
            // shutdown. In local mode, therefore, set them explicitly.
            String configOffsetZkHost = ConfigCommon.getInstance().ZOOKEEPER_HOST;
            int configOffsetZkPort = Integer.parseInt(ConfigCommon.getInstance().ZOOKEEPER_PORT);
            // ZooKeeper address used to record the Kafka offsets
            spoutConfig.zkServers = Arrays.asList(configOffsetZkHost.split(","));
            // ZooKeeper port used to record the Kafka offsets
            spoutConfig.zkPort = configOffsetZkPort;
            spoutConfig.ignoreZkOffsets = true;
        }
        // spoutConfig.ignoreZkOffsets = true;

        // Configure the scheme (optional). StringScheme tells KafkaSpout how to decode the
        // message bytes into the data Storm passes around internally.
        spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

        KafkaSpout kafkaSpout = new KafkaSpout(spoutConfig);
        TopologyBuilder builder = builderTopology(kafkaSpout);

        Config config = new Config();
        config.setDebug(false);
        config.setNumWorkers(8);
        config.setNumAckers(8);
        config.put(Config.TOPOLOGY_MAX_SPOUT_PENDING, 10240);
        config.put(Config.TOPOLOGY_BACKPRESSURE_ENABLE, false);
        config.put(Config.TOPOLOGY_EXECUTOR_RECEIVE_BUFFER_SIZE, 16384);
        config.put(Config.TOPOLOGY_EXECUTOR_SEND_BUFFER_SIZE, 16384);

        if (args != null && args.length > 0) {
            try {
                StormSubmitter.submitTopology("prehanlder-topology", config, builder.createTopology());
            } catch (AlreadyAliveException e) {
                e.printStackTrace();
            } catch (InvalidTopologyException e) {
                e.printStackTrace();
            } catch (AuthorizationException e) {
                e.printStackTrace();
            }
        } else {
            // The test environment uses local mode
            LocalCluster localCluster = new LocalCluster();
            localCluster.submitTopology("prehanlder-topology-local-mode", config, builder.createTopology());
            try {
                Thread.sleep(12000 * 1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            localCluster.killTopology("prehanlder-topology-local-mode");
            localCluster.shutdown();
        }
    }

    public static TopologyBuilder builderTopology(KafkaSpout kafkaSpout) {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout(TopoStaticName.KAFKA_SPOUT, kafkaSpout, 10);
        builder.setBolt(TopoStaticName.DATA_FROM_KAFKA_SPOUT, new GetDataFromKafkaSpoutBolt(), 10)
               .shuffleGrouping(TopoStaticName.KAFKA_SPOUT);
        // downstream bolts omitted
        return builder;
    }
}
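StringScheme's job in the code above is small but important: it decodes each raw Kafka message payload as a UTF-8 string and hands it to Storm as a one-field tuple. Here is a minimal, standalone sketch of that decoding step (this is an illustration of the idea, not the storm-kafka class itself):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.List;

// Standalone sketch of what storm-kafka's StringScheme does per message:
// decode the payload bytes as UTF-8 and emit them as a single-field tuple.
public class StringSchemeSketch {

    // Mirrors the shape of Scheme#deserialize(ByteBuffer): one message in, one tuple out.
    public static List<Object> deserialize(ByteBuffer payload) {
        String decoded = StandardCharsets.UTF_8.decode(payload).toString();
        return Collections.singletonList(decoded);
    }

    public static void main(String[] args) {
        ByteBuffer msg = ByteBuffer.wrap("hello storm".getBytes(StandardCharsets.UTF_8));
        System.out.println(deserialize(msg)); // prints [hello storm]
    }
}
```

Because the scheme decides the tuple layout, swapping StringScheme for a custom Scheme is how you would emit, say, pre-parsed fields instead of one raw string.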


The static parameter configuration class:

package com.lancy.common.pre;

/**
 * @ClassName:   TopoStaticName
 * @Description: Static values (component IDs) for the topology
 */
public class TopoStaticName {
    // Component IDs of the data-processing topology
    public static final String KAFKA_SPOUT           = "01.KafkaSpout";
    public static final String DATA_FROM_KAFKA_SPOUT = "02.DataFromKafkaSpout";
}

Refer to the blog below for more on KafkaSpout usage.

A follow-up post will cover how to initialize the ZooKeeper node information and how Kafka, Storm, and ZooKeeper are integrated. Keep at it!
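As a preview of the ZooKeeper side: storm-kafka commits each partition's offset as a small JSON blob under a znode derived from the SpoutConfig above. Assuming the usual storm-kafka layout (zkRoot, then consumer group id, then partition), the committed path can be sketched as below; the helper class and the group id here are illustrative, not part of the storm-kafka API:

```java
// Illustrative helper (not a storm-kafka class): builds the znode path where
// storm-kafka commits a partition's offset, assuming the usual layout
//   <zkRoot>/<consumer group id>/partition_<N>
public class OffsetPathSketch {

    public static String committedPath(String zkRoot, String groupId, int partition) {
        return zkRoot + "/" + groupId + "/partition_" + partition;
    }

    public static void main(String[] args) {
        // With the zkRoot configured earlier, "/kafka/storm_offset/<topic>":
        System.out.println(committedPath("/kafka/storm_offset/myTopic", "lnt-group", 3));
        // prints /kafka/storm_offset/myTopic/lnt-group/partition_3
    }
}
```

This is why the local-mode branch above matters: if those znodes live only in the temporary local ZooKeeper, the committed offsets disappear when the cluster shuts down.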
