Storm consuming Kafka for real-time computing

Overall architecture


* Deploy one log agent per application instance
* The agent ships logs to Kafka in real time (a producer sketch follows this list)
* Storm processes the logs in real time
* Storm writes the computed results to HBase
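
The agent-to-Kafka leg can be as simple as a Kafka producer wrapped around the application's logging. A minimal sketch, assuming the Kafka 0.8.2 producer API and the span-data-topic topic used later in this article; the broker address, class name, and sample record are assumptions, not part of the original:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class LogAgent {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "132.122.252.51:9092"); // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        // One JSON record per logged call, carrying the fields JsonBolt (below) expects.
        String record = "{\"hostIp\":\"10.0.0.1\",\"instanceName\":\"app-1\","
                + "\"className\":\"OrderService\",\"methodName\":\"create\","
                + "\"createTime\":1477000000000,\"callTime\":12,\"errorCode\":0}";
        producer.send(new ProducerRecord<String, String>("span-data-topic", record));
        producer.close();
    }
}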

Storm consuming Kafka
    • Create a real-time computing project and add the Storm and Kafka dependencies:
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-core</artifactId>
    <version>1.0.2</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-kafka</artifactId>
    <version>1.0.2</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>0.8.2.0</version>
</dependency>
    • Create a spout that consumes from Kafka; the KafkaSpout shipped with storm-kafka can be used directly.
    • Create a bolt that handles the data read from Kafka. It parses the JSON strings read from Kafka and emits the extracted fields to the next bolt for further processing (the next bolt is not written here; any class that extends BaseRichBolt can process the tuples, and a minimal sketch of one follows the JsonBolt code below).
import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;
import org.json.simple.JSONValue;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class JsonBolt extends BaseRichBolt {
    private static final Logger LOG = LoggerFactory.getLogger(JsonBolt.class);

    private Fields fields;
    private OutputCollector collector;

    public JsonBolt() {
        this.fields = new Fields("hostIp", "instanceName", "className",
                "methodName", "createTime", "callTime", "errorCode");
    }

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple tuple) {
        String spanDataJson = tuple.getString(0);
        LOG.info("source data: {}", spanDataJson);
        // Parse the JSON string (json-simple) and pull out each declared field in order.
        Map<String, Object> map = (Map<String, Object>) JSONValue.parse(spanDataJson);
        Values values = new Values();
        for (int i = 0, size = this.fields.size(); i < size; i++) {
            values.add(map.get(this.fields.get(i)));
        }
        // Anchor the emit to the input tuple, then ack it.
        this.collector.emit(tuple, values);
        this.collector.ack(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(this.fields);
    }
}
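
A minimal sketch of such a downstream bolt, for illustration only (the class name and the error-logging behavior are assumptions, not part of the original article; in the architecture above, this is the stage that would write results to HBase):

import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ErrorLogBolt extends BaseRichBolt {
    private static final Logger LOG = LoggerFactory.getLogger(ErrorLogBolt.class);
    private OutputCollector collector;

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple tuple) {
        // Field names match those declared by JsonBolt; log tuples that carry an errorCode.
        Object errorCode = tuple.getValueByField("errorCode");
        if (errorCode != null) {
            LOG.warn("error {} in {}.{} on {}", errorCode,
                    tuple.getValueByField("className"),
                    tuple.getValueByField("methodName"),
                    tuple.getValueByField("hostIp"));
        }
        this.collector.ack(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // Terminal bolt in this sketch: nothing is emitted downstream.
    }
}
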
    • Create the topology MyTopology. First configure the KafkaSpout through SpoutConfig, which takes ZooKeeper's address and port, the ZK root node, and the spout id; the spout registered under KAFKA_SPOUT_ID is then connected to the JsonBolt via shuffleGrouping.
import java.util.Arrays;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.StormSubmitter;
import org.apache.storm.kafka.BrokerHosts;
import org.apache.storm.kafka.KafkaSpout;
import org.apache.storm.kafka.SpoutConfig;
import org.apache.storm.kafka.StringScheme;
import org.apache.storm.kafka.ZkHosts;
import org.apache.storm.spout.SchemeAsMultiScheme;
import org.apache.storm.topology.TopologyBuilder;

public class MyTopology {
    private static final String TOPOLOGY_NAME = "span-data-topology";
    private static final String KAFKA_SPOUT_ID = "kafka-stream";
    private static final String JSON_PROJECT_BOLT_ID = "jsonProject-bolt";

    public static void main(String[] args) throws Exception {
        String zks = "132.122.252.51:2181";
        String topic = "span-data-topic";
        String zkRoot = "/kafka-storm";

        // KafkaSpout configuration: ZooKeeper hosts, topic, ZK root path and spout id.
        BrokerHosts brokerHosts = new ZkHosts(zks);
        SpoutConfig spoutConf = new SpoutConfig(brokerHosts, topic, zkRoot, KAFKA_SPOUT_ID);
        spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
        spoutConf.zkServers = Arrays.asList(new String[] { "132.122.252.51" });
        spoutConf.zkPort = 2181;

        JsonBolt jsonBolt = new JsonBolt();

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout(KAFKA_SPOUT_ID, new KafkaSpout(spoutConf));
        builder.setBolt(JSON_PROJECT_BOLT_ID, jsonBolt).shuffleGrouping(KAFKA_SPOUT_ID);

        Config config = new Config();
        config.setNumWorkers(1);
        if (args.length == 0) {
            // Local test: run for a while, then kill the topology and shut down.
            LocalCluster cluster = new LocalCluster();
            cluster.submitTopology(TOPOLOGY_NAME, config, builder.createTopology());
            // The original called a waitForSeconds helper with a garbled duration;
            // a plain sleep with a placeholder duration is used here.
            Thread.sleep(60 * 1000);
            cluster.killTopology(TOPOLOGY_NAME);
            cluster.shutdown();
        } else {
            // Cluster mode: the first argument is the topology name.
            StormSubmitter.submitTopology(args[0], config, builder.createTopology());
        }
    }
}
    • Run it without arguments for a local test; to submit to a cluster, pass the topology name as the argument.
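
For example, assuming the topology is packaged as realtime-compute.jar and MyTopology lives in the com.example package (both names hypothetical), a cluster submission would look like:

storm jar realtime-compute.jar com.example.MyTopology span-data-topology
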
    • Note also that KafkaSpout resumes by default from the offset where the last run stopped rather than consuming from the beginning, because it commits the current Kafka offset to ZooKeeper every 2 seconds by default. If you want every run to consume from the beginning, this can be changed through configuration, for example:
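
A minimal sketch of that configuration, applied to the SpoutConfig in MyTopology above (ignoreZkOffsets, startOffsetTime, and stateUpdateIntervalMs are storm-kafka 1.0.2 fields):

// Ignore the offset saved in ZooKeeper and always start from the earliest message.
spoutConf.ignoreZkOffsets = true;
spoutConf.startOffsetTime = kafka.api.OffsetRequest.EarliestTime();
// The 2-second offset commit interval mentioned above is also configurable:
// spoutConf.stateUpdateIntervalMs = 2000;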
