Kafka (v): The consumption programming model of Kafka


Kafka's consumption model is divided into two types:

1. Partitioned consumption model

2. Group consumption model

1. The partitioned consumption model

In the partitioned model, each consumer reads directly from one or more specific partitions and tracks its own offsets, so the application decides exactly which partition each consumer process handles.

2. The group consumption model

In the group model, consumers join a consumer group and Kafka distributes the topic's partitions among the group's members, so that each partition is consumed by exactly one consumer in the group; with the 0.8-era client used below, group membership and committed offsets are tracked in ZooKeeper.
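The core guarantee of the group consumption model is that, within one group, no partition is read by two consumers at once. A minimal sketch of that assignment idea (a hypothetical round-robin helper for illustration, not Kafka's actual rebalancing code):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class GroupAssignmentSketch {
    // Assign each partition to exactly one consumer, round-robin.
    // Within a group, a partition never has two simultaneous readers.
    static Map<Integer, String> assign(int partitions, List<String> consumers) {
        Map<Integer, String> owner = new LinkedHashMap<Integer, String>();
        for (int p = 0; p < partitions; p++) {
            owner.put(p, consumers.get(p % consumers.size()));
        }
        return owner;
    }

    public static void main(String[] args) {
        List<String> group = new ArrayList<String>();
        group.add("consumer-1");
        group.add("consumer-2");
        group.add("consumer-3");
        // 6 partitions spread over 3 consumers: each consumer owns 2 partitions
        System.out.println(assign(6, group));
        // prints {0=consumer-1, 1=consumer-2, 2=consumer-3, 3=consumer-1, 4=consumer-2, 5=consumer-3}
    }
}
```

If a consumer joins or leaves the group, Kafka reruns this kind of assignment (a rebalance), which is why adding consumers beyond the partition count leaves some of them idle.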

Producer:

package cn.outofmemory.kafka;

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

/**
 * Hello world!
 */
public class KafkaProducer {

    private final Producer<String, String> producer;
    public final static String TOPIC = "test-topic";

    private KafkaProducer() {
        Properties props = new Properties();
        // the Kafka broker address and port are configured here
        props.put("metadata.broker.list", "192.168.193.148:9092");
        // serializer class for the message value
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // serializer class for the message key
        props.put("key.serializer.class", "kafka.serializer.StringEncoder");
        // request.required.acks:
        //  0: the producer never waits for an acknowledgement from the broker
        //     (the same behavior as 0.7). This option provides the lowest latency
        //     but the weakest durability guarantees (some data will be lost when
        //     a server fails).
        //  1: the producer gets an acknowledgement after the leader replica has
        //     received the data. This option provides better durability, as the
        //     client waits until the server acknowledges the request as successful
        //     (only messages that were written to a now-dead leader and not yet
        //     replicated will be lost).
        // -1: the producer gets an acknowledgement after all in-sync replicas have
        //     received the data. This option provides the best durability: no
        //     messages are lost as long as at least one in-sync replica remains.
        props.put("request.required.acks", "1");
        producer = new Producer<String, String>(new ProducerConfig(props));
    }

    void produce() {
        int messageNo = 1000;
        final int COUNT = 10000;
        while (messageNo < COUNT) {
            String key = String.valueOf(messageNo);
            String data = "hello kafka message " + key;
            producer.send(new KeyedMessage<String, String>(TOPIC, key, data));
            System.out.println(data);
            messageNo++;
        }
    }

    public static void main(String[] args) {
        new KafkaProducer().produce();
    }
}
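The producer above sends keyed messages, and with the default partitioner of this client generation the target partition is derived from the key's hash modulo the partition count, so the same key always lands on the same partition. A minimal sketch of that idea (an illustrative helper, not the actual kafka.producer.DefaultPartitioner source):

```java
public class KeyPartitionSketch {
    // Pick a partition from the message key: hash mod partition count.
    // The sign bit is masked off (rather than Math.abs) to stay safe
    // for the Integer.MIN_VALUE hash-code corner case.
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // the producer example uses keys "1000".."9999";
        // the same key always maps to the same partition
        System.out.println(partitionFor("1000", 3) == partitionFor("1000", 3)); // prints true
        System.out.println(partitionFor("1000", 3)); // some value in [0, 3)
    }
}
```

This is why keyed sends preserve per-key ordering: all messages for one key go through one partition, and a partition is consumed in order.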
Consumer:
package cn.outofmemory.kafka;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.serializer.StringDecoder;
import kafka.utils.VerifiableProperties;

public class KafkaConsumer {

    private final ConsumerConnector consumer;

    private KafkaConsumer() {
        Properties props = new Properties();
        // ZooKeeper configuration
        props.put("zookeeper.connect", "192.168.193.148:2181");
        // group.id identifies a consumer group
        props.put("group.id", "jd-group");
        // ZooKeeper connection timeout
        props.put("zookeeper.session.timeout.ms", "4000");
        props.put("zookeeper.sync.time.ms", "200");
        props.put("auto.commit.interval.ms", "1000");
        props.put("auto.offset.reset", "smallest");
        // serializer class
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        ConsumerConfig config = new ConsumerConfig(props);
        consumer = kafka.consumer.Consumer.createJavaConsumerConnector(config);
    }

    void consume() {
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(KafkaProducer.TOPIC, new Integer(1));

        StringDecoder keyDecoder = new StringDecoder(new VerifiableProperties());
        StringDecoder valueDecoder = new StringDecoder(new VerifiableProperties());

        // get the input streams for the topic
        Map<String, List<KafkaStream<String, String>>> consumerMap =
                consumer.createMessageStreams(topicCountMap, keyDecoder, valueDecoder);
        KafkaStream<String, String> stream = consumerMap.get(KafkaProducer.TOPIC).get(0);
        ConsumerIterator<String, String> it = stream.iterator();
        // print each received message
        while (it.hasNext())
            System.out.println(it.next().message());
    }

    public static void main(String[] args) {
        new KafkaConsumer().consume();
    }
}

With that, this round of Kafka study comes to an end; next, back to Spring for some review.
