The producers and consumers of Kafka

Source: Internet
Author: User
Tags: kafka, uuid

I wrote a Kafka demo with a producer and consumers. The consumer side uses a thread pool to create multiple consumers, and by creating consumer counts greater than, equal to, and less than the number of partitions, it validates Kafka's consumer-side load-balancing algorithm. For the algorithm, see: http://blog.csdn.net/qq_20641565/article/details/59746101

Create a Maven project; the program is structured as follows:

pom.xml file

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>kafka-new</groupId>
    <artifactId>kafka-new</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.11</artifactId>
            <version>0.10.1.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.2.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.2.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.2.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>1.0.3</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-server</artifactId>
            <version>1.0.3</version>
        </dependency>
        <dependency>
            <groupId>jdk.tools</groupId>
            <artifactId>jdk.tools</artifactId>
            <version>1.7</version>
            <scope>system</scope>
            <systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
        </dependency>
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpclient</artifactId>
            <version>4.3.6</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.7</source>
                    <target>1.7</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

MyProducer class

package com.lijie.kafka;

import java.util.Properties;
import java.util.UUID;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

/**
 * @author Lijie
 */
public class MyProducer {

    public static void main(String[] args) throws Exception {
        produce();
    }

    public static void produce() throws Exception {
        // topic
        String topic = "mytopic";

        // configuration properties
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "192.168.80.123:9092");

        // serializer types
        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // create the producer
        KafkaProducer<String, String> pro = new KafkaProducer<>(properties);

        while (true) {
            // simulate a message
            String value = UUID.randomUUID().toString();

            // wrap the message
            ProducerRecord<String, String> pr = new ProducerRecord<String, String>(topic, value);

            // send the message
            pro.send(pr);

            // sleep
            Thread.sleep(1000);
        }
    }
}

MyConsumer class

package com.lijie.kafka;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

/**
 * @author Lijie
 */
public class MyConsumer {

    public static void main(String[] args) {
        consumer();
    }

    public static void consumer() {
        String topic = "mytopic";

        // configuration properties
        Properties properties = new Properties();
        properties.put("group.id", "lijiegroup");
        properties.put("zookeeper.connect", "192.168.80.123:2181");
        properties.put("auto.offset.reset", "largest");
        properties.put("auto.commit.interval.ms", "1000");
        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // consumer configuration
        ConsumerConfig config = new ConsumerConfig(properties);

        // create the connector
        ConsumerConnector conn = Consumer.createJavaConsumerConnector(config);

        // key is the topic, value is the number of streams to open for it
        Map<String, Integer> map = new HashMap<String, Integer>();
        map.put(topic, 3);

        // get the partition streams: key is the topic name, value is the list of
        // streams; there are three partitions, so the list holds three streams
        Map<String, List<KafkaStream<byte[], byte[]>>> createMessageStreams = conn.createMessageStreams(map);

        // take out the list of streams for the topic
        List<KafkaStream<byte[], byte[]>> list = createMessageStreams.get(topic);

        // create 3 corresponding consumers with a thread pool
        ExecutorService executor = Executors.newFixedThreadPool(3);

        // start consuming
        for (int i = 0; i < list.size(); i++) {
            executor.execute(new ConsumerThread("consumer" + (i + 1), list.get(i)));
        }
    }
}

ConsumerThread class

package com.lijie.kafka;

import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.message.MessageAndMetadata;

/**
 * @author Lijie
 */
public class ConsumerThread implements Runnable {

    // name of the current consumer
    private String consumerName;

    // stream of the current consumer
    private KafkaStream<byte[], byte[]> stream;

    // constructor
    public ConsumerThread(String consumerName, KafkaStream<byte[], byte[]> stream) {
        super();
        this.consumerName = consumerName;
        this.stream = stream;
    }

    @Override
    public void run() {
        // get the iterator over the current stream's data
        ConsumerIterator<byte[], byte[]> iterator = stream.iterator();

        // consume the data
        while (iterator.hasNext()) {
            // take out a message
            MessageAndMetadata<byte[], byte[]> next = iterator.next();

            // topic name
            String topic = next.topic();

            // partition number
            int partitionNum = next.partition();

            // offset
            long offset = next.offset();

            // message body
            String message = new String(next.message());

            // test print
            System.out.println("consumerName: " + consumerName + ", topic: " + topic
                    + ", partitionNum: " + partitionNum + ", offset: " + offset
                    + ", message: " + message);
        }
    }
}

Execution results:

From the results above you can see that the topic I created is mytopic with 3 partitions: consumer 1 consumes partition 0, consumer 2 consumes partition 1, and consumer 3 consumes partition 2. One consumer corresponds to one partition.

What happens if I change the number of consumers to 4, while there are still only 3 partitions?
As shown in the figure:

The result is the same as above, and consumer 4 has no effect.

But what if I set the number of consumers to less than the number of partitions, say 2?
As shown in the figure:

You can see that consumer 1 consumes two partitions, partition 1 and partition 0, while consumer 2 consumes only partition 2.

This behavior validates the consumer/partition load-balancing algorithm from my last blog post; for details see: http://blog.csdn.net/qq_20641565/article/details/59746101
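The arithmetic behind these results can be sketched in a few lines. This is my own minimal illustration of range-style assignment (the class name RangeAssignmentDemo is mine, not part of the demo project): each consumer gets numPartitions / numConsumers partitions, and the first numPartitions % numConsumers consumers get one extra.

```java
import java.util.ArrayList;
import java.util.List;

public class RangeAssignmentDemo {

    // Returns the partition ids assigned to consumer index i (0-based)
    // out of numConsumers, for a topic with numPartitions partitions.
    static List<Integer> assign(int numPartitions, int numConsumers, int i) {
        int perConsumer = numPartitions / numConsumers;   // base share
        int extra = numPartitions % numConsumers;         // leftover partitions
        // the first `extra` consumers each take one extra partition
        int start = perConsumer * i + Math.min(i, extra);
        int length = perConsumer + (i + 1 > extra ? 0 : 1);
        List<Integer> result = new ArrayList<>();
        for (int p = start; p < start + length; p++) {
            result.add(p);
        }
        return result;
    }

    public static void main(String[] args) {
        // 3 partitions, 2 consumers: consumer 1 gets two partitions
        System.out.println(assign(3, 2, 0)); // [0, 1]
        System.out.println(assign(3, 2, 1)); // [2]
        // 3 partitions, 4 consumers: the 4th consumer gets nothing
        System.out.println(assign(3, 4, 3)); // []
    }
}
```

Running this reproduces the observations above: with 2 consumers one of them takes two partitions, and with 4 consumers the extra consumer is assigned nothing and stays idle.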

Of course, by default the Kafka producer distributes messages evenly across the partitions; if we need to route particular messages to a particular partition, we need a custom partitioner.
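As a rough sketch of what that could look like (the class MyPartitioner and its routing rule are my own example, not from the original demo; the Partitioner interface shown is the one in the kafka-clients 0.10.x API used above), a key-hash partitioner might be:

```java
package com.lijie.kafka;

import java.util.Map;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

// Example custom partitioner: routes by a non-negative hash of the key.
public class MyPartitioner implements Partitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null) {
            return 0; // messages without a key all go to partition 0
        }
        // non-negative hash of the key, modulo the partition count
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }

    @Override
    public void configure(Map<String, ?> configs) {
        // no configuration needed for this example
    }

    @Override
    public void close() {
        // nothing to clean up
    }
}
```

It would be registered on the producer by adding properties.put("partitioner.class", "com.lijie.kafka.MyPartitioner") to the producer configuration, and messages would then be sent with a key so the hash has something to route on.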
