Kafka Notes (II): Kafka Java API Usage


[TOC]


The test code below uses the following topic:

$ kafka-topics.sh --describe --topic hadoop --zookeeper uplooking01:2181,uplooking02:2181,uplooking03:2181
Topic:hadoop    PartitionCount:3        ReplicationFactor:3     Configs:
        Topic: hadoop   Partition: 0    Leader: 103     Replicas: 103,101,102   Isr: 103,101,102
        Topic: hadoop   Partition: 1    Leader: 101     Replicas: 101,102,103   Isr: 101,102,103
        Topic: hadoop   Partition: 2    Leader: 102     Replicas: 102,103,101   Isr: 102,103,101
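For reference, a topic with this layout could have been created beforehand with a command along the following lines (our reconstruction, not shown in the original, assuming the same ZooKeeper quorum):

$ kafka-topics.sh --create --topic hadoop --zookeeper uplooking01:2181,uplooking02:2181,uplooking03:2181 --partitions 3 --replication-factor 3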
Kafka Java API Producer

For instructions on using the producer API, see the code comments of the org.apache.kafka.clients.producer.KafkaProducer class, which describe it in great detail. The program code and a test follow directly below.

Program code: KafkaProducerOps.java
package com.uplooking.bigdata.kafka.producer;

import com.uplooking.bigdata.kafka.constants.Constants;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

/**
 * Produces data to a Kafka topic through this KafkaProducerOps.
 * <p>
 * Producer
 */
public class KafkaProducerOps {
    public static void main(String[] args) throws IOException {
        /*
         * Load the configuration file (format: key=value).
         * Avoid hard-coding values in the code; keep them configurable.
         */
        Properties properties = new Properties();
        InputStream in = KafkaProducerOps.class.getClassLoader().getResourceAsStream("producer.properties");
        properties.load(in);

        /*
         * Two generic parameters:
         * the first is the type of the record key in Kafka,
         * the second is the type of the record value in Kafka.
         */
        String[] girls = new String[]{"Yao Huiying", "Liu Xiangqian", "Zhou Xin", "Yang Liu"};
        Producer<String, String> producer = new KafkaProducer<String, String>(properties);

        String topic = properties.getProperty(Constants.KAFKA_PRODUCER_TOPIC);
        String key = "1";
        String value = "Today's girls are beautiful";
        ProducerRecord<String, String> producerRecord =
                new ProducerRecord<String, String>(topic, key, value);
        producer.send(producerRecord);
        producer.close();
    }
}
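The send() above is asynchronous: it returns a Future, and this example simply relies on close() to flush pending records. As a minimal sketch (our addition, using the callback overload that the producer API also provides), delivery results could be inspected per record:

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.RecordMetadata;

// sketch: asynchronous send with a delivery callback
producer.send(producerRecord, new Callback() {
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        if (exception != null) {
            exception.printStackTrace();  // delivery failed
        } else {
            // delivery acknowledged by the broker
            System.out.println("partition=" + metadata.partition() + ", offset=" + metadata.offset());
        }
    }
});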
Constants.java
package com.uplooking.bigdata.kafka.constants;

public interface Constants {
    /**
     * Constant for the property key of the topic to produce to
     */
    String KAFKA_PRODUCER_TOPIC = "producer.topic";
}
producer.properties
############################# Producer Basics #############################

# list of brokers used for bootstrapping knowledge about the rest of the cluster
# format: host1:port1,host2:port2 ...
bootstrap.servers=uplooking01:9092,uplooking02:9092,uplooking03:9092

# specify the compression codec for all data generated: none, gzip, snappy, lz4
compression.type=none

# name of the partitioner class for partitioning events; default partition spreads data randomly
#partitioner.class=

# the maximum amount of time the client will wait for the response of a request
#request.timeout.ms=

# how long KafkaProducer.send and KafkaProducer.partitionsFor will block for
#max.block.ms=

# the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together
#linger.ms=

# the maximum size of a request in bytes
#max.request.size=

# the default batch size in bytes when batching multiple records sent to a partition
#batch.size=

# the total bytes of memory the producer can use to buffer records waiting to be sent to the server
#buffer.memory=

##### set the custom topic
producer.topic=hadoop

key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer

This is essentially the configuration file shipped in Kafka's conf directory, with the corresponding changes made here. For the meaning of each field, see the code comments of the org.apache.kafka.clients.producer.KafkaProducer class.
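Equivalently, and purely as an illustrative sketch (the original loads the file shown above), the same settings could be supplied in code through the ProducerConfig constants; the broker list here is simply the one from this cluster:

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

Properties props = new Properties();
// same values as producer.properties, hard-coded only for illustration
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "uplooking01:9092,uplooking02:9092,uplooking03:9092");
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "none");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");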

Test

Start a console consumer in a terminal to listen for messages on the topic:

$ kafka-console-consumer.sh --topic hadoop --zookeeper uplooking01:2181

Then run the producer program and check the terminal output:

$ kafka-console-consumer.sh --topic hadoop --zookeeper uplooking01:2181
Today's girls are beautiful
Kafka Java API Consumer

Program code: KafkaConsumerOps.java
package com.uplooking.bigdata.kafka.consumer;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;
import java.util.Collection;
import java.util.Properties;

public class KafkaConsumerOps {
    public static void main(String[] args) throws IOException {
        Properties properties = new Properties();
        InputStream in = KafkaConsumerOps.class.getClassLoader().getResourceAsStream("consumer.properties");
        properties.load(in);

        Consumer<String, String> consumer = new KafkaConsumer<String, String>(properties);
        Collection<String> topics = Arrays.asList("hadoop");
        // subscribe to the topic
        consumer.subscribe(topics);

        ConsumerRecords<String, String> consumerRecords = null;
        while (true) {
            // pull data from the topic
            consumerRecords = consumer.poll(1000);
            // traverse each record
            for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
                long offset = consumerRecord.offset();
                int partition = consumerRecord.partition();
                Object key = consumerRecord.key();
                Object value = consumerRecord.value();
                System.out.format("%d\t%d\t%s\t%s\n", offset, partition, key, value);
            }
        }
    }
}
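Note that the poll loop above never exits, so this consumer is only ever stopped by killing the process. As a hedged sketch (our addition; wakeup() is part of the consumer API), a clean shutdown could be wired up like this, assuming the consumer variable is final:

import org.apache.kafka.common.errors.WakeupException;

// sketch: let a shutdown hook interrupt the blocked poll()
Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
    public void run() {
        consumer.wakeup();  // makes poll() throw WakeupException
    }
}));
try {
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(1000);
        // ... process records as above ...
    }
} catch (WakeupException e) {
    // expected during shutdown; nothing to do
} finally {
    consumer.close();  // leave the group cleanly
}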
consumer.properties
# Zookeeper connection string
# comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
zookeeper.connect=uplooking01:2181,uplooking02:2181,uplooking03:2181

bootstrap.servers=uplooking01:9092,uplooking02:9092,uplooking03:9092

# timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

# consumer group id
group.id=test-consumer-group

# consumer timeout
#consumer.timeout.ms=5000

key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
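One aside, offered as an assumption-labeled sketch rather than part of the original walkthrough: with this file the consumer keeps the default automatic offset commit. To commit only after records have actually been processed, auto-commit could be disabled and commitSync() called explicitly:

// sketch: disable auto-commit before creating the consumer
properties.put("enable.auto.commit", "false");

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(1000);
    for (ConsumerRecord<String, String> record : records) {
        // ... process the record ...
    }
    consumer.commitSync();  // commit offsets only after processing succeeds
}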
Test

Execute the consumer's code first, then execute the producer's code, and the following output can be seen in the consumer terminal:

2   0   1   Today's girls are beautiful
(columns: offset, partition, key, value)
Kafka Java API Partition

We can customize a partitioner to determine which partition our messages are stored in; we only need to implement the Partitioner interface in our code.

Program code: MyKafkaPartitioner.java
package com.uplooking.bigdata.kafka.partitioner;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

import java.util.Map;
import java.util.Random;

/**
 * A custom partitioner that partitions records by their key.
 * <p>
 * Partitioning could be based on the hashCode of the key or the value,
 * or on whatever business rule is needed to spread data across partitions.
 * Requirement here: take the hashCode of the user-supplied key modulo the
 * number of partitions.
 */
public class MyKafkaPartitioner implements Partitioner {

    public void configure(Map<String, ?> configs) {
    }

    /**
     * Determine the partition for the given record.
     *
     * @param topic      topic name
     * @param key        key
     * @param keyBytes   serialized key
     * @param value      value
     * @param valueBytes serialized value
     * @param cluster    metadata of the current cluster
     */
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
        Integer partitionNums = cluster.partitionCountForTopic(topic);
        int targetPartition = -1;
        if (key == null || keyBytes == null) {
            targetPartition = new Random().nextInt(10000) % partitionNums;
        } else {
            int hashCode = key.hashCode();
            targetPartition = hashCode % partitionNums;
            System.out.println("key: " + key + ", value: " + value + ", hashCode: " + hashCode + ", partition: " + targetPartition);
        }
        return targetPartition;
    }

    public void close() {
    }
}
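A caveat worth flagging (our note, not the original author's): String.hashCode() can be negative, and a negative hashCode % partitionNums would make partition() return an invalid negative partition. The ten numeric keys used below all happen to hash to positive values, so the example works, but a defensive version of that line would mask off the sign bit:

// sketch: keep the modulo result non-negative for arbitrary keys
targetPartition = (hashCode & Integer.MAX_VALUE) % partitionNums;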
KafkaProducerOps.java (modified)
package com.uplooking.bigdata.kafka.producer;

import com.uplooking.bigdata.kafka.constants.Constants;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;
import java.util.Random;

/**
 * Produces data to a Kafka topic through this KafkaProducerOps.
 * <p>
 * Producer
 */
public class KafkaProducerOps {
    public static void main(String[] args) throws IOException {
        /*
         * Load the configuration file (format: key=value).
         * Avoid hard-coding values in the code; keep them configurable.
         */
        Properties properties = new Properties();
        InputStream in = KafkaProducerOps.class.getClassLoader().getResourceAsStream("producer.properties");
        properties.load(in);

        /*
         * Two generic parameters:
         * the first is the type of the record key in Kafka,
         * the second is the type of the record value in Kafka.
         */
        String[] girls = new String[]{"Yao Huiying", "Liu Xiangqian", "Zhou Xin", "Yang Liu"};
        Producer<String, String> producer = new KafkaProducer<String, String>(properties);

        Random random = new Random();
        int start = 1;
        for (int i = start; i <= start + 9; i++) {
            String topic = properties.getProperty(Constants.KAFKA_PRODUCER_TOPIC);
            String key = i + "";
            String value = "Today's <--" + girls[random.nextInt(girls.length)] + "--> is very, very beautiful~";
            ProducerRecord<String, String> producerRecord =
                    new ProducerRecord<String, String>(topic, key, value);
            producer.send(producerRecord);
        }
        producer.close();
    }
}

Continue to use the previous consumer code, and additionally specify our custom partitioner in producer.properties, as follows:

partitioner.class=com.uplooking.bigdata.kafka.partitioner.MyKafkaPartitioner
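If you would rather not edit the properties file, the same setting can also be applied in code; a sketch using the ProducerConfig constant:

import org.apache.kafka.clients.producer.ProducerConfig;

// equivalent to the partitioner.class line above
properties.put(ProducerConfig.PARTITIONER_CLASS_CONFIG,
        "com.uplooking.bigdata.kafka.partitioner.MyKafkaPartitioner");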
Test

Execute the consumer code first and then execute the producer code to see the terminal output.

Producer terminal output (mainly what the custom partitioner prints; note that for a one-character key such as "1" the String hashCode is simply its character code, 49, while "10" hashes to 31*49 + 48 = 1567):

key: 1, value: Today's <--Liu Xiangqian--> is very, very beautiful~, hashCode: 49, partition: 1
key: 2, value: Today's <--Yang Liu--> is very, very beautiful~, hashCode: 50, partition: 2
key: 3, value: Today's <--Yao Huiying--> is very, very beautiful~, hashCode: 51, partition: 0
key: 4, value: Today's <--Zhou Xin--> is very, very beautiful~, hashCode: 52, partition: 1
key: 5, value: Today's <--Liu Xiangqian--> is very, very beautiful~, hashCode: 53, partition: 2
key: 6, value: Today's <--Zhou Xin--> is very, very beautiful~, hashCode: 54, partition: 0
key: 7, value: Today's <--Zhou Xin--> is very, very beautiful~, hashCode: 55, partition: 1
key: 8, value: Today's <--Liu Xiangqian--> is very, very beautiful~, hashCode: 56, partition: 2
key: 9, value: Today's <--Yang Liu--> is very, very beautiful~, hashCode: 57, partition: 0
key: 10, value: Today's <--Yao Huiying--> is very, very beautiful~, hashCode: 1567, partition: 1

Consumer Terminal output:

3   0   3   Today's <--Yao Huiying--> is very, very beautiful~
4   0   6   Today's <--Zhou Xin--> is very, very beautiful~
5   0   9   Today's <--Yang Liu--> is very, very beautiful~
0   2   2   Today's <--Yang Liu--> is very, very beautiful~
1   2   5   Today's <--Liu Xiangqian--> is very, very beautiful~
2   2   8   Today's <--Liu Xiangqian--> is very, very beautiful~
1   1   1   Today's <--Liu Xiangqian--> is very, very beautiful~
2   1   4   Today's <--Zhou Xin--> is very, very beautiful~
3   1   7   Today's <--Zhou Xin--> is very, very beautiful~
4   1   10  Today's <--Yao Huiying--> is very, very beautiful~
(columns: offset, partition, key, value)

