Simple Java code for common APIs in Kafka

Source: Internet
Author: User
Tags: zookeeper

With the earlier introductions to Kafka's distributed message queue and to cluster installation behind us, we already have a basic understanding of Kafka. This article focuses on the operations most commonly performed from Java code.

Preparation: add the Kafka dependency

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.2.0</version>
</dependency>
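Note that the classes used in the samples below (kafka.admin.TopicCommand, kafka.javaapi.producer.Producer, kafka.consumer.Consumer) belong to the old Scala client, which ships in the Kafka core artifact rather than in kafka-clients; kafka.admin.DeleteTopicCommand in particular only exists in older 0.8.x releases (newer versions delete topics through kafka-topics.sh / TopicCommand with --delete). If you want to compile the samples as written, you would also need a core dependency along these lines (the Scala suffix and version here are an assumption chosen to match the article's 0.10.2.0):

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.10.2.0</version>
</dependency>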
First, topic operations in Kafka
package org.kafka;

import kafka.admin.DeleteTopicCommand;
import kafka.admin.TopicCommand;

/**
 * Kafka topic operations
 */
public class TopicDemo {

    /**
     * Create a topic
     * Equivalent shell command: bin/kafka-topics.sh --create --zookeeper 192.168.2.100:2181 --replication-factor 3 --partitions 1 --topic topictest0416
     */
    public static void createTopic() {
        String[] options = new String[] { "--create", "--zookeeper", "192.168.2.100:2181",
                "--replication-factor", "3", "--partitions", "1", "--topic", "topictest0416" };
        TopicCommand.main(options);
    }

    /**
     * List all topics
     * Equivalent shell command: bin/kafka-topics.sh --list --zookeeper 192.168.2.100:2181
     */
    public static void queryTopic() {
        String[] options = new String[] { "--list", "--zookeeper", "192.168.2.100:2181" };
        TopicCommand.main(options);
    }

    /**
     * Show partition and replica status for the given topic
     * Equivalent shell command: bin/kafka-topics.sh --describe --zookeeper 192.168.2.100:2181 --topic topictest0416
     */
    public static void queryTopicByName() {
        String[] options = new String[] { "--describe", "--zookeeper", "192.168.2.100:2181",
                "--topic", "topictest0416" };
        TopicCommand.main(options);
    }

    /**
     * Alter a topic (here: increase the partition count)
     * Equivalent shell command: bin/kafka-topics.sh --zookeeper 192.168.2.100:2181 --alter --topic topictest0416 --partitions 3
     */
    public static void alterTopic() {
        String[] options = new String[] { "--alter", "--zookeeper", "192.168.2.100:2181",
                "--topic", "topictest0416", "--partitions", "3" };
        TopicCommand.main(options);
    }

    /**
     * Delete a topic
     */
    public static void delTopic() {
        String[] options = new String[] { "--zookeeper", "192.168.2.100:2181", "--topic", "topictest0416" };
        DeleteTopicCommand.main(options);
    }
}
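For completeness, a minimal entry point that exercises one of the helpers above could look like the sketch below; the TopicDemoMain class is hypothetical and not part of the original article.

package org.kafka;

// Hypothetical driver class, not part of the original article.
public class TopicDemoMain {
    public static void main(String[] args) {
        // Create the topic. In some Kafka versions TopicCommand.main calls System.exit
        // when it finishes, so it is safest to run each operation as its own program.
        TopicDemo.createTopic();
    }
}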
Second, producer code
package org.kafka;

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class ProducerDemo {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        // ZooKeeper cluster list
        props.put("zk.connect", "hadoop1-1:2181,hadoop1-2:2181,hadoop1-3:2181");
        // Broker list
        props.put("metadata.broker.list", "hadoop1-1:9092,hadoop1-2:9092,hadoop1-3:9092");
        // Class used to serialize message values
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        ProducerConfig config = new ProducerConfig(props);

        // Construct the producer object
        Producer<String, String> producer = new Producer<String, String>(config);

        // Send business messages (in practice these might be read from a file or an in-memory database)
        for (int i = 0; i < 100; i++) { // the loop bound is missing in the original text; 100 is assumed here
            Thread.sleep(500);
            KeyedMessage<String, String> km = new KeyedMessage<String, String>(
                    "topictest0416", "I am producer " + i + ", hello!");
            producer.send(km);
        }
    }
}
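The code above uses the old Scala producer API. With the kafka-clients 0.10.2.0 artifact declared in the preparation step, the same thing can be written against the new org.apache.kafka.clients.producer.KafkaProducer. This is a minimal sketch, reusing the broker list from the example above and assuming a loop bound of 100:

package org.kafka;

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class NewProducerDemo {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        // The new client talks to the brokers directly; no ZooKeeper address is needed.
        props.put("bootstrap.servers", "hadoop1-1:9092,hadoop1-2:9092,hadoop1-3:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(props);
        for (int i = 0; i < 100; i++) { // loop bound chosen for illustration
            Thread.sleep(500);
            producer.send(new ProducerRecord<>("topictest0416", "I am producer " + i + ", hello!"));
        }
        producer.close();
    }
}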
Third, consumer code
package org.kafka;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class ConsumerDemo {
    private static final String topic = "topictest0416";
    private static final Integer threads = 1;

    public static void main(String[] args) {
        Properties props = new Properties();
        // ZooKeeper cluster list
        props.put("zookeeper.connect", "hadoop1-1:2181,hadoop1-2:2181,hadoop1-3:2181");
        // Consumer group id
        props.put("group.id", "001");
        // Where to start reading offsets; "smallest" means start from the earliest offset
        props.put("auto.offset.reset", "smallest");

        // Wrap the properties into a consumer configuration object
        ConsumerConfig config = new ConsumerConfig(props);
        ConsumerConnector consumer = Consumer.createJavaConsumerConnector(config);

        // Key: the topic to consume; value: the number of consumer threads
        Map<String, Integer> topicMap = new HashMap<>();
        topicMap.put(topic, threads);

        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer.createMessageStreams(topicMap);
        List<KafkaStream<byte[], byte[]>> streams = consumerMap.get(topic);

        for (final KafkaStream<byte[], byte[]> kafkaStream : streams) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    for (MessageAndMetadata<byte[], byte[]> mm : kafkaStream) {
                        System.out.println(new String(mm.message()));
                    }
                }
            }).start();
        }
    }
}
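Likewise, the old high-level consumer shown above can be replaced by the new org.apache.kafka.clients.consumer.KafkaConsumer from kafka-clients. This is a minimal sketch under the same broker, group, and topic assumptions; "earliest" is the new client's equivalent of the old "smallest" setting:

package org.kafka;

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NewConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "hadoop1-1:9092,hadoop1-2:9092,hadoop1-3:9092");
        props.put("group.id", "001");
        // Start from the earliest offset when no committed offset exists
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("topictest0416"));
        while (true) {
            // poll(long) is the signature available in the 0.10.x client
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}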
Fourth, testing

Start the consumer first, then start the producer.

Test result: the consumer console should print the messages sent by the producer.
