Looking at how Kafka is run under Windows and under Linux, it is clear that once the environment is set up, the logical steps below are always the same, regardless of platform:
Start ZooKeeper
Start Kafka
Create a topic
Start a producer and send messages
Start a consumer and consume messages
The examples below use a running environment under Windows (see the author's blog for how to build it) to show how Kafka is used from a program:
Step one: open cmd and run ZooKeeper:
zkserver
Step two: enter the Kafka directory and start Kafka:
.\bin\windows\kafka-server-start.bat .\config\server.properties
===========================================================
" Implementing Scenario One: Creating topic in a program"
(Create Maven project, add dependencies)
<dependencies>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.10</artifactId>
<version>0.8.2.2</version>
</dependency>
</dependencies>
(2) Create the message producer:
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
import java.util.Properties;

public class MsgProducer {
    private static Producer<String, String> producer;
    private final Properties properties = new Properties();

    public MsgProducer() {
        // Configure the broker list to connect to
        properties.put("metadata.broker.list", "127.0.0.1:9092");
        // Configure the message serializer
        properties.put("serializer.class", "kafka.serializer.StringEncoder");
        producer = new Producer<String, String>(new ProducerConfig(properties));
    }

    public static void main(String[] args) {
        MsgProducer msgProducer = new MsgProducer();
        // Define the topic
        String topic = "testkafka";
        // Define the message to send
        String msg = "2017.11.03, kafka test";
        // Build the message object
        KeyedMessage<String, String> data = new KeyedMessage<String, String>(topic, msg);
        // Send the message, then release the producer's resources
        producer.send(data);
        producer.close();
    }
}
(3) Create the message consumer:
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class MsgConsumer {
    private ConsumerConnector consumer;
    private String topic;

    public MsgConsumer(String zookeeper, String groupId, String topic) {
        Properties properties = new Properties();
        // Configure the ZooKeeper connection
        properties.put("zookeeper.connect", zookeeper);
        // Configure the consumer group
        properties.put("group.id", groupId);
        // Configure the ZooKeeper session timeout
        properties.put("zookeeper.session.timeout.ms", "500");
        // Configure the offset auto-commit interval
        properties.put("auto.commit.interval.ms", "1000");
        consumer = Consumer.createJavaConsumerConnector(new ConsumerConfig(properties));
        this.topic = topic;
    }

    public void testConsumer() {
        Map<String, Integer> topicCount = new HashMap<String, Integer>();
        // Define how many streams to subscribe for the topic
        topicCount.put(topic, new Integer(1));
        // Get the message streams for all subscribed topics
        Map<String, List<KafkaStream<byte[], byte[]>>> consumerStreams =
                consumer.createMessageStreams(topicCount);
        // Take out the message streams for the topic we need
        List<KafkaStream<byte[], byte[]>> streams = consumerStreams.get(topic);
        for (final KafkaStream<byte[], byte[]> stream : streams) {
            ConsumerIterator<byte[], byte[]> consumerIte = stream.iterator();
            while (consumerIte.hasNext()) {
                System.out.println(new String(consumerIte.next().message()));
            }
        }
        if (consumer != null) {
            consumer.shutdown();
        }
    }

    public static void main(String[] args) {
        String topic = "testkafka";
        MsgConsumer msgConsumer = new MsgConsumer("127.0.0.1:2181", "msg", topic);
        msgConsumer.testConsumer();
    }
}
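The consumer above turns each message payload into text with `new String(byte[])`, which uses the platform default charset; passing an explicit charset is safer and platform-independent. A minimal self-contained sketch of that decoding step (no Kafka required; `DecodeDemo` and `decode` are illustrative names, not part of the original code):

```java
import java.nio.charset.StandardCharsets;

public class DecodeDemo {
    // Decode a message payload the way the consumer above does,
    // but with an explicit charset instead of the platform default.
    static String decode(byte[] payload) {
        return new String(payload, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] payload = "2017.11.03, kafka test".getBytes(StandardCharsets.UTF_8);
        System.out.println(decode(payload));
    }
}
```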
Start the consumer first, then start the message producer; the output of each program is shown below:
Producer:
Consumer:
=============================================================
" Implementation Scenario two: Turn to step three below, write all to the configuration file, and start the multithreaded execution program "
Step three : Enter the Kafka file directory D:\kafka_2.12-0.11.0.0\bin\windows, create Kafka messages topics
Kafka-topics.bat--create--zookeeper localhost:2181--replication-factor 1--partitions 5--topic test20171103
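The topic above is created with 5 partitions. With the 0.8-era producer used in this post, a keyed message is routed to a partition by hashing the key modulo the partition count; the sketch below mirrors that hash-mod idea (illustrative only, `PartitionDemo` is not Kafka code, and Kafka's own DefaultPartitioner is authoritative):

```java
public class PartitionDemo {
    // Sketch of how a keyed message could map to one of N partitions,
    // in the spirit of Kafka 0.8's hash-mod default partitioner.
    // (Illustrative: ignores edge cases such as Integer.MIN_VALUE hash codes.)
    static int partitionFor(String key, int numPartitions) {
        return Math.abs(key.hashCode()) % numPartitions;
    }

    public static void main(String[] args) {
        // The same key always lands on the same partition.
        System.out.println("key user-42 -> partition " + partitionFor("user-42", 5));
    }
}
```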
Step four: implement the producer and the consumer in code to produce and consume messages
(1) Create a Maven project and add the following dependency:
<dependencies>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.10</artifactId>
<version>0.8.2.2</version>
</dependency>
</dependencies>
(2) Put the configuration values in a separate file, KafkaConf.java, so they are easy to maintain and modify:
public interface KafkaConf {
    String zookeeperConnect = "127.0.0.1:2181";
    String groupId = "group";
    String topic1 = "test20171103";
    String brokerList = "127.0.0.1:9092";
    String zkSessionTimeout = "20000";
    String zkSyncTime = "200"; // the original value is unreadable in the source; 200 ms is shown here as a typical setting
    String reconnectInterval = "1000";
}
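Fields declared in a Java interface are implicitly public, static, and final, which is why the producer and consumer below can read values such as KafkaConf.brokerList without instantiating anything. A self-contained demonstration (DemoConf is a hypothetical stand-in for the KafkaConf interface above):

```java
public class ConfDemo {
    // Interface fields are implicitly public static final, so they
    // behave as shared constants. DemoConf stands in for KafkaConf.
    interface DemoConf {
        String brokerList = "127.0.0.1:9092";
        String topic = "test20171103";
    }

    public static void main(String[] args) {
        // Read the constants without creating any instance.
        System.out.println(DemoConf.brokerList + " / " + DemoConf.topic);
    }
}
```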
(3) Implement the message producer:
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
import java.util.Properties;

public class KafkaProducer extends Thread {
    private Producer<Integer, String> producer;
    private String topic;

    public KafkaProducer(String topic) {
        Properties properties = new Properties();
        properties.put("serializer.class", "kafka.serializer.StringEncoder");
        properties.put("metadata.broker.list", KafkaConf.brokerList);
        producer = new Producer<>(new ProducerConfig(properties));
        this.topic = topic;
    }

    @Override
    public void run() {
        int messageNo = 1;
        while (true) {
            String message = "message_" + messageNo;
            System.out.println("send: " + message);
            producer.send(new KeyedMessage<>(topic, message));
            messageNo++;
            try {
                sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
(4) Implement the message consumer:
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class KafkaConsumer extends Thread {
    private ConsumerConnector consumer;
    private String topic;

    public KafkaConsumer(String topic) {
        Properties properties = new Properties();
        properties.put("zookeeper.connect", KafkaConf.zookeeperConnect);
        properties.put("group.id", KafkaConf.groupId);
        properties.put("zookeeper.session.timeout.ms", KafkaConf.zkSessionTimeout);
        properties.put("zookeeper.sync.time.ms", KafkaConf.zkSyncTime);
        properties.put("auto.commit.interval.ms", KafkaConf.reconnectInterval);
        consumer = Consumer.createJavaConsumerConnector(new ConsumerConfig(properties));
        this.topic = topic;
    }

    @Override
    public void run() {
        Map<String, Integer> topicCountMap = new HashMap<>();
        topicCountMap.put(topic, new Integer(1));
        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap =
                consumer.createMessageStreams(topicCountMap);
        KafkaStream<byte[], byte[]> stream = consumerMap.get(topic).get(0);
        ConsumerIterator<byte[], byte[]> it = stream.iterator();
        while (it.hasNext()) {
            System.out.println("receive: " + new String(it.next().message()));
            try {
                sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
(5) Start a client program that drives both production and consumption:
public class Main {
    public static void main(String[] args) {
        KafkaProducer producerThread = new KafkaProducer(KafkaConf.topic1);
        producerThread.start();
        KafkaConsumer consumerThread = new KafkaConsumer(KafkaConf.topic1);
        consumerThread.start();
    }
}
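Both threads above run an endless while (true) loop, so the program only stops when the process is killed. A common way to make such worker threads stoppable is a volatile flag checked by the loop; a minimal self-contained sketch (no Kafka required; StoppableWorker and stopWorker are hypothetical names, not part of the original code):

```java
public class StopDemo {
    // A minimal stoppable worker: the loop checks a volatile flag,
    // so a call to stopWorker() from another thread ends run() cleanly.
    static class StoppableWorker extends Thread {
        private volatile boolean running = true;
        int iterations = 0;

        @Override
        public void run() {
            while (running) {
                iterations++; // stand-in for the send/receive work above
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }

        void stopWorker() {
            running = false;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        StoppableWorker worker = new StoppableWorker();
        worker.start();
        Thread.sleep(50);
        worker.stopWorker();
        worker.join();
        System.out.println("worker stopped after " + worker.iterations + " iterations");
    }
}
```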
The program output is as follows: the producer sends "message_<number>" and the consumer receives the same messages.
Resources:
1. https://cwiki.apache.org/confluence/display/KAFKA/Index
2. http://www.nohup.cc/article/195/
3. http://blog.csdn.net/honglei915/article/details/37563647