First, download ZooKeeper and Kafka from the official websites (the versions used locally are zookeeper-3.3.6 and kafka_2.11-1.0.0).
Second, configure and start ZooKeeper and Kafka, and go through the basic zkCli commands and the Kafka commands for creating and deleting topics.
2.1 Configure ZooKeeper. There are two main settings: the client port (2181) and the data storage path.
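A minimal conf/zoo.cfg along those lines might look like the following (the dataDir path is only an example and should point to a real directory on your machine):

tickTime=2000
dataDir=D:/zookeeper-3.3.6/data
clientPort=2181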
2.2 Start ZooKeeper.
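With the scripts shipped in the ZooKeeper distribution, starting the server looks roughly like this (run from the ZooKeeper installation directory):

bin\zkServer.cmd          (Windows)
bin/zkServer.sh start     (Linux/Mac)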
2.3 Use the ZooKeeper client (zkCli) to view the values of ZooKeeper nodes.
There are 4 commonly used commands, namely ls, get, set and delete.
[zk: 127.0.0.1:2181(CONNECTED)] help
ZooKeeper -server host:port cmd args
        set path data [version]
        ls path [watch]
        delete path [version]
        get path [watch]
[zk: 127.0.0.1:2181(CONNECTED) 46]
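As a quick illustration of the four commands (assuming a node /test already exists; the node name and value are only examples):

ls /                  list the children of the root node
get /test             read the value and metadata of /test
set /test hello       write the value "hello" to /test
delete /test          remove the node /test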
2.4 Configure Kafka. There are two main settings: the ZooKeeper ip:port, and Kafka's own broker port, which is usually 9092.
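A minimal config/server.properties reflecting those two settings might look like this (broker.id and the log.dirs path are only example values):

broker.id=0
listeners=PLAINTEXT://localhost:9092
log.dirs=D:/kafka_2.11-1.0.0/kafka-logs
zookeeper.connect=localhost:2181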
2.5 Start Kafka.
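With the scripts bundled in the Kafka distribution, the broker is started against that properties file roughly as follows (run from the Kafka installation directory):

bin\windows\kafka-server-start.bat config\server.properties      (Windows)
bin/kafka-server-start.sh config/server.properties               (Linux/Mac)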
2.6 Use the basic Kafka topic commands. There are 4 commonly used ones: create a topic, list topics, describe a topic, and delete a topic.
For example, to create a topic:
kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic dhpeitopic
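The other topic commands follow the same pattern:

kafka-topics.bat --list --zookeeper localhost:2181
kafka-topics.bat --describe --zookeeper localhost:2181 --topic dhpeitopic
kafka-topics.bat --delete --zookeeper localhost:2181 --topic dhpeitopic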
Third, use Java to operate Kafka.
3.1 Create a topic. Create the topic with the command shown in 2.6, and name it dhpeitopic.
3.2 Send data with a KafkaProducer.
There are mainly two steps: the first step is to prepare the parameters for connecting to Kafka, which is essentially the broker ip:port; the second step is to send data to the specified topic, and generally all the data we send is JSON.
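A minimal sketch of those two steps with the Kafka Java producer client (the class name, the broker address localhost:9092 and the JSON payload are assumptions for illustration):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DhpeiProducer {
    public static void main(String[] args) {
        // Step 1: connection parameters, mainly the broker ip:port
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Step 2: send (JSON) data to the specified topic
        Producer<String, String> producer = new KafkaProducer<>(props);
        String json = "{\"name\":\"test\",\"value\":1}";   // example payload
        producer.send(new ProducerRecord<>("dhpeitopic", json));
        producer.close();
    }
}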
3.3 Receive data (receive data continuously).
It is divided into five main steps (a code sketch follows the steps):
The first step: prepare the parameters for connecting to Kafka, mainly the ZooKeeper ip:port and the topic name.
The second step: obtain a Kafka consumer connector from those connection parameters, just like obtaining a JDBC connection from database parameters.
The third step: use the connector to obtain the KafkaStreams (which I personally think of as continuous streams).
The fourth step: take the KafkaStream for the topic.
The fifth step: continuously read data from the KafkaStream that was obtained.
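A sketch of the five steps using the old ZooKeeper-based high-level consumer API (ConsumerConnector/KafkaStream) that matches this description and still ships with kafka_2.11-1.0.0, although it is deprecated there; the class name, group id and addresses are assumptions:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class DhpeiConsumer {
    public static void main(String[] args) {
        // Step 1: connection parameters - the ZooKeeper ip:port and the topic name
        String topic = "dhpeitopic";
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "group1");   // consumer group, an arbitrary example name

        // Step 2: obtain the consumer connector from the parameters
        ConsumerConnector connector = Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // Step 3: obtain the KafkaStreams from the connector (one stream for this topic)
        Map<String, Integer> topicCountMap = new HashMap<>();
        topicCountMap.put(topic, 1);
        Map<String, List<KafkaStream<byte[], byte[]>>> streams = connector.createMessageStreams(topicCountMap);

        // Step 4: take the KafkaStream for our topic
        KafkaStream<byte[], byte[]> stream = streams.get(topic).get(0);

        // Step 5: read data from the stream continuously (blocks waiting for new messages)
        ConsumerIterator<byte[], byte[]> it = stream.iterator();
        while (it.hasNext()) {
            System.out.println(new String(it.next().message()));
        }
    }
}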
3.4 Console log printed when sending data.
3.5 Console log printed when receiving data.
Summary: in web applications, ZooKeeper is quite commonly used as the service publishing and registration component of distributed systems, and Kafka is also widely used.
ZooKeeper is mainly a coordination service for distributed systems: calls within a distributed system are coordinated through ZooKeeper nodes. Of course, ZooKeeper can also store configuration files and other data on its nodes (paths).
Kafka is mainly used as a high-throughput distributed stream-processing channel; it persists messages to disk and supports clustering.
ZooKeeper + Kafka: using Java to implement message sending and reading.