Deployment and use of Kafka
Preface
From the architecture introduction and installation of Kafka in the previous article, you may still be wondering how to actually use it. This article walks through Kafka's deployment and use. As mentioned previously, Kafka's key components are: 1. Producer 2. Consumer 3. Broker 4. Topic. Our use of Kafka revolves around these components.
How to get started?
Let's follow the official quickstart: http://kafka.apache.org/quickstart.
Step 2 of the quickstart starts a Kafka server; with the server running, the following describes how to create a topic.
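For reference, the quickstart's server-start step launches ZooKeeper and then the Kafka broker, run from the Kafka installation directory:

bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties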
Step 1: Create a topic
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Explanation:
--zookeeper: the ZooKeeper address; it must be specified when creating a topic.
--replication-factor: the replication factor, i.e., how many copies of each partition are kept.
--partitions: the number of partitions for the topic.
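As a quick sanity check, you can also describe the topic you just created (assuming the same ZooKeeper address as above):

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test

This prints the partition count, the replication factor, and the leader and replica assignment of each partition.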
View topics
bin/kafka-topics.sh --list --zookeeper localhost:2181
$ kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test1
Created topic "test1".
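Running the list command afterwards prints one topic name per line; with the two topics created above, the output would look like this (a sketch, assuming no other topics exist):

$ kafka-topics.sh --list --zookeeper localhost:2181
test
test1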
Step 2: Send some messages
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
--broker-list: where the produced messages are sent (this is the broker address configured earlier).
--topic: the topic the messages are sent to (the one created above).
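Once the producer starts, each line typed into the console is sent as a single message. The sample session below is adapted from the official quickstart:

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
>This is a message
>This is another message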
Step 3: Start a consumer
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
--bootstrap-server: this is a documentation mismatch for the Kafka version used here. On older Kafka versions the console consumer connects through ZooKeeper instead, so replace this option with --zookeeper and change the port to ZooKeeper's (2181) as well.
Command:
kafka-console-consumer.sh --zookeeper hadoop000:2181 --topic hello_topic --from-beginning
--from-beginning: add this parameter to also receive messages produced before the consumer started. Without it, the consumer only receives messages produced after the command is executed.
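With --from-beginning, the consumer replays everything already stored in the topic. Continuing the producer session above, the output would look like this (a sketch, using the --zookeeper form discussed earlier):

kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
This is a message
This is another message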
Now the deployment is complete. Let's test it:
OK. Every message produced by the producer is received by the consumer.
In big data scenarios, the producer is most often a Flume sink, i.e., Flume writes its output into Kafka, and the consumer is a processing framework such as Spark Streaming. In the next article we will wire up the Flume => Kafka => Spark Streaming pipeline.
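As a small preview, the Flume side of that pipeline is just a Kafka sink in the agent's properties file. The sketch below assumes Flume 1.6-style sink properties and reuses the topic and broker address from this article; the agent and component names a1, c1, and k1 are placeholders:

# hypothetical agent a1: send events from channel c1 into Kafka
a1.sinks = k1
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = test
a1.sinks.k1.brokerList = localhost:9092
a1.sinks.k1.channel = c1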