Deployment and use of Kafka Series 2

Deployment and use of Kafka

Preface
The previous article covered Kafka's architecture and installation, but you may still be wondering how to actually use it. This article walks through deploying and using Kafka. As mentioned previously, Kafka's key components are: 1. Producer 2. Consumer 3. Broker 4. Topic. Everything we do with Kafka revolves around these components.

How to get started?

Let's follow the official quickstart guide: http://kafka.apache.org/quickstart.
In step 2 of the quickstart we already started a Kafka server. The following describes how to create a topic.
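
As a quick recap, in the quickstart layout the server is started roughly like this (run from the Kafka installation directory; adjust the paths to your own setup from the previous article):

# start ZooKeeper first (skip this if a standalone ZooKeeper is already running)
bin/zookeeper-server-start.sh config/zookeeper.properties &
# then start the Kafka broker
bin/kafka-server-start.sh config/server.properties &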

Step 1: Create a topic

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

Explanation:
--zookeeper: the ZooKeeper address; it must be specified when creating a topic.
--replication-factor: the number of replicas for the topic.
--partitions: the number of partitions for the topic.

View topics

bin/kafka-topics.sh --list --zookeeper localhost:2181

$ kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test1
Created topic "test1".
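
Besides --list, kafka-topics.sh can also describe a topic's partition and replica assignment. Checking the topic we just created (assuming the same single-broker setup), the output looks roughly like this:

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
Topic:test    PartitionCount:1    ReplicationFactor:1    Configs:
    Topic: test    Partition: 0    Leader: 0    Replicas: 0    Isr: 0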

Step 2: Send some messages

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

--broker-list: the broker address to which produced messages are sent (9092 is the port configured for the broker earlier).
--topic: the topic created in the previous step.
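
Once started, the console producer reads lines from standard input and sends each line as one message. A minimal session looks like this (the messages here are just examples):

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
>hello kafka
>this is a test message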

Step 3: Start a consumer

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning

--bootstrap-server: this option is an error in the official document for the Kafka version used here.
In older Kafka versions the console consumer connects through ZooKeeper, so this option should be changed to --zookeeper, and the port changed accordingly (2181 instead of 9092).

Command:
kafka-console-consumer.sh --zookeeper hadoop000:2181 --topic hello_topic --from-beginning

--from-beginning: add this parameter to also receive messages that were produced before the consumer started.
Without it, the consumer only receives messages produced after the command is executed.
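
Run against the producer session above (assuming the same topic and a local ZooKeeper on port 2181), the consumer prints each message as it arrives, roughly:

kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
hello kafka
this is a test message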

Now the deployment is complete. To test it, type messages into the producer terminal and watch them appear in the consumer terminal.

OK. All messages produced by our producer are received by the consumer.

In big data scenarios, the producer is usually a Flume sink, i.e. Flume writes its output into Kafka, and the consumer is a data processing framework such as Spark Streaming. In the next article, we will wire up the Flume => Kafka => Spark Streaming pipeline.
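
As a preview, the Flume side of that pipeline boils down to pointing a Kafka sink at the broker. Below is a minimal sketch of an agent configuration, assuming Flume 1.6.x (the Kafka sink property names changed in later Flume versions) and the topic from above; treat it as a starting point rather than a finished setup:

# write a minimal Flume agent config: netcat source -> memory channel -> Kafka sink
cat > flume-kafka.conf <<'EOF'
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

a1.channels.c1.type = memory

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.brokerList = localhost:9092
a1.sinks.k1.topic = hello_topic

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
EOF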
