Kafka: how to configure a Kafka cluster and ZooKeeper cluster

Source: Internet
Author: User
Tags: zookeeper

A Kafka cluster is typically configured in one of three ways, namely:

(1) Single node–single broker;

(2) Single node–multiple brokers;

(3) Multiple nodes–multiple brokers.

The official documentation already walks through the configuration process for the first two methods, so they are only briefly introduced below; the main focus is on the last method.

Preparatory work:

1. The Kafka tarball; kafka_2.10-0.8.2.2.tgz is used here.

2. Three CentOS 6.4 64-bit virtual machines: 192.168.121.34 (hostname Master), 192.168.121.35 (hostname Datanode1), 192.168.121.36 (hostname Datanode2).

One, single node–single broker cluster configuration

Note: the picture comes from the network.

1. Extract the Kafka tarball

[root@master kafkainstall]# tar -xzf kafka_2.10-0.8.2.2.tgz

[root@master kafkainstall]# cd kafka_2.10-0.8.2.2

Here a kafkainstall folder was created to hold the tarball; after extraction, change into the kafka_2.10-0.8.2.2 folder.

2. Start the ZooKeeper service

ZooKeeper is already bundled in the Kafka tarball, which provides scripts to start both Kafka and ZooKeeper (under the kafka_2.10-0.8.2.2/bin directory) as well as a ZooKeeper configuration file (in the kafka_2.10-0.8.2.2/config directory):

[root@master kafka_2.10-0.8.2.2]# bin/zookeeper-server-start.sh config/zookeeper.properties &

Key attributes in the ZooKeeper configuration file zookeeper.properties:

# the directory where the snapshot is stored
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
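Note that /tmp is commonly cleaned automatically (for example by tmpwatch on CentOS, or on reboot on systems that mount /tmp as tmpfs), so for anything beyond a quick test the snapshot directory is usually pointed at persistent storage. A minimal variant, where the path /var/lib/zookeeper is an illustrative choice and not part of the original setup:

```
# persistent location for ZooKeeper snapshots (illustrative path)
dataDir=/var/lib/zookeeper
clientPort=2181
```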

By default, ZooKeeper snapshot files are stored under /tmp/zookeeper and the ZooKeeper server listens on port 2181.

3. Start the Kafka broker service

Since Kafka already provides a startup script (under the kafka_2.10-0.8.2.2/bin directory), the broker can be started directly:

[root@master kafka_2.10-0.8.2.2]# bin/kafka-server-start.sh config/server.properties &

Key attributes of the Kafka broker configuration file server.properties:

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

# The port the socket server listens on
port=9092

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

4. Create a topic with only one partition

[root@master kafka_2.10-0.8.2.2]# bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic mytest-topic

A topic named mytest-topic is created here.

5. Start a producer process to send messages

[root@master kafka_2.10-0.8.2.2]# bin/kafka-console-producer.sh --broker-list localhost:9092 --topic mytest-topic

Here, (1) the broker-list parameter specifies the address of the broker(s) the producer pushes messages to, in <IP address>:<port> form; from the broker's configuration file above this is localhost:9092;

(2) the topic parameter specifies which topic the producer sends to.

Key properties of the producer configuration file producer.properties:

# List of brokers used for bootstrapping knowledge about the rest of the cluster
# Format:host1:port1,host2:port2 ...
metadata.broker.list=localhost:9092

# specifies whether the messages are sent asynchronously (async) or synchronously (sync)
producer.type=sync

# message encoder
serializer.class=kafka.serializer.DefaultEncoder
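As a side note, switching producer.type to async trades a little latency for throughput by batching messages in the background. A hedged sketch of the related keys in the 0.8 producer; the values shown are the documented defaults, not taken from this setup:

```
# send messages in the background, batching them up
producer.type=async
# maximum time, in ms, to buffer data in async mode
queue.buffering.max.ms=5000
# number of messages to batch per send in async mode
batch.num.messages=200
```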

Then you can type the messages you want to send. (You can also start the consumer process first, so that messages sent by the producer are displayed immediately.)

6. Start a consumer process to consume messages

This needs to be done in a different terminal:

[root@master kafka_2.10-0.8.2.2]# bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic mytest-topic --from-beginning

Here, (1) the zookeeper parameter specifies the address of the ZooKeeper connection, in <IP address>:<port> form;

(2) the topic parameter specifies which topic to pull messages from.

Once this command has been executed, you will see the messages produced by the producer printed on the console.

Key attributes of the consumer configuration file consumer.properties:

# Zookeeper connection string
# comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
zookeeper.connect=localhost:2181
# timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=60000
# consumer group id
group.id=test-consumer-group

Two, single node–multiple broker cluster configuration


Note: the picture comes from the network.

1. Start the ZooKeeper service

It is started the same way as above.

2. Start the Kafka broker services

To start multiple brokers on a single node (here, three brokers), we need one server.properties file per broker, so we copy the kafka_2.10-0.8.2.2/config/server.properties file.

As follows:

[root@master config]# cp server.properties server-1.properties

[root@master config]# cp server.properties server-2.properties

Then modify server-1.properties and server-2.properties.

server-1.properties:

broker.id=1
port=9093
log.dirs=/tmp/kafka-logs-1

server-2.properties:

broker.id=2
port=9094
log.dirs=/tmp/kafka-logs-2
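Editing the two copies by hand works, but since the files differ from the base server.properties in only three keys, they can also be derived mechanically. A minimal sketch, assuming the base file carries the stock defaults (broker.id=0, port=9092, log.dirs=/tmp/kafka-logs); the scratch directory and synthesized base file only make the demo self-contained, and on a real install you would run the loop inside the config directory against the stock file:

```shell
#!/bin/sh
# Derive server-1.properties and server-2.properties from a base file.
workdir=$(mktemp -d)
cd "$workdir"

# Stand-in for the stock server.properties (default values).
printf 'broker.id=0\nport=9092\nlog.dirs=/tmp/kafka-logs\n' > server.properties

for i in 1 2; do
  sed -e "s|^broker\.id=.*|broker.id=$i|" \
      -e "s|^port=.*|port=$((9092 + i))|" \
      -e "s|^log\.dirs=.*|log.dirs=/tmp/kafka-logs-$i|" \
      server.properties > "server-$i.properties"
done

# Show the derived broker ids.
grep '^broker.id' server-1.properties server-2.properties
```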

Then start one broker with each of these two configuration files:

[root@master kafka_2.10-0.8.2.2]# bin/kafka-server-start.sh config/server-1.properties &

[root@master kafka_2.10-0.8.2.2]# bin/kafka-server-start.sh config/server-2.properties &

And then start the original broker:

[root@master kafka_2.10-0.8.2.2]# bin/kafka-server-start.sh config/server.properties &

3. Create a topic with only 1 partition and 3 replicas

[root@master kafka_2.10-0.8.2.2]# bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic

Note that the replication factor cannot exceed the number of running brokers, which is why three brokers were started first.

4. Start a producer to send messages

When the producer sends to multiple brokers (here, three), the only change needed is to list the brokers to connect to in the broker-list parameter:

[root@master kafka_2.10-0.8.2.2]# bin/kafka-console-producer.sh --broker-list localhost:9092,localhost:9093,localhost:9094 --topic my-replicated-topic
