A Kafka cluster can generally be configured in one of three ways, namely:
(1) Single node–single broker cluster;
(2) Single node–multiple broker cluster;
(3) Multiple node–multiple broker cluster.
The official documentation covers the configuration process for the first two methods. Below, the first two are introduced only briefly; the last method is the main focus.
Preparation:
1. The Kafka release archive; kafka_2.10-0.8.2.2.tgz is used here.
2. Three CentOS 6.4 64-bit virtual machines: 192.168.121.34 (hostname Master), 192.168.121.35 (hostname Datanode1), and 192.168.121.36 (hostname Datanode2).

One, single node–single broker cluster configuration
1. Unzip the Kafka package
[root@master kafkainstall]# tar -xzf kafka_2.10-0.8.2.2.tgz
[root@master kafkainstall]# cd kafka_2.10-0.8.2.2
Here I created a kafkainstall folder to hold the extracted files, then changed into the extracted kafka_2.10-0.8.2.2 directory.

2. Start the ZooKeeper service
Since ZooKeeper is bundled with the Kafka package, it can be started with the script provided in the kafka_2.10-0.8.2.2/bin directory, using the ZooKeeper configuration file in the kafka_2.10-0.8.2.2/config directory:
[root@master kafka_2.10-0.8.2.2]# bin/zookeeper-server-start.sh config/zookeeper.properties &
Key properties in the ZooKeeper configuration file zookeeper.properties:
# The directory where the snapshot is stored.
dataDir=/tmp/zookeeper
# The port at which the clients will connect
clientPort=2181
By default, ZooKeeper stores its snapshots under /tmp/zookeeper and listens on port 2181.

3. Start the Kafka broker service
Kafka also provides a broker startup script (in the kafka_2.10-0.8.2.2/bin directory), so the broker can be started directly:
[root@master kafka_2.10-0.8.2.2]# bin/kafka-server-start.sh config/server.properties &
Key properties of the Kafka broker's configuration file server.properties:
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
# The port the socket server listens on
port=9092
# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs
# Zookeeper connection string (see ZooKeeper docs for details).
# This is a comma separated list of host:port pairs, each corresponding to a ZK
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all Kafka znodes.
zookeeper.connect=localhost:2181

4. Create a topic with only one partition
[root@master kafka_2.10-0.8.2.2]# bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic mytest-topic
This creates a topic named mytest-topic.

5. Start a producer process to send messages
[root@master kafka_2.10-0.8.2.2]# bin/kafka-console-producer.sh --broker-list localhost:9092 --topic mytest-topic
Here, (1) the broker-list parameter specifies the broker address(es) the producer pushes messages to, in <IP address>:<port> form; from the broker configuration above, this is localhost:9092;
(2) the topic parameter specifies which topic the producer sends to.
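Which partition a message lands on is decided on the producer side. The following Python sketch illustrates only the key-hash idea (Kafka's actual partitioner uses a different hash and round-robins keyless messages); with a one-partition topic like mytest-topic, every message necessarily goes to partition 0:

```python
def choose_partition(key, num_partitions):
    """Toy partitioner: the same key always maps to the same partition.
    Keyless messages are sent to partition 0 here for simplicity
    (a real producer would spread them across partitions)."""
    if key is None:
        return 0
    return hash(key) % num_partitions

# With a single-partition topic, the key never matters:
print(choose_partition("user-42", 1))  # 0
print(choose_partition(None, 1))       # 0
```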
Key properties of the producer configuration file producer.properties:
# List of brokers used for bootstrapping knowledge about the rest of the cluster
# Format:host1:port1,host2:port2 ...
metadata.broker.list=localhost:9092
# Specifies whether the messages are sent asynchronously (async) or synchronously (sync)
producer.type=sync
# message encoder
serializer.class=kafka.serializer.DefaultEncoder
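producer.type=sync means each send completes against the broker before the caller continues, while async buffers messages locally and flushes them in batches. A toy Python sketch of the difference (illustrative only; the Broker/Producer classes here are hypothetical, not Kafka's API):

```python
class Broker:
    """Stand-in for a broker: just an append-only message log."""
    def __init__(self):
        self.log = []

class Producer:
    def __init__(self, broker, producer_type="sync", batch_size=3):
        self.broker = broker
        self.producer_type = producer_type
        self.batch_size = batch_size
        self.buffer = []

    def send(self, message):
        if self.producer_type == "sync":
            # sync: deliver immediately; the caller waits for the append
            self.broker.log.append(message)
        else:
            # async: buffer locally, flush only when the batch is full
            self.buffer.append(message)
            if len(self.buffer) >= self.batch_size:
                self.flush()

    def flush(self):
        self.broker.log.extend(self.buffer)
        self.buffer.clear()

broker = Broker()
p = Producer(broker, producer_type="async")
p.send("m1"); p.send("m2")
print(broker.log)  # [] -- still buffered on the producer side
p.send("m3")
print(broker.log)  # ['m1', 'm2', 'm3'] -- batch flushed
```

The trade-off mirrors the real setting: sync gives stronger delivery guarantees per message, async gives higher throughput at the risk of losing buffered messages on a crash.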
You can then type the messages you want to send to the consumer. (You can also start the consumer process first, so that messages sent by the producer are displayed immediately.)

6. Start a consumer process to consume messages
Open a separate terminal:
[root@master kafka_2.10-0.8.2.2]# bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic mytest-topic --from-beginning
Here, (1) the zookeeper parameter specifies the ZooKeeper connection address in <IP address>:<port> form;
(2) the topic parameter specifies which topic to pull messages from.
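Conceptually, --from-beginning works because each consumer group tracks a per-partition offset (stored in ZooKeeper in this Kafka version). A rough in-memory Python sketch of that bookkeeping (a hypothetical structure, not Kafka code):

```python
offsets = {}  # (group, topic, partition) -> next offset to read

def consume(group, topic, partition, log, from_beginning=False):
    """Return unread messages for this group and commit the new offset."""
    key = (group, topic, partition)
    # with no stored offset, a new group starts at the log tail,
    # unless from_beginning forces a full replay
    start = 0 if from_beginning else offsets.get(key, len(log))
    messages = log[start:]
    offsets[key] = len(log)  # "commit": the next call resumes here
    return messages

log = ["m1", "m2", "m3"]
g = "test-consumer-group"
print(consume(g, "mytest-topic", 0, log, from_beginning=True))  # all three messages
print(consume(g, "mytest-topic", 0, log))                       # [] -- caught up
```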
When you execute this command, the messages produced by the producer are printed on the console.
Key properties of the consumer configuration file consumer.properties:
# Zookeeper connection string
# Comma separated host:port pairs, each corresponding to a ZK
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
zookeeper.connect=localhost:2181
# Timeout in MS for connecting to zookeeper
zookeeper.connection.timeout.ms=60000
# consumer group id
group.id=test-consumer-group

Two, single node–multiple broker cluster configuration
1. Start the ZooKeeper service
Start it the same way as above.

2. Start the Kafka broker service
To start multiple brokers on a single node (i.e. one machine; here we start three), we need one server.properties file per broker, so we make copies of kafka_2.10-0.8.2.2/config/server.properties.
As follows:
[root@master config]# cp server.properties server-1.properties
[root@master config]# cp server.properties server-2.properties
Then modify server-1.properties and server-2.properties.

server-1.properties:
broker.id=1
port=9093
log.dirs=/tmp/kafka-logs-1

server-2.properties:
broker.id=2
port=9094
log.dirs=/tmp/kafka-logs-2
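The copy-and-edit steps above can also be scripted. A small Python sketch that derives per-broker settings from a base server.properties by overriding broker.id, port, and log.dirs (the numbering scheme follows this tutorial; adjust to taste):

```python
def make_broker_config(base_lines, broker_id):
    """Return the base server.properties lines with per-broker overrides applied."""
    overrides = {
        "broker.id": str(broker_id),
        "port": str(9092 + broker_id),  # broker N listens on 9092 + N
        "log.dirs": "/tmp/kafka-logs" + ("-%d" % broker_id if broker_id else ""),
    }
    out = []
    for line in base_lines:
        key = line.split("=", 1)[0].strip()
        out.append("%s=%s" % (key, overrides[key]) if key in overrides else line)
    return out

base = ["broker.id=0", "port=9092", "log.dirs=/tmp/kafka-logs"]
print(make_broker_config(base, 1))
# ['broker.id=1', 'port=9093', 'log.dirs=/tmp/kafka-logs-1']
```

Writing each returned list to config/server-N.properties reproduces the files created manually above.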
We then start a broker with each of these two configuration files:
[root@master kafka_2.10-0.8.2.2]# bin/kafka-server-start.sh config/server-1.properties &
[root@master kafka_2.10-0.8.2.2]# bin/kafka-server-start.sh config/server-2.properties &
Then start the original broker as well:
[root@master kafka_2.10-0.8.2.2]# bin/kafka-server-start.sh config/server.properties &

3. Create a topic with 1 partition and 3 replicas
[root@master kafka_2.10-0.8.2.2]# bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic

4. Start a producer to send messages
When the producer sends to more than one broker (here, three), the only change needed is to list all of the brokers to connect to in the broker-list parameter:
[root@master kafka_2.10-0.8.2.2]# bin/kafka-console-producer.sh --broker-list localhost:9092,localhost:9093,