Kafka (iv): Installation of Kafka

Step 1: Download Kafka

> tar -xzf kafka_2.9.2-0.8.1.1.tgz
> cd kafka_2.9.2-0.8.1.1

Step 2: Start the server

Kafka relies on ZooKeeper, so a ZooKeeper server must be started first. The following command starts a simple single-instance ZooKeeper service using the convenience script packaged with Kafka. You can append an & to the command so the service keeps running after you leave the console.
> bin/zookeeper-server-start.sh config/zookeeper.properties &
[2013-04-22 15:01:37,495] INFO Reading configuration from:config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
...
Now start the Kafka server:
> bin/kafka-server-start.sh config/server.properties
[2013-04-22 15:01:47,028] INFO Verifying properties (kafka.utils.VerifiableProperties)
[2013-04-22 15:01:47,051] INFO Property socket.send.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)
...

Step 3: Create a topic

Create a topic named "test" with a single partition and only one replica:
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
You can view the created topic by using the List command:
> bin/kafka-topics.sh --list --zookeeper localhost:2181
test
In addition to creating topics manually, you can also configure the broker to create topics automatically when a nonexistent topic is first published to.
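Automatic creation is governed by broker settings in config/server.properties. The fragment below is a sketch using the standard broker properties auto.create.topics.enable and num.partitions; check your Kafka version's documentation for the exact defaults.

```properties
# config/server.properties (sketch): let the broker create a topic the first
# time a producer or consumer references a topic that does not yet exist
auto.create.topics.enable=true
# partition count used for automatically created topics
num.partitions=1
```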

Step 4: Send a message.

Kafka comes with a simple command-line producer that reads messages from a file or from standard input and sends them to the server. By default, each line is sent as a separate message.

Run the producer and type a few messages into the console; they will be sent to the server:
> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
This is a message
This is another message
Press Ctrl+C to stop sending.
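As a concrete illustration of the file-input mode mentioned above, the sketch below writes two lines to a scratch file and redirects it into the producer. The file path and the topic name "test" are just this example's choices, and the producer call assumes you are in the Kafka installation directory with a broker listening on localhost:9092.

```shell
# Hedged sketch: the console producer reads standard input, so a file can be
# redirected into it instead of typing messages interactively.
printf 'first message\nsecond message\n' > /tmp/kafka-test-messages.txt

# Each line of the file is sent as one message, exactly as if typed by hand.
# The call is guarded so the snippet is harmless outside a Kafka directory;
# run it from the Kafka root with a broker running on localhost:9092.
if [ -x bin/kafka-console-producer.sh ]; then
    bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < /tmp/kafka-test-messages.txt
fi
```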

Step 5: Start consumer

Kafka also has a command-line consumer that reads messages and dumps them to standard output:
> bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
This is a message
This is another message
If you run the consumer in one terminal and the producer in another, you can type messages into the producer terminal and watch them appear in the consumer terminal.
Both commands accept optional parameters; run either one without arguments to see its help information.

Step 6: Set up a multi-broker cluster

So far we have run only a single broker. Now let's start a cluster of three brokers, all on the local machine.
First, write a configuration file for each new node:

> cp config/server.properties config/server-1.properties
> cp config/server.properties config/server-2.properties
Edit the copied files to set the following parameters:
config/server-1.properties:
broker.id=1
port=9093
log.dir=/tmp/kafka-logs-1

config/server-2.properties:
broker.id=2
port=9094
log.dir=/tmp/kafka-logs-2
broker.id uniquely identifies each node in the cluster. Because these brokers all run on the same machine, each must be given a different port and log directory so they do not overwrite one another's data.

ZooKeeper and the first node are already running, so we only need to start the two new nodes:
> bin/kafka-server-start.sh config/server-1.properties &
...
> bin/kafka-server-start.sh config/server-2.properties &
...
Create a new topic with a replication factor of 3:
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
Now that we have a cluster, how do we know which broker is doing what? Run the "describe topics" command:
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic  PartitionCount:1  ReplicationFactor:3  Configs:
  Topic: my-replicated-topic  Partition: 0  Leader: 1  Replicas: 1,2,0  Isr: 1,2,0
The output is read as follows. The first line is a summary of all the partitions; each subsequent line describes one partition. Since this topic has only one partition, there is just one additional line.
Leader: the node responsible for all reads and writes of the partition; the leader is randomly selected from among the replica nodes.
Replicas: the list of all replica nodes for the partition, regardless of whether they are currently in service.
Isr: the set of "in-sync" replicas, i.e. the replica nodes that are currently alive and caught up with the leader.
In our example, node 1 is the leader.
Send a few messages to the new topic:

> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic
...
my test message 1
my test message 2
^C
Consume these messages:
> bin/kafka-console-consumer.sh--zookeeper localhost:2181--from-beginning--topic my-replicated-topic
...
my test message 1
my test message 2
^C
Now let's test fault tolerance. Broker 1 is acting as the leader, so let's kill it:
> ps | grep server-1.properties
7564 ttys002  0:15.91 /System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home/bin/java ...
> kill -9 7564
Another node has been elected leader, and node 1 no longer appears in the in-sync replica list:
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic  PartitionCount:1  ReplicationFactor:3  Configs:
  Topic: my-replicated-topic  Partition: 0  Leader: 2  Replicas: 1,2,0  Isr: 2,0
Although the leader that originally handled the messages is down, the earlier messages can still be consumed:
> bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic my-replicated-topic
...
my test message 1
my test message 2

As you can see, Kafka's fault-tolerance mechanism holds up well.
