Kafka deployment and example commands, and how to completely remove a topic


1. Install ZooKeeper

2. Install Kafka

Step 1: Download Kafka. Download the latest release and unpack it:

tar -xzf kafka_2.10-0.8.2.1.tgz
cd kafka_2.10-0.8.2.1
Step 2: Start the services. Kafka depends on ZooKeeper, so start ZooKeeper first. The command below starts a simple single-instance ZooKeeper service; appending an & lets it keep running after you leave the console.
bin/zookeeper-server-start.sh config/zookeeper.properties &
...
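For reference, the config/zookeeper.properties file bundled with Kafka is a minimal single-node configuration. The entries below are the usual defaults in that file; treat this as an illustrative sketch rather than the exact contents of your copy:

    # where ZooKeeper keeps its snapshot data
    dataDir=/tmp/zookeeper
    # port the Kafka brokers connect to
    clientPort=2181
    # 0 disables the per-IP connection limit, which is fine for local testing
    maxClientCnxns=0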
Now start Kafka:
bin/kafka-server-start.sh config/server.properties & ...
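The broker reads its own settings from config/server.properties. A few entries worth knowing for this walkthrough, shown with the values that are typically the defaults in that file (your copy may differ):

    # unique id of this broker in the cluster
    broker.id=0
    # port the broker listens on
    port=9092
    # where the broker stores its log segments (the message data)
    log.dirs=/tmp/kafka-logs
    # ZooKeeper connection string
    zookeeper.connect=localhost:2181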

Step 3: Create a topic. Create a topic called "test" with a single partition and a single replica:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
You can view the topic you just created with the list command:
bin/kafka-topics.sh --list --zookeeper localhost:2181
test
In addition to creating topics manually, you can also configure the broker to create topics automatically the first time they are referenced.
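If you rely on automatic creation, the relevant broker settings live in config/server.properties. The option names below are real broker settings, but the values are only examples of what you might set:

    # create a topic automatically the first time a client references it
    auto.create.topics.enable=true
    # defaults applied to auto-created topics
    num.partitions=1
    default.replication.factor=1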
Step 4: Send a message. Kafka ships with a simple command-line producer that reads messages from a file or from standard input and sends them to the server; by default each line is sent as a separate message.
Run the producer and type a few messages into the console; they will be sent to the server:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
This is a message
This is another message
Press Ctrl+C to stop the producer.
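Because the console producer reads standard input, you can also feed it a file instead of typing. A sketch, assuming a file named messages.txt with one message per line:

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < messages.txt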
Step 5: Start a consumer. Kafka also has a command-line consumer that reads messages and dumps them to standard output:
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
This is a message
This is another message
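The --from-beginning flag tells the consumer to replay everything already stored in the topic; without it, the consumer only prints messages produced after it starts. The tail-only variant is simply:

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test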
If you run the consumer in one terminal and the producer in another, you can type messages in one terminal and watch them appear in the other.
Both commands accept optional parameters; run them without any arguments to see the help information.

Step 6: Build a multi-broker cluster. So far we have run only a single broker; now let's start a cluster of three brokers, all on this one machine. First make a configuration file for each new node:
cp config/server.properties config/server-1.properties
cp config/server.properties config/server-2.properties
Set the following parameters in the copied files:
config/server-1.properties:
    broker.id=1
    port=9093
    log.dir=/tmp/kafka-logs-1
 
config/server-2.properties:
    broker.id=2
    port=9094
    log.dir=/tmp/kafka-logs-2
broker.id uniquely identifies each node in the cluster; because all the brokers run on the same machine, each one must also use a different port and log directory so they do not overwrite each other's data. ZooKeeper and the first node are already running, so we only need to start the two new nodes:
bin/kafka-server-start.sh config/server-1.properties &
...
bin/kafka-server-start.sh config/server-2.properties &
...
Create a topic with 3 replicas:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
Now that we have a cluster, how do we find out what each broker is doing? Run the "describe topics" command:
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic       PartitionCount:1        ReplicationFactor:3     Configs:
        Topic: my-replicated-topic      Partition: 0    Leader: 1       Replicas: 1,2,0 Isr: 1,2,0
These outputs are explained below. The first line is a summary of all partitions; each following line describes one partition. Since this topic has only one partition, there is just one such line.
Leader: the node responsible for all reads and writes for the partition; the leader is chosen from among the replica nodes. Replicas: the list of all nodes that hold a copy of the partition, whether or not they are currently in service. Isr: the "in-sync" replicas, i.e. the subset of replicas that are currently alive and caught up. In our example, node 1 is the leader.

Send some messages to the topic:

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic
...
my test message 1
my test message 2
^C
Consume these messages:
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic my-replicated-topic
...
my test message 1
my test message 2
^c
Now let's test fault tolerance. Broker 1 is acting as the leader, so we kill it:
ps | grep server-1.properties
7564 ttys002    0:15.91 /System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home/bin/java ...
kill -9 7564
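If you would rather not copy the PID out of the ps output by hand, a rough one-liner (assuming no other java process has server-1.properties on its command line) is:

kill -9 $(ps ax | grep server-1.properties | grep java | awk '{print $1}')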
Another node is elected leader, and node 1 no longer appears in the in-sync replica list:
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic       PartitionCount:1        ReplicationFactor:3     Configs:
        Topic: my-replicated-topic      Partition: 0    Leader: 2       Replicas: 1,2,0 Isr: 2,0
Although the leader that originally handled the writes is down, the messages written earlier are still consumable:
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic my-replicated-topic
...
my test message 1
my test message 2
^c
So Kafka's fault-tolerance mechanism works quite well.


Deleting a topic completely
(1) Delete the topic's files under each broker's log directory.

(2) Delete the topic's data from ZooKeeper.
Log in to the ZooKeeper client:
cd $ZOOKEEPER_HOME
bin/zkCli.sh

Delete the znodes /config/topics/topicname and /brokers/topics/topicname.
Once both steps have been fully carried out, the topic is completely removed; a combined sketch follows.
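Put together, a sketch of the whole manual deletion, assuming the topic is named topicname, the broker log directory is the default /tmp/kafka-logs, and the bundled ZooKeeper 3.4 client (whose rmr command removes a path recursively):

# (1) remove the topic's partition directories from each broker's log directory
rm -rf /tmp/kafka-logs/topicname-*
# (2) open the ZooKeeper client and remove the topic's znodes
cd $ZOOKEEPER_HOME
bin/zkCli.sh -server localhost:2181
# at the zkCli prompt:
rmr /config/topics/topicname
rmr /brokers/topics/topicname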
