ZooKeeper + Kafka cluster installation (Part 2)

This is a continuation of the previous article. Kafka depends on ZooKeeper, so ZooKeeper must be installed first. Both this article and the previous one describe a true distributed installation that can be used directly in a production environment.

For the ZooKeeper installation, refer to:

http://blog.csdn.net/ubuntu64fan/article/details/26678877

First, let's review a few Kafka concepts:

Assume we have a topic named test with 2 partitions. When we send the messages "msg1: hello beijing" and "msg2: hello shanghai" to test, how is the target partition chosen? If msg1 goes to partition test.1, it will certainly not also go to test.2. The routing decision is made by the producer's kafka.producer.Partitioner interface:

interface Partitioner {
    int partition(java.lang.Object key, int numPartitions);
}

A pseudo-code implementation looks like this:

package org.mymibao.mq.client;

import kafka.producer.Partitioner;

// Trivial partitioner from the article: every message goes to the same fixed partition.
public class DefaultKafkaPartitioner implements Partitioner {

    private final static int FIRST_PARTITION_ID = 1;

    public int partition(Object key, int numPartitions) {
        return FIRST_PARTITION_ID;
    }
}

The partition API returns a partition id based on the key and the number of broker partitions in the topic. This id is used as an index into the sorted list of (broker_id, partition) pairs to pick a broker partition for the producer request. The default partition policy is hash(key) % numPartitions; if the key is null, a partition is chosen at random. A custom partition policy can be plugged in via the partitioner.class configuration parameter. A partition's data files never span brokers, but multiple brokers can each hold a replica of a topic's partition.
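As a concrete illustration of the default policy, below is a minimal sketch of a key-hash partitioner. The class name HashKafkaPartitioner and its package are invented for this example, and the exact contract varies by Kafka 0.8.x version (some releases require a constructor taking kafka.utils.VerifiableProperties when the class is loaded through partitioner.class), so treat it as a sketch rather than a drop-in implementation.

package org.mymibao.mq.client;

import kafka.producer.Partitioner;
import kafka.utils.VerifiableProperties;

// Hypothetical example: spreads messages across partitions by key hash,
// mirroring the default hash(key) % numPartitions policy described above.
public class HashKafkaPartitioner implements Partitioner {

    // Some 0.8.x releases instantiate the partitioner through this constructor.
    public HashKafkaPartitioner(VerifiableProperties props) {
    }

    public int partition(Object key, int numPartitions) {
        if (key == null) {
            return 0; // the built-in default picks a random partition for null keys
        }
        // Mask the sign bit so the index is always non-negative.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }
}

The producer would then be pointed at it with, for example, partitioner.class=org.mymibao.mq.client.HashKafkaPartitioner in its configuration.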

The Kafka installation and configuration steps are as follows:

1) Download Kafka

$ wget http://apache.fayea.com/apache-mirror/kafka/0.8.1.1/kafka_2.9.2-0.8.1.1.tgz

For installation and configuration, refer to the previous article:

http://blog.csdn.net/ubuntu64fan/article/details/26678877

2) Configure $KAFKA_HOME/config/server.properties

We install one broker on each of the three VMs zk1, zk2, and zk3; note that each broker needs a unique broker.id:

zk1:

$ vi /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=zk1

$ vi $KAFKA_HOME/config/server.properties

broker.id=0
port=9092
host.name=zk1
advertised.host.name=zk1
...
num.partitions=2
...
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181

zk2:

$ vi /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=zk2

$ vi $KAFKA_HOME/config/server.properties

broker.id=1
port=9092
host.name=zk2
advertised.host.name=zk2
...
num.partitions=2
...
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181

zk3:

$ vi /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=zk3

$ vi $KAFKA_HOME/config/server.properties

broker.id=2
port=9092
host.name=zk3
advertised.host.name=zk3
...
num.partitions=2
...
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181

3) Start the ZooKeeper service by running the following on zk1, zk2, and zk3:

$ zkServer.sh start

4) Start the Kafka service by running the following command on zk1, zk2, and zk3:

$ kafka-server-start.sh $KAFKA_HOME/config/server.properties

5) Create a topic (here replication-factor equals the number of brokers)

$ kafka-topics.sh --create --topic test --replication-factor 3 --partitions 2 --zookeeper zk1:2181

6) Open a terminal on zk2 and send messages to Kafka (zk2 acts as the producer):

$ kafka-console-producer.sh --broker-list zk1:9092 --sync --topic test

Type "Hello Kafka" in the producer terminal.

7) Open a terminal on zk3 to consume the messages (zk3 acts as the consumer):

$ kafka-console-consumer.sh --zookeeper zk1:2181 --topic test --from-beginning

The consumer terminal displays: Hello Kafka

8) For examples of programming against the Producer and Consumer APIs, refer to:

http://shift-alt-ctrl.iteye.com/blog/1930791
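As a quick, self-contained illustration (the article linked above covers this in more depth), here is a minimal sketch of a producer and a high-level consumer against the Kafka 0.8.x Java API used in this setup. The class names, the group id test-group, and the acks/offset settings are assumptions made for this sketch; the broker list, ZooKeeper addresses, and the test topic come from the cluster configured above.

package org.mymibao.mq.client;

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

// Minimal producer sketch: sends two messages to the "test" topic.
public class SimpleProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "zk1:9092,zk2:9092,zk3:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("request.required.acks", "1");

        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
        producer.send(new KeyedMessage<String, String>("test", "msg1", "hello beijing"));
        producer.send(new KeyedMessage<String, String>("test", "msg2", "hello shanghai"));
        producer.close();
    }
}

And the corresponding consumer (a separate source file):

package org.mymibao.mq.client;

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

// Minimal high-level consumer sketch: prints every message from "test".
public class SimpleConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181,zk2:2181,zk3:2181");
        props.put("group.id", "test-group");
        props.put("auto.offset.reset", "smallest");

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("test", 1));

        // Block on the single stream of the "test" topic and print each message.
        for (MessageAndMetadata<byte[], byte[]> msg : streams.get("test").get(0)) {
            System.out.println("received: " + new String(msg.message()));
        }
    }
}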



