Build and use a fully distributed ZooKeeper cluster and Kafka cluster


ZooKeeper uses zookeeper-3.4.7.tar.gz and Kafka uses kafka_2.10-0.9.0.0.tgz. First install the JDK (jdk-7u9-linux-i586.tar.gz) and SSH. The three hosts are kafka1 (192.168.56.136), kafka2 (192.168.56.137), and kafka3 (192.168.56.138). The following describes how to install SSH and how to build and use the ZooKeeper and Kafka clusters.

1. Install SSH

(1) apt-get install ssh

(2) /etc/init.d/ssh start

(3) ssh-keygen -t rsa -P "" (press Enter three times)

Note: Two files are generated in /root/.ssh: id_rsa and id_rsa.pub. The former is the private key and the latter is the public key.

(4) cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Note: We append the public key id_rsa.pub to authorized_keys, because authorized_keys stores the public keys of every client that is allowed to log on to this host as the current user over SSH.

(5) Set up passwordless login between the nodes. For example, on kafka1, fetch and append the public keys of kafka2 and kafka3 (this authorizes kafka2 and kafka3 to log on to kafka1; for kafka1 to log on to the other two without a password, kafka1's id_rsa.pub must likewise be appended to authorized_keys on kafka2 and kafka3):
scp root@kafka2:~/.ssh/id_rsa.pub ~/.ssh/kafka2_rsa.pub
scp root@kafka3:~/.ssh/id_rsa.pub ~/.ssh/kafka3_rsa.pub
cat kafka2_rsa.pub >> authorized_keys
cat kafka3_rsa.pub >> authorized_keys

Note: Once the keys are exchanged, you can verify the setup with ssh kafka2 and ssh kafka3 from kafka1.
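
The key-append step above can be sketched as a small script. The filenames kafka2_rsa.pub and kafka3_rsa.pub follow the tutorial; a temporary directory stands in for /root/.ssh and the key contents are placeholders, so the sketch is safe to run anywhere:

```shell
# Sketch of the key-append step; a temporary directory stands in for /root/.ssh
# and the key strings are placeholders, not real keys.
SSH_DIR="$(mktemp -d)"
echo "ssh-rsa AAAA...key2 root@kafka2" > "$SSH_DIR/kafka2_rsa.pub"
echo "ssh-rsa AAAA...key3 root@kafka3" > "$SSH_DIR/kafka3_rsa.pub"

for pub in "$SSH_DIR"/kafka*_rsa.pub; do
    cat "$pub" >> "$SSH_DIR/authorized_keys"   # append, never overwrite
done
chmod 600 "$SSH_DIR/authorized_keys"           # sshd rejects overly open permissions
```

On the real hosts you would run the loop inside /root/.ssh after the scp step; the chmod matters because sshd silently ignores an authorized_keys file that is group- or world-writable.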

 

2. Build a zookeeper Cluster

1. Copy the ZooKeeper archive to /usr/local/zookeeper and decompress it

root@kafka1:~# mkdir /usr/local/zookeeper

root@kafka1:~# cp ~/Downloads/zookeeper-3.4.7.tar.gz /usr/local/zookeeper/

root@kafka1:/usr/local/zookeeper# tar -zxvf zookeeper-3.4.7.tar.gz

2. Configure ZooKeeper environment variables

export ZOOKEEPER_HOME=/usr/local/zookeeper/zookeeper-3.4.7

export PATH=$ZOOKEEPER_HOME/bin:$PATH
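
To make the two variables survive a reboot they belong in a profile file. The tutorial does not say which file it uses; appending to ~/.bashrc is an assumption, and a temporary file stands in for it in this sketch:

```shell
# Sketch: persist the two exports by appending them to a profile file.
# ~/.bashrc is an assumed target; a temporary file stands in for it here.
RC="$(mktemp)"
cat >> "$RC" <<'EOF'
export ZOOKEEPER_HOME=/usr/local/zookeeper/zookeeper-3.4.7
export PATH=$ZOOKEEPER_HOME/bin:$PATH
EOF
grep -c '^export' "$RC"
```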

Note: Each node must be configured.

3. cluster-related file configuration

(1) vim conf/zoo.cfg

dataDir=/usr/local/zookeeper/zookeeper-3.4.7/data
clientPort=2181
initLimit=10
syncLimit=5
tickTime=2000
server.1=192.168.56.136:2888:3888
server.2=192.168.56.137:2888:3888
server.3=192.168.56.138:2888:3888

(2) vim /usr/local/zookeeper/zookeeper-3.4.7/data/myid — the content of myid is 1 on kafka1, 2 on kafka2, and 3 on kafka3.
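
The per-node myid step can be scripted instead of edited by hand. The hostname-to-id mapping below is the tutorial's; a temporary directory stands in for the data directory so the sketch runs without a cluster:

```shell
# Sketch: derive the myid content from the hostname (mapping from the tutorial).
# A temporary directory stands in for /usr/local/zookeeper/zookeeper-3.4.7/data.
DATA_DIR="$(mktemp -d)"
node="kafka2"              # on a real host you would use: node="$(hostname)"
case "$node" in
    kafka1) id=1 ;;
    kafka2) id=2 ;;
    kafka3) id=3 ;;
    *) echo "unknown node: $node" >&2; exit 1 ;;
esac
echo "$id" > "$DATA_DIR/myid"
cat "$DATA_DIR/myid"
```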

(3) Copy the ZooKeeper configuration from kafka1 to kafka2 and kafka3 using scp.

(4) Start, stop, and view the status (on all nodes)

./zkServer.sh start|stop|status

Note: In the status output, Mode shows the role each server plays in the cluster. The roles are not fixed: the leader is chosen by ZooKeeper's fast leader election algorithm. The ZooKeeper cluster is now set up; modify the configuration files further according to actual business needs.
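
To check which role a node received, the Mode line of the status output can be parsed. The status text below is hard-coded sample output so the sketch runs without a live cluster; on a real node you would capture it with status_output="$(zkServer.sh status 2>&1)":

```shell
# Sketch: extract the Mode line from zkServer.sh status output.
# The status text is hard-coded sample output for illustration.
status_output='ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/zookeeper-3.4.7/bin/../conf/zoo.cfg
Mode: follower'
mode=$(printf '%s\n' "$status_output" | awk -F': ' '/^Mode/ {print $2}')
echo "$mode"
```

Exactly one node should report leader and the rest follower; looping this check over all three hosts is a quick health test.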

 

3. Build a Kafka Cluster

Note:

A publisher is called a producer, a subscriber is called a consumer, and the storage layer in between is called a broker.

1. Copy the Kafka archive to /usr/local/Kafka and decompress it

root@kafka1:~# mkdir /usr/local/Kafka

root@kafka1:~# cp ~/Downloads/kafka_2.10-0.9.0.0.tgz /usr/local/Kafka/

root@kafka1:/usr/local/Kafka# tar -zxvf kafka_2.10-0.9.0.0.tgz

2. Configure Kafka environment variables

export KAFKA_HOME=/usr/local/Kafka/kafka_2.10-0.9.0.0

export PATH=$KAFKA_HOME/bin:$PATH

Note: Each node must be configured.

3. cluster-related file configuration

(1) vim config/server.properties

The attributes to configure are broker.id, host.name, zookeeper.connect, and log.dirs. The configuration is as follows:

broker.id=1
port=9092
host.name=kafka1
log.dirs=${KAFKA_HOME}/kafka-logs
zookeeper.connect=192.168.56.136:2181,192.168.56.137:2181,192.168.56.138:2181

(2) vim config/zookeeper.properties

dataDir=/usr/local/zookeeper/zookeeper-3.4.7/data

(3) vim config/producer.properties

metadata.broker.list=192.168.56.136:9092,192.168.56.137:9092,192.168.56.138:9092

(4) vim config/consumer.properties

zookeeper.connect=192.168.56.136:2181,192.168.56.137:2181,192.168.56.138:2181

(5) Copy the Kafka configuration from kafka1 to kafka2 and kafka3 using scp.

The broker.id increases from 1 and must be unique on each server, so after copying, modify the broker.id and host.name attributes on kafka2 and kafka3.
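
Instead of hand-editing each copy, the per-node files can be stamped out in one loop. The property values are the tutorial's; a temporary output directory stands in for $KAFKA_HOME/config so the sketch runs anywhere:

```shell
# Sketch: generate one server.properties per broker. Only broker.id and
# host.name differ between nodes; the remaining values come from the tutorial.
OUT_DIR="$(mktemp -d)"    # stands in for $KAFKA_HOME/config on each node
ZK="192.168.56.136:2181,192.168.56.137:2181,192.168.56.138:2181"
i=1
for host in kafka1 kafka2 kafka3; do
    cat > "$OUT_DIR/server-$host.properties" <<EOF
broker.id=$i
port=9092
host.name=$host
log.dirs=\${KAFKA_HOME}/kafka-logs
zookeeper.connect=$ZK
EOF
    i=$((i + 1))
done
grep '^broker.id' "$OUT_DIR/server-kafka3.properties"   # broker.id=3
```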

(6) Start the ZooKeeper cluster first, and then start the Kafka cluster.

Commands:

./zkServer.sh start|stop|status (on each node)

nohup ./kafka-server-start.sh ../config/server.properties & (on each node)

(The Kafka cluster is stopped with ./kafka-server-stop.sh, on each node.)

After the cluster starts successfully, create a topic, start a producer on one server and a consumer on another, and send messages from the producer; if the consumer receives them, the cluster is working.

Create a topic:
sudo ./kafka-topics.sh --zookeeper kafka1:2181,kafka2:2181,kafka3:2181 --topic test --replication-factor 2 --partitions 5 --create

View topics:
sudo ./kafka-topics.sh --zookeeper kafka1:2181,kafka2:2181,kafka3:2181 --list

Create a producer:
sudo ./kafka-console-producer.sh --broker-list kafka1:9092,kafka2:9092,kafka3:9092 --topic test

Create a consumer:
sudo ./kafka-console-consumer.sh --zookeeper kafka1:2181,kafka2:2181,kafka3:2181 --from-beginning --topic test
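
To see why a topic with 5 partitions and replication factor 2 fits comfortably on three brokers, here is a naive round-robin spread of the replicas. This is an illustration only, not Kafka's actual assignment algorithm:

```shell
# Illustration only: a naive round-robin spread of 5 partitions x 2 replicas
# over brokers 1..3. Kafka's real replica assignment differs in its details.
brokers=3
partitions=5
assignment=""
p=0
while [ "$p" -lt "$partitions" ]; do
    leader=$(( p % brokers + 1 ))
    follower=$(( (p + 1) % brokers + 1 ))
    assignment="${assignment}partition $p -> brokers $leader,$follower
"
    p=$((p + 1))
done
printf '%s' "$assignment"
```

With replication factor 2, every partition has a copy on a second broker, so the topic survives the loss of any single node; a replication factor larger than the broker count would be rejected.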

Note: After the Kafka cluster is created, you can also use Kafka's Java API to test the state of the cluster.

 

