Zookeeper and Kafka cluster construction

One: Environment preparation

  • Physical machine: Windows 7 64-bit, running VMware
  • 3 virtual machines running CentOS 6.8, IPs: 192.168.17.[129-131]
  • JDK 1.7 installed and configured on each virtual machine
  • Password-free (key-based) SSH login configured between the virtual machines
  • ClusterShell installed, so every node of the cluster can be operated on and configured in one step
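Before going further, it is worth confirming the JDK is visible on every node; a minimal check, assuming java is already on the PATH of each machine:

java -version    # each node should report a 1.7.x version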

1: How to set up and use ClusterShell

1.1: Configure password-free login (cluster nodes can log in to one another by entering just the target IP or hostname, with no password prompt; that is, key-based login)

1.1.2: Generate the public and private key files

ssh-keygen -t rsa
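If you prefer to script this step, ssh-keygen also runs non-interactively; a minimal sketch (the empty passphrase via -N "" and the default key path are assumptions, not part of the original walkthrough):

ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa    # no passphrase, default key location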

1.1.3: View the generated key files

ls /root/.ssh

1.1.4: Copy the public key to the other machines

ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.17.129

ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.17.130

ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.17.131

1.1.5: Test that the nodes can reach each other

Each node should now be able to log in to any of the others without being asked for a password, for example:

ssh root@192.168.17.130

hostname
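To check all three nodes in one pass, a small loop works; a minimal sketch (the IP list comes from the environment above, and root login is assumed):

for ip in 192.168.17.129 192.168.17.130 192.168.17.131; do
    ssh root@$ip hostname    # should print each node's hostname with no password prompt
done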

1.2: Installation of ClusterShell

Note: I am running the CentOS 6.6 minimal installation. Installing directly with yum install clustershell reports that no such package exists, because the default yum sources have not been updated in a long time, so install the epel-release repository first.

Installation command:

sudo yum install epel-release

Then yum install clustershell installs ClusterShell from the EPEL repository.
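To confirm the packages actually landed, an rpm query is a quick check (a sketch; the package names assume the stock EPEL builds):

rpm -q epel-release clustershell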

1.2.2: Configure the cluster groups

vim /etc/clustershell/groups

Add a line in the format group name: server IPs or hostnames

kafka: 192.168.17.129 192.168.17.130 192.168.17.131
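With the group defined, a quick smoke test confirms clush can reach every member (a sketch; -b gathers and groups the output, and the SSH keys from section 1.1 are assumed to be in place):

clush -g kafka -b hostname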

Two: Zookeeper and Kafka download

The ZooKeeper and Kafka versions used in this article are 3.4.8 and 0.10.1.0 (the Scala 2.11 build), matching the archive names in the commands below.

1: First, download both packages from the official websites:

Put the packages in a directory of your choosing; I put them in the /opt/kafka directory.

Then copy the compressed packages to the other service nodes via clush:

clush -g kafka -c /opt/kafka
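If an archive should land in a different directory than it came from, clush's copy mode also accepts an explicit destination; a sketch (the --dest flag is standard clush, while the exact archive filename here is an assumption):

clush -g kafka -c /opt/kafka/zookeeper-3.4.8.tar.gz --dest /opt/kafka/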

2: Extract the ZK and Kafka packages on all nodes via clush

clush -g kafka "tar zxvf /opt/kafka/zookeeper-3.4.8.tar.gz -C /opt/kafka"

clush -g kafka "tar zxvf /opt/kafka/kafka_2.11-0.10.1.0.tgz -C /opt/kafka"
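A quick listing confirms that every node now holds the extracted directories (a sketch; -b merges identical output across nodes):

clush -g kafka -b "ls /opt/kafka"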

3: Copy zoo_sample.cfg to zoo.cfg (the default ZooKeeper configuration file)
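A sketch of that copy across the cluster, assuming the extraction path from step 2 (note that the startup command later in this article refers to the directory as /opt/kafka/zookeeper, so adjust the path if you have renamed or symlinked it):

clush -g kafka "cp /opt/kafka/zookeeper-3.4.8/conf/zoo_sample.cfg /opt/kafka/zookeeper-3.4.8/conf/zoo.cfg"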

Modify the zoo.cfg configuration file:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# The directory where the snapshot is stored.
# Do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper
# The port at which the clients will connect (the ZK default port)
clientPort=2181
# Node IPs and ports
server.1=192.168.17.129:2888:3888
server.2=192.168.17.130:2888:3888
server.3=192.168.17.131:2888:3888
# The maximum number of client connections.
# Increase this if you need to handle more clients
#maxClientCnxns=60
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

4: Create /tmp/zookeeper to store the ZK data

mkdir /tmp/zookeeper
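Every node needs this directory (the startup step below assumes it already exists on all of them), so it can be created cluster-wide in one shot; a minimal sketch (-p simply makes the command safe to re-run):

clush -g kafka mkdir -p /tmp/zookeeper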

5: Create a myid file under /tmp/zookeeper on each node, containing that node's ID (1, 2, or 3, matching the server.N entries in zoo.cfg; a clush-based sketch for all three nodes follows at the end of this section):

echo "1" > /tmp/zookeeper/myid

6: Turn off the firewall (best practice is to have OPS configure a firewall policy instead of shutting it down):

clush -g kafka "service iptables status"

clush -g kafka "service iptables stop"

7: Start zookeeper on all nodes (the other nodes must also have zoo.cfg configured and the /tmp/zookeeper directory with its myid file created):

clush -g kafka /opt/kafka/zookeeper/bin/zkServer.sh start /opt/kafka/zookeeper/conf/zoo.cfg

8: Check whether the ZK port 2181 is listening:

clush -g kafka lsof -i :2181

9: Test that data is synchronized: create a node /test and give it the value hello:

bin/zkCli.sh -server 192.168.17.130:2181

create /test hello

Then check on each of the other nodes whether /test was created successfully, and use get /test to view the value stored under the node.

OK, the zookeeper cluster is now installed; next, deploy Kafka.

Three: Kafka installation and deployment

1: Edit config/server.properties and set the zookeeper.connect property:

zookeeper.connect=192.168.17.129:2181,192.168.17.130:2181,192.168.17.131:2181

2: Start Kafka:

/opt/kafka/kafka_2.11-0.10.1.0/bin/kafka-server-start.sh -daemon /opt/kafka/kafka_2.11-0.10.1.0/config/server.properties

3: Create a topic:

bin/kafka-topics.sh --zookeeper 192.168.17.129:2181 --topic topictest --create --partitions 3 --replication-factor 2

4: View the Kafka topic:

[root@kafka01 kafka_2.11-0.10.1.0]# bin/kafka-topics.sh --zookeeper 192.168.17.129:2181 --topic topictest --describe

5: (Test) Launch console-consumer to subscribe to messages:

bin/kafka-console-consumer.sh --zookeeper 192.168.17.130:2181 --topic topictest

6: (Test; open a new terminal) Launch console-producer to produce messages:

bin/kafka-console-producer.sh --broker-list kafka02:9092 --topic topictest

7: Test that produced messages are sent and that the subscriber receives them.

Note: all connection addresses in Kafka and zookeeper are best configured as HOST:PORT. Kafka is accessed via its hostname by default; if a setting uses an IP, edit /etc/hosts to bind the hostname of the corresponding machine, otherwise a warning exception is reported when consumption starts (see the sketch below for example /etc/hosts entries).
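Two per-node details in this walkthrough are easy to get wrong, so here is a minimal cluster-wide sketch. To be clear about the assumptions: the node list and paths are the ones used in this article; broker.id is not mentioned in the original text, but each broker in a multi-broker Kafka cluster does require a unique broker.id in server.properties; the kafka01-03 hostnames are assumptions illustrating the /etc/hosts note.

# Write a unique ZooKeeper myid on each node (must match server.N in zoo.cfg).
# clush cannot template per-node values, so loop over the nodes directly.
id=1
for ip in 192.168.17.129 192.168.17.130 192.168.17.131; do
    ssh root@$ip "echo $id > /tmp/zookeeper/myid"
    id=$((id + 1))
done

# Give each Kafka broker a unique broker.id (not covered in the walkthrough
# above, but required for a multi-broker cluster; the path is this article's).
id=0
for ip in 192.168.17.129 192.168.17.130 192.168.17.131; do
    ssh root@$ip "sed -i 's/^broker.id=.*/broker.id=$id/' /opt/kafka/kafka_2.11-0.10.1.0/config/server.properties"
    id=$((id + 1))
done

# Example /etc/hosts entries for the HOST:PORT note (hostnames are assumptions),
# appended on every node:
# 192.168.17.129 kafka01
# 192.168.17.130 kafka02
# 192.168.17.131 kafka03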
