Kafka Distributed Environment Construction

Source: Internet
Author: User
Kafka is developed in the Scala language and runs on the JVM, so you'll need to install the JDK before installing Kafka.
1. JDK installation and configuration

1) Windows: do not put spaces in the JDK installation directory name. Set JAVA_HOME and CLASSPATH, for example:

    JAVA_HOME=C:\Java\jdk1.8
    CLASSPATH=.;%JAVA_HOME%\lib\dt.jar;%JAVA_HOME%\lib\tools.jar

   Verification: java -version

2) Linux: a JDK (usually OpenJDK) often comes pre-installed; to install your own:

   (1) Download and unpack:

    tar -xzvf jdk-8u161-linux-x64.tar.gz

   (2) Configure environment variables with vi /etc/profile, adding:

    export JAVA_HOME=/usr/jdk1.8.0_161
    export JRE_HOME=/usr/jdk1.8.0_161/jre
    export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

   Make the change take effect:

    source /etc/profile

   (3) If the system already has OpenJDK installed, you can either remove it or point the alternatives configuration at the newly installed version. The following replaces the system's OpenJDK with the new JDK:

    update-alternatives --install /usr/bin/java java /usr/jdk1.8.0_161/bin/java 300
    update-alternatives --install /usr/bin/javac javac /usr/jdk1.8.0_161/bin/javac 300

   Select the JDK version:

    update-alternatives --config java

   (4) Verification: java -version
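A quick way to confirm that the new environment variables took effect (the paths below assume the example install location /usr/jdk1.8.0_161 used above):

    source /etc/profile
    echo $JAVA_HOME      # should print /usr/jdk1.8.0_161
    which java           # should resolve to the new JDK, directly or via the /usr/bin/java alternative
    java -version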

2. SSH installation and configuration

SSH passwordless login is not strictly required for the Kafka cluster itself, but it makes managing the three machines easier.

1) Configure /etc/hosts (on all three machines):

    186.168.100.5 kafka0
    186.168.100.6 kafka1
    186.168.100.7 kafka2

2) Configure SSH (on each machine):

    mkdir ~/.ssh
    ssh-keygen -t rsa
    cat id_rsa.pub >> authorized_keys

   Then distribute the keys:
   Log in to kafka0 and run: ssh-copy-id -i kafka0; ssh-copy-id -i kafka1; ssh-copy-id -i kafka2
   Log in to kafka1 and run: ssh-copy-id -i kafka0; ssh-copy-id -i kafka1; ssh-copy-id -i kafka2
   Log in to kafka2 and run: ssh-copy-id -i kafka0; ssh-copy-id -i kafka1; ssh-copy-id -i kafka2
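The nine ssh-copy-id invocations can also be written as one small loop run once on each machine; this is just a convenience sketch assuming the kafka0/kafka1/kafka2 hostnames from /etc/hosts above:

    # run on each of the three machines, after ssh-keygen -t rsa
    for host in kafka0 kafka1 kafka2; do
        ssh-copy-id -i ~/.ssh/id_rsa.pub "$host"
    done
    ssh kafka1 hostname    # spot-check that passwordless login works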
Zookeeper is a distributed application coordination service framework; on top of Zookeeper, distributed applications can implement synchronization, configuration maintenance, naming services, and so on.

Main roles in a Zookeeper cluster:
1. Leader: the leader of the cluster, responsible for initiating and resolving votes and for updating system state.
2. Learner:
   - Follower: accepts client requests, returns results to the client, and participates in voting.
   - Observer: accepts client requests, forwards write requests to the leader, and does not participate in voting. The purpose of the observer is to scale the cluster out and improve read performance.
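Observers are not used in the three-node cluster built below, but for reference, ZooKeeper enables this role through configuration: the observer node sets peerType=observer in its zoo.cfg, and every node marks that server's entry with :observer (the host kafka3 here is purely hypothetical):

    # in zoo.cfg on the observer node only
    peerType=observer
    # in zoo.cfg on every node
    server.4=kafka3:2888:3888:observer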


	Kafka relies on Zookeeper: it uses Zookeeper to track brokers and consumers coming online and going offline, and to manage cluster and partition metadata.
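Once the cluster built below is running, this dependency is visible directly in Zookeeper; for example (broker id 1 corresponds to the broker.id configured later in server.properties):

    zkCli.sh -server kafka0:2181
    get /brokers/ids/1      # ephemeral znode describing the broker's host and endpoints
    ls /brokers/topics      # topic and partition metadata maintained by Kafka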


3. Zookeeper cluster setup

Install Zookeeper under /opt/zookeeper.

1) Unzip and install:

    mkdir /opt/zookeeper
    tar -xzvf /usr/zookeeper-3.4.11.tar.gz -C /opt/zookeeper/
    cd /opt/zookeeper/zookeeper-3.4.11/conf/
    cp zoo_sample.cfg zoo_sample.cfg.bak
    mv zoo_sample.cfg zoo.cfg
    mkdir /opt/zookeeper/data
    mkdir /opt/zookeeper/log

   Modify the Zookeeper configuration in zoo.cfg:

    dataDir=/opt/zookeeper/data
    dataLogDir=/opt/zookeeper/log

2) Cluster configuration. Add the IP-to-hostname mappings to /etc/hosts on all three machines (the hostnames are user-defined). On one of them, edit /opt/zookeeper/zookeeper-3.4.11/conf/zoo.cfg and add:

    server.1=kafka0:2888:3888
    server.2=kafka1:2888:3888
    server.3=kafka2:2888:3888

   Here 2888 is the default port on which a server exchanges information with the leader of the cluster, and 3888 is the port on which servers communicate with each other during leader election.

   Create a myid file under the $dataDir path holding the server's number:

    cd /opt/zookeeper/data/
    vi myid      # content: 1 on this machine, 2 and 3 on the others

   Send zoo.cfg to the corresponding location on the other hosts:

    scp /opt/zookeeper/zookeeper-3.4.11/conf/zoo.cfg kafka1:/opt/zookeeper/zookeeper-3.4.11/conf
    scp /opt/zookeeper/zookeeper-3.4.11/conf/zoo.cfg kafka2:/opt/zookeeper/zookeeper-3.4.11/conf

   Configure /etc/profile:

    vi /etc/profile
    export ZOOKEEPER_HOME=/opt/zookeeper/zookeeper-3.4.11
    export PATH=$PATH:$ZOOKEEPER_HOME/bin
    source /etc/profile
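For reference, after these edits the zoo.cfg on each node should look roughly as follows (tickTime, initLimit, syncLimit and clientPort are the zoo_sample.cfg defaults; only dataDir, dataLogDir and the server.* lines were changed above):

    tickTime=2000
    initLimit=10
    syncLimit=5
    clientPort=2181
    dataDir=/opt/zookeeper/data
    dataLogDir=/opt/zookeeper/log
    server.1=kafka0:2888:3888
    server.2=kafka1:2888:3888
    server.3=kafka2:2888:3888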
3) Start and verify the service on all three machines:

    zkServer.sh start
    jps
    2819 QuorumPeerMain
    2844 Jps
    zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /opt/zookeeper/zookeeper-3.4.11/bin/../conf/zoo.cfg
    Mode: follower

4. Kafka installation

1) Unzip and install:

    mkdir /opt/kafka
    tar -xzvf /usr/kafka_2.11-1.0.0.tgz -C /opt/kafka/
    vi /etc/profile
    export KAFKA_HOME=/opt/kafka/kafka_2.11-1.0.0
    export PATH=$PATH:$ZOOKEEPER_HOME/bin:$KAFKA_HOME/bin

2) Modify the configuration:

    vi $KAFKA_HOME/config/server.properties
    broker.id=1                                        # keep consistent with the Zookeeper myid
    log.dirs=/opt/kafka/kafka-logs
    zookeeper.connect=kafka0:2181,kafka1:2181,kafka2:2181
    host.name=186.168.100.5                            # set separately on each machine
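Since broker.id and host.name differ per machine, the values across the three hosts end up as follows (IP addresses from the /etc/hosts mapping above, broker.id matching each node's Zookeeper myid):

    # kafka0 (186.168.100.5): broker.id=1, host.name=186.168.100.5
    # kafka1 (186.168.100.6): broker.id=2, host.name=186.168.100.6
    # kafka2 (186.168.100.7): broker.id=3, host.name=186.168.100.7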
3) Start and verify Kafka (on all three machines):

    kafka-server-start.sh -daemon /opt/kafka/kafka_2.11-1.0.0/config/server.properties
    [root@kafka0 opt]# jps
    3824 Kafka
    3890 Jps
    2931 QuorumPeerMain

   The server.log under $KAFKA_HOME/logs is the Kafka startup log. You can view the metadata Kafka keeps in Zookeeper through the Zookeeper client:

    zkCli.sh -server kafka0:2181
    WatchedEvent state:SyncConnected type:None path:null
    ls /
    [cluster, controller, controller_epoch, brokers, zookeeper, admin, isr_change_notification, consumers, log_dir_event_notification, latest_producer_id_block, config]
    [zk: kafka0:2181(CONNECTED) 1] ls /brokers/ids
    [1, 2, 3]

=============== Kafka distributed environment construction completed ===============
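As an optional smoke test beyond the metadata check above, one can create a replicated topic and pass a message through it. The topic name test is arbitrary, port 9092 is Kafka's default listener port, and the flags below match the Kafka 1.0.0 command-line tools:

    # create and describe a topic replicated across all three brokers
    kafka-topics.sh --create --zookeeper kafka0:2181 --replication-factor 3 --partitions 3 --topic test
    kafka-topics.sh --describe --zookeeper kafka0:2181 --topic test
    # produce a few messages on one machine ...
    kafka-console-producer.sh --broker-list kafka0:9092,kafka1:9092,kafka2:9092 --topic test
    # ... and consume them on another
    kafka-console-consumer.sh --bootstrap-server kafka0:9092 --topic test --from-beginning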
