Kafka cluster and zookeeper cluster deployment, Kafka Java code example

Source: Internet
Author: User
Tags: zookeeper, log4j

From: http://doc.okbase.net/QING____/archive/19447.html

Also refer to:

http://blog.csdn.net/21aspnet/article/details/19325373

http://blog.csdn.net/unix21/article/details/18990123

Kafka works well as a distributed log-collection or system-monitoring service, and it is worth using in the right situations. Deploying Kafka involves setting up a ZooKeeper environment and a Kafka environment, plus some configuration work. The following shows how to use Kafka.

We build the ZooKeeper cluster from 3 ZooKeeper instances and the Kafka cluster from 2 Kafka brokers.

Kafka is at version 0.8 and ZooKeeper at version 3.4.5.

I. Zookeeper cluster construction

There are 3 ZK instances: zk-0, zk-1, and zk-2. If you are just testing, a single ZK instance is enough.

1) zk-0

To adjust the configuration file:

clientPort=2181
server.0=127.0.0.1:2888:3888
server.1=127.0.0.1:2889:3889
server.2=127.0.0.1:2890:3890
## Only the settings above need to be changed; other settings keep their default values.

Start ZooKeeper:

./zkServer.sh start

2) zk-1

Adjust the configuration file (other settings are the same as zk-0):

clientPort=2182
## Only the setting above needs to be changed; other settings keep their default values.

Start ZooKeeper:

./zkServer.sh start

3) zk-2

Adjust the configuration file (other settings are the same as zk-0):

clientPort=2183
## Only the setting above needs to be changed; other settings keep their default values.

Start ZooKeeper:

./zkServer.sh start
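One step the configuration files above do not show: in replicated mode, each ZooKeeper instance must have a myid file in its dataDir whose value matches that instance's server.N line in zoo.cfg, or the ensemble will not form. A minimal sketch, using illustrative data directories under /tmp (the real paths depend on each instance's dataDir setting):

```shell
# Each ZooKeeper instance needs a myid file in its dataDir whose value
# matches its server.N entry in zoo.cfg. The paths below are illustrative.
ZK_BASE=/tmp/zk-demo
mkdir -p "$ZK_BASE/zk-0" "$ZK_BASE/zk-1" "$ZK_BASE/zk-2"
echo 0 > "$ZK_BASE/zk-0/myid"   # matches server.0
echo 1 > "$ZK_BASE/zk-1/myid"   # matches server.1
echo 2 > "$ZK_BASE/zk-2/myid"   # matches server.2
```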

II. Kafka cluster construction

Because the broker configuration file references the ZooKeeper ensemble, we show the broker configuration first. We use 2 Kafka brokers for this cluster environment: kafka-0 and kafka-1.

1) kafka-0

In the config directory, modify the configuration file to:

broker.id=0
port=9092
num.network.threads=2
num.io.threads=2
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dir=./logs
num.partitions=2
log.flush.interval.messages=10000
log.flush.interval.ms=1000
log.retention.hours=168
#log.retention.bytes=1073741824
log.segment.bytes=536870912
log.cleanup.interval.mins=10
zookeeper.connect=127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183
zookeeper.connection.timeout.ms=1000000
kafka.metrics.polling.interval.secs=5
kafka.metrics.reporters=kafka.metrics.KafkaCSVMetricsReporter
kafka.csv.metrics.dir=/tmp/kafka_metrics
kafka.csv.metrics.reporter.enabled=false

Because Kafka is written in Scala, running Kafka requires the Scala environment to be prepared first; for a source distribution of 0.8 this means building with sbt (e.g. ./sbt update followed by ./sbt package). The last build instruction may raise an exception, which can be ignored. Start the Kafka broker:

> JMX_PORT=9997 bin/kafka-server-start.sh config/server.properties &

Because the ZooKeeper ensemble is already running, we do not need Kafka to launch an embedded ZooKeeper. If you deploy multiple Kafka brokers on one machine, each needs its own JMX_PORT.

2) Kafka-1

broker.id=1
port=9093
## other settings are the same as kafka-0

Then execute the same package command as for kafka-0, and start the broker:

> JMX_PORT=9998 bin/kafka-server-start.sh config/server.properties &

The environment is now ready, so let's move on to the programming examples.

III. Project preparation

The project is built with Maven. Frankly, the Kafka Java client's dependencies are messy, and setting up the environment is a lot of trouble. The pom.xml below is recommended as a reference; the versions of the dependencies must be kept consistent with each other.

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.test</groupId>
    <artifactId>test-kafka</artifactId>
    <packaging>jar</packaging>
    <name>test-kafka</name>
    <url>http://maven.apache.org</url>
    <version>1.0.0</version>
    <dependencies>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.14</version>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.8.0</artifactId>
            <version>0.8.0-beta1</version>
            <exclusions>
                <exclusion>
                    <groupId>log4j</groupId>
                    <artifactId>log4j</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>2.8.1</version>
        </dependency>
        <dependency>
            <groupId>com.yammer.metrics</groupId>
            <artifactId>metrics-core</artifactId>
            <version>2.2.0</version>
        </dependency>
        <dependency>
            <groupId>com.101tec</groupId>
            <artifactId>zkclient</artifactId>
            <version>0.3</version>
        </dependency>
    </dependencies>
    <build>
        <finalName>test-kafka-1.0</finalName>
        <resources>
            <resource>
                <directory>src/main/resources</directory>
                <filtering>true</filtering>
            </resource>
        </resources>
        <plugins>
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.2</version>
                <configuration>
                    <source>1.5</source>
                    <target>1.5</target>
                    <encoding>gb2312</encoding>
                </configuration>
            </plugin>
            <plugin>
                <artifactId>maven-resources-plugin</artifactId>
                <version>2.2</version>
                <configuration>
                    <encoding>gbk</encoding>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

IV. Producer-side code

1) producer.properties file: This file is placed in the/resources directory

#partitioner.class=
metadata.broker.list=127.0.0.1:9092,127.0.0.1:9093
producer.type=sync
compression.codec=0
serializer.class=kafka.serializer.StringEncoder
## effective only when producer.type=async
#batch.num.messages=100
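The commented-out partitioner.class entry is where a custom partitioner class would be named; when it is omitted, Kafka 0.8's default partitioner picks the partition by hashing the message key, so messages with the same key always land on the same partition. A minimal sketch of that hash-modulo idea (an illustration of the scheme, not Kafka's actual code):

```java
public class PartitionSketch {

    // Hash-based partition choice, mirroring the idea behind Kafka's default
    // partitioner: same key -> same partition. Since |hash % n| < n, taking
    // Math.abs of the remainder always yields a valid partition index.
    static int partitionFor(String key, int numPartitions) {
        return Math.abs(key.hashCode() % numPartitions);
    }

    public static void main(String[] args) {
        // Two messages sharing a key map to the same partition.
        System.out.println(partitionFor("order-42", 2) == partitionFor("order-42", 2));
    }
}
```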

2) LogProducer.java code sample

package com.test.kafka;

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class LogProducer {

    private Producer<String, String> inner;

    public LogProducer() throws Exception {
        Properties properties = new Properties();
        properties.load(ClassLoader.getSystemResourceAsStream("producer.properties"));
        ProducerConfig config = new ProducerConfig(properties);
        inner = new Producer<String, String>(config);
    }

    public void send(String topicName, String message) {
        if (topicName == null || message == null) {
            return;
        }
        KeyedMessage<String, String> km = new KeyedMessage<String, String>(topicName, message);
        inner.send(km);
    }

    public void send(String topicName, Collection<String> messages) {
        if (topicName == null || messages == null) {
            return;
        }
        if (messages.isEmpty()) {
            return;
        }
        List<KeyedMessage<String, String>> kms = new ArrayList<KeyedMessage<String, String>>();
        for (String entry : messages) {
            KeyedMessage<String, String> km = new KeyedMessage<String, String>(topicName, entry);
            kms.add(km);
        }
        inner.send(kms);
    }

    public void close() {
        inner.close();
    }

    /**
     * @param args
     */
    public static void main(String[] args) {
        LogProducer producer = null;
        try {
            producer = new LogProducer();
            int i = 0;
            while (true) {
                producer.send("test-topic", "this is a sample " + i);
                i++;
                Thread.sleep(2000);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (producer != null) {
                producer.close();
            }
        }
    }
}

V. Consumer-side code

1) consumer.properties: The file is located in the/resources directory

zookeeper.connect=127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183
## timeout in ms for connecting to zookeeper
zookeeper.connectiontimeout.ms=1000000
## consumer group id
group.id=test-group
## consumer timeout
#consumer.timeout.ms=5000
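Note that both example classes read their properties file from the classpath with ClassLoader.getSystemResourceAsStream, which is why the files must sit under src/main/resources. The same java.util.Properties loading pattern can be sketched in a self-contained way, with an in-memory stream standing in for the classpath resource:

```java
import java.io.ByteArrayInputStream;
import java.util.Properties;

public class PropsDemo {

    // Load key=value pairs from a string, just as Properties.load would
    // parse consumer.properties from the classpath.
    static Properties load(String text) throws Exception {
        Properties props = new Properties();
        props.load(new ByteArrayInputStream(text.getBytes("UTF-8")));
        return props;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for ClassLoader.getSystemResourceAsStream("consumer.properties").
        Properties props = load("zookeeper.connect=127.0.0.1:2181\ngroup.id=test-group\n");
        System.out.println(props.getProperty("group.id")); // prints test-group
    }
}
```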

2) LogConsumer.java code sample

package com.test.kafka;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class LogConsumer {

    private ConsumerConfig config;
    private String topic;
    private int partitionsNum;
    private MessageExecutor executor;
    private ConsumerConnector connector;
    private ExecutorService threadPool;

    public LogConsumer(String topic, int partitionsNum, MessageExecutor executor) throws Exception {
        Properties properties = new Properties();
        properties.load(ClassLoader.getSystemResourceAsStream("consumer.properties"));
        config = new ConsumerConfig(properties);
        this.topic = topic;
        this.partitionsNum = partitionsNum;
        this.executor = executor;
    }

    public void start() throws Exception {
        connector = Consumer.createJavaConsumerConnector(config);
        Map<String, Integer> topics = new HashMap<String, Integer>();
        topics.put(topic, partitionsNum);
        Map<String, List<KafkaStream<byte[], byte[]>>> streams = connector.createMessageStreams(topics);
        List<KafkaStream<byte[], byte[]>> partitions = streams.get(topic);
        threadPool = Executors.newFixedThreadPool(partitionsNum);
        for (KafkaStream<byte[], byte[]> partition : partitions) {
            threadPool.execute(new MessageRunner(partition));
        }
    }

    public void close() {
        try {
            threadPool.shutdownNow();
        } catch (Exception e) {
            //
        } finally {
            connector.shutdown();
        }
    }

    class MessageRunner implements Runnable {
        private KafkaStream<byte[], byte[]> partition;

        MessageRunner(KafkaStream<byte[], byte[]> partition) {
            this.partition = partition;
        }

        public void run() {
            ConsumerIterator<byte[], byte[]> it = partition.iterator();
            while (it.hasNext()) {
                MessageAndMetadata<byte[], byte[]> item = it.next();
                System.out.println("partition: " + item.partition());
                System.out.println("offset: " + item.offset());
                executor.execute(new String(item.message())); // UTF-8
            }
        }
    }

    interface MessageExecutor {
        public void execute(String message);
    }

    /**
     * @param args
     */
    public static void main(String[] args) {
        LogConsumer consumer = null;
        try {
            MessageExecutor executor = new MessageExecutor() {
                public void execute(String message) {
                    System.out.println(message);
                }
            };
            consumer = new LogConsumer("test-topic", 2, executor);
            consumer.start();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            // if (consumer != null) {
            //     consumer.close();
            // }
        }
    }
}
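LogConsumer above dedicates one worker thread per partition stream via a fixed-size pool, so partitions are consumed in parallel while each partition's messages stay in order. That thread-per-partition pattern can be sketched without a running Kafka cluster, using a BlockingQueue to stand in for each KafkaStream (all names here are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadPerPartitionDemo {

    // Feeds one message per "partition" queue and returns how many
    // messages the per-partition workers consumed in total.
    static int runDemo(int partitions) throws Exception {
        final AtomicInteger consumed = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(partitions);
        for (int p = 0; p < partitions; p++) {
            // Stand-in for one KafkaStream: a queue of this partition's messages.
            final BlockingQueue<String> stream = new ArrayBlockingQueue<String>(10);
            // One worker per partition, like LogConsumer's MessageRunner.
            pool.execute(new Runnable() {
                public void run() {
                    try {
                        while (!stream.take().equals("EOF")) {
                            consumed.incrementAndGet(); // a MessageExecutor would run here
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
            stream.put("message-" + p);
            stream.put("EOF"); // sentinel telling this worker to stop
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return consumed.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runDemo(2)); // prints 2
    }
}
```

Sizing the pool to the partition count matches Kafka's consumer model: more threads than partitions would leave workers idle, while fewer would serialize partitions onto shared threads.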
