KAFKA1 uses virtual machines to build its own Kafka cluster

Source: Internet
Author: User

Objective:

Last weekend I spent some time learning Kafka, following articles found online. The process went fairly smoothly, and the problems I ran into were eventually solved. I am recording the process here for my own later reference; if it also helps someone else, so much the better.


Body:

  There is plenty of material online introducing Kafka theory, which you can search for yourself; I will not repeat it here.

This article covers three things: first, building the ZooKeeper environment; second, building the Kafka environment and using the basic commands to send and receive messages; third, using the Java API to perform the same operations, to get an idea of how Kafka is used in a real project.

The goal this time is to use VMware to build my own ZooKeeper and Kafka cluster. I chose VMware 10; the installation steps are easy to find online, with plenty of resources.

The first step is to determine the target:

Zookeeperone 192.168.224.170 CentOS

Zookeepertwo 192.168.224.171 CentOS

Zookeeperthree 192.168.224.172 CentOS

Kafkaone 192.168.224.180 CentOS

Kafkatwo 192.168.224.181 CentOS

The ZooKeeper version installed is 3.4.6 (zookeeper-3.4.6); the Kafka version is 0.8.1 (kafka_2.10-0.8.1.tgz); the JDK version is 1.7.

Note: for this exercise I set up only two Kafka servers. In a production environment it is best to deploy 2n+1 machines; since this is only for learning, I will not worry about that for now.

The second step is to build the zookeeper cluster:

Here you can refer to an earlier article of mine, "ZooKeeper1 uses virtual machines to build its own zookeeper cluster"; the Kafka environment in this article uses that same ZooKeeper cluster.

The third step is to build the Kafka cluster:

(1). Extract the kafka_2.10-0.8.1.tgz downloaded in the first step. In the config directory you will see a number of configuration files; we will edit server.properties.

(2). Open server.properties; the properties to edit are as follows:

broker.id=0
port=9092
host.name=192.168.224.180
log.dirs=/opt/kafka0.8.1/kafka-logs
zookeeper.connect=192.168.224.170:2181,192.168.224.171:2181,192.168.224.172:2181

Notes:

A. broker.id: each Kafka broker has a unique ID, which you can assign yourself

B. port: the port number; the default of 9092 is used here

C. host.name: the IP address of the current machine

D. log.dirs: the log directory; you can set a custom directory path here

E. zookeeper.connect: the connection string for the ZooKeeper cluster built in the second step

(3). After the above configuration is complete, we need to execute vi /etc/hosts and add host entries for the relevant servers. If we skip this step, some of the commands later on will fail with errors about being unable to resolve the host name.
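For reference, the entries below match the machine plan from the first step. The host names (zookeeperone, kafkaone, and so on) are my own labels; use whatever names your machines actually have. This sketch writes to a scratch file so it can be run safely; on the real servers you would append to /etc/hosts itself.

```shell
# Append host entries for the cluster machines.
# Writing to a scratch copy here; on a real server the target is /etc/hosts.
hosts_file=$(mktemp)

cat >> "$hosts_file" <<'EOF'
192.168.224.170 zookeeperone
192.168.224.171 zookeepertwo
192.168.224.172 zookeeperthree
192.168.224.180 kafkaone
192.168.224.181 kafkatwo
EOF

# Quick sanity check: all five machines are listed.
grep -c '^192\.168\.224\.' "$hosts_file"   # prints 5
```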

(4). With that, the Kafka configuration is done. Simple, isn't it?! However, if we now run the startup command bin/kafka-server-start.sh config/server.properties &, we may encounter the following two problems:

A. On startup we get the error: Unrecognized VM option '+UseCompressedOops'. Could not create the Java virtual machine.

How to resolve:

Open bin/kafka-run-class.sh, find the code below, and remove -XX:+UseCompressedOops:

if [ -z "$KAFKA_JVM_PERFORMANCE_OPTS" ]; then
  KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseCompressedOops -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true"
fi
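Rather than editing by hand, the flag can also be stripped with a one-line sed. The sketch below runs against a scratch file containing a simplified stand-in for that line; on a real installation the target would be bin/kafka-run-class.sh (back it up first). It assumes GNU sed for the -i flag.

```shell
# Scratch file standing in for bin/kafka-run-class.sh
f=$(mktemp)
echo 'KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseCompressedOops -XX:+UseParNewGC -XX:+UseConcMarkSweepGC"' > "$f"

# Remove the option the JDK does not recognize (GNU sed in-place edit)
sed -i 's/-XX:+UseCompressedOops //g' "$f"

cat "$f"
```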

B. After solving the first problem, we may also encounter java.lang.NoClassDefFoundError: org/slf4j/impl/StaticLoggerBinder at startup.

How to resolve:

Download slf4j-nop-1.6.0.jar from the web and place it in the libs directory under the Kafka installation directory. Note the version: with my version of Kafka I first tried slf4j-nop-1.5.0.jar, but startup still failed with the same error, so be sure to pay attention to the version number.

(5). Now run the start command bin/kafka-server-start.sh config/server.properties & and Kafka should start normally. The trailing & runs the process in the background; without it, pressing Ctrl+C to leave the console after startup will also shut Kafka down, so it is best to include the &.

Next, use basic commands to create a message topic and send and receive topic messages:

(1). Create and view message topics

# Connect to ZooKeeper and create a topic named myfirsttopic
bin/kafka-topics.sh --create --zookeeper 192.168.224.170:2181 --replication-factor 2 --partitions 1 --topic myfirsttopic

# View the properties of this topic
bin/kafka-topics.sh --describe --zookeeper 192.168.224.170:2181 --topic myfirsttopic

# View the list of topics that have been created (output: myfirsttopic)
bin/kafka-topics.sh --list --zookeeper 192.168.224.170:2181

After the above commands have been executed, the output looks like the original post's screenshot (not reproduced here).

(2). Start a message producer and consumer:

# Start the producer and send messages
bin/kafka-console-producer.sh --broker-list 192.168.224.180:9092 --topic myfirsttopic

# Start the consumer and receive messages
bin/kafka-console-consumer.sh --zookeeper 192.168.224.170:2181 --from-beginning --topic myfirsttopic

After the above commands have been executed, messages typed into the producer appear in the consumer (screenshot not reproduced here).

(3). With steps (1) and (2) you should be able to get a feel for Kafka as a distributed messaging system. But one problem I found along the way deserves attention. Look at the consumer command above: I chose one of the ZooKeeper nodes, 192.168.224.170:2181, and receiving messages worked fine. But don't forget I have three ZooKeeper nodes, so I also tried receiving the myfirsttopic messages through 192.168.224.171:2181 and 192.168.224.172:2181. Normally all three should receive messages without trouble, but in my case connecting to 192.168.224.171:2181 threw an org.apache.zookeeper.ClientCnxn error!

After several more tries I found that the exception occurred whenever the consumer connected to whichever of my three ZooKeeper nodes was the leader. The cause turned out to be the maxClientCnxns property in ZooKeeper's zoo.cfg configuration file, which limits the number of client connections; I had it set to 2. After I raised the value, consumers could connect to the ZooKeeper leader without the error. Commenting the property out entirely (web sources say the default is 10) also avoids the error. In fact, many articles online simply say to set this property as high as possible, without explaining anything further.
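For concreteness, the relevant line in zoo.cfg looks like the fragment below; 60 is just an example value I chose here (a value of 0 means unlimited):

```properties
# zoo.cfg: cap on the number of client connections a single host may open
maxClientCnxns=60
```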

But thinking about it afterwards: with maxClientCnxns set to 2, once the two Kafka brokers start, each broker establishes a client connection to every ZooKeeper node, so every ZooKeeper node has already reached the maximum of 2 connections. When I then create a consumer, all three ZooKeeper nodes should have connection problems, not just the leader. I would welcome insight from anyone who can explain this!

The fourth step is to use the Java API to operate Kafka:

In fact, the basic functionality the Java API provides mirrors the client commands above. I have collected an online example below; you can run it in a local Java project to understand how the API is called.

(1). The pom.xml configuration of my Maven project:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.ismurf.study</groupId>
  <artifactId>com.ismurf.study.kafka</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <name>kafka_project_0001</name>
  <packaging>war</packaging>

  <dependencies>
    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka_2.10</artifactId>
      <version>0.8.1.1</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-war-plugin</artifactId>
        <version>2.1.1</version>
        <configuration>
          <outputFileNameMapping>@{artifactId}@-@{version}@.@{extension}@</outputFileNameMapping>
        </configuration>
      </plugin>

      <!-- ensures we are compiling at 1.6 level -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
        </configuration>
      </plugin>

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <configuration>
          <skipTests>true</skipTests>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>

(2). Example code: you can refer to the code in the article at http://blog.csdn.net/honglei915/article/details/37563647; pasted into the project it can be used directly. The directory layout of the code after my tidy-up is shown in the original post's screenshot (not reproduced here).
