Getting started with Kafka: quick start and development example


Kafka quick start


Installation (using Windows as an example)

Installation is very simple: download Kafka from the Apache Kafka downloads page and, after the download is complete, unzip it to a directory.

Basic usage

First, a producer process sends messages to the Kafka cluster; then a consumer fetches those messages from the cluster and consumes them.

To start Kafka, you need to start ZooKeeper first, because Kafka relies on ZooKeeper for high availability in a distributed environment: ZooKeeper provides redundant coordination services and elects the leader for the cluster. When a consumer is ready to fetch messages, it obtains the address of an available broker from ZooKeeper and connects to it to pull messages, so that the failure of a single server in the cluster does not affect the cluster as a whole.


A diagram from slideshare

1. Start the bundled single-node ZooKeeper.

In the bin/windows directory, run:

zookeeper-server-start.bat ../../config/zookeeper.properties

Running this command starts ZooKeeper with the settings in zookeeper.properties, which sets clientPort=2181 for consumers. Producers could use it as well; however, since version 0.8.0 the producer no longer connects to brokers through ZooKeeper. Instead, it is configured with a broker list (for example 192.168.0.1:9092,192.168.0.2:9092,192.168.0.3:9092) and connects to the brokers directly: as long as it can reach one broker, it can obtain information about the other brokers in the cluster, bypassing ZooKeeper.
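For reference, the relevant lines of the bundled config/zookeeper.properties look roughly like this (exact contents may vary by Kafka version):

```
# the directory where the snapshot data is stored
dataDir=/tmp/zookeeper
# the port at which clients will connect
clientPort=2181
```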

2. Start the Kafka service

In another command-line window, run:

kafka-server-start.bat ../../config/server.properties

Check the configuration: server.properties opens the Kafka service on port=9092.
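For reference, the relevant lines of config/server.properties look roughly like this (exact contents may vary by Kafka version):

```
# unique id of this broker in the cluster
broker.id=0
# the port the broker listens on
port=9092
# ZooKeeper connection string used by the broker
zookeeper.connect=localhost:2181
```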

3. Register a topic

kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

In this command, --create indicates topic creation. --zookeeper and the address following it tell Kafka to use the ZooKeeper instance on port 2181 of the local machine to maintain its metadata and high availability. --replication-factor 1 means each message is stored with no redundancy, since we currently have only one Kafka broker. --partitions 1 means one partition; partitions are another Kafka concept: messages in the same topic are distributed across partitions according to a key and a partitioning algorithm. After this command runs, a message topic named test is registered in Kafka.
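To illustrate how keyed messages map to partitions, here is a minimal sketch of the general idea (not Kafka's exact partitioner; the class and method names `PartitionSketch` and `partitionFor` are hypothetical):

```java
// A rough sketch of key-based partitioning: messages with the same key
// always map to the same partition, so ordering per key is preserved.
public class PartitionSketch {
    // Map a message key to one of numPartitions partitions.
    // Note: a production implementation must also handle the case where
    // key.hashCode() == Integer.MIN_VALUE, for which Math.abs is still negative.
    public static int partitionFor(String key, int numPartitions) {
        return Math.abs(key.hashCode()) % numPartitions;
    }

    public static void main(String[] args) {
        // With a single partition (as in the command above), every key maps to 0.
        System.out.println(partitionFor("test", 1));
        // With more partitions, keys are spread deterministically.
        System.out.println(partitionFor("test", 3));
    }
}
```

With --partitions 1, as in the command above, every message necessarily lands in partition 0; more partitions spread messages deterministically by key.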

4. Simulate a producer with the simple console producer

kafka-console-producer.bat --broker-list localhost:9092 --topic test

As mentioned above, producers in newer versions connect to Kafka directly through the broker list. There is currently only one broker, so the producer sends messages to the topic test at that single address.

5. Simulate a consumer with the simple console consumer

kafka-console-consumer.bat --zookeeper localhost:2181 --topic test

As mentioned above, the consumer obtains the list of available brokers through ZooKeeper and then pulls messages from them. There are also offset-synchronization issues, which involve the concepts of partitions and file storage and will be covered next time.

6. Start producing and consuming messages.

At this point, four console windows are open. Type a few words in the producer window, and they are displayed in the consumer window.


Actual test diagram

Other issues

Things may not go as smoothly as described. When you start Kafka or the other programs, you may see an error saying that the Java VM could not be created. In that case, modify the corresponding .bat script.


Startup error: the script requests a 1 GB heap for the VM. If your machine does not have enough memory, change it to 512 MB or smaller.
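In kafka-server-start.bat (and similarly in the other start scripts), the heap size is set through the KAFKA_HEAP_OPTS variable; a reduced setting might look like this (the exact line varies by Kafka version):

```
set KAFKA_HEAP_OPTS=-Xmx512M -Xms512M
```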


Kafka quick development example



1. Create a Maven project and add the dependency

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.8.2.1</version>
</dependency>

2. Write the configuration interface

public interface KafkaProperties {
    public final static String ZK = "127.0.0.1:2181";
    public final static String GROUP_ID = "test_group1";
    public final static String TOPIC = "test";
    public final static String BROKER_LIST = "127.0.0.1:9092";
    public final static String SESSION_TIMEOUT = "20000";
    public final static String SYNC_TIMEOUT = "20000";
    public final static String INTERVAL = "1000";
}

3. Write the producer

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class KafkaProducer extends Thread {
    private Producer<Integer, String> producer;
    private String topic;
    private Properties props = new Properties();
    private final int SLEEP = 1000 * 3;

    public KafkaProducer(String topic) {
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // The producer connects directly to the broker list
        props.put("metadata.broker.list", KafkaProperties.BROKER_LIST);
        producer = new Producer<Integer, String>(new ProducerConfig(props));
        this.topic = topic;
    }

    @Override
    public void run() {
        int offsetNo = 1;
        while (true) {
            String msg = "Message_" + offsetNo;
            System.out.println("Send->[" + msg + "]");
            producer.send(new KeyedMessage<Integer, String>(topic, msg));
            offsetNo++;
            try {
                sleep(SLEEP);
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
    }
}

4. Write the consumer

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class KafkaConsumer extends Thread {
    private ConsumerConnector consumer;
    private String topic;
    private final int SLEEP = 1000 * 3;

    public KafkaConsumer(String topic) {
        consumer = Consumer.createJavaConsumerConnector(this.consumerConfig());
        this.topic = topic;
    }

    private ConsumerConfig consumerConfig() {
        Properties props = new Properties();
        // The consumer obtains its connection through the ZooKeeper address
        props.put("zookeeper.connect", KafkaProperties.ZK);
        props.put("group.id", KafkaProperties.GROUP_ID);
        props.put("zookeeper.session.timeout.ms", KafkaProperties.SESSION_TIMEOUT);
        props.put("zookeeper.sync.time.ms", KafkaProperties.SYNC_TIMEOUT);
        props.put("auto.commit.interval.ms", KafkaProperties.INTERVAL);
        return new ConsumerConfig(props);
    }

    @Override
    public void run() {
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic, new Integer(1));
        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap =
                consumer.createMessageStreams(topicCountMap);
        KafkaStream<byte[], byte[]> stream = consumerMap.get(topic).get(0);
        ConsumerIterator<byte[], byte[]> it = stream.iterator();
        while (it.hasNext()) {
            System.out.println("Receive->[" + new String(it.next().message()) + "]");
            try {
                sleep(SLEEP);
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
    }
}

5. Write the startup helper class

public class KafkaClientApp {
    public static void main(String[] args) {
        KafkaProducer pro = new KafkaProducer(KafkaProperties.TOPIC);
        pro.start();
        KafkaConsumer con = new KafkaConsumer(KafkaProperties.TOPIC);
        con.start();
    }
}

Then start the test.
