Kafka single-machine installation and deployment on Linux, with code examples


I spent the past few days studying how to install and use Kafka. Many of the tutorials I found online failed for me; after much troubleshooting I finally got a deployment running. Below is the installation procedure along with a code implementation.

First, close the firewall

The important thing bears repeating: shut down the firewall. (If you do not, you will hit all sorts of strange problems, such as exception in thread "main" kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.)

1. Close firewalld:

systemctl stop firewalld.service      # stop firewalld
systemctl disable firewalld.service   # prevent firewalld from starting at boot
firewall-cmd --state                  # check the firewall state ("not running" when off, "running" when on)

2. Close iptables:

service iptables stop         # stop iptables
chkconfig iptables off        # permanently disable the firewall at boot

service iptables status       # verify that the firewall is stopped

The commands above cover both firewall services (firewalld on CentOS 7, iptables on CentOS 6); run whichever applies to your system.
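With the firewall down, it is worth confirming that the broker port actually accepts connections, since a blocked port is exactly what surfaces later as FailedToSendMessageException. A minimal check I would sketch, assuming bash with /dev/tcp support and the coreutils timeout command (9092 is the broker port configured later in this article):

```shell
# port_open HOST PORT: succeed if a TCP connection can be opened.
# Uses bash's /dev/tcp pseudo-device; timeout caps the wait at 2 seconds.
port_open() {
  local host=$1 port=$2
  timeout 2 bash -c "cat < /dev/null > /dev/tcp/${host}/${port}" 2>/dev/null
}

if port_open localhost 9092; then
  echo "kafka port 9092 reachable"
else
  echo "kafka port 9092 closed - check the firewall or whether the broker is up"
fi
```

If the check fails while the broker is running, the firewall is the first suspect.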

Second, Kafka installation and testing

1. Install a JRE/JDK. (Kafka requires a JDK to run; JDK installation is omitted here. Note that the JDK version must be supported by the Kafka version you download, or you will get errors. I installed JDK 1.7.)

2. Download Kafka from http://kafka.apache.org/downloads.html (the version I downloaded is kafka_2.11-0.11.0.1).

3. Extract:

tar -xzvf kafka_2.11-0.11.0.1.tgz

rm kafka_2.11-0.11.0.1.tgz   # (in my experience it is important to remove the archive, otherwise ZooKeeper or Kafka may fail to start)

cd kafka_2.11-0.11.0.1

4. Under the kafka_2.11-0.11.0.1 directory:

/bin     start/stop scripts and other command-line tools
/config  configuration files
/libs    class libraries (jars)
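The extract-and-clean-up steps above can be rehearsed end to end with a throwaway archive; demo_kafka is a stand-in name, so substitute the real kafka_2.11-0.11.0.1.tgz on your machine:

```shell
set -e
# Build a small archive with the same top-level layout as the Kafka tarball.
mkdir -p demo_kafka/bin demo_kafka/config demo_kafka/libs
tar -czf demo_kafka.tgz demo_kafka
rm -rf demo_kafka                 # simulate a fresh machine

tar -xzvf demo_kafka.tgz          # same flags as for the Kafka tarball
rm demo_kafka.tgz                 # remove the archive once extracted
ls demo_kafka                     # bin  config  libs
```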

5. Modify the configuration

Add the following settings to config/zookeeper.properties (note that ZooKeeper property names are case-sensitive):

maxClientCnxns=100
tickTime=2000
initLimit=10
syncLimit=5
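Assembled, config/zookeeper.properties might look like the following sketch (dataDir and clientPort are the values shipped in Kafka's default file; point dataDir somewhere persistent if /tmp is cleared on reboot):

```properties
# shipped defaults
dataDir=/tmp/zookeeper
clientPort=2181
# additions from step 5
maxClientCnxns=100
tickTime=2000
initLimit=10
syncLimit=5
```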

Add the following settings to config/server.properties:

port=9092
host.name=10.61.8.6

zookeeper.connect=localhost:2181

zookeeper.connection.timeout.ms=6000

(host.name should be this machine's own IP address; the two zookeeper.* lines are already present with these values in the default file, so they do not need to be added.)
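A sketch of the relevant lines in config/server.properties after the edit (broker.id and log.dirs are the shipped defaults; 10.61.8.6 stands in for this machine's own IP):

```properties
broker.id=0
port=9092
host.name=10.61.8.6
log.dirs=/tmp/kafka-logs
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
```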

6. Start, test, stop

(1) Start ZooKeeper:

bin/zookeeper-server-start.sh config/zookeeper.properties &   (the trailing & runs it in the background so you can keep using the shell)

(2) Start Kafka:

bin/kafka-server-start.sh config/server.properties &

(3) Check whether Kafka and ZooKeeper are running:

ps -ef | grep kafka

(4) Create a topic (named abc):

bin/kafka-topics.sh --create --zookeeper localhost:2181 --partitions 8 --replication-factor 1 --topic abc

(On a single-broker deployment the replication factor must be 1, because it cannot exceed the number of available brokers.)

(5) Delete a topic:

bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic abc

(The deletion only takes effect if the broker is started with delete.topic.enable=true.)

(6) List topics:

bin/kafka-topics.sh --list --zookeeper localhost:2181

(7) Produce messages with the console producer:

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic abc

(8) Consume messages with the console consumer:

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic abc --from-beginning

(9) Stop Kafka:

bin/kafka-server-stop.sh

(10) Stop ZooKeeper:

bin/zookeeper-server-stop.sh

(11) Or kill the processes directly:

kill -9 123   (where 123 is the process ID found via ps)
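If the stop scripts do not work, the PIDs can be pulled straight out of the ps output. A sketch of the extraction, run here against a captured sample line so it is self-contained (the PID is field 2 of ps -ef output; grep -v grep drops the grep process itself):

```shell
# Two sample lines as 'ps -ef' would print them: the broker, and our own grep.
sample='user  4321     1  0 10:00 ?        00:00:05 java -cp libs kafka.Kafka config/server.properties
user  9876  4321  0 10:01 pts/0    00:00:00 grep -i kafka'

# Keep lines mentioning kafka, discard the grep line, print the PID column.
pids=$(echo "$sample" | grep -i kafka | grep -v grep | awk '{print $2}')
echo "$pids"   # 4321

# On a live box the same pipeline feeds kill directly:
#   kill -9 $(ps -ef | grep -i kafka | grep -v grep | awk '{print $2}')
```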

Third, Java code implementation

Producer

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.Properties;

/**
 * Created by Administrator on 2017/10/23 0023.
 */
public class KafkaProducter {

    private static final Logger log = LoggerFactory.getLogger(KafkaProducter.class);
    private final Producer<String, String> producer;
    public final static String TOPIC = "abc";

    public static void main(String[] args) {
        new KafkaProducter().produce();
    }

    private KafkaProducter() {
        Properties props = new Properties();
        // The Kafka broker list (host:port), as configured above
        props.put("metadata.broker.list", "10.61.8.6:9092");
        // Serializer class for message values
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // Serializer class for message keys
        props.put("key.serializer.class", "kafka.serializer.StringEncoder");
        // request.required.acks:
        //  0  - the producer never waits for an acknowledgement from the broker.
        //       Lowest latency, weakest durability (data is lost if the server fails).
        //  1  - the producer waits until the leader replica has received the data.
        //       Better durability, but messages written only to a leader that dies
        //       before replication are still lost.
        //  -1 - the producer waits until all in-sync replicas have received the data.
        //       Best durability: no message is lost as long as at least one in-sync
        //       replica survives.
        props.put("request.required.acks", "1");
        producer = new Producer<String, String>(new ProducerConfig(props));
    }

    void produce() {
        int messageNo = 1;
        final int COUNT = 10;
        while (messageNo < COUNT) {
            String key = String.valueOf(messageNo);
            String data = "Hello Kafka " + key;
            producer.send(new KeyedMessage<String, String>(TOPIC, key, data));
            log.info("{}", data);
            messageNo++;
        }
    }
}

Consumer

import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.serializer.StringDecoder;
import kafka.utils.VerifiableProperties;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

/**
 * Created by Administrator on 2017/10/25 0025.
 */
public class KafkaConsumer {

    private static final Logger log = LoggerFactory.getLogger(KafkaConsumer.class);
    private final ConsumerConnector consumer;
    public final static String TOPIC = "abc";

    public static void main(String[] args) {
        new KafkaConsumer().consume();
    }

    private KafkaConsumer() {
        Properties props = new Properties();
        // ZooKeeper connection string
        props.put("zookeeper.connect", "10.61.8.6:2181");
        // group.id identifies the consumer group
        props.put("group.id", "jd-group");
        // ZooKeeper session timeout
        props.put("zookeeper.session.timeout.ms", "4000");
        props.put("zookeeper.sync.time.ms", "200");
        props.put("auto.commit.interval.ms", "1000");
        // Start from the earliest available offset when the group has no committed offset
        props.put("auto.offset.reset", "smallest");
        // Serializer class
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        ConsumerConfig config = new ConsumerConfig(props);
        consumer = kafka.consumer.Consumer.createJavaConsumerConnector(config);
    }

    void consume() {
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(TOPIC, new Integer(1));
        StringDecoder keyDecoder = new StringDecoder(new VerifiableProperties());
        StringDecoder valueDecoder = new StringDecoder(new VerifiableProperties());
        Map<String, List<KafkaStream<String, String>>> consumerMap =
                consumer.createMessageStreams(topicCountMap, keyDecoder, valueDecoder);
        KafkaStream<String, String> stream = consumerMap.get(TOPIC).get(0);
        ConsumerIterator<String, String> it = stream.iterator();
        while (it.hasNext()) {
            log.info("Kafka received the message: {}", it.next().message());
        }
        log.info("Kafka consumption is complete.");
    }
}

Writing this up was not easy; your support is what keeps me going.
