Kafka Production and Consumption Example

Contents:

1. Environmental preparedness: create the topic (command-line mode)
2. Command line: run the producer and consumer examples
3. Client mode: producer and consumer code

1. Environmental Preparedness

Description: for the Kafka cluster environment I simply use the company's existing cluster. For security, all operations are performed under my own user; if you run your own Kafka environment, you can do everything as the Kafka administrator user. Creating the topic, in any case, must be done under the Kafka administrator user.

1. Log in to a node in the Kafka cluster and switch to the Kafka administrator user

ssh 172.16.150.xx
su - kafka

2. Create Topic

Create topic command:
kafka-topics --zookeeper bdap-nn-1.cebbank.com,bdap-mn-1.cebbank.com,bdap-nn-2.cebbank.com:2181/kafka --create --topic topicname --partitions 4 --replication-factor 3
Query topic command:
kafka-topics --zookeeper bdap-nn-1.cebbank.com,bdap-mn-1.cebbank.com,bdap-nn-2.cebbank.com:2181/kafka --list

Note: when creating a topic you specify the number of partitions; replace the ZooKeeper quorum and topicname with your own values, and the topic name must not duplicate an existing one.
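To confirm the topic came out with the expected partition and replication counts, the standard kafka-topics --describe flag can be used as an optional check (same ZooKeeper quorum as above):

kafka-topics --zookeeper bdap-nn-1.cebbank.com,bdap-mn-1.cebbank.com,bdap-nn-2.cebbank.com:2181/kafka --describe --topic topicname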

3. Grant my own user read and write permissions on the topic

Write permission:
kafka-acls --authorizer-properties zookeeper.connect=bdap-nn-1.cebbank.com,bdap-mn-1.cebbank.com,bdap-nn-2.cebbank.com:2181/kafka --add --allow-principal User:xx --operation Write --operation Describe --topic topicname
Read permission:
kafka-acls --authorizer-properties zookeeper.connect=bdap-nn-1.cebbank.com,bdap-mn-1.cebbank.com,bdap-nn-2.cebbank.com:2181/kafka --add --allow-principal User:xx --operation Read --topic topicname --group "*"
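To double-check what was granted, kafka-acls can also list the ACLs on a topic (optional verification, same ZooKeeper quorum assumed):

kafka-acls --authorizer-properties zookeeper.connect=bdap-nn-1.cebbank.com,bdap-mn-1.cebbank.com,bdap-nn-2.cebbank.com:2181/kafka --list --topic topicname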
2. Command line

1. Switch to your own user.
2. Copy producer.properties and consumer.properties from the Kafka user's directory into your own directory.

Run a consumer instance:

kafka-console-consumer --new-consumer --bootstrap-server bdap-nn-1.cebbank.com:9092 --consumer.config /home/username/consumer.properties --topic topicname --from-beginning

Run a producer instance:

kafka-console-producer --broker-list bdap-nn-1.cebbank.com:9092,bdap-mn-1.cebbank.com:9092,bdap-nn-2.cebbank.com:9092 --topic topicname --producer.config /home/username/producer.properties

After both instances start, any character typed in the producer window is received in the consumer window, and the run of the example is complete.

The command-line example is very simple, just a send and a receive; it only gives a first feel for how Kafka production and consumption work. In a real project, production and consumption are implemented in code.

3. Client Consumer code

package kafka.consumer;

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyKafkaConsumer {

    private static final Logger log = LoggerFactory.getLogger(MyKafkaConsumer.class);

    public static void main(String[] args) throws InterruptedException {
        // Kerberos configuration; if the cluster has no authentication, these two lines are not needed
        System.setProperty("java.security.krb5.conf", "d:/krb5.conf");
        System.setProperty("java.security.auth.login.config", "d:/lsz_jaas.conf");

        Properties props = new Properties();
        log.info("**********************************************");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.kerberos.service.name", "kafka");
        props.put("bootstrap.servers", "172.16.150.xx:9092,172.16.150.xx1:9092,172.16.150.xx2:9092");
        // Consumer group: if a topic has 4 partitions and the group has 2 consumers,
        // each consumer consumes 2 partitions
        props.put("group.id", "kafka_lsz_1");
        // Set enable.auto.commit to false to control offset commits manually
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        // A larger session timeout gives the consumer more time to process the messages
        // returned by poll(); the drawback is that it delays group rebalancing
        props.put("session.timeout.ms", "30000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // Start from the earliest offset when the group has no committed offset yet
        props.put("auto.offset.reset", "earliest");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(Arrays.asList("lsztopic3"));

        /** automatic offset commit */
        while (true) {
            // Subscribing and calling poll() joins the group; the consumer must keep
            // calling poll() to remain in the group
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.topic() + "---" + record.partition());
                System.out.printf("offset = %d, key = %s%n", record.offset(), record.key());
            }
        }
    }
}
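The comments above note that setting enable.auto.commit to false lets you control offsets by hand. As a minimal sketch of that variant, assuming the same props object built above but with enable.auto.commit set to "false", the loop commits with the standard commitSync() call only after a batch has been processed:

// Minimal sketch of manual offset control; assumes the same props as above
// except: props.put("enable.auto.commit", "false");
KafkaConsumer<String, String> manualConsumer = new KafkaConsumer<String, String>(props);
manualConsumer.subscribe(Arrays.asList("lsztopic3"));
while (true) {
    ConsumerRecords<String, String> records = manualConsumer.poll(100);
    for (ConsumerRecord<String, String> record : records) {
        System.out.printf("offset = %d, key = %s%n", record.offset(), record.key());
    }
    // Commit only after the batch is processed: if the process dies before this
    // line, the uncommitted records are re-delivered rather than silently skipped.
    manualConsumer.commitSync();
}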
Client Producer code
package kafka.producer;

import java.util.Properties;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyKafkaProducer {

    private static final Logger LOG = LoggerFactory.getLogger(MyKafkaProducer.class);
    private static final String TOPIC = "lsztopic3";

    public static void main(String[] args) throws Exception {
        System.setProperty("java.security.krb5.conf", "d:/krb5.conf");
        System.setProperty("java.security.auth.login.config", "d:/lsz_jaas.conf");

        Properties props = new Properties();
        props.put("bootstrap.servers", "172.16.150.xx:9092,172.16.150.xx1:9092,172.16.150.xx2:9092");
        // The next four keys are old (Scala) producer configs, kept as in the original;
        // the new KafkaProducer ignores unknown keys with a warning
        props.put("producer.type", "async");
        // number of retries
        props.put("message.send.max.retries", "3");
        // number of records submitted concurrently per async batch
        props.put("batch.num.messages", "200");
        props.put("request.required.acks", "1");
        // batch (cache pool) size in bytes
        props.put("batch.size", "16384");
        // socket send buffer size
        props.put("send.buffer.bytes", "102400");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.kerberos.service.name", "kafka");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("partitioner.class", "kafka.producer.KafkaCustomPartitioner");

        KafkaProducer<String, String> kafkaProducer = new KafkaProducer<String, String>(props);
        String key = "";
        String value = "";
        ProducerRecord<String, String> record = new ProducerRecord<String, String>(TOPIC, key, value);
        kafkaProducer.send(record, new Callback() {
            public void onCompletion(RecordMetadata metadata, Exception e) {
                if (e != null) {
                    e.printStackTrace();
                } else {
                    System.out.println("The offset of the record we just sent is: "
                            + metadata.partition() + " " + metadata.offset());
                }
            }
        });

        Thread.sleep(5000);
        kafkaProducer.close();
    }
}
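The properties above reference kafka.producer.KafkaCustomPartitioner via partitioner.class, but the original post never shows that class. The following is a hypothetical sketch of what such a partitioner could look like; the class body and hashing strategy are my assumptions, not the original author's code. It only needs to implement org.apache.kafka.clients.producer.Partitioner:

package kafka.producer;

import java.util.Map;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

// Hypothetical sketch of the custom partitioner named in partitioner.class.
public class KafkaCustomPartitioner implements Partitioner {

    public void configure(Map<String, ?> configs) {
        // no partitioner-specific configuration in this sketch
    }

    public int partition(String topic, Object key, byte[] keyBytes, Object value,
                         byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null) {
            // records without a key all land in partition 0 in this sketch
            return 0;
        }
        // keyed records: hash the key so the same key always maps to the same partition
        return (Utils.murmur2(keyBytes) & 0x7fffffff) % numPartitions;
    }

    public void close() {
        // nothing to clean up
    }
}

Hashing the record key mirrors the behavior of Kafka's default partitioner; note that the example producer above sends an empty-string key, so under this sketch every record would hash to the same partition.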

Of course, once the client producer starts, the command-line consumer can also receive its messages. However, if you use Kerberos authentication, make sure the client and server clocks agree: Kerberos checks timestamps, and if the time on the two ends is inconsistent, the consumer will not receive any messages.
