Environment preparation
Create a topic
Command-line mode
Running producer and consumer instances
Client mode
Running producer and consumer code
1. Environment preparation
Description: For the Kafka cluster environment I took the easy route and used the company's existing cluster. Because it is secured, all of my operations are performed under my own user; if you run your own Kafka environment, you can simply use the Kafka administrator user throughout. Creating the topic, however, must be done under the Kafka administrator user.
1. Log in to a node in the Kafka cluster and switch to the Kafka administrator user
ssh 172.16.150.xx
su - kafka
2. Create Topic
Create topic command:
kafka-topics --zookeeper bdap-nn-1.cebbank.com,bdap-mn-1.cebbank.com,bdap-nn-2.cebbank.com:2181/kafka --create --topic topicname --partitions 4 --replication-factor 3
Query topic command:
kafka-topics --zookeeper bdap-nn-1.cebbank.com,bdap-mn-1.cebbank.com,bdap-nn-2.cebbank.com:2181/kafka --list
Note: the number of partitions is specified when the topic is created. Replace the ZooKeeper cluster address and topicname with your own, and make sure the topic name is not already in use.
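If you want to double-check the partition count and replica assignment afterwards, kafka-topics can also describe the topic (same ZooKeeper string as above):
kafka-topics --zookeeper bdap-nn-1.cebbank.com,bdap-mn-1.cebbank.com,bdap-nn-2.cebbank.com:2181/kafka --describe --topic topicname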
3. Grant my own user read and write permissions on the topic
Write permission:
kafka-acls --authorizer-properties zookeeper.connect=bdap-nn-1.cebbank.com,bdap-mn-1.cebbank.com,bdap-nn-2.cebbank.com:2181/kafka --add --allow-principal User:xx --operation Write --operation Describe --topic topicname
Read permission:
kafka-acls --authorizer-properties zookeeper.connect=bdap-nn-1.cebbank.com,bdap-mn-1.cebbank.com,bdap-nn-2.cebbank.com:2181/kafka --add --allow-principal User:xx --operation Read --topic topicname --group "*"
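To confirm the grants took effect, kafka-acls can also list the ACLs on the topic (same zookeeper.connect string as above):
kafka-acls --authorizer-properties zookeeper.connect=bdap-nn-1.cebbank.com,bdap-mn-1.cebbank.com,bdap-nn-2.cebbank.com:2181/kafka --list --topic topicname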
2. Command line
1. Switch to your own user.
2. Copy producer.properties and consumer.properties from the Kafka user's directory into your own directory (a sketch of what these files typically contain follows the commands below), then run the consumer instance:
kafka-console-consumer --zookeeper bdap-nn-1.cebbank.com,bdap-mn-1.cebbank.com,bdap-nn-2.cebbank.com:2181/kafka --consumer.config /home/username/consumer.properties --topic topicname --new-consumer --bootstrap-server bdap-nn-1.cebbank.com:9092 --from-beginning
Run the producer instance:
kafka-console-producer --broker-list bdap-nn-1.cebbank.com:9092,bdap-mn-1.cebbank.com:9092,bdap-nn-2.cebbank.com:9092 --topic topicname --producer.config /home/username/producer.properties
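What the copied properties files contain depends on the cluster, but on a Kerberos-secured cluster like this one they mainly carry the security settings, roughly along these lines (an assumption based on the settings used in the client code later, not the exact file contents):
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
and, in consumer.properties only, a group.id entry (for example group.id=console_test, a hypothetical name).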
Once both instances are started, type any character in the producer window; if the consumer window receives it, the instances are running correctly.
The command-line instances are very simple, just a send-and-receive exercise to get a first feel for how Kafka production and consumption work. In a real project, production and consumption are implemented in code.
3. Client consumer code
package kafka.consumer;

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyKafkaConsumer {

    private static final Logger log = LoggerFactory.getLogger(MyKafkaConsumer.class);

    public static void main(String[] args) throws InterruptedException {
        // Kerberos configuration; if the cluster is not authenticated, these two lines are not needed
        System.setProperty("java.security.krb5.conf", "d:/krb5.conf");
        System.setProperty("java.security.auth.login.config", "d:/lsz_jaas.conf");

        Properties props = new Properties();
        log.info("**********************************************");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.kerberos.service.name", "kafka");
        props.put("bootstrap.servers", "172.16.150.xx:9092,172.16.150.xx1:9092,172.16.150.xx2:9092");
        // Consumer group: if a topic has 4 partitions and the group has 2 consumers,
        // each consumer consumes 2 partitions
        props.put("group.id", "kafka_lsz_1");
        // Set enable.auto.commit to false to control offsets manually; here auto-commit stays enabled
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        // A larger session timeout gives the consumer more time to process the returned messages,
        // at the cost of delaying group rebalances
        props.put("session.timeout.ms", "30000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("auto.offset.reset", "earliest");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        // seekToBeginning() is unnecessary here: auto.offset.reset=earliest already reads from the start
        consumer.subscribe(Arrays.asList("lsztopic3"));

        /** Auto-commit offsets */
        while (true) {
            // After subscribing, the consumer joins the group on the first poll;
            // it must keep calling poll to stay in the group
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.topic() + "---" + record.partition());
                System.out.printf("offset = %d, key = %s%n", record.offset(), record.key());
            }
        }
    }
}
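The comments above mention setting enable.auto.commit to false in order to control offsets manually. A minimal sketch of that variant, assuming the same props as above with only the commit handling changed:

props.put("enable.auto.commit", "false");
KafkaConsumer<String, String> manualConsumer = new KafkaConsumer<String, String>(props);
manualConsumer.subscribe(Arrays.asList("lsztopic3"));
while (true) {
    ConsumerRecords<String, String> records = manualConsumer.poll(100);
    for (ConsumerRecord<String, String> record : records) {
        System.out.printf("offset = %d, key = %s%n", record.offset(), record.key());
    }
    // commit the offsets returned by this poll only after the records have been processed
    manualConsumer.commitSync();
}

With manual commits, a batch is not marked as consumed until commitSync() succeeds, so a consumer that crashes mid-batch re-reads that batch instead of silently skipping it.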
4. Client producer code
package kafka.producer;

import java.util.Properties;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyKafkaProducer {

    private static final Logger LOG = LoggerFactory.getLogger(MyKafkaProducer.class);
    private static final String TOPIC = "lsztopic3";

    public static void main(String[] args) throws Exception {
        System.setProperty("java.security.krb5.conf", "d:/krb5.conf");
        System.setProperty("java.security.auth.login.config", "d:/lsz_jaas.conf");

        Properties props = new Properties();
        props.put("bootstrap.servers", "172.16.150.xx:9092,172.16.150.xx1:9092,172.16.150.xx2:9092");
        props.put("producer.type", "async");
        // Retry count
        props.put("message.send.max.retries", "3");
        // Number of records sent per batch in async mode
        props.put("batch.num.messages", "200");
        // Batch (buffer pool) size in bytes
        props.put("batch.size", "16384");
        // Socket send buffer size in bytes
        props.put("send.buffer.bytes", "102400");
        props.put("request.required.acks", "1");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.kerberos.service.name", "kafka");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("partitioner.class", "kafka.producer.KafkaCustomPartitioner");

        KafkaProducer<String, String> kafkaProducer = new KafkaProducer<String, String>(props);

        String key = "";
        String value = "";
        ProducerRecord<String, String> record = new ProducerRecord<String, String>(TOPIC, key, value);
        kafkaProducer.send(record, new Callback() {
            public void onCompletion(RecordMetadata metadata, Exception e) {
                if (e != null)
                    e.printStackTrace();
                System.out.println("The offset of the record we just sent is: "
                        + metadata.partition() + " " + metadata.offset());
            }
        });

        Thread.sleep(5000);
        kafkaProducer.close();
    }
}
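The producer configuration above references kafka.producer.KafkaCustomPartitioner, but that class is not shown. A minimal sketch of what such a custom partitioner could look like with the new producer API (my own illustration, not the original class):

package kafka.producer;

import java.util.Map;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

public class KafkaCustomPartitioner implements Partitioner {

    public void configure(Map<String, ?> configs) {
        // no configuration needed for this sketch
    }

    // Route each record to a partition by the hash of its key; empty keys go to partition 0
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null || keyBytes.length == 0) {
            return 0;
        }
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }

    public void close() {
    }
}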
Of course, once the client producer is started, the command-line consumer can also receive its messages. If you use Kerberos authentication, however, pay close attention to the clocks on the client and the server: Kerberos checks for time skew, and if the two ends are out of sync the consumer will not receive any messages.
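Both code samples also rely on the d:/lsz_jaas.conf file set in java.security.auth.login.config. Its exact contents depend on your principal and keytab, but for a Kerberos-secured Kafka client it usually has this shape (the keytab path and principal below are placeholders, not the original values):

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/path/to/xx.keytab"
  principal="xx@EXAMPLE.COM";
};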