Kafka Getting Started and Spring Boot Integration
tags: blogs
[TOC]
Overview
Kafka is a high-performance message queue and a distributed stream-processing platform (where "stream" refers to a data stream). It is written in Java and Scala, was originally developed by LinkedIn and open-sourced in 2011, and is now maintained by Apache.
Application Scenarios
Here are some common application scenarios for Kafka.
Message queuing : Kafka can be used as a message queue for asynchronous decoupling, peak shaving (smoothing traffic spikes), and similar scenarios within a system.
Application monitoring : use Kafka to collect application and server health metrics, such as application logs and server CPU usage, I/O, memory, connection counts, TPS, and QPS, then process the metric data to build dashboards, graphs, and other visual monitoring systems. For example, many companies combine Kafka with ELK (Elasticsearch, Logstash, and Kibana) to build monitoring systems for their application services.
Stream processing : for example, forwarding data received by Kafka to a stream-computing framework such as Storm for processing.
Basic concepts
record (message): the basic unit of communication in Kafka; each message is called a record.
producer: the client that sends messages.
consumer: the client that consumes messages.
consumer group: every consumer belongs to a specific consumer group.
The relationship between consumers and consumer groups (see the sketch after this list):
- If consumers A, B, and C belong to the same consumer group, an incoming message is consumed by only one of A, B, and C.
- If A, B, and C belong to different consumer groups (say GA, GB, and GC), an incoming message can be consumed by all three of them.
topic: Kafka messages are categorized by topic, similar to tables in a database. Producers publish messages to a topic, and consumers subscribe to a topic to consume its messages.
partition: a topic is divided into one or more partitions, and those partitions can be distributed across different machines, effectively spreading the topic over multiple servers. Partitioning is how Kafka improves performance and throughput.
replica: one or more copies of a partition, whose role is to improve the availability of the partition.
offset: similar to an auto-increment int id in a database, each record in a Kafka partition is identified by a unique offset, which keeps increasing as data is written. Offsets let consumers know how far they have consumed, so they can resume from that position next time. For example:
consumer A consumes the record at offset 9, while consumer B consumes the record at offset 11.
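To make the consumer-group rules concrete, here is a minimal illustrative sketch using the same spring-kafka @KafkaListener annotation that appears later in this post (the groupId attribute is available since spring-kafka 1.3). The topic name, group ids, and listener methods are all hypothetical, and "demo-topic" is assumed to have at least two partitions:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class GroupDemoListeners {

    // Same group "group-a": the partitions of demo-topic are divided between
    // these two listeners, so each record is consumed by only ONE of them.
    @KafkaListener(topics = "demo-topic", groupId = "group-a")
    public void memberOne(String message) {
        System.out.println("group-a / member one: " + message);
    }

    @KafkaListener(topics = "demo-topic", groupId = "group-a")
    public void memberTwo(String message) {
        System.out.println("group-a / member two: " + message);
    }

    // Different group "group-b": this listener independently receives EVERY
    // record, in addition to whichever group-a member got it.
    @KafkaListener(topics = "demo-topic", groupId = "group-b")
    public void otherGroup(String message) {
        System.out.println("group-b: " + message);
    }
}
```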
Basic structure
The most basic structure of Kafka is the same as that of a typical message queue:
messages are sent by producers to the Kafka cluster, and consumers pull messages from the Kafka cluster to consume them.
Spring Boot integration overview
This integration follows the approach described in the official Spring Boot documentation. The general idea: configure the basic producer and consumer settings in Spring Boot's application.properties; after startup, Spring Boot creates a KafkaTemplate object that can be used to send messages to Kafka; then use the @KafkaListener annotation to consume messages from Kafka. The steps are as follows.
Integrated environment
- spring boot: 1.5.13
- spring-kafka: 1.3.5
- kafka: 1.0.1
Kafka Environment Construction
Start ZooKeeper first:
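A minimal sketch, assuming the standard Kafka 1.0.1 distribution layout and run from the Kafka installation root:

```bash
bin/zookeeper-server-start.sh config/zookeeper.properties
```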
Then start Kafka, replacing the IP below with your server's IP:
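Again a sketch under the same assumptions. The listener address in config/server.properties should match the bootstrap-servers value used in application.properties below (192.168.10.48:9092 in this post); pre-creating the topic is optional, since brokers allow automatic topic creation by default:

```bash
# In config/server.properties, advertise the broker on your server's IP:
#   listeners=PLAINTEXT://192.168.10.48:9092
bin/kafka-server-start.sh config/server.properties

# Optionally pre-create the topic used in this post:
bin/kafka-topics.sh --create --zookeeper 192.168.10.48:2181 \
  --replication-factor 1 --partitions 1 --topic jktopic
```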
Spring Boot and Spring for Apache Kafka integration steps
- First, add the Spring for Apache Kafka dependency to the POM:
```xml
<!-- kafka -->
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>1.3.5.RELEASE</version>
</dependency>
```
- Then add the following configuration to the application.properties file:
An explanation of each setting can be found in the Kafka section of the Spring Boot reference appendix; searching it for the keyword "kafka" will take you straight there.
```properties
server.port=8090

####### kafka
### producer configuration
spring.kafka.producer.bootstrap-servers=192.168.10.48:9092
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer

### consumer configuration
spring.kafka.consumer.bootstrap-servers=192.168.10.48:9092
spring.kafka.consumer.group-id=anuoapp
spring.kafka.consumer.enable-auto-commit=true
spring.kafka.consumer.auto-commit-interval=100
spring.kafka.consumer.max-poll-records=1
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.listener.concurrency=5
```
- Create the Kafka producer:
```java
package com.example.anuoapp.kafka;

import com.alibaba.fastjson.JSON;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;
import org.springframework.util.concurrent.ListenableFuture;

@Component
public class KafkaProducer {

    @Autowired
    KafkaTemplate<String, String> kafkaTemplate;

    public void kafkaSend() throws Exception {
        UserAccount userAccount = new UserAccount();
        userAccount.setCard_name("jk");
        userAccount.setAddress("cd");
        // Serialize the object to JSON and send it to the "jktopic" topic;
        // send() is asynchronous and returns a future for the send result.
        ListenableFuture send = kafkaTemplate.send("jktopic", "key", JSON.toJSONString(userAccount));
    }
}
```
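The producer references a UserAccount class that is not shown here. Below is a minimal sketch of it; the field names are inferred from the consumer output at the end of this article, so treat the exact shape as an assumption:

```java
package com.example.anuoapp.kafka;

// Hypothetical reconstruction of the UserAccount POJO; field names are
// inferred from the JSON printed by the consumer later in this post.
public class UserAccount {
    private String card_name;
    private String address;
    private boolean bind_qq;
    private boolean bind_weixin;
    private boolean passwordDirectCompare;

    public String getCard_name() { return card_name; }
    public void setCard_name(String card_name) { this.card_name = card_name; }

    public String getAddress() { return address; }
    public void setAddress(String address) { this.address = address; }

    public boolean isBind_qq() { return bind_qq; }
    public void setBind_qq(boolean bind_qq) { this.bind_qq = bind_qq; }

    public boolean isBind_weixin() { return bind_weixin; }
    public void setBind_weixin(boolean bind_weixin) { this.bind_weixin = bind_weixin; }

    public boolean isPasswordDirectCompare() { return passwordDirectCompare; }
    public void setPasswordDirectCompare(boolean passwordDirectCompare) { this.passwordDirectCompare = passwordDirectCompare; }
}
```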
- Create the Kafka consumer:
```java
package com.example.anuoapp.kafka;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class KafkaConsumer {

    public static final Logger logger = LoggerFactory.getLogger(KafkaConsumer.class);

    // Invoked for every record arriving on the "jktopic" topic.
    @KafkaListener(topics = {"jktopic"})
    public void jktopic(ConsumerRecord consumerRecord) throws InterruptedException {
        System.out.println(consumerRecord.offset());
        System.out.println(consumerRecord.value().toString());
        Thread.sleep(3000); // simulate slow processing
    }
}
```
- Create a REST API to invoke the Kafka message producer:
```java
package com.example.anuoapp.controller;

import com.example.anuoapp.kafka.KafkaProducer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/system")
public class SystemController {

    private Logger logger = LoggerFactory.getLogger(SystemController.class);

    @Autowired
    KafkaProducer kafkaProducer;

    // GET /api/system/Kafka/send publishes 10 messages to Kafka.
    @RequestMapping(value = "/Kafka/send", method = RequestMethod.GET)
    public void WarnInfo() throws Exception {
        int count = 10;
        for (int i = 0; i < count; i++) {
            kafkaProducer.kafkaSend();
        }
    }
}
```
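With the application running, the endpoint can also be exercised from the command line instead of Postman; for example (assuming a local deployment and the server.port=8090 configured above):

```bash
curl http://localhost:8090/api/system/Kafka/send
```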
- Use Postman (or the curl call above) to invoke the interface created in the previous step; the consumer then prints output like the following:
```
30
{"address":"cd","bind_qq":false,"bind_weixin":false,"card_name":"jk","passwordDirectCompare":false}
31
{"address":"cd","bind_qq":false,"bind_weixin":false,"card_name":"jk","passwordDirectCompare":false}
32
{"address":"cd","bind_qq":false,"bind_weixin":false,"card_name":"jk","passwordDirectCompare":false}
```
Finally
Congratulations! The Spring Boot and Kafka integration is complete.
The complete basic source code is available at:
Link: https://pan.baidu.com/s/1E2Lmbj9A9uruTXG54uPl_g Password: e6d6