Objective
It has been a while since I wrote anything; I did not seem to have much worth publishing. In the previous section, on Spring Cloud Bus, I suggested simply following the official documentation, since there was no need to lay groundwork there for a later chapter. This article begins Spring Cloud Stream. Much of it is translated from the official documentation, mixed with material from the *Spring Cloud Microservices* book and from DD's blog, so there may be some rough joins; I hope you will forgive them.
Quick Start
In about five minutes, this quick start shows how to create a Spring Cloud Stream application that receives messages from the message middleware and prints them to the console. There are two middleware options, RabbitMQ and Kafka; this article uses RabbitMQ.

This section condenses the official documentation into two steps:
- Create a new project with IDEA
- Add a message handler, then build and run

First, create a new project with IDEA

Open the project directory and create a new module named FirstStream. Its pom.xml is as follows:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.cnblogs.hellxz</groupId>
    <artifactId>FirstStream</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>
    <name>FirstStream</name>
    <description>Demo project for Spring Boot</description>
    <parent>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-parent</artifactId>
        <version>Dalston.SR5</version>
        <relativePath/>
    </parent>
    <dependencies>
        <!-- Spring Boot test -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <!-- stream-rabbit already includes binder-rabbit, so this one dependency is enough -->
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
            <version>2.0.0.RELEASE</version>
        </dependency>
    </dependencies>
</project>
```
Second, add a message handler, then build and run

Create the startup class under the `com.cnblogs.hellxz` package, as follows:
```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

@SpringBootApplication
@EnableBinding(Sink.class)
public class FirstStreamApp {

    private static final Logger logger = LoggerFactory.getLogger(FirstStreamApp.class);

    public static void main(String[] args) {
        SpringApplication.run(FirstStreamApp.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void receive(Object payload) {
        logger.info("Received: " + payload);
    }
}
```
- The `@EnableBinding(Sink.class)` annotation enables binding. It signals the framework to connect to the messaging middleware and to automatically create the destination (a queue, topic, or similar) bound to the `Sink.INPUT` channel.
- We added a handler method that listens for incoming messages. It demonstrates one of the framework's core features: automatically converting the incoming message payload to the specified type.
Start the project and open the RabbitMQ management page at http://localhost:15672. Under Connections you will see that our project has opened a connection, and under Queues a queue has been created; mine is input.anonymous.L92bTj6FRTyOC0QE-Pl0HA.

Open that queue, scroll down to Publish Message, enter `hello world` as the payload, and click Publish Message to send it.

Check the console and you will see `Received: hello world`.
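For the quick start to reach the broker, RabbitMQ must be running on localhost with its default settings. If your broker runs elsewhere, the connection can be overridden in application.properties; the values below are just the illustrative defaults, not settings the original project needed:

```properties
# Hypothetical example: override these only if RabbitMQ is not local with defaults
spring.rabbitmq.host=localhost
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest
```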
Spring Cloud Stream Introduction
Spring Cloud Stream is a framework for building message-driven microservice applications. It creates stand-alone, production-grade Spring applications from Spring Boot and uses Spring Integration to connect them to the message broker. It introduces the semantics of persistent publish-subscribe, along with the concepts of consumer groups and partitions.
Add the `@EnableBinding` annotation to your application to connect to the message broker immediately, and add `@StreamListener` to a method to receive stream processing events. The following example shows a sink application that receives external messages:
```java
@SpringBootApplication
@EnableBinding(Sink.class)
public class VoteRecordingSinkApplication {

    public static void main(String[] args) {
        SpringApplication.run(VoteRecordingSinkApplication.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void processVote(Vote vote) {
        votingService.recordVote(vote); // votingService is assumed to be injected elsewhere
    }
}
```
The `@EnableBinding` annotation takes one or more interfaces as parameters (here, the `Sink` interface). An interface typically declares input and output channels. Spring Cloud Stream provides three such interfaces, `Source`, `Sink`, and `Processor`, and you can also define interfaces of your own.
The following shows the `Sink` interface:
```java
public interface Sink {
    String INPUT = "input";

    @Input(Sink.INPUT)
    SubscribableChannel input();
}
```
The `@Input` annotation marks an input channel, through which received messages enter the application; the `@Output` annotation marks an output channel, through which outgoing messages leave it. Both annotations take a channel name as a parameter; if no name is provided, the name of the annotated method is used.
You can use these Spring Cloud Stream interfaces out of the box, or inject them with `@Autowired`, as the following test class shows:
```java
@RunWith(SpringJUnit4ClassRunner.class)
@SpringBootTest
public class LoggingConsumerApplicationTests {

    @Autowired
    private Sink sink;

    @Test
    public void contextLoads() {
        assertNotNull(this.sink.input());
    }
}
```
Main Concepts
Application Model
In Spring Cloud Stream, the application interacts with the binder through inputs and outputs, which are bound by our configuration; the binder in turn is responsible for interacting with the middleware. So we only need to understand how to interact with Spring Cloud Stream in order to use the message-driven approach with ease.
Abstract Binder (The binder abstraction)
Spring Cloud Stream ships binder implementations for Kafka and RabbitMQ, and also includes a TestSupportBinder for testing. You can write your own binder against the API as well.

Spring Cloud Stream also uses Spring Boot's auto-configuration, and the binder abstraction gives a Spring Cloud Stream application great flexibility. For example, we can choose between Kafka and RabbitMQ purely through parameters in application.yml or application.properties, without modifying our code.
In the project we tested earlier we never touched application.properties; the auto-configuration came from Spring Boot. With a binder it is easy to connect to the middleware, and the destination (a Kafka topic, a RabbitMQ exchange) can be changed simply by modifying `spring.cloud.stream.bindings.input.destination` in application.yml. Switching between the two brokers does not require modifying a single line of code.
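As a concrete illustration of that point, one line in application.properties (or its yml equivalent) is enough to rebind the quick start's input channel to a named destination; the destination name below is made up for the example:

```properties
# Bind the "input" channel to a destination named "myTopic" (hypothetical name).
# On RabbitMQ this maps to an exchange, on Kafka to a topic; the Java code is unchanged.
spring.cloud.stream.bindings.input.destination=myTopic
```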
Publish-Subscribe (persistent publish-subscribe support)

In Spring Cloud Stream's classic publish-subscribe model, producers publish messages to a shared topic, and consumers receive them by subscribing to that topic.

Here a topic corresponds to a destination in Spring Cloud Stream (a Kafka topic, a RabbitMQ exchange).

The official documentation goes rather deep into the underlying principles of this part; I have not written them up here, so please refer to the official documentation.
Consumer group (Consumer Groups)
Although the publish-subscribe model makes it easy to connect applications through a shared topic, the ability to scale a service by running multiple instances of the same application is just as important. If every instance consumed the data, messages would be processed repeatedly; normally we want only one instance of the same application to consume each message. Consumer groups solve this: the instances of an application are placed in a competing consumer group, and only one instance in the group consumes any given message.

The consumer group is configured with `spring.cloud.stream.bindings.<channelName>.group`.
Here is an example from the DD blog: messages sent over the network are delivered through the topic to consumer groups identified by group name. You can specify the consumer group with `spring.cloud.stream.bindings.input.group=Group-A` or `spring.cloud.stream.bindings.input.group=Group-B`.
Every group that subscribes to the topic receives a copy of each published message, and only one member of each group receives any given message. If no group is specified, the application is assigned an anonymous consumer group, which participates in the publish-subscribe relationship alongside all the other groups. PS: in other words, a channel that does not specify a consumer group consumes messages in its own anonymous group next to the named groups, so the repeated-consumption problem remains.
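To make the delivery semantics concrete, here is a small plain-Java simulation; this is my own sketch of the behavior, not Spring Cloud Stream code. Every group subscribed to a topic gets one copy of each published message, handed to exactly one of its members:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model: each group receives every message exactly once,
// delivered to a single competing member of that group.
public class ConsumerGroupDemo {

    // group name -> member instance ids
    static Map<String, List<String>> groups = new LinkedHashMap<>();

    // Deliver one published message: pick a single receiver per group.
    static Map<String, String> publish(String message) {
        Map<String, String> deliveries = new LinkedHashMap<>();
        for (Map.Entry<String, List<String>> group : groups.entrySet()) {
            List<String> members = group.getValue();
            // real brokers balance differently (round-robin, etc.); hashing is just a stand-in
            String receiver = members.get(Math.abs(message.hashCode() % members.size()));
            deliveries.put(group.getKey(), receiver);
        }
        return deliveries;
    }

    public static void main(String[] args) {
        groups.put("Group-A", Arrays.asList("A-1", "A-2"));
        groups.put("Group-B", Arrays.asList("B-1"));
        Map<String, String> result = publish("hello");
        // two groups -> two deliveries: one per group, never one per instance
        System.out.println(result.size());
    }
}
```

Note how the number of deliveries depends only on the number of groups, not on the number of instances; an instance without a named group would behave as a one-member anonymous group and add an extra delivery.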
Consumer type (Consumer Types)
1) Two consumer types are supported:

- Message-driven (sometimes called asynchronous)
- Polled (sometimes called synchronous)

Before Spring Cloud Stream 2.0 only the message-driven, asynchronous type existed: a message is delivered as soon as it is available and a thread is ready to process it. When you want to control the rate at which messages are processed, you may need the synchronous, polled consumer type.
2) Persistence
In general, all consumer-group subscriptions to a topic are durable, except anonymous ones. The binder implementation ensures that group subscriptions are persistent: once at least one group has subscribed to a topic, messages published to that topic are delivered to the group whether or not its members are running.

Note: anonymous subscriptions are inherently non-durable, and some binder implementations (such as RabbitMQ) can also create non-durable group subscriptions.

In general, when binding an application to a destination it is best to specify a consumer group. When you scale out a Spring Cloud Stream application, you must specify a consumer group for each of its input bindings. Doing so prevents the application's instances from receiving duplicate messages (unless that behavior is desired, which is unusual).
Partition support (partitioning supports)
A consumer group guarantees that a message is not consumed repeatedly, but when the group has several instances it cannot guarantee that related messages are always handled by the same instance. Partitioning addresses this: it ensures that data sharing a common characteristic is processed by the same consumer instance. That is the narrow view; the communication broker (e.g. a broker topic) can also be understood as doing similar partitioning underneath. Spring Cloud Stream's notion of partitioning is abstract, so it can be used even with binder implementations that do not natively support partitioning (for example, RabbitMQ).
Note: To use partition processing, you must configure both the producer and the consumer.
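Conceptually, the producer evaluates a partition key for each message and maps it to one of the configured partitions, so equal keys always land on the same consumer instance. The following plain-Java sketch is my own illustration of that idea, not framework code:

```java
public class PartitionRouting {

    // Map a partition key to a partition index in [0, partitionCount).
    // Equal keys always map to the same partition, which is the guarantee
    // partitioning gives: same data characteristic -> same consumer instance.
    static int selectPartition(Object key, int partitionCount) {
        return Math.abs(key.hashCode() % partitionCount);
    }

    public static void main(String[] args) {
        // "user-42" is a made-up example key
        int p1 = selectPartition("user-42", 3);
        int p2 = selectPartition("user-42", 3);
        System.out.println(p1 == p2); // always true: stable routing
    }
}
```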
Programming models (programming model)
To understand the programming model, you need to familiarize yourself with the following core concepts:
- Destination Binders (destination binder): the components responsible for integrating with the external messaging system
- Destination Bindings (destination binding): the bridge between the external messaging system and the application's producers and consumers (created by the destination binders)
- Message: the canonical data structure producers and consumers use to communicate through destination binders
Destination Binders (destination binder):
Destination binders are the extension components through which Spring Cloud Stream integrates with external messaging middleware, providing the configuration and implementation needed for that integration. They take care of routing and connecting messages to producers and consumers, data type conversion, invoking user code, and more.

Although binders handle a great deal for us, we still have to configure them; I will come back to this later.
Destination Bindings (Destination binding) :
As mentioned earlier, destination bindings are the bridge between the external messaging middleware and the application's producers and consumers.

A destination binding is defined by applying the `@EnableBinding` annotation to a configuration class; the annotation is itself meta-annotated with `@Configuration` and triggers Spring Cloud Stream's basic configuration.

The following example shows a fully configured, working Spring Cloud Stream application that receives a message from INPUT, converts it to a String, prints it to the console, converts it to upper case, and sends it back out through OUTPUT.
```java
@SpringBootApplication
@EnableBinding(Processor.class)
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }

    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public String handle(String value) {
        System.out.println("Received: " + value);
        return value.toUpperCase();
    }
}
```
The method's return value is forwarded to the other message channel by the `@SendTo` annotation. Because no channel is defined to receive it, the message is lost; the workaround is to create a new interface, as follows:
```java
public interface MyPipe {
    // Option 1
    @Input(Processor.OUTPUT) // Processor.OUTPUT is used so both sides share the same channel name
    SubscribableChannel input();

    // ===== Option 2 (use one of the two) =====
    // String INPUT = "output";
    //
    // @Input(MyPipe.INPUT)
    // SubscribableChannel input();
}
```
Then add the following method below the previous handler, and change the annotation on the main class to `@EnableBinding({Processor.class, MyPipe.class})`:
```java
@StreamListener(MyPipe.INPUT)
public void handleMyPipe(String value) {
    System.out.println("Received: " + value);
}
```
Spring Cloud Stream provides default implementations for three bound message channel interfaces:

- Sink: identifies the contract of a message consumer by naming the destination the messages are consumed from
- Source: identifies the contract of a message producer, the counterpart of Sink
- Processor: combines Sink and Source, identifying both a message producer and a consumer
Their source code is:
```java
public interface Sink {
    String INPUT = "input";

    @Input("input")
    SubscribableChannel input();
}

public interface Source {
    String OUTPUT = "output";

    @Output("output")
    MessageChannel output();
}

public interface Processor extends Source, Sink {
}
```
The input and output channels are defined in `Sink` and `Source` by the `@Input` and `@Output` annotations respectively, with the member constants of the two interfaces supplying the channel names; `Processor` has both channels because it extends both interfaces.

Note: when you have multiple channels, an input and an output channel must not share the same name, or receiving and sending will fail.

Following the source above we can define our own input and output channels. An input channel must return a `SubscribableChannel` object; this interface extends `MessageChannel` and adds the methods for maintaining the channel's subscribers. An output channel must return a `MessageChannel` object, which defines the method for sending messages to the channel.
Custom message channel send and receive
As described above, we can create our own bound channels. If you already implemented the MyPipe interface earlier, you can use it directly.

- In the same package as the main class, create a MyPipe interface, implemented as follows:
```java
package com.cnblogs.hellxz;

import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.messaging.SubscribableChannel;

public interface MyPipe {
    // Option 1
    // @Input(Source.OUTPUT) // Source.OUTPUT is "output"; a custom constant works the same way
    // SubscribableChannel input(); // an @Input channel is subscribed through SubscribableChannel

    // ======== use one of the two ========
    // Option 2
    String INPUT = "output";

    @Input(MyPipe.INPUT)
    SubscribableChannel input();
}
```
Using Source.OUTPUT or the second option is equivalent here: we simply send messages to the channel named output, and on the other end listen to that same output channel to receive the data.
- Extend the main class with a method that listens on the output channel:
```java
@StreamListener(MyPipe.INPUT)
public void receiveFromMyPipe(Object payload) {
    logger.info("Received: " + payload);
}
```
Replace the `@EnableBinding` on the main class with `@EnableBinding({Sink.class, MyPipe.class})` to add the MyPipe binding.
Under test/java, create a new test class in the `com.cnblogs.hellxz` package, as follows:
```java
package com.cnblogs.hellxz;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@EnableBinding(value = {Source.class})
@SpringBootTest
public class TestSendMessage {

    // Injecting the interface (rather than a MessageChannel) means we call its
    // method to obtain the channel.
    @Autowired
    private Source source;

    @Test
    public void testSender() {
        source.output().send(MessageBuilder.withPayload("Message from MyPipe").build());
        // If a MessageChannel were injected instead (only one channel exists here,
        // produced by the Source binding), the call would be:
        // messageChannel.send(MessageBuilder.withPayload("Message from MyPipe").build());
    }
}
```
Start the main class, clear the output, run the test class, and you will see the message logged in the main class's console: `Received: Message from MyPipe`.

Alternatively, we could inject the message channel itself to obtain a MessageChannel instance and send the message through it directly.
Problems that may occur during pipeline injection
Injecting a message channel directly is straightforward, but it is also easy to get wrong: when an interface exposes multiple channels, each of them is returned as a MessageChannel, so an `@Autowired` injection will often fail because Spring cannot decide which instance to inject. We can name the desired message channel with `@Qualifier`, for example:
Create a pipe interface with multiple output channels in the main class's package:
```java
/**
 * A pipe with multiple output channels
 */
public interface MutiplePipe {
    @Output("output1")
    MessageChannel output1();

    @Output("output2")
    MessageChannel output2();
}
```
Create a test class
```java
@RunWith(SpringRunner.class)
@EnableBinding(value = {MutiplePipe.class}) // enable binding
@SpringBootTest
public class TestMultipleOutput {

    @Autowired
    private MessageChannel messageChannel;

    @Test
    public void testSender() {
        // send a message to the channel
        messageChannel.send(MessageBuilder.withPayload("produce by multiple pipe").build());
    }
}
```
Run the test class and you will hit the non-unique bean error just described; the channel cannot be injected:
Caused by: org.springframework.beans.factory.NoUniqueBeanDefinitionException: No qualifying bean of type 'org.springframework.messaging.MessageChannel' available: expected single matching bean but found 6: output1,output2,input,output,nullChannel,errorChannel
Add `@Qualifier("output1")` next to the `@Autowired` annotation and the test starts normally.

From the error above we can also see that each MessageChannel bean is registered under the name of its message channel.

We did not add a listener for this channel here; the goal was only to test and demonstrate the problem.
Common configuration settings for consumer groups and partitions
Set the consumer group and topic for a consumer:

- Set the consumer group: `spring.cloud.stream.bindings.<channelName>.group=<groupName>`
- Set the topic: `spring.cloud.stream.bindings.<channelName>.destination=<topicName>`

Specify the topic for a producer channel: `spring.cloud.stream.bindings.<channelName>.destination=<topicName>`
Enable partitioning on the consumer and specify the instance count and instance index:

- Enable consumer partitioning: `spring.cloud.stream.bindings.<channelName>.consumer.partitioned=true`
- Number of consumer instances: `spring.cloud.stream.instanceCount=<instance count>`
- Instance index: `spring.cloud.stream.instanceIndex=<index>` (sets the index of the current instance, starting from 0)
Specify the partition key on the producer:

- Partition key expression: `spring.cloud.stream.bindings.<channelName>.producer.partitionKeyExpression=<partition key expression>`
- Number of partitions: `spring.cloud.stream.bindings.<channelName>.producer.partitionCount=<partition count>`
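Putting the settings above together, a partitioned setup might look like the sketch below; the channel names, topic, group, key expression, and counts are all made-up example values:

```properties
# Producer side (hypothetical values): partition by the message's id header into 2 partitions
spring.cloud.stream.bindings.output.destination=orders
spring.cloud.stream.bindings.output.producer.partitionKeyExpression=headers['id']
spring.cloud.stream.bindings.output.producer.partitionCount=2

# Consumer side (each of the 2 instances sets its own index, 0 or 1)
spring.cloud.stream.bindings.input.destination=orders
spring.cloud.stream.bindings.input.group=order-group
spring.cloud.stream.bindings.input.consumer.partitioned=true
spring.cloud.stream.instanceCount=2
spring.cloud.stream.instanceIndex=0
```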
References for this article
"Spring Cloud microservices" and the author's blog
https://www.jianshu.com/p/fb7d11c7f798
73743148
http://www.laomn.com/article/item/33322
Official documents
Spring Cloud (15) Stream Introduction, Key concepts and custom message sending and receiving