The Pipes and Filters pattern decomposes a task that performs complex processing into a series of discrete elements that can be reused. This pattern can improve performance, scalability, and reusability by allowing the task elements that perform the processing to be deployed and scaled independently.

Problem
An application may be required to perform a variety of complex processing tasks on the information it handles. A simple but inflexible approach is to implement this processing as a single monolithic module. However, if parts of the same processing are needed elsewhere in the application, this approach reduces the opportunities for refactoring, reuse, and optimization.
Figure 1 below illustrates processing data with separate modules. An application receives and processes data from two sources. The data from each source is handled by a separate module that performs a series of tasks to transform the data before passing the result to the application's business logic.
Figure 1.
A solution implemented using separate modules
Some of the tasks performed by these separate modules are functionally similar, but because the modules were designed separately, the code that implements each task is tightly coupled within its module. The duplicated parts cannot be reused, which limits extensibility and reusability.
Furthermore, the processing tasks each module performs, or the deployment requirements of each task, may change as business requirements evolve. Some tasks may be compute intensive and benefit from running on powerful hardware, while others may not require such expensive resources. Additional processing may be required in the future, or the order in which the tasks execute may change. A solution is needed that addresses these issues while increasing the opportunities for code reuse.

Solution
A good solution is to decompose the processing required for each data stream into a set of discrete components (or filters), each of which performs a single task. By standardizing the format of the data that each component receives and emits, these filters can be combined into a single pipeline. This avoids duplicated code, and when requirements change, filters can easily be removed, replaced, or supplemented with additional components to implement new functionality. Figure 2 shows an example of this structure.
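To illustrate the idea outside any specific messaging technology (the examples later in this article use C# and Service Bus queues), here is a minimal in-process sketch in Python; the names `make_pipeline`, `clean`, and `tag` are hypothetical. Because every filter consumes and emits messages in the same standardized shape, filters can be chained, reordered, or replaced without touching their neighbors.

```python
from typing import Callable

# A filter is any function mapping one message to one message;
# standardizing on a single message shape lets filters compose freely.
Filter = Callable[[dict], dict]

def make_pipeline(*filters: Filter) -> Filter:
    """Compose filters into one pipeline: each filter's output feeds the next."""
    def pipeline(message: dict) -> dict:
        for f in filters:
            message = f(message)
        return message
    return pipeline

# Two small example filters with identical input/output contracts.
def clean(msg: dict) -> dict:
    return {**msg, "text": msg["text"].strip().lower()}

def tag(msg: dict) -> dict:
    return {**msg, "tagged": True}

process = make_pipeline(clean, tag)
print(process({"text": "  Hello World  "}))
# {'text': 'hello world', 'tagged': True}
```

Swapping, removing, or inserting a filter only requires changing the argument list passed to `make_pipeline`, which is the reuse benefit the pattern is after.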
Figure 2
A solution implemented using pipes and filters
The time taken to process a single request depends on the speed of the slowest filter in the pipeline. In particular, when a large number of requests are sent, one or more filter components can become a performance bottleneck for the system. A key advantage of the pipeline structure is the opportunity to run parallel instances of a slow filter, enabling the system to spread the load and improve throughput.
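As a small illustration of this load-balancing idea (a sketch only; the thread pool here stands in for multiple deployed instances of the slow filter), running several instances of the slowest stage in parallel raises overall throughput:

```python
from concurrent.futures import ThreadPoolExecutor

def slow_filter(item: int) -> int:
    # Stand-in for the compute-intensive, slowest stage of the pipeline.
    return item * item

# Four parallel instances of the slow filter share the incoming load;
# map() returns results in input order, so downstream stages are unaffected.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(slow_filter, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```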
The filters that make up a pipeline can run on entirely different machines, taking advantage of the elasticity that many cloud environments provide to scale independently. A compute-intensive filter can run on high-performance hardware, while other, less demanding filters can run on cheaper commodity hardware. The filters do not even have to be in the same datacenter or geographical location; each element of the pipeline can run in an environment close to the resources it requires.
Figure 3 shows an example of a pipeline processing a data stream:
Figure 3
Load balancing of components in a pipeline
If the input and output of a filter are structured as a stream, multiple filters can process data in parallel. The first filter in the pipeline can start its work and begin emitting its results, which are passed directly to the next filter in the sequence before the first filter has completed its work.
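A sketch of this streaming behavior using Python generators (the filter names are hypothetical): each filter yields results item by item, so the downstream filter starts working before the upstream filter has consumed all of its input.

```python
from typing import Iterable, Iterator

def parse(lines: Iterable[str]) -> Iterator[dict]:
    # First filter: emits each parsed record as soon as it is ready.
    for line in lines:
        yield {"value": int(line)}

def double(records: Iterable[dict]) -> Iterator[dict]:
    # Second filter: consumes records one at a time as they arrive.
    for rec in records:
        yield {**rec, "value": rec["value"] * 2}

# Chaining the generators gives item-by-item flow through the pipe:
# `double` receives its first record before `parse` has read every line.
stream = double(parse(["1", "2", "3"]))
print([r["value"] for r in stream])  # [2, 4, 6]
```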
Another benefit is the resilience that the Pipes and Filters pattern can provide. If a filter fails, or the machine it is running on becomes unavailable, the pipeline can reschedule the work that filter was performing and direct it to another instance of the component. The failure of one filter does not necessarily cause the failure of the entire pipeline.
Using the Pipes and Filters pattern together with the Compensating Transaction pattern provides an alternative way to implement distributed transactions. A distributed transaction can be decomposed into separate, compensable tasks, each of which can be implemented as a filter that also applies the Compensating Transaction pattern. The filters in a pipeline can be implemented as separate hosted tasks located physically close to the data they maintain, reducing network costs.

Issues to consider when implementing the Pipes and Filters pattern
When considering an implementation of the Pipes and Filters pattern, developers should think about the following points:

Complexity. The increased flexibility this pattern provides comes at a corresponding cost in complexity, especially when the filters in a pipeline are distributed across different servers.

Reliability. Use an infrastructure that ensures the data flowing between filters in the pipeline is not lost.

Idempotency. If a filter in the pipeline fails after receiving a message and the work is rescheduled to another instance of the filter, part of the work may already have been completed. If that work updates some global state (such as information stored in a database), the same update could be repeated, causing inconsistency. A similar problem can occur if a filter fails after posting its results to the next filter, but before signaling that it has completed processing the message. In these cases, the same work could be repeated by another instance of the filter, and the same results could be posted twice. Filters in a pipeline should therefore be designed to be idempotent. For more information, see Jonathan Oliver's blog post on idempotency.
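One common way to make a filter's state updates idempotent is sketched below, under the assumption that each message carries a unique identifier (in a real system the set of processed IDs would live in durable storage, as noted in the comments):

```python
processed_ids = set()  # would be durable storage in a real deployment

def apply_once(message_id: str, update) -> bool:
    """Apply a global-state update at most once per message, so a
    redelivered message does not repeat the update."""
    if message_id in processed_ids:
        return False  # already applied; safe to acknowledge and skip
    update()
    processed_ids.add(message_id)
    return True

counter = {"n": 0}

def increment():
    counter["n"] += 1

apply_once("msg-1", increment)
apply_once("msg-1", increment)  # redelivery: the update is not repeated
print(counter["n"])  # 1
```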
Repeated messages. If a filter fails after posting a message to the next stage of the pipeline, another instance of the filter may be started (as in the idempotency considerations above) and post a second copy of the same message to the pipeline. This could cause two instances of the same message to be passed to the next filter. To avoid this, the pipeline should detect and eliminate duplicate messages.
If the pipeline is implemented using message queues, such as Windows Azure Service Bus queues, the message queuing infrastructure may provide automatic duplicate message detection and removal.
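Where the queuing infrastructure does not provide duplicate detection, the pipeline can perform it itself. A minimal sketch, assuming each message carries a unique `id` field:

```python
from typing import Iterable, Iterator

def deduplicate(messages: Iterable[dict]) -> Iterator[dict]:
    """Drop messages whose id has already been seen, emulating the
    automatic duplicate detection a queue service may offer."""
    seen = set()
    for msg in messages:
        if msg["id"] in seen:
            continue  # duplicate of an earlier message; discard it
        seen.add(msg["id"])
        yield msg

inbox = [{"id": "a", "v": 1}, {"id": "a", "v": 1}, {"id": "b", "v": 2}]
print([m["id"] for m in deduplicate(inbox)])  # ['a', 'b']
```

In practice the `seen` set would need to be bounded (for example, by tracking IDs only within a time window), since a long-lived pipeline cannot remember every message forever.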
Context and state. In a pipeline, each filter essentially runs in isolation and should make no assumptions about how it was invoked. This means that each message must carry sufficient context for the filter to process it. This context may comprise a considerable amount of state information.

When to use this pattern
Consider using the Pipes and Filters pattern in the following scenarios:

When the processing required by the application can be decomposed into a series of discrete, independent steps.
When the processing steps performed by the application have different scalability requirements.
Note that it is also possible to group filters that should scale together into the same process. For more information, see the Compute Resource Consolidation pattern.
When flexibility is required. The pattern allows the application to reconfigure the processing steps and their execution order, and to add or remove processing units as needed. Also consider this pattern when you need to maximize server utilization, or when you need a reliable solution that minimizes the effects of a failure in any individual processing unit.
The Pipes and Filters pattern may not be useful in the following scenarios: when some of the processing steps are not idempotent, or when the steps must be executed as part of the same transaction. The pattern is also inappropriate when the amount of context or state information that each processing unit requires makes the approach inefficient.

Example
Developers can implement the Pipes and Filters pattern using message queues. A message queue receives unprocessed messages, and a component implementing a filter listens on that queue. When a new message arrives, the filter performs its work, then completes (consumes) the message and posts the processed result to the next queue. Another task listens on that queue for messages to process, and so on, until every filter in the pipeline has run.
Figure 4
Implementing the Pipes and Filters pattern using message queues
Developers building on Windows Azure can use Windows Azure Service Bus queues to provide a reliable and scalable queuing mechanism for the solution. The ServiceBusPipeFilter class below is an example. It shows how a filter receives a message from an input queue, processes it, and posts the result to another queue.
public class ServiceBusPipeFilter
{
  ...
  private readonly string inQueuePath;
  private readonly string outQueuePath;
  ...
  private QueueClient inQueue;
  private QueueClient outQueue;
  ...

  public ServiceBusPipeFilter(..., string inQueuePath, string outQueuePath = null)
  {
    ...
    this.inQueuePath = inQueuePath;
    this.outQueuePath = outQueuePath;
  }

  public void Start()
  {
    ...
    // Create the outbound filter queue if it does not exist.
    ...
    this.outQueue = QueueClient.CreateFromConnectionString(...);
    ...
    // Create the inbound and outbound queue clients.
    this.inQueue = QueueClient.CreateFromConnectionString(...);
  }

  public void OnPipeFilterMessageAsync(
    Func<BrokeredMessage, Task<BrokeredMessage>> asyncFilterTask, ...)
  {
    ...
    this.inQueue.OnMessageAsync(async (msg) =>
    {
      ...
      // Process the filter and send the output to the
      // next queue in the pipeline.
      var outMessage = await asyncFilterTask(msg);

      // Send the message from the filter processor
      // to the next queue in the pipeline.
      if (outQueue != null)
      {
        await outQueue.SendAsync(outMessage);
      }

      // Note: There is a chance that the same message could be sent twice
      // or that a message may be processed by an upstream or downstream
      // filter at the same time. This would happen in a situation where
      // processing of a message was completed, it was sent to the next
      // pipe/queue, and then failed to complete when using the PeekLock
      // method. Idempotent message processing and concurrency should be
      // considered in a real-world implementation.
    }, options);
  }

  public async Task Close(TimeSpan timespan)
  {
    // Pause the processing threads.
    this.pauseProcessingEvent.Reset();

    // There is no clean approach for waiting for the threads to complete
    // the processing. This example simply stops any new processing, waits
    // for the existing thread to complete, then closes the message pump
    // and finally returns.
    Thread.Sleep(timespan);

    this.inQueue.Close();
    ...
  }
  ...
}
The Start method in the ServiceBusPipeFilter class connects to the input and output queues, and the Close method releases these connections. The OnPipeFilterMessageAsync method performs the actual processing of messages; its asyncFilterTask parameter specifies the processing to perform. OnPipeFilterMessageAsync waits for messages on the input queue, runs asyncFilterTask over each message received, and posts the result to the output queue. The queues themselves are specified in the constructor.
The general approach is to implement each filter as a separate unit of work. Each unit of work can then be scaled independently, depending on the complexity of the business processing it performs and the resources it needs. Furthermore, throughput can be increased by running instances of a filter in parallel. The following code shows a Windows Azure worker role named PipeFilterARoleEntry.
public class PipeFilterARoleEntry : RoleEntryPoint
{
  ...
  private ServiceBusPipeFilter pipeFilterA;

  public override bool OnStart()
  {
    ...
    this.pipeFilterA = new ServiceBusPipeFilter(...,
      Constants.QueueAPath, Constants.QueueBPath);

    this.pipeFilterA.Start();
    ...
  }

  public override void Run()
  {
    this.pipeFilterA.OnPipeFilterMessageAsync(async (msg) =>
    {
      // Clone the message and update it.
      // Properties set by the broker (deliver count, enqueue time, ...)
      // are not cloned and must be copied over if required.
      var newMsg = msg.Clone();

      await Task.Delay(500); // DOING WORK

      Trace.TraceInformation("Filter A processed message:{0} at {1}",
        msg.MessageId, DateTime.UtcNow);

      newMsg.Properties.Add(Constants.FilterAMessageKey, "Complete");

      return newMsg;
    });
    ...
  }
  ...
}
This role contains a ServiceBusPipeFilter object. The OnStart method connects to the input queue from which messages are received and the output queue to which completed messages are posted (the queue paths are defined in the Constants class). The Run method calls OnPipeFilterMessageAsync to perform some processing on each received message (in this example, it simply waits for a period of time). When processing is complete, a new message is constructed (in this example, just by adding a property) and posted to the output queue.
Developers can define further RoleEntryPoint implementations based on this sample code. Each implementation differs only in the processing performed in its Run method. Connecting these different units of work together then yields an implementation of the Pipes and Filters pattern, as in the following code:
public class FinalReceiverRoleEntry : RoleEntryPoint
{
  ...
  // Final queue/pipe in the pipeline from which to process data.
  private ServiceBusPipeFilter queueFinal;

  public override bool OnStart()
  {
    ...
    // Set up the queue.
    this.queueFinal = new ServiceBusPipeFilter(..., Constants.QueueFinalPath);
    this.queueFinal.Start();
    ...
  }

  public override void Run()
  {
    this.queueFinal.OnPipeFilterMessageAsync(async (msg) =>
    {
      await Task.Delay(...); // DOING WORK

      // The pipeline message was received.
      Trace.TraceInformation("Pipeline Message Complete - FilterA:{0} FilterB:{1}",
        msg.Properties[Constants.FilterAMessageKey],
        msg.Properties[Constants.FilterBMessageKey]);

      return null;
    });
    ...
  }
  ...
}
Related patterns
The following patterns may also be relevant when implementing the Pipes and Filters pattern:

Competing Consumers pattern. A pipeline may contain one or more instances of each filter component, which is a useful approach when running filter instances in parallel. Multiple instances compete for the incoming load, improving throughput, but they must ensure that the same message is not processed more than once. The Competing Consumers pattern describes these details.

Compute Resource Consolidation pattern. It may be appropriate to group filters that should scale together into a single processing unit. The Compute Resource Consolidation pattern describes the considerations and trade-offs involved in more detail.

Compensating Transaction pattern. A filter can be implemented as an operation that may need to be reversed at some point, or that has a compensating operation to restore state to a previous version. The Compensating Transaction pattern describes how such components can be implemented to maintain eventual consistency.