Python Advanced Programming: Generators (Generator) and Coroutines (Coroutine), Part II: Coroutines, Pipelines (pipeline), and Dataflow (data flow)

In the previous two articles we covered what generators and coroutines are; in this article we describe how coroutines can be used to simulate pipelines (pipeline) and control dataflow (data flow).

Coroutines can be used to simulate pipeline behavior: by chaining multiple coroutines together we form a pipe, and data is passed from one coroutine to the next with the send() function:

source --send()--> coroutine --send()--> coroutine --send()--> sink

But where does the data in the pipe come from? We need a data source, that is, a producer, and this producer drives the operation of the entire pipe.

Typically the source simply supplies the data that drives the whole pipe; it is not itself a coroutine, and it usually follows this pattern:
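A minimal sketch (produce_an_item() and done are placeholders for whatever actually generates the data):

def source(target):
    while not done:               # keep running until the data is exhausted
        item = produce_an_item()  # placeholder: obtain the next piece of data
        target.send(item)         # push it into the pipe
    target.close()                # optionally signal the end of the stream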

Here target is a coroutine; each call to target.send() pushes a piece of data through the pipe.

And just as a pipeline has a starting point, it must also have a sink (an end point, where the pipe terminates).

The sink collects and processes the data that the coroutines pass along. The usual pattern for a sink is:
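A minimal sketch (process_item() is a placeholder for whatever the sink does with each item):

@coroutine
def sink():
    while True:
        item = (yield)      # suspend here until the pipeline sends an item
        process_item(item)  # placeholder: handle the received item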

In the previous article on generators, we used generators to implement the Unix tail -f command and the tail -f | grep pipeline; here we implement the same two commands with coroutines.

Let's first look at the unix_tail_f_co() function, which acts as the source:

import time

# A source that mimics Unix "tail -f"
def unix_tail_f_co(thefile, target):
    """
    target is a coroutine
    """
    thefile.seek(0, 2)  # jump to the end of the file
    while True:
        line = thefile.readline()
        if not line:
            time.sleep(0.1)
            continue
        # send the data to the coroutine for processing
        target.send(line)

In the code above, target is a coroutine. Each time the function reads a line, it calls target.send() to pass that line to target, which handles the next stage of processing.

Now look at the printer_co() function, which acts as the sink. This sink is simple: it just prints the data it receives.

# A sink that just prints the lines
@coroutine
def printer_co():
    while True:
        # suspend here, waiting to receive data
        line = (yield)
        print(line, end='')

The @coroutine function decorator was defined in the previous article introducing coroutines. As the code shows, printer_co(), acting as the sink, runs an infinite loop: at the line = (yield) statement the function suspends until data arrives, prints each line it receives, and then suspends again to wait for the next one.
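For reference, a minimal sketch of that decorator, assuming (as in the standard pattern) that it simply primes the coroutine by advancing it to its first yield:

def coroutine(func):
    """Create the coroutine and advance it to its first yield."""
    def wrapper(*args, **kwargs):
        cr = func(*args, **kwargs)
        next(cr)  # prime it so that .send() can be called immediately
        return cr
    return wrapper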

Now we can combine unix_tail_f_co() and printer_co() to implement the tail -f command:

f = open("access-log")
unix_tail_f_co(f, printer_co())

The code first opens the file f, which serves as the data source, then passes f and printer_co() to unix_tail_f_co(), forming a pipeline. In this pipeline, however, the data is sent directly to the sink, printer_co(), without passing through any intermediate coroutines.

Between the source and the sink we can insert any coroutines we need, for example for data transformation, filtering, and routing; a sketch of a transformation coroutine follows.
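For instance, a hypothetical transformation coroutine (upper_co() is illustrative and not part of the original pipeline) that upper-cases each line before forwarding it:

@coroutine
def upper_co(target):
    while True:
        line = (yield)             # suspend until a line arrives
        target.send(line.upper())  # transform the line, then pass it downstream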

Now we add a filtering coroutine, grep_filter_co(pattern, target), where target is again a coroutine:

@coroutine
def grep_filter_co(pattern, target):
    while True:
        # suspend, waiting to receive data
        line = (yield)
        if pattern in line:
            # if the received line matches the pattern,
            # send it to the next coroutine for processing
            target.send(line)
As the code shows, grep_filter_co() runs an infinite loop, suspending inside the loop to wait for data. Once a line is received, if it contains pattern, the line is sent to target so that target can process it next; the coroutine then suspends again to wait for more data.
Similarly, we now combine these three functions into a new pipeline implementing tail -f | grep:
f = open ("access-log") Unix_tail_f_co (F,grep_filter_co ("  Python", Printer_co ()))

unix_tail_f_co() acts as the source: each time it reads a line from the file f, it sends that line to the coroutine grep_filter_co(). grep_filter_co() filters the received data: if the line contains the word "python", it is sent on to printer_co() for processing. The source then sends the next line into the pipeline.

We also implemented the tail -f | grep command with generators earlier, so we can now compare the two approaches.
The generator implementation process is:
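A minimal sketch of the pull-based generator version (the function names unix_tail_f() and grep_filter() are assumed to match the previous article):

import time

def unix_tail_f(thefile):
    thefile.seek(0, 2)  # jump to the end of the file
    while True:
        line = thefile.readline()
        if not line:
            time.sleep(0.1)
            continue
        yield line

def grep_filter(pattern, lines):
    for line in lines:
        if pattern in line:
            yield line

f = open("access-log")
# The final loop *pulls*: each iteration asks grep_filter() for a line,
# which in turn pulls a line from unix_tail_f()
for line in grep_filter("python", unix_tail_f(f)):
    print(line, end='')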

The coroutine implementation process is:
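Here the source pushes each line down the pipe with send():

unix_tail_f_co() --send()--> grep_filter_co("python") --send()--> printer_co()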

As you can see, with generators the data is pulled out of the pipe by the final iteration, whereas with coroutines the data is pushed into the pipeline through the send() function.
With coroutines, data can also be sent to multiple different destinations, as in the broadcast fan-out shown below.

Let's implement a message broadcasting mechanism, starting by defining the function broadcast_co(targets):
# Send data to multiple different coroutines
@coroutine
def broadcast_co(targets):
    while True:
        # suspend, waiting to receive data
        item = (yield)
        for target in targets:
            # forward the received item to each coroutine in turn
            target.send(item)

The broadcast_co() function takes a parameter targets, a list whose members are all coroutines. In an infinite loop, the function receives data, sends it in turn to each coroutine in targets, and then suspends again to wait for more data.

f = open("access-log")
unix_tail_f_co(f, broadcast_co([grep_filter_co("python", printer_co()),
                                grep_filter_co("ply", printer_co()),
                                grep_filter_co("swig", printer_co())]))

unix_tail_f_co() reads a line from f and sends it to broadcast_co(); broadcast_co() passes the received line in turn to each grep_filter_co(), and each grep_filter_co() sends the lines that match its pattern to its own printer_co() for processing.

                                    |--> grep_filter_co("python") --> printer_co()
unix_tail_f_co() --> broadcast_co() |--> grep_filter_co("ply")    --> printer_co()
                                    |--> grep_filter_co("swig")   --> printer_co()

Note the order of execution here: broadcast_co() sends a line to grep_filter_co("python"), which may send it on to its printer_co(). When that printer_co() suspends again to wait for data, execution returns to grep_filter_co("python"); when grep_filter_co("python") suspends as well, execution returns to broadcast_co(). Only at that point does broadcast_co() send the line to grep_filter_co("ply"), and likewise it moves on to the next coroutine only after grep_filter_co("ply") has finished and suspended.
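A minimal, illustrative demo of this depth-first ordering (trace_co() is a hypothetical helper, used here only to make the ordering visible; it relies on the coroutine decorator and broadcast_co() defined above):

@coroutine
def trace_co(name):
    while True:
        item = (yield)  # suspend until an item arrives
        print(name, "got", item)

b = broadcast_co([trace_co("A"), trace_co("B")])
b.send("x")
# prints "A got x" first; "B got x" appears only after
# trace_co("A") has suspended again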

If we change the code as follows, we get another broadcast pattern:

f = open ("access-log"= Printer_co () Unix_tail_f_co (F,              Broadcast_co ([ Grep_filter_co ('python', p)),                            Grep_filter_co ('ply  ', p),                            Grep_filter_co ('swig', p)])              ) 

The broadcast pattern is now:

                                    |--> grep_filter_co("python") -->|
unix_tail_f_co() --> broadcast_co() |--> grep_filter_co("ply")    -->|--> printer_co()
                                    |--> grep_filter_co("swig")   -->|

Here all the data is ultimately delivered to the same printer_co() instance; that is, the final destination of the data is shared.

That completes this look at using coroutines to simulate pipelines and control dataflow. As you can see, coroutines offer very powerful control over data routing, and many different processing stages can be combined together.

The next article will explain how to use coroutines to implement a simple multitasking (multitask) operating system; stay tuned O(∩_∩)O.
