Functional Programming (18) - Functional Library Design - A Parallel-Operations Combinator Library


As professional programmers, we often need to build tool libraries for our work. A tool library is a set of functions, prepared in advance, that address common problems we encounter repeatedly. Typically such a library is built around a few specially tailored data types, with its functionality implemented by a family of functions operating on those types. Functional programming is no exception, but functions in a functional library place particular emphasis on composability (functional composition). For this reason a functional library is usually called a combinator library, and the functions in it are called combinators. The designer of a combinator library has one common, fundamental goal: larger functions should be obtainable by composing the library's combinators. A functional combinator library is generally designed for a specific requirement or problem domain: first we try to express the requirements of the problem with a few data types, then we design a family of functions around those data types to solve its most basic needs. In this section we explore the design pattern of a functional combinator library through the design of a library for parallel operations.

We want this parallel-computation library to be able to run an ordinary computation on a separate thread. That way, multiple computations can run on multiple threads at the same time, achieving parallelism. The problem statement is simple, but how to compose (composition) and transform (transformation) computations that each run in their own execution space is worth careful thought.
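As a hypothetical motivating example (not from the text), consider a divide-and-conquer sum: its two recursive halves are independent of each other, so they are natural candidates for running on separate threads.

```scala
// Divide-and-conquer sum: the two halves are independent computations.
def sum(ints: Seq[Int]): Int =
  if (ints.size <= 1) ints.headOption.getOrElse(0)
  else {
    val (l, r) = ints.splitAt(ints.size / 2)
    sum(l) + sum(r) // these two calls could, in principle, run in parallel
  }
```

The question the rest of the section answers is what data type should represent each half so that "run these in parallel" becomes a composable operation.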

Start with the data type: a parallel computation should be like a container that encapsulates an ordinary computation. Let's simply define a structure Par[A], where A is the result type returned by the ordinary computation. This Par type looks much like the higher-kinded types we met earlier: a pipe carrying elements of type A. Seen this way, we can manipulate the element A inside the pipe with all the functions we previously defined for higher-kinded types. Then, once a computation is wrapped in Par, we will eventually need a way to extract its result after it has finished running on another thread. This gives us the two most basic functions:

def unit[A](a: A): Par[A]   // inject an ordinary computation into Par, promoting A to a parallel computation
def get[A](pa: Par[A]): A   // extract the result of the parallel computation
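Read naively, these two signatures can be satisfied by treating Par as a plain container with no threading at all. The sketch below illustrates only the signatures; the case-class representation is our own assumption, not the representation the library ends up with:

```scala
// Naive sketch: Par as a plain container (no threads yet).
case class Par[A](value: A)

def unit[A](a: A): Par[A] = Par(a)   // wrap an ordinary value
def get[A](pa: Par[A]): A = pa.value // extract the result
```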

The next question is thread creation: should the programmer decide whether a computation is put onto a new thread, or should every computation automatically get its own dedicated thread? Suppose we let the programmer decide, through an explicit function call, when a new thread is created. This has two advantages: first, the parallelisation strategy becomes more flexible (some computations are known to finish quickly and may not need a new thread, while a dedicated thread costs resources); second, the threading mechanism and the parallel computation stay loosely coupled: the implementation of Par does not need to know anything about thread management. The function looks like this:

def fork[A](pa: Par[A]): Par[A]   // designate a new execution space for pa; does not change pa, still returns a Par[A]

Then putting an operation into a new thread can be expressed using this function:

def async[A](a: => A): Par[A] = fork(unit(a))   // needs no knowledge of Par's internals, only that fork sets up a new execution space for the computation; note it still returns a Par[A]

Because we want thread management and parallel computation loosely coupled, no parallel computation actually runs inside Par: Par is merely the description of a parallel computation. The Par returned by fork simply adds a note about the execution environment to that description; it does not actually run the computation. But if Par is only a description, we need a real execution mechanism to obtain the computation's result:
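To make the description-versus-execution distinction concrete, here is a toy model (our own assumption, not the library's real representation) in which Par is just a deferred computation and nothing runs until run is called; unit takes a by-name parameter here purely so the sketch can demonstrate the deferral:

```scala
// Toy model: Par as a pure description (a thunk); run forces it.
type Par[A] = () => A

def unit[A](a: => A): Par[A] = () => a
def fork[A](pa: Par[A]): Par[A] = pa // in this toy model fork only marks intent
def run[A](pa: Par[A]): A = pa()

var evaluated = false
val d = fork(unit { evaluated = true; 1 + 2 }) // builds a description only
val before = evaluated // still false: nothing has run yet
val result = run(d)    // forces the computation
```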

def run[A](pa: Par[A]): A   // since Par's meaning has changed from container to computation description, we rename get to run

All the real work of running a Par, such as thread management and carrying out the computation, has to happen inside the implementation of run.

The interface of Par now consists of the following:

def unit[A](a: A): Par[A]                      // inject an ordinary computation into Par, promoting A to a parallel-computation description
def fork[A](pa: Par[A]): Par[A]                // designate a new execution space for pa; the returned Par must be run to obtain a result
def async[A](a: => A): Par[A] = fork(unit(a))  // promote a and place it in a new execution space
def run[A](pa: Par[A]): A                      // run pa and extract the result of the computation

Since Java 5, the Java API has included the java.util.concurrent package, whose ExecutorService class provides thread-management support. ExecutorService and Future translate into Scala roughly as follows:

class ExecutorService {
  def submit[A](a: Callable[A]): Future[A]
}
trait Future[A] {
  def get: A
  def get(timeout: Long, unit: TimeUnit): A
  def cancel(evenIfRunning: Boolean): Boolean
  def isDone: Boolean
  def isCancelled: Boolean
}

We do not need to descend into the low-level details of multithreaded programming; Java's java.util.concurrent ExecutorService is enough. ExecutorService lets us submit a computation to the system in the form of a Callable; the system immediately returns a Future, and we can read the result of the computation with Future.get, which blocks the calling thread. Because the result is read in a blocking way, the moment at which get is called matters: if we submit a computation and call get right away, the calling thread blocks until the computation completes, and we get no parallelism at all. Future also provides functions for querying the running state and cancelling the computation, giving the programmer stronger and more flexible control. For that flexibility, the return value of Par should change from a value read by a blocking call into a Future, which does not itself block:
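The blocking behaviour of get can be seen directly with the Java API (the pool setup and the computation here are illustrative):

```scala
import java.util.concurrent.{Callable, Executors}

// submit returns a Future immediately; get blocks the calling thread
// until the Callable has finished on a pool thread.
val es = Executors.newCachedThreadPool()
val fut = es.submit(new Callable[Int] {
  def call: Int = 21 * 2 // runs on a pool thread
})
val result = fut.get     // blocks here until the result is ready
es.shutdown()
```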

type Par[A] = ExecutorService => Future[A]
def run[A](es: ExecutorService)(pa: Par[A]): Future[A] = pa(es)

Now the meaning of Par has changed from a data type into a function description: pass in an ExecutorService and get back a Future. We can run this function with run, and the system returns the Future immediately, without any waiting.

Let's implement these basic functions:

object par {
  import java.util.concurrent._

  type Par[A] = ExecutorService => Future[A]
  def run[A](es: ExecutorService)(pa: Par[A]): Future[A] = pa(es)
                                                  //> run: [A](es: java.util.concurrent.ExecutorService)(pa: ch7.par.Par[A])java.util.concurrent.Future[A]

  def unit[A](a: A): Par[A] = es => {
    new Future[A] {
      def get: A = a
      def isDone = true
      def isCancelled = false
      def get(timeout: Long, timeUnit: TimeUnit): A = get
      def cancel(evenIfRunning: Boolean): Boolean = false
    }
  }                                               //> unit: [A](a: A)ch7.par.Par[A]
  def fork[A](pa: Par[A]): Par[A] = es => {
    es.submit[A](new Callable[A] {
      def call: A = run(es)(pa).get
    })
  }                                               //> fork: [A](pa: ch7.par.Par[A])ch7.par.Par[A]
  def async[A](a: => A): Par[A] = fork(unit(a))   //> async: [A](a: => A)ch7.par.Par[A]

  val a = unit(4 + 7)                             //> a: ch7.par.Par[Int] = <function1>
  val b = async(2 + 1)                            //> b: ch7.par.Par[Int] = <function1>
  val es = Executors.newCachedThreadPool()        //> es: java.util.concurrent.ExecutorService = java.util.concurrent.ThreadPoolExecutor@…
                                                  //| [Running, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
  run(es)(b).get                                  //> res0: Int = 3
  run(es)(a).get                                  //> res1: Int = 11
  es.shutdown()
}

From this example we can see that thread management is provided by an existing Java tool (Executors.newCachedThreadPool), and we do not need to know its details. We have also confirmed that the thread-management mechanism is loosely coupled to the Par parallel computation we designed.

Note: unit does not use the ExecutorService es at all; it returns a Future that is already complete (isDone = true) and whose value is unit's incoming argument a. That argument is therefore evaluated in the current (main) thread, and calling get on this Future reads a result that was computed on the main thread. async, via fork, selects a new thread and submits the computation task to that new execution environment. Let's trace through the evaluation:

1. val a = unit(4+7): unit constructs a new, already-completed Future with isDone = true and Future.get = 4 + 7. run(es)(a) therefore evaluates the expression 4+7 on the main thread and fetches the result 11.
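A small sketch (using the unit and run definitions above, with a thread-name probe added as our own instrumentation) confirms that unit's argument is evaluated in the calling thread: unit takes a strict parameter and merely wraps the finished value in a completed Future.

```scala
import java.util.concurrent._

type Par[A] = ExecutorService => Future[A]

def unit[A](a: A): Par[A] = es => new Future[A] {
  def get: A = a
  def get(timeout: Long, unit: TimeUnit): A = a
  def isDone = true
  def isCancelled = false
  def cancel(evenIfRunning: Boolean) = false
}
def run[A](es: ExecutorService)(pa: Par[A]): Future[A] = pa(es)

val mainThread = Thread.currentThread.getName
var evalThread = ""
// unit's argument is strict, so this block runs right here, on the main thread
val a = unit { evalThread = Thread.currentThread.getName; 4 + 7 }

val es = Executors.newCachedThreadPool()
val result = run(es)(a).get // the Future is already complete; no waiting
es.shutdown()
```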

2. val b = async(2+1) >>> fork(unit(2+1)): run(es)(b) >>> es.submit(new Callable), where def call = run(es)(unit(2+1)).get. Note that inside the submitted Callable we call run(es)(pa).get, which blocks (blocking) the pool thread until pa's result is ready. If pa itself contains another fork, that inner fork submits a second Callable from inside the first, and the first Callable blocks waiting for the second to finish. If the thread pool can provide only one thread, the first submitted Callable occupies that unique thread while waiting for the result of the second, which can never be scheduled because no thread is free; the computation never produces a result, and run(es)(b).get deadlocks (dead lock).
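The deadlock can be reproduced under controlled conditions. The sketch below (our own construction, not from the text) uses a single-thread pool and a nested fork, and replaces the blocking get with a timed get so the demonstration terminates instead of hanging; the timeout stands in for the deadlock:

```scala
import java.util.concurrent._

type Par[A] = ExecutorService => Future[A]

def unit[A](a: A): Par[A] = es => new Future[A] {
  def get: A = a
  def get(timeout: Long, unit: TimeUnit): A = a
  def isDone = true
  def isCancelled = false
  def cancel(evenIfRunning: Boolean) = false
}
def run[A](es: ExecutorService)(pa: Par[A]): Future[A] = pa(es)
def fork[A](pa: Par[A]): Par[A] = es =>
  es.submit(new Callable[A] { def call: A = run(es)(pa).get })

// One thread only: the outer fork's Callable occupies it while blocking on
// the inner fork's Callable, which can therefore never be scheduled.
val es = Executors.newFixedThreadPool(1)
val deadlocked = fork(fork(unit(42)))

val timedOut =
  try { run(es)(deadlocked).get(500, TimeUnit.MILLISECONDS); false }
  catch { case _: TimeoutException => true }

es.shutdownNow()
```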


In this section we introduced the design of a simple functional parallel combinator library, which can run a computation on a new thread separate from the main thread. Extracting the result of the computation, however, still blocks the calling thread (blocking). In the next section we will discuss how to manipulate parallel computations with some algorithmic functions.

