Introduction
F# is a parallel and reactive language. By this I mean that a running F# program can have multiple computations in progress at once, for example F# computations executing on .NET thread-pool threads, and can have multiple pending reactions to events, for example callbacks or agents waiting for a message or some other event.
F# asynchronous expressions are one way to simplify writing asynchronous and reactive programs. In this and later articles I'll explore some basic ways to use F# asynchronous programming; roughly speaking, these are the patterns of F# async programming. I assume you already know the basics of async, as covered in the Getting Started guide.
We start with two simple design patterns: Parallel CPU Asyncs and Parallel I/O Asyncs.
Part 2 of this series describes how to get results back from asynchronous computations, such as background-computed cells.
Part 3 describes lightweight, reactive, isolated agents in F#.
Pattern 1: Parallel CPU Asyncs
Let's look at the first pattern: Parallel CPU Asyncs, that is, running a set of CPU-intensive computations in parallel. The following code computes Fibonacci numbers, fanning the individual calculations out in parallel:
let rec fib x = if x <= 2 then 1 else fib(x-1) + fib(x-2)

let fibs =
    Async.Parallel [ for i in 0..40 -> async { return fib(i) } ]
    |> Async.RunSynchronously
The result:
val fibs : int array =
  [|1; 1; 1; 2; 3; 5; 8; 13; 21; 34; 55; 89; 144; 233; 377; 610; 987; 1597;
    2584; 4181; 6765; 10946; 17711; 28657; 46368; 75025; 121393; 196418;
    317811; 514229; 832040; 1346269; 2178309; 3524578; 5702887; 9227465;
    14930352; 24157817; 39088169; 63245986; 102334155|]
The code above shows the elements of the Parallel CPU Asyncs pattern:
async { ... } is used to specify each individual CPU task.
The tasks are composed fork-join style using Async.Parallel.
In this case the composed task is run with Async.RunSynchronously, which starts the asynchronous computation and waits synchronously for its result. You can use this pattern for many kinds of CPU parallelism, such as partitioning a matrix multiplication and computing the pieces in parallel (sketched below), or for batch-processing jobs.
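To illustrate the partitioning idea, here is a minimal sketch, not from the original article, of a matrix multiplication that splits the result rows into chunks and computes each chunk as one async task. The names parallelMatrixMultiply and computeRows, and the chunk size of 64, are illustrative choices rather than an established API.

// A minimal sketch (assumed names): multiply two dense matrices by
// partitioning the result rows into chunks, one async task per chunk.
let parallelMatrixMultiply (a: float[,]) (b: float[,]) =
    let rowsA, colsA = Array2D.length1 a, Array2D.length2 a
    let colsB = Array2D.length2 b
    let result = Array2D.zeroCreate rowsA colsB
    let chunkSize = 64   // rows per task; an arbitrary tuning choice

    // One CPU task: fill rows lo .. hi-1 of the result.
    // Each task writes a disjoint block of rows, so the tasks do not interfere.
    let computeRows lo hi =
        async {
            for i in lo .. hi - 1 do
                for j in 0 .. colsB - 1 do
                    let mutable sum = 0.0
                    for k in 0 .. colsA - 1 do
                        sum <- sum + a.[i, k] * b.[k, j]
                    result.[i, j] <- sum
        }

    [ for lo in 0 .. chunkSize .. rowsA - 1 ->
        computeRows lo (min (lo + chunkSize) rowsA) ]
    |> Async.Parallel          // fork-join composition, as in the Fibonacci example
    |> Async.RunSynchronously  // start the tasks and wait for all of them
    |> ignore
    result

As with the Fibonacci example, Async.Parallel forks the chunk tasks onto the .NET thread pool and joins on all of them before Async.RunSynchronously returns.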