C++ Concurrent Programming Framework: The Concurrency Runtime


The Concurrency Runtime is a C++ concurrent programming framework. It simplifies parallel programming and helps you write reliable, scalable, and responsive parallel applications. The Concurrency Runtime raises the level of abstraction so that you do not have to manage the infrastructure details associated with concurrency. It also lets you specify scheduling policies that meet the quality-of-service requirements of your application.

The Concurrency Runtime provides consistency and predictability both to applications and to application components that run concurrently. Cooperative task scheduling and cooperative blocking are two examples of the benefits it provides.

The Concurrency Runtime uses a cooperative task scheduler that implements a work-stealing algorithm to distribute work efficiently among computing resources. For example, suppose an application has two threads, both of which are managed by the same runtime. If one thread finishes its scheduled tasks, it can steal work from the other thread. This mechanism balances the overall workload of the application.

The Concurrency Runtime also provides synchronization primitives that use cooperative blocking to synchronize access to resources. For example, suppose a task must have exclusive access to a shared resource. While the first task waits for the resource, cooperative blocking lets the runtime use the remaining quantum to perform another task. This mechanism maximizes the utilization of computing resources.

Architecture

The Concurrency Runtime is divided into four components: the Parallel Patterns Library (PPL), the Asynchronous Agents Library, the Task Scheduler, and the Resource Manager. These components reside between the operating system and applications. The following illustration shows how the Concurrency Runtime components interact with the operating system and with applications:


The Concurrency Runtime is highly composable: you can combine its existing functionality to do more. The runtime builds on its lower-level components to provide a variety of features, such as parallel algorithms.

1) Parallel Patterns Library

The Parallel Patterns Library (PPL) provides general-purpose containers and algorithms for performing fine-grained parallelism. The PPL enables imperative data parallelism by providing parallel algorithms that distribute computations on collections of data across computing resources. It also enables task parallelism by providing task objects that distribute multiple independent operations across computing resources.

Use the Parallel Patterns Library when you have a local computation that can benefit from parallel execution. For example, you can use the parallel_for algorithm to transform an existing for loop so that it operates in parallel.
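The PPL's concurrency::parallel_for is available only with MSVC, so the sketch below illustrates the same loop transformation with standard C++ instead: the iteration range is split into chunks and each chunk runs on its own std::async task. The function name parallel_for_range is illustrative, not PPL API, and this version omits the work stealing that the real scheduler performs.

#include <algorithm>
#include <functional>
#include <future>
#include <vector>

// Portable sketch of the parallel_for idea: split [first, last) into
// chunks and run each chunk on its own asynchronous task.
void parallel_for_range(int first, int last,
                        const std::function<void(int)>& body,
                        int num_chunks = 4)
{
    int n = last - first;
    if (n <= 0)
        return;
    int chunk = (n + num_chunks - 1) / num_chunks;

    std::vector<std::future<void>> tasks;
    for (int begin = first; begin < last; begin += chunk)
    {
        int end = std::min(begin + chunk, last);
        tasks.push_back(std::async(std::launch::async, [=, &body] {
            for (int i = begin; i < end; ++i)
                body(i);    // each index is visited exactly once
        }));
    }
    for (auto& t : tasks)
        t.get();            // wait for every chunk to finish
}

As with parallel_for, the loop body must be safe to run concurrently; here each task writes only to distinct indices, so no locking is needed.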

2) Asynchronous Agents Library

The Asynchronous Agents Library, or just the Agents Library, provides an actor-based programming model and in-process message passing for coarse-grained dataflow and pipelining tasks. Asynchronous agents enable you to use latency productively by performing work while other components wait for data.

Use the Agents Library when you have multiple entities that communicate with one another asynchronously. For example, you can create an agent that reads data from a file or network connection and then uses the message-passing interface to send that data to another agent.
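The Agents Library expresses this pattern with message blocks such as concurrency::unbounded_buffer and the concurrency::send/concurrency::receive functions, which are again MSVC-specific. The portable sketch below shows the same producer/consumer shape with standard C++: two "agents" (threads) that communicate through a small blocking message queue. The MessageQueue and run_pipeline names are illustrative, not library API.

#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// A minimal blocking channel: send() enqueues a value, receive()
// blocks until a value is available.
template <class T>
class MessageQueue
{
public:
    void send(T value)
    {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push(std::move(value));
        }
        cv_.notify_one();
    }

    T receive()
    {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T value = std::move(q_.front());
        q_.pop();
        return value;
    }

private:
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
};

// Producer agent: generates data (here, the numbers 1..count) and
// sends it on. Consumer agent: receives and accumulates until it
// sees the sentinel value -1. Returns the consumer's total.
int run_pipeline(int count)
{
    MessageQueue<int> channel;
    int total = 0;

    std::thread consumer([&] {
        for (;;)
        {
            int n = channel.receive();
            if (n < 0)
                break;          // sentinel: no more data
            total += n;
        }
    });

    std::thread producer([&] {
        for (int i = 1; i <= count; ++i)
            channel.send(i);
        channel.send(-1);       // signal completion
    });

    producer.join();
    consumer.join();
    return total;               // sum of 1..count
}

The consumer can start processing as soon as the first message arrives, which is the latency-hiding property the Agents Library is built around.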

3) Task Scheduler

The Task Scheduler schedules and coordinates tasks at run time. It is cooperative, and it uses a work-stealing algorithm to achieve maximum utilization of processing resources.

The Concurrency Runtime provides a default scheduler so that you do not have to manage infrastructure details. However, to meet the quality-of-service needs of your application, you can also supply your own scheduling policies or associate specific schedulers with specific tasks.
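In the Concurrency Runtime itself, such a policy is expressed with concurrency::SchedulerPolicy, whose keys include MinConcurrency and MaxConcurrency. The portable sketch below only illustrates the effect of a MaxConcurrency-style cap in standard C++: it runs a batch of tasks on a fixed-size pool and reports the highest number of tasks that were ever running at once. The run_with_max_concurrency name is illustrative, not runtime API.

#include <atomic>
#include <thread>
#include <vector>

// Runs num_tasks trivial tasks using at most max_concurrency worker
// threads, and returns the peak number of tasks observed running
// simultaneously. The peak can never exceed the pool size, which is
// the guarantee a MaxConcurrency policy gives you.
int run_with_max_concurrency(int num_tasks, int max_concurrency)
{
    std::atomic<int> next{0};       // index of the next task to claim
    std::atomic<int> running{0};    // tasks currently executing
    std::atomic<int> peak{0};       // highest observed concurrency

    auto worker = [&] {
        for (;;)
        {
            int i = next.fetch_add(1);
            if (i >= num_tasks)
                break;
            int now = ++running;
            int old = peak.load();
            while (now > old && !peak.compare_exchange_weak(old, now))
            {
            }
            // ... the task body would run here ...
            --running;
        }
    };

    std::vector<std::thread> pool;
    for (int t = 0; t < max_concurrency; ++t)
        pool.emplace_back(worker);
    for (auto& t : pool)
        t.join();
    return peak.load();
}

Capping concurrency this way is useful when tasks contend for a scarce resource, which is one of the situations the SchedulerPolicy keys are designed for.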

4) Resource Manager

The role of the Resource Manager is to manage computing resources such as processors and memory. When workloads change at run time, the Resource Manager responds by reassigning resources to where they can be used most effectively.

The Resource Manager serves as an abstraction over computing resources and primarily interacts with the Task Scheduler. Although you can use the Resource Manager to fine-tune the performance of your libraries and applications, you typically use the functionality that the Parallel Patterns Library, the Agents Library, and the Task Scheduler provide. These libraries use the Resource Manager to dynamically rebalance resources as workloads change.

Example: computing the Fibonacci sequence in parallel with the PPL

// This example is MSVC-specific: it uses the Concurrency Runtime headers
// <ppl.h> and <concurrent_vector.h> and the Win32 GetTickCount API.
// Compile with: cl /EHsc parallel-fibonacci.cpp
#include <windows.h>
#include <ppl.h>
#include <concurrent_vector.h>
#include <algorithm>
#include <array>
#include <iostream>
#include <tuple>
#include <vector>

// Calls the provided function object and returns the elapsed time in milliseconds.
template <class Function>
__int64 time_call(Function&& f)
{
    __int64 begin = GetTickCount();
    f();
    return GetTickCount() - begin;
}

// Computes the nth Fibonacci number (naive recursion; deliberately expensive).
int fibonacci(int n)
{
    if (n < 2)
        return n;
    return fibonacci(n - 1) + fibonacci(n - 2);
}

int wmain()
{
    __int64 elapsed;

    // The values of n for which to compute fibonacci(n).
    std::array<int, 4> a = { 41, 24, 26, 42 };

    // Holds the results of the serial computation.
    std::vector<std::tuple<int, int>> results1;

    // Holds the results of the parallel computation.
    concurrency::concurrent_vector<std::tuple<int, int>> results2;

    // Compute the Fibonacci numbers serially with std::for_each and a lambda.
    elapsed = time_call([&] {
        std::for_each(a.begin(), a.end(), [&](int n) {
            results1.push_back(std::make_tuple(n, fibonacci(n)));
        });
    });
    std::wcout << L"serial time: " << elapsed << L" ms" << std::endl;

    // Print the results.
    std::for_each(results1.begin(), results1.end(), [](const std::tuple<int, int>& pair) {
        std::wcout << L"fib(" << std::get<0>(pair) << L"): "
                   << std::get<1>(pair) << std::endl;
    });
    std::wcout << std::endl;

    // Perform the same computation with concurrency::parallel_for_each.
    elapsed = time_call([&] {
        concurrency::parallel_for_each(a.begin(), a.end(), [&](int n) {
            results2.push_back(std::make_tuple(n, fibonacci(n)));
        });
    });
    std::wcout << L"parallel time: " << elapsed << L" ms" << std::endl;

    // Print the results.
    std::for_each(results2.begin(), results2.end(), [](const std::tuple<int, int>& pair) {
        std::wcout << L"fib(" << std::get<0>(pair) << L"): "
                   << std::get<1>(pair) << std::endl;
    });

    return 0;
}

On a computer with four processor cores, the results are as follows:

serial time: 35069 ms
fib(41): 165580141
fib(24): 46368
fib(26): 121393
fib(42): 267914296

parallel time: 21684 ms
fib(24): 46368
fib(26): 121393
fib(41): 165580141
fib(42): 267914296

Because concurrency::parallel_for_each performs the operations concurrently, the order in which the results appear is not fixed.

