C++: Safe Concurrent Access to Container Elements


2014-9-24 Flyfish
The standard library (STL) containers such as vector, deque, and list are not thread-safe.
For example:
Thread 1 is reading a vector through an iterator.
Thread 2 is inserting into the vector, which may force the vector to reallocate its storage, invalidating the iterator held by thread 1.
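A minimal sketch of the race described above (illustration only, not from the original article; running it is undefined behavior):

#include <thread>
#include <vector>

int main()
{
    std::vector<int> v(100, 0);

    // Thread 1: reads the vector through iterators.
    std::thread reader([&v] {
        for (auto it = v.begin(); it != v.end(); ++it) {
            volatile int x = *it;   // undefined behavior once v reallocates
            (void)x;
        }
    });

    // Thread 2: inserts, which may trigger a reallocation at any time.
    std::thread writer([&v] {
        for (int i = 0; i < 1000; ++i)
            v.push_back(i);         // may invalidate every iterator into v
    });

    reader.join();
    writer.join();
}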


For STL containers:
Reads of the same container by multiple threads are safe, provided no thread writes to the container while the reads are in progress (see the sketch below).
Multiple threads may write to different containers at the same time.
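A minimal sketch of the read-only case (illustration only), assuming construction finishes before the reader threads start:

#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main()
{
    const std::vector<int> v(1000000, 1);   // no thread writes to v after this point

    long sum1 = 0, sum2 = 0;

    // Both threads only read v, so no synchronization is needed.
    std::thread t1([&] { sum1 = std::accumulate(v.begin(), v.end(), 0L); });
    std::thread t2([&] { sum2 = std::accumulate(v.begin(), v.end(), 0L); });

    t1.join();
    t2.join();
    std::cout << sum1 << " " << sum2 << std::endl;
}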


Do not expect any STL implementation to solve the threading problem for you; you must add the synchronization yourself.


Scenario 1: Locking the vector


The lock framework given in Effective STL:


template<typename Container>          // a template class
class Lock {
public:
    // Framework for acquiring and releasing a mutex for a container;
    // many details are omitted here.
    Lock(const Container& container) : c(container)
    {
        getMutexFor(c);               // acquire the mutex in the constructor
    }
    ~Lock()
    {
        releaseMutexFor(c);           // release it in the destructor
    }
private:
    const Container& c;
};


More work is needed to make this industrial-strength.
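getMutexFor and releaseMutexFor are not part of the STL; the application has to supply them. A minimal usage sketch, with the two helpers stubbed out over a single global std::mutex purely for illustration:

#include <mutex>
#include <vector>

static std::mutex g_vec_mutex;                 // one mutex guarding the shared vector

// Hypothetical helpers required by the Lock framework above.
template<typename Container> void getMutexFor(const Container&)     { g_vec_mutex.lock(); }
template<typename Container> void releaseMutexFor(const Container&) { g_vec_mutex.unlock(); }

// The RAII class from the framework above, repeated so this sketch compiles on its own.
template<typename Container>
class Lock {
public:
    explicit Lock(const Container& container) : c(container) { getMutexFor(c); }
    ~Lock() { releaseMutexFor(c); }
private:
    const Container& c;
};

std::vector<int> shared_vec;

void append(int value)
{
    Lock<std::vector<int>> guard(shared_vec);  // mutex held for the rest of this scope
    shared_vec.push_back(value);               // safe: no other Lock holder can touch shared_vec now
}

int main()
{
    append(42);
}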


Scenario 2: Microsoft's Parallel Patterns Library (PPL)


See MSDN for details.
Features provided by PPL:


1. Task parallelism: a mechanism to execute several work items (tasks) in parallel (a small sketch follows this list).

2. Parallel algorithms: generic algorithms that act on collections of data in parallel.

3. Parallel containers and objects: generic container types that provide safe concurrent access to their elements.
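The examples below exercise items 2 and 3. For item 1, a minimal sketch of task parallelism with concurrency::task_group (illustration only, not part of the original article) might look like this:

// compile with: /EHsc
#include <ppl.h>
#include <iostream>

using namespace concurrency;

int main()
{
    int a = 0, b = 0;

    task_group tasks;
    tasks.run([&a] { a = 21; });       // first work item, may run on another thread
    tasks.run([&b] { b = 21; });       // second work item, runs in parallel with the first
    tasks.wait();                      // block until both tasks have finished

    std::cout << a + b << std::endl;   // prints 42
    return 0;
}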


The following example (from MSDN) compares sequential and parallel computation of the Fibonacci sequence.


The sequential version uses the STL std::for_each algorithm and stores the results in a std::vector object.


The parallel version uses the PPL concurrency::parallel_for_each algorithm and stores the results in a concurrency::concurrent_vector object.
// parallel-fibonacci.cpp
// compile with: /EHsc
#include <windows.h>
#include <ppl.h>
#include <concurrent_vector.h>
#include <array>
#include <vector>
#include <tuple>
#include <algorithm>
#include <iostream>

using namespace concurrency;
using namespace std;

// Calls the provided work function and returns the number of milliseconds
// that it takes to call that function.
template <class Function>
__int64 time_call(Function&& f)
{
   __int64 begin = GetTickCount();
   f();
   return GetTickCount() - begin;
}

// Computes the nth Fibonacci number.
int fibonacci(int n)
{
   if (n < 2)
      return n;
   return fibonacci(n - 1) + fibonacci(n - 2);
}

int wmain()
{
   __int64 elapsed;

   // An array of Fibonacci numbers to compute.
   array<int, 4> a = { 24, 26, 41, 42 };

   // The results of the serial computation.
   vector<tuple<int, int>> results1;

   // The results of the parallel computation.
   concurrent_vector<tuple<int, int>> results2;

   // Use the for_each algorithm to compute the results serially.
   elapsed = time_call([&]
   {
      for_each(a.begin(), a.end(), [&](int n) {
         results1.push_back(make_tuple(n, fibonacci(n)));
      });
   });
   wcout << L"serial time: " << elapsed << L" ms" << endl;

   // Use the parallel_for_each algorithm to perform the same task.
   elapsed = time_call([&]
   {
      parallel_for_each(a.begin(), a.end(), [&](int n) {
         results2.push_back(make_tuple(n, fibonacci(n)));
      });

      // Because parallel_for_each acts concurrently, the results do not
      // have a pre-determined order. Sort the concurrent_vector object
      // so that the results match the serial version.
      sort(results2.begin(), results2.end());
   });
   wcout << L"parallel time: " << elapsed << L" ms" << endl << endl;

   // Print the results.
   for_each(results2.begin(), results2.end(), [](tuple<int, int>& pair) {
      wcout << L"fib(" << get<0>(pair) << L"): " << get<1>(pair) << endl;
   });
}






Note that the PPL namespace is the capitalized Concurrency (a lowercase alias, concurrency, also exists and is what the examples here use), whereas namespaces are conventionally written all in lowercase.


A simpler example: compute the square of each element of a std::array object with the parallel_for_each algorithm, passing in turn a lambda function, a function object, and a function pointer.
#include "stdafx.h" #include <ppl.h> #include <array> #include <iostream>using namespace Concurrency; Using namespace Std;using namespace std::tr1;//Function Object (functor) class that computes the square of its input.temp Late<class ty>class squarefunctor{public:void operator () (ty& N) const{n *= n;}};/ /Function that computes the square of its input.template<class ty>void square_function (ty& N) {n *= n;}  int _tmain (int argc, _tchar* argv[]) {//Create an array object that contains 5 values.array<int, 5> values = {1, 2,  3, 4, 5};//use a lambda function, a function object, and a function pointer to//compute the square of each element of The array in parallel.//is a lambda function to square each element.parallel_for_each (Values.begin (), Values.end (), [] ( int& N) {n *= n;}); /Use a Function object (functor) to square each element.parallel_for_each (Values.begin (), Values.end (), squarefunctor&lt ;int> ());//use of a function pointer to squareEach Element.parallel_for_each (Values.begin (), Values.end (), &square_function<int>);//Print each element Of the array to the Console.for_each (Values.begin (), Values.end (), [] (int& N) {wcout << n << endl;}); return 0;}




Microsoft's concurrent_vector.h header contains the following acknowledgement:
"Microsoft would like to acknowledge that this concurrency data structure implementation is based on the Intel implementation in its Threading Building Blocks ('Intel Material')."
In other words, Microsoft's concurrent_vector is based on Intel's Threading Building Blocks.


Scenario 3: Intel TBB (Threading Building Blocks)
Features provided by Intel TBB:
1. Thread-safe containers that can be used directly, such as concurrent_vector and concurrent_queue.
2. Common parallel algorithms, such as parallel_for and parallel_reduce.
3. The template class atomic, which provides lock-free (mutex-free) concurrent programming support.
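A minimal sketch exercising the first two features, assuming TBB is installed and its headers and library are on the build path (illustration only, not from the original article):

#include <tbb/parallel_for.h>
#include <tbb/concurrent_vector.h>
#include <iostream>

int main()
{
    tbb::concurrent_vector<int> squares;       // thread-safe container (feature 1)

    // Parallel algorithm (feature 2): iterations may run on different threads,
    // and concurrent_vector::push_back is safe to call concurrently.
    tbb::parallel_for(0, 100, [&squares](int i) {
        squares.push_back(i * i);
    });

    std::cout << "computed " << squares.size() << " squares" << std::endl;
    return 0;
}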




Scenario 4: The lock-free data structure library Concurrent Data Structures (libcds)
Address:
http://sourceforge.net/projects/libcds/
After downloading, it ships with build environments for VC2008 through VC2013 and depends on the Boost library.


Scenario 5: Boost
Using Boost.Lockfree

Boost.Lockfree implements three lock-free data structures:


1. boost::lockfree::queue
2. boost::lockfree::stack
3. boost::lockfree::spsc_queue (a single-producer/single-consumer queue; a small sketch appears after the example below)

Producer-consumer example
The following code implements a multi-producer, multi-consumer queue:
four producer threads generate integers and four consumer threads consume them.


#include <boost/thread/thread.hpp>
#include <boost/lockfree/queue.hpp>
#include <iostream>

#include <boost/atomic.hpp>

boost::atomic_int producer_count(0);
boost::atomic_int consumer_count(0);

boost::lockfree::queue<int> queue(128);

const int iterations = 10000000;
const int producer_thread_count = 4;
const int consumer_thread_count = 4;

void producer(void)
{
    for (int i = 0; i != iterations; ++i) {
        int value = ++producer_count;
        while (!queue.push(value))
            ;
    }
}

boost::atomic<bool> done(false);

void consumer(void)
{
    int value;
    while (!done) {
        while (queue.pop(value))
            ++consumer_count;
    }

    while (queue.pop(value))
        ++consumer_count;
}

int main(int argc, char* argv[])
{
    using namespace std;
    cout << "boost::lockfree::queue is ";
    if (!queue.is_lock_free())
        cout << "not ";
    cout << "lockfree" << endl;

    boost::thread_group producer_threads, consumer_threads;

    for (int i = 0; i != producer_thread_count; ++i)
        producer_threads.create_thread(producer);

    for (int i = 0; i != consumer_thread_count; ++i)
        consumer_threads.create_thread(consumer);

    producer_threads.join_all();
    done = true;

    consumer_threads.join_all();

    cout << "produced " << producer_count << " objects." << endl;
    cout << "consumed " << consumer_count << " objects." << endl;
}
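The queue above is the multi-producer/multi-consumer variant (item 1 in the list). For the single-producer/single-consumer case, a minimal sketch of boost::lockfree::spsc_queue (item 3) might look like this (illustration only, not from the original article):

#include <boost/lockfree/spsc_queue.hpp>
#include <boost/thread/thread.hpp>
#include <iostream>

// A wait-free ring buffer: exactly one producer thread and one consumer thread.
boost::lockfree::spsc_queue<int, boost::lockfree::capacity<1024>> spsc;

const int count = 100000;

void produce(void)
{
    for (int i = 0; i < count; ++i)
        while (!spsc.push(i))          // push fails only while the ring buffer is full
            ;
}

int main(int argc, char* argv[])
{
    boost::thread producer(produce);

    long long sum = 0;
    int received = 0;
    int value;
    while (received < count) {
        while (spsc.pop(value)) {      // pop fails when the buffer is empty
            sum += value;
            ++received;
        }
    }

    producer.join();
    std::cout << "sum = " << sum << std::endl;
    return 0;
}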




