Writing concurrent queue classes in Linux


This article introduces concurrent queue classes for Linux with the following features: concurrent blocking, timeout limits, and size limits.

Design concurrent queues

Code as follows:

    #include <pthread.h>
    #include <list>
    using namespace std;

    template <typename T>
    class Queue
    {
    public:
        Queue()
        {
            pthread_mutex_init(&_lock, NULL);
        }
        ~Queue()
        {
            pthread_mutex_destroy(&_lock);
        }
        void push(const T& data);
        T pop();
    private:
        list<T> _list;
        pthread_mutex_t _lock;
    };

    template <typename T>
    void Queue<T>::push(const T& value)
    {
        pthread_mutex_lock(&_lock);
        _list.push_back(value);
        pthread_mutex_unlock(&_lock);
    }

    template <typename T>
    T Queue<T>::pop()
    {
        pthread_mutex_lock(&_lock);        // take the lock before inspecting the list
        if (_list.empty())
        {
            pthread_mutex_unlock(&_lock);  // release the lock before throwing
            throw "element not found";
        }
        T _temp = _list.front();
        _list.pop_front();
        pthread_mutex_unlock(&_lock);
        return _temp;
    }

This code works. However, consider a long queue (perhaps holding more than 100,000 elements) where, at some point during execution, far more threads are reading from the queue than writing to it. Because the add and remove operations share a single mutex, heavy read traffic contends for the same lock the writers need, slowing the writers down. So how about using two locks, one for read operations and another for write operations? The modified Queue class follows.
Code as follows:

    template <typename T>
    class Queue
    {
    public:
        Queue()
        {
            pthread_mutex_init(&_rlock, NULL);
            pthread_mutex_init(&_wlock, NULL);
        }
        ~Queue()
        {
            pthread_mutex_destroy(&_rlock);
            pthread_mutex_destroy(&_wlock);
        }
        void push(const T& data);
        T pop();
    private:
        list<T> _list;
        pthread_mutex_t _rlock, _wlock;
    };

    template <typename T>
    void Queue<T>::push(const T& value)
    {
        pthread_mutex_lock(&_wlock);
        _list.push_back(value);
        pthread_mutex_unlock(&_wlock);
    }

    template <typename T>
    T Queue<T>::pop()
    {
        pthread_mutex_lock(&_rlock);       // readers serialize on their own lock
        if (_list.empty())
        {
            pthread_mutex_unlock(&_rlock);
            throw "element not found";
        }
        T _temp = _list.front();
        _list.pop_front();
        pthread_mutex_unlock(&_rlock);
        return _temp;
    }

One caveat: since push and pop now touch the same std::list under different locks, this scheme is only safe when a reader and a writer never operate on the same node at the same time; classic two-lock queue designs guarantee this by keeping a dummy node between head and tail.

Design concurrent blocking queues

Currently, if a read thread tries to take data from an empty queue, it simply throws an exception and continues execution. That is not always what we want; the read thread will often prefer to wait (that is, block itself) until data is available. Such a queue is called a blocking queue. How do we make the read thread wait after it finds the queue empty? One approach is to poll the queue periodically, but since polling does not guarantee that data will be available, it can waste a large number of CPU cycles. The recommended approach is to use a condition variable, i.e. a variable of type pthread_cond_t.
Code as follows:

    template <typename T>
    class BlockingQueue
    {
    public:
        BlockingQueue()
        {
            pthread_mutexattr_init(&_attr);
            // make the lock recursive
            pthread_mutexattr_settype(&_attr, PTHREAD_MUTEX_RECURSIVE_NP);
            pthread_mutex_init(&_lock, &_attr);
            pthread_cond_init(&_cond, NULL);
        }
        ~BlockingQueue()
        {
            pthread_mutex_destroy(&_lock);
            pthread_cond_destroy(&_cond);
        }
        void push(const T& data);
        bool push(const T& data, const int seconds);   // time-out push
        T pop();
        T pop(const int seconds);                      // time-out pop
    private:
        list<T> _list;
        pthread_mutex_t _lock;
        pthread_mutexattr_t _attr;
        pthread_cond_t _cond;
    };

    template <typename T>
    T BlockingQueue<T>::pop()
    {
        pthread_mutex_lock(&_lock);
        while (_list.empty())
        {
            pthread_cond_wait(&_cond, &_lock);
        }
        T _temp = _list.front();
        _list.pop_front();
        pthread_mutex_unlock(&_lock);
        return _temp;
    }

    template <typename T>
    void BlockingQueue<T>::push(const T& value)
    {
        pthread_mutex_lock(&_lock);
        const bool was_empty = _list.empty();
        _list.push_back(value);
        pthread_mutex_unlock(&_lock);
        if (was_empty)
            pthread_cond_broadcast(&_cond);
    }

There are two aspects of the concurrent blocking queue design worth noting:

1. You could use pthread_cond_signal instead of pthread_cond_broadcast. However, pthread_cond_signal releases at least one thread waiting on the condition variable, and not necessarily the longest-waiting read thread. Using pthread_cond_signal does not compromise the correctness of the blocking queue, but it may cause some read threads to wait longer than necessary.

2. Spurious thread wakeups may occur. Therefore, after a read thread wakes up, it must confirm that the list is non-empty before proceeding; a pop() built around a while loop, as above, is strongly recommended.
Design concurrent blocking queues with timeout limits

In many systems, data is useless if it cannot be processed within a specific time period. For example, the ticker on a news channel shows real-time stock quotes from the financial exchange, receiving new data every n seconds. If some of the older data cannot be processed within n seconds, it should be discarded so the latest information can be displayed. Based on this idea, let's look at how to add timeout limits to the queue's add and remove operations: if the system cannot complete the operation within the specified time limit, the operation should not be performed at all.

Code as follows:

    template <typename T>
    bool BlockingQueue<T>::push(const T& data, const int seconds)
    {
        struct timespec ts1, ts2;
        clock_gettime(CLOCK_REALTIME, &ts1);
        pthread_mutex_lock(&_lock);
        clock_gettime(CLOCK_REALTIME, &ts2);
        // only push if acquiring the lock stayed within the time limit
        if ((ts2.tv_sec - ts1.tv_sec) < seconds)
        {
            const bool was_empty = _list.empty();
            _list.push_back(data);
            pthread_mutex_unlock(&_lock);
            if (was_empty)
                pthread_cond_broadcast(&_cond);
            return true;
        }
        pthread_mutex_unlock(&_lock);
        return false;
    }

    template <typename T>
    T BlockingQueue<T>::pop(const int seconds)
    {
        struct timespec ts1, ts2;
        clock_gettime(CLOCK_REALTIME, &ts1);
        pthread_mutex_lock(&_lock);
        clock_gettime(CLOCK_REALTIME, &ts2);
        // first check: did acquiring the lock exhaust the time limit?
        if ((ts2.tv_sec - ts1.tv_sec) < seconds)
        {
            ts2.tv_sec += seconds;   // absolute wake-up time for timedwait
            int result = 0;
            while (_list.empty() && (result == 0))
            {
                result = pthread_cond_timedwait(&_cond, &_lock, &ts2);
            }
            // second check: did the wait end with data rather than a timeout?
            if (result == 0)
            {
                T _temp = _list.front();
                _list.pop_front();
                pthread_mutex_unlock(&_lock);
                return _temp;
            }
        }
        pthread_mutex_unlock(&_lock);
        throw "timeout happened";
    }

Design concurrent blocking queues with size restrictions

Finally, consider concurrent blocking queues with a size limit. Such a queue behaves like the concurrent blocking queue above, but caps how many elements it may hold. In many embedded systems with limited memory, size-limited queues are a real necessity. For a plain blocking queue, only read threads need to wait when the queue holds no data. For a size-limited blocking queue, write threads must also wait when the queue is full.

Code as follows:

    template <typename T>
    class BoundedBlockingQueue
    {
    public:
        BoundedBlockingQueue(int size) : maxSize(size)
        {
            pthread_mutex_init(&_lock, NULL);
            pthread_cond_init(&_rcond, NULL);
            pthread_cond_init(&_wcond, NULL);
            _array.reserve(maxSize);
        }
        ~BoundedBlockingQueue()
        {
            pthread_mutex_destroy(&_lock);
            pthread_cond_destroy(&_rcond);
            pthread_cond_destroy(&_wcond);
        }
        void push(const T& data);
        T pop();
    private:
        vector<T> _array;   // or T* _array if you prefer
        int maxSize;
        pthread_mutex_t _lock;
        pthread_cond_t _rcond, _wcond;
    };

    template <typename T>
    void BoundedBlockingQueue<T>::push(const T& value)
    {
        pthread_mutex_lock(&_lock);
        const bool was_empty = _array.empty();
        while (_array.size() == maxSize)
        {
            pthread_cond_wait(&_wcond, &_lock);
        }
        _array.push_back(value);
        pthread_mutex_unlock(&_lock);
        if (was_empty)
            pthread_cond_broadcast(&_rcond);
    }

    template <typename T>
    T BoundedBlockingQueue<T>::pop()
    {
        pthread_mutex_lock(&_lock);
        const bool was_full = (_array.size() == maxSize);
        while (_array.empty())
        {
            pthread_cond_wait(&_rcond, &_lock);
        }
        T _temp = _array.front();
        _array.erase(_array.begin());
        pthread_mutex_unlock(&_lock);
        if (was_full)
            pthread_cond_broadcast(&_wcond);
        return _temp;
    }

The first thing to note is that this blocking queue has two condition variables instead of one. If the queue is full, write threads wait on the _wcond condition variable, and a read thread notifies all waiting writers after taking data out of the queue. Similarly, if the queue is empty, read threads wait on the _rcond variable, and a write thread broadcasts to all waiting readers after inserting data into the queue. What happens if no thread is waiting on _wcond or _rcond when a broadcast is sent? Nothing; the system simply ignores the message. Note also that both condition variables use the same mutex.
