Writing your own concurrent queue class (blocking queue) in Linux


Designing a concurrent queue

The code is as follows:

#include <pthread.h>
#include <list>
using namespace std;

template <typename T>
class Queue
{
public:
    Queue()
    {
        pthread_mutex_init(&_lock, NULL);
    }
    ~Queue()
    {
        pthread_mutex_destroy(&_lock);
    }
    void push(const T& data);
    T pop();
private:
    list<T> _list;
    pthread_mutex_t _lock;
};

template <typename T>
void Queue<T>::push(const T& value)
{
    pthread_mutex_lock(&_lock);
    _list.push_back(value);
    pthread_mutex_unlock(&_lock);
}

template <typename T>
T Queue<T>::pop()
{
    pthread_mutex_lock(&_lock);
    if (_list.empty())
    {
        pthread_mutex_unlock(&_lock);
        throw "element not found";
    }
    T _temp = _list.front();
    _list.pop_front();
    pthread_mutex_unlock(&_lock);
    return _temp;
}
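As a quick sanity check, here is a minimal single-threaded sketch exercising the class above; the `demo` function and its encoding of the result are purely illustrative:

```cpp
#include <pthread.h>
#include <list>
#include <cassert>
using namespace std;

// Same Queue as above, with the method bodies inlined for brevity.
template <typename T>
class Queue
{
public:
    Queue()  { pthread_mutex_init(&_lock, NULL); }
    ~Queue() { pthread_mutex_destroy(&_lock); }
    void push(const T& value)
    {
        pthread_mutex_lock(&_lock);
        _list.push_back(value);
        pthread_mutex_unlock(&_lock);
    }
    T pop()
    {
        pthread_mutex_lock(&_lock);
        if (_list.empty())
        {
            pthread_mutex_unlock(&_lock);
            throw "element not found";
        }
        T temp = _list.front();
        _list.pop_front();
        pthread_mutex_unlock(&_lock);
        return temp;
    }
private:
    list<T> _list;
    pthread_mutex_t _lock;
};

// Checks FIFO order plus the empty-queue exception path.
int demo()
{
    Queue<int> q;
    q.push(1);
    q.push(2);
    int first = q.pop();   // 1: the oldest element comes out first
    int second = q.pop();  // 2
    try {
        q.pop();           // the queue is now empty
    } catch (const char*) {
        return first * 10 + second;  // 12 means both pops and the throw worked
    }
    return -1;
}
```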

The above code works. However, consider a long queue (perhaps more than 100,000 elements) where, at some point during execution, far more threads are reading from the queue than writing to it. Because the add and remove operations share a single mutex, readers contending for the lock also slow down the writers. So how about using two locks, one for read operations and another for write operations? Note that splitting the locks this way is only safe as long as readers and writers never touch the same element, i.e. the queue stays non-empty. Here is the modified Queue class.

The code is as follows:

template <typename T>
class Queue
{
public:
    Queue()
    {
        pthread_mutex_init(&_rlock, NULL);
        pthread_mutex_init(&_wlock, NULL);
    }
    ~Queue()
    {
        pthread_mutex_destroy(&_rlock);
        pthread_mutex_destroy(&_wlock);
    }
    void push(const T& data);
    T pop();
private:
    list<T> _list;
    pthread_mutex_t _rlock, _wlock;
};

template <typename T>
void Queue<T>::push(const T& value)
{
    pthread_mutex_lock(&_wlock);
    _list.push_back(value);
    pthread_mutex_unlock(&_wlock);
}

template <typename T>
T Queue<T>::pop()
{
    pthread_mutex_lock(&_rlock);
    if (_list.empty())
    {
        pthread_mutex_unlock(&_rlock);
        throw "element not found";
    }
    T _temp = _list.front();
    _list.pop_front();
    pthread_mutex_unlock(&_rlock);
    return _temp;
}

Designing a concurrent blocking queue

Currently, if a read thread attempts to read from a queue that holds no data, it simply throws an exception and continues execution. However, this is not always what we want; the read thread may prefer to wait (that is, block itself) until data becomes available. Such a queue is called a blocking queue. How do we keep the read thread waiting once it discovers the queue is empty? One approach is to poll the queue at regular intervals, but since polling gives no guarantee that data is available, it can waste a large number of CPU cycles. The recommended approach is to use a condition variable, a variable of type pthread_cond_t.

The code is as follows:

template <typename T>
class BlockingQueue
{
public:
    BlockingQueue()
    {
        pthread_mutexattr_init(&_attr);
        // make the lock recursive
        pthread_mutexattr_settype(&_attr, PTHREAD_MUTEX_RECURSIVE_NP);
        pthread_mutex_init(&_lock, &_attr);
        pthread_cond_init(&_cond, NULL);
    }
    ~BlockingQueue()
    {
        pthread_mutex_destroy(&_lock);
        pthread_cond_destroy(&_cond);
    }
    void push(const T& data);
    bool push(const T& data, const int seconds); // push with a timeout
    T pop();
    T pop(const int seconds); // pop with a timeout

private:
    list<T> _list;
    pthread_mutex_t _lock;
    pthread_mutexattr_t _attr;
    pthread_cond_t _cond;
};

template <typename T>
T BlockingQueue<T>::pop()
{
    pthread_mutex_lock(&_lock);
    while (_list.empty())
    {
        pthread_cond_wait(&_cond, &_lock);
    }
    T _temp = _list.front();
    _list.pop_front();
    pthread_mutex_unlock(&_lock);
    return _temp;
}

template <typename T>
void BlockingQueue<T>::push(const T& value)
{
    pthread_mutex_lock(&_lock);
    const bool was_empty = _list.empty();
    _list.push_back(value);
    pthread_mutex_unlock(&_lock);
    if (was_empty)
        pthread_cond_broadcast(&_cond);
}
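To see the blocking behavior in action, here is a compressed sketch of the class above driven by a producer thread and a blocking consumer. The recursive-mutex attribute is omitted since this sketch never re-enters the lock, and the `producer`/`demo` names and the 100 ms delay are illustrative choices:

```cpp
#include <pthread.h>
#include <unistd.h>
#include <list>
#include <cassert>
using namespace std;

// Compressed BlockingQueue: one mutex, one condition variable.
template <typename T>
class BlockingQueue
{
public:
    BlockingQueue()
    {
        pthread_mutex_init(&_lock, NULL);
        pthread_cond_init(&_cond, NULL);
    }
    ~BlockingQueue()
    {
        pthread_mutex_destroy(&_lock);
        pthread_cond_destroy(&_cond);
    }
    void push(const T& value)
    {
        pthread_mutex_lock(&_lock);
        const bool was_empty = _list.empty();
        _list.push_back(value);
        pthread_mutex_unlock(&_lock);
        if (was_empty)
            pthread_cond_broadcast(&_cond);
    }
    T pop()
    {
        pthread_mutex_lock(&_lock);
        while (_list.empty())            // loop guards against spurious wakeups
            pthread_cond_wait(&_cond, &_lock);
        T temp = _list.front();
        _list.pop_front();
        pthread_mutex_unlock(&_lock);
        return temp;
    }
private:
    list<T> _list;
    pthread_mutex_t _lock;
    pthread_cond_t _cond;
};

static BlockingQueue<int> g_queue;

static void* producer(void*)
{
    usleep(100000);        // let the consumer block first
    g_queue.push(42);
    return NULL;
}

// The consumer calls pop() on an empty queue and sleeps until
// the producer's push() signals the condition variable.
int demo()
{
    pthread_t tid;
    pthread_create(&tid, NULL, producer, NULL);
    int value = g_queue.pop();   // blocks ~100 ms, then returns 42
    pthread_join(tid, NULL);
    return value;
}
```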

There are two aspects of the concurrent blocking queue design to be aware of:

1. You could use pthread_cond_signal instead of pthread_cond_broadcast. However, pthread_cond_signal releases at least one thread waiting on the condition variable, and that thread is not necessarily the longest-waiting read thread. Using pthread_cond_signal does not compromise the correctness of the blocking queue, but it may cause some read threads to wait longer than necessary.

2. Spurious thread wakeups are possible. Therefore, after a read thread wakes up, it must verify that the list is non-empty before proceeding. A pop() based on a while loop, as above, is strongly recommended.

Designing a concurrent blocking queue with a timeout

In many systems, data is not worth processing at all if it cannot be handled within a specific time period. For example, a news channel's ticker shows real-time stock quotes from a financial exchange, receiving new data every n seconds. If some of the previously received data cannot be processed within n seconds, it should be discarded and the latest information displayed instead. Based on this idea, let us look at how to add a timeout to the queue's add and remove operations: if the system cannot complete an operation within the specified time limit, the operation should not be performed at all.

The code is as follows:

template <typename T>
bool BlockingQueue<T>::push(const T& data, const int seconds)
{
    struct timespec ts1, ts2;
    bool was_empty = false;
    bool pushed = false;
    clock_gettime(CLOCK_REALTIME, &ts1);
    pthread_mutex_lock(&_lock);
    clock_gettime(CLOCK_REALTIME, &ts2);
    // did acquiring _lock already use up the timeout?
    if ((ts2.tv_sec - ts1.tv_sec) < seconds)
    {
        was_empty = _list.empty();
        _list.push_back(data);
        pushed = true;
    }
    pthread_mutex_unlock(&_lock);
    if (was_empty)
        pthread_cond_broadcast(&_cond);
    return pushed;
}

template <typename T>
T BlockingQueue<T>::pop(const int seconds)
{
    struct timespec ts1, ts2;
    int result = 0;
    clock_gettime(CLOCK_REALTIME, &ts1);
    pthread_mutex_lock(&_lock);
    clock_gettime(CLOCK_REALTIME, &ts2);

    // first check: did acquiring _lock already use up the timeout?
    if ((ts2.tv_sec - ts1.tv_sec) < seconds)
    {
        ts2.tv_sec += seconds; // specify the absolute wake-up time
        while (_list.empty() && (result == 0))
        {
            result = pthread_cond_timedwait(&_cond, &_lock, &ts2);
        }
        if (result == 0) // second check: did pthread_cond_timedwait return in time?
        {
            T _temp = _list.front();
            _list.pop_front();
            pthread_mutex_unlock(&_lock);
            return _temp;
        }
    }
    pthread_mutex_unlock(&_lock);
    throw "Timeout happened";
}
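To see the timeout path in isolation, here is a minimal sketch (condensed from the class above, with the destructor and untimed operations omitted for brevity; the `demo` name is illustrative) where pop() on an empty queue gives up after one second:

```cpp
#include <pthread.h>
#include <time.h>
#include <list>
#include <cassert>
using namespace std;

// Just enough of BlockingQueue to exercise the timed pop() path.
template <typename T>
class BlockingQueue
{
public:
    BlockingQueue()
    {
        pthread_mutex_init(&_lock, NULL);
        pthread_cond_init(&_cond, NULL);
    }
    T pop(const int seconds)
    {
        struct timespec ts;
        int result = 0;
        pthread_mutex_lock(&_lock);
        clock_gettime(CLOCK_REALTIME, &ts);
        ts.tv_sec += seconds;            // absolute deadline for the wait
        while (_list.empty() && result == 0)
            result = pthread_cond_timedwait(&_cond, &_lock, &ts);
        if (result == 0)                 // data arrived before the deadline
        {
            T temp = _list.front();
            _list.pop_front();
            pthread_mutex_unlock(&_lock);
            return temp;
        }
        pthread_mutex_unlock(&_lock);    // result == ETIMEDOUT
        throw "Timeout happened";
    }
private:
    list<T> _list;
    pthread_mutex_t _lock;
    pthread_cond_t _cond;
};

// pop(1) on an empty queue should block about one second, then throw.
int demo()
{
    BlockingQueue<int> q;
    time_t start = time(NULL);
    try {
        q.pop(1);
    } catch (const char*) {
        return (int)(time(NULL) - start);  // roughly 1
    }
    return -1;
}
```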

Designing a concurrent blocking queue with a size limit

Finally, let us discuss a concurrent blocking queue with a size limit. It is similar to the concurrent blocking queue, but bounds how large the queue can grow. Such bounded queues are genuinely needed in many embedded systems with limited memory.
In the blocking queue, only read threads need to wait, when the queue holds no data. In the bounded blocking queue, write threads also need to wait, when the queue is full.

The code is as follows:

template <typename T>
class BoundedBlockingQueue
{
public:
    BoundedBlockingQueue(int size) : maxSize(size)
    {
        pthread_mutex_init(&_lock, NULL);
        pthread_cond_init(&_rcond, NULL);
        pthread_cond_init(&_wcond, NULL);
        _array.reserve(maxSize);
    }
    ~BoundedBlockingQueue()
    {
        pthread_mutex_destroy(&_lock);
        pthread_cond_destroy(&_rcond);
        pthread_cond_destroy(&_wcond);
    }
    void push(const T& data);
    T pop();
private:
    vector<T> _array; // or T* _array, if you prefer
    int maxSize;
    pthread_mutex_t _lock;
    pthread_cond_t _rcond, _wcond;
};

template <typename T>
void BoundedBlockingQueue<T>::push(const T& value)
{
    pthread_mutex_lock(&_lock);
    while (_array.size() == maxSize)
    {
        pthread_cond_wait(&_wcond, &_lock);
    }
    const bool was_empty = _array.empty();
    _array.push_back(value);
    pthread_mutex_unlock(&_lock);
    if (was_empty)
        pthread_cond_broadcast(&_rcond);
}

template <typename T>
T BoundedBlockingQueue<T>::pop()
{
    pthread_mutex_lock(&_lock);
    while (_array.empty())
    {
        pthread_cond_wait(&_rcond, &_lock);
    }
    const bool was_full = (_array.size() == maxSize);
    T _temp = _array.front();
    _array.erase(_array.begin());
    pthread_mutex_unlock(&_lock);
    if (was_full)
        pthread_cond_broadcast(&_wcond);
    return _temp;
}

The first point to note is that this blocking queue has two condition variables instead of one. If the queue is full, write threads wait on the _wcond condition variable, and a read thread notifies all waiting writers after fetching data from the queue. Similarly, if the queue is empty, read threads wait on the _rcond variable, and a write thread broadcasts to all waiting readers after inserting data into the queue. What happens if no threads are waiting on _wcond or _rcond when a broadcast is sent? Nothing; the system simply ignores the notification. Also note that both condition variables use the same mutex.
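A minimal sketch of this interplay, with a queue of capacity one so the writer is forced to block (the `consumer`/`demo` names, the global queue, and the 100 ms delay are illustrative; destructor omitted for brevity):

```cpp
#include <pthread.h>
#include <unistd.h>
#include <vector>
#include <cassert>
using namespace std;

// Condensed BoundedBlockingQueue from above, capacity fixed at construction.
template <typename T>
class BoundedBlockingQueue
{
public:
    BoundedBlockingQueue(int size) : maxSize(size)
    {
        pthread_mutex_init(&_lock, NULL);
        pthread_cond_init(&_rcond, NULL);
        pthread_cond_init(&_wcond, NULL);
        _array.reserve(maxSize);
    }
    void push(const T& value)
    {
        pthread_mutex_lock(&_lock);
        while ((int)_array.size() == maxSize)   // full: the writer waits
            pthread_cond_wait(&_wcond, &_lock);
        const bool was_empty = _array.empty();
        _array.push_back(value);
        pthread_mutex_unlock(&_lock);
        if (was_empty)
            pthread_cond_broadcast(&_rcond);
    }
    T pop()
    {
        pthread_mutex_lock(&_lock);
        while (_array.empty())                  // empty: the reader waits
            pthread_cond_wait(&_rcond, &_lock);
        const bool was_full = ((int)_array.size() == maxSize);
        T temp = _array.front();
        _array.erase(_array.begin());
        pthread_mutex_unlock(&_lock);
        if (was_full)
            pthread_cond_broadcast(&_wcond);
        return temp;
    }
private:
    vector<T> _array;
    int maxSize;
    pthread_mutex_t _lock;
    pthread_cond_t _rcond, _wcond;
};

static BoundedBlockingQueue<int> g_queue(1);   // capacity of one
static int g_sum = 0;

static void* consumer(void*)
{
    usleep(100000);            // let the producer fill up and block
    g_sum += g_queue.pop();    // frees a slot; the producer's push resumes
    g_sum += g_queue.pop();
    return NULL;
}

// The producer's second push blocks on _wcond until the consumer pops.
int demo()
{
    pthread_t tid;
    pthread_create(&tid, NULL, consumer, NULL);
    g_queue.push(1);
    g_queue.push(2);           // blocks ~100 ms while the queue is full
    pthread_join(tid, NULL);
    return g_sum;              // 1 + 2
}
```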
