[Repost] Buffer design: the ring queue

Source: Internet
Author: User

Original link: http://blog.csdn.net/billow_zhang/article/details/4420789

When two modules of a program communicate, a buffer is a frequently used mechanism.

For example, the write module writes information into the buffer, and the read module reads the information back out. This brings several benefits:

    • The program is cleanly divided into modules, giving a good modular architecture: the write side and the read side become high-cohesion, low-coupling modules.
    • It smooths out uneven processing speeds between the write side and the read side, so the whole pipeline tends toward a steady, uniform rate. A slow reader no longer forces the writer to wait and degrade response time, and bursty writing no longer leaves the reader alternating between overload and idleness.
    • It increases the concurrency of the processing. Because the write and read modules are well designed and separated, they can be isolated from each other and run as different threads or processes, with the buffer size as a tuning knob to bring the two sides into a well-matched running state. For example, the write module may use N threads or processes, the read module M threads or processes, and the buffer may be configured to hold L entries; N, M, and L can be chosen by load testing to suit the specific application. A mechanism that adjusts them automatically is also possible, though it of course adds design complexity.

A buffer is clearly not suitable in the following situations:

    • Receiving and processing the data are tightly coupled and hard to split into modules.
    • There is no significant speed mismatch between the modules, or the mismatch is not the main problem.
    • A synchronous response is required. The write side simply pushes information into the queue and cannot obtain a response from the read side's processing, so a buffer suits only asynchronous information transfer.
Design of the buffer:
    • The buffer is a FIFO queue: the write module inserts records into the queue, and the read module pops them out.
    • The write module and the read module must coordinate and synchronize with each other.
    • When the write or read module is multi-threaded or multi-process, access to the queue must be protected as a critical section.

The queue is implemented as a ring (circular) queue. A ring queue needs no dynamic memory allocation and deallocation during operation: it reuses a fixed-size block of memory. During actual insert and eject operations, each push advances the rear index and each pop advances the front index, both wrapping around modulo the capacity. If pushes outpace pops, the rear index catches up with the front index: the queue is full, no further push is possible, and the writer must wait for pops to free up space. If pops outpace pushes, the front index catches up with the rear index: the queue is empty, no further pop is possible, and the reader must wait for new data to be pushed.

The following is the source listing for the data structure of the ring-queue class.

/* LoopQue.h
   Author: zhangtao
   Date: July 26, 2009
*/
#ifndef LOOPQUE_H
#define LOOPQUE_H

#include <new>
#include <stdint.h>

namespace xtl {

template <typename _Tp>
class loopque_impl {
public:
    // Extra bytes needed beyond sizeof(loopque_impl) for max_size records.
    static int addsize(int max_size) {
        return max_size * sizeof(_Tp);
    }
    loopque_impl(int msize) : _front(0), _rear(0), _size(0), max_size(msize) {}
    _Tp& front() { return data[_front]; }
    void push(const _Tp& value) {
        data[_rear] = value;
        _rear = (_rear + 1) % max_size;
        _size++;
    }
    void pop() {
        _front = (_front + 1) % max_size;
        _size--;
    }
    int check_pop(_Tp& tv) {
        if (empty())
            return -1;
        tv = front();
        pop();
        return 0;
    }
    int check_push(const _Tp& value) {
        if (full())
            return -1;
        push(value);
        return 0;
    }
    bool full() const { return _size == max_size; }
    bool empty() const { return _size == 0; }
    int size() const { return _size; }
    int capacity() const { return max_size; }
private:
    int32_t _front;          // front index
    int32_t _rear;           // rear index
    int32_t _size;           // number of records in the queue
    const int32_t max_size;  // queue capacity
    _Tp data[0];             // placeholder for the data records
};

template <typename _Tp>
struct loopque_allocate {
    loopque_impl<_Tp>& allocate(int msize) {
        char *p = new char[sizeof(loopque_impl<_Tp>) +
                           loopque_impl<_Tp>::addsize(msize)];
        return *(new (p) loopque_impl<_Tp>(msize));
    }
    void deallocate(void *p) {
        delete [] (char *) p;
    }
};

template <typename _Tp, typename Alloc = loopque_allocate<_Tp> >
class loopque {
public:
    typedef _Tp value_type;
    loopque(int msize) : impl(alloc.allocate(msize)) {}
    ~loopque() { alloc.deallocate((void *) &impl); }
    value_type& front() { return impl.front(); }
    const value_type& front() const { return impl.front(); }
    void push(const value_type& value) { impl.push(value); }
    void pop() { impl.pop(); }
    int check_pop(value_type& tv) { return impl.check_pop(tv); }
    int check_push(const value_type& value) { return impl.check_push(value); }
    bool full() const { return impl.full(); }
    bool empty() const { return impl.empty(); }
    int size() const { return impl.size(); }
private:
    Alloc alloc;
    loopque_impl<_Tp>& impl;
};

} // end of namespace xtl
#endif // end of ifndef LOOPQUE_H

The program defines two classes, loopque_impl and loopque. The former defines the basic data structure and operations of the ring queue; the latter wraps the memory allocation.
The constructor loopque_impl(int msize) takes the queue's capacity as a parameter; that is, the size of the queue is fixed when the object is created.

The array for the queue's storage is declared as `_Tp data[0]`, which may seem strange. The space actually needed is an array of max_size records, but since max_size is only determined at creation time, data[0] merely acts as a placeholder. Consequently, loopque_impl cannot be used directly: memory of the proper size must be allocated first, which is one important reason the wrapper class loopque is needed. You may wonder why this trick is used at all. Wouldn't it be simpler to declare a pointer, say `_Tp *data`, and then do `data = new _Tp[max_size]` in the constructor? But do not forget that this ring-queue class may well be shared between processes, for example with one process pushing and another popping. The object then has to be placed in shared memory, and a class whose members include pointers or references causes a great deal of trouble there, because the mapped addresses can differ between processes. The placeholder design avoids that trouble and complexity.

The static member function addsize() reports how much additional memory a loopque_impl requires beyond sizeof(loopque_impl).

loopque shows how to use loopque_impl while solving the memory-allocation problem. Its template parameters include, besides the record type _Tp, an Alloc type: the class used to allocate memory for the loopque_impl.

Among loopque's members is a reference, impl, to a loopque_impl. This reference refers to the storage that Alloc allocates for the loopque_impl.


The Alloc template parameter has a default value of loopque_allocate. This default allocator shows a sample implementation of allocating a loopque_impl:


char *p = new char[sizeof(loopque_impl<_Tp>) + loopque_impl<_Tp>::addsize(msize)];
return *(new (p) loopque_impl<_Tp>(msize));


Here we first allocate, based on msize, enough memory to hold data[msize], and then use placement new to construct the loopque_impl in that memory region. The class can then use its member `_Tp data[0]` as if it were `_Tp data[msize]`. Now suppose we design another allocator class, say loopque_shmalloc, that obtains shared memory and constructs the loopque_impl in it. Then we can use:
loopque<_Tp, loopque_shmalloc>

to create a ring-queue class usable for communication between processes.

At this point, we can summarize:
    • The data structure and basic operations of the ring queue are fairly simple. The main operations are push and pop.
    • The ring queue may need to be shared between threads or between processes. Inter-process sharing in particular makes memory allocation a relatively complex issue. We address it by keeping the basic data structure free of pointer and reference members, so that a memory-management class can place it anywhere, and by designing a second wrapper class that takes the memory-allocation policy as a customizable template parameter.
So far we have designed and implemented the ring-queue data structure, along with a policy mechanism that lets its storage be customized, for example placed in shared memory. The work that follows will be more challenging:
    • A push operation must wait until the queue has free space, that is, until the queue is not full. Until that state occurs the push cannot proceed, so the pushing side blocks.
    • A pop operation must wait until the queue contains data, that is, until the queue is not empty. Until that state occurs the pop cannot proceed, so the popping side blocks.
    • Push and pop are generally performed by different processes or threads, and there may be multiple pushing and multiple popping processes or threads at once.

This work will be carried out in a subsequent design. Stay tuned.

Attachment: loopque test program:

/* tst-loopque.cpp
   Test program for the <LoopQue> class
   Author: zhangtao
   Date: July 27, 2009
*/
#include <iostream>
#include <cstdlib>   // for the atol function
#include "xtl/loopque.h"

int
main(int argc, char **argv)
{
    int qsize = 0;
    if (argc > 1)
        qsize = atol(argv[1]);
    if (qsize < 1)
        qsize = 5;
    xtl::loopque<int> queue(qsize);
    for (int i = 0; i < (qsize - 1); i++) {
        queue.push(i);
        std::cout << "loop push: " << i << "\n";
    }
    queue.check_push(1000);
    std::cout << "full: " << queue.full() << " size: "
              << queue.size() << "\n\n";
    for (int i = 0; i < qsize; i++) {
        int val = queue.front();
        std::cout << "loop pop: " << val << "\n";
        queue.pop();
    }
    std::cout << "\nempty: " << queue.empty() << " size: "
              << queue.size() << "\n";
    return 0;
}
