Ring Buffer Implementation in the C Language

Source: Internet
Author: User
Tags: float number, int size, message queue, mutex

The implementation of a ring buffer in the C language

Besides the problem of too many message-queue lock calls, the other annoying cost is excessive memory allocation and release. Frequent allocation not only increases system overhead but also steadily fragments memory, which is very unfavorable to the long-term stable operation of a server. We could use a memory pool, such as the small-object allocator that comes with the SGI STL. But for memory that is consumed in strict first-in, first-out order, in blocks that are not small and not uniform in size, the more commonly used solution is called a ring buffer; the mangos network code has one as well, and its principle is fairly simple.

It is just like two people chasing each other around a round table. The runner is controlled by the network I/O thread: whenever data is written, that person runs forward. The chaser is the logical thread, constantly pursuing the runner. What if the chaser catches up? Then there is no data left to read, so it waits a while, lets the runner get a few steps ahead, and then resumes the chase; the game must never be allowed to stop. And if the chaser is too slow, the runner may come all the way around and catch up with the chaser from behind; then the runner has to take a rest. If that keeps happening, you will have to switch to a faster chaser, or the game cannot go on.

In particular, we have emphasized that strict first-in, first-out processing is a requirement for using a ring buffer. That is, everyone must obey the rules: the chaser may not cut across the table, and the runner is certainly not allowed to run the other way around. As for why, no further explanation should be needed.

Ring buffers are a good technique for avoiding frequent memory allocation, and in most cases this reuse of memory also lets us do more with fewer memory blocks.

In the network I/O thread, we prepare a ring buffer for each connection to temporarily store received data, in order to handle half packets and stuck-together packets. After unpacking and decryption are complete, we copy the packet into the logical thread's message queue. If we use only a single queue, it too is a ring buffer: the I/O thread writes at the front while the logical thread reads behind it, each chasing the other. But if we use the optimization scheme described earlier, we may no longer need a ring buffer here, or at least the buffers no longer need to be circular. Since we no longer read and write the same queue simultaneously, each queue is handed to the logical thread once it is filled, and the logical thread empties it and hands it back to the I/O thread to fill again, so a fixed-size buffer suffices. No matter; such a good technique is bound to be useful elsewhere.

In communication programs, the ring buffer is a frequently used data structure for storing the data sent and received during communication. It is a first-in, first-out circular buffer that can provide mutually exclusive access to the communicating parties.

1. Implementation principle of the ring buffer

A ring buffer usually has a read pointer and a write pointer. The read pointer points to the next readable data in the buffer; the write pointer points to the next writable position. Reading and writing the buffer are realized by moving these two pointers. In general, a reader of the ring buffer only moves the read pointer, and a writer only moves the write pointer. If there is exactly one reader and one writer, no mutex protection is needed to keep the data correct. If more than one reader or writer accesses the ring buffer, a mutex protection mechanism must be added so that the users access the buffer under mutual exclusion.
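As a sketch of the multi-user case, every index update can be guarded by a mutex. All of the names below are illustrative assumptions, not code from the article's later example:

```c
#include <pthread.h>
#include <stdbool.h>

#define RB_SIZE 8

/* A ring buffer shared by several reader/writer threads.
   The mutex protects the indexes and the element count together. */
typedef struct {
    double data[RB_SIZE];
    int read_idx;          /* next position to read  */
    int write_idx;         /* next position to write */
    int count;             /* number of stored elements */
    pthread_mutex_t lock;  /* protects all fields above */
} locked_ring;

void lr_init(locked_ring *rb) {
    rb->read_idx = rb->write_idx = rb->count = 0;
    pthread_mutex_init(&rb->lock, NULL);
}

/* Returns false if the buffer was full. */
bool lr_put(locked_ring *rb, double v) {
    bool ok = false;
    pthread_mutex_lock(&rb->lock);
    if (rb->count < RB_SIZE) {
        rb->data[rb->write_idx] = v;
        rb->write_idx = (rb->write_idx + 1) % RB_SIZE;
        rb->count++;
        ok = true;
    }
    pthread_mutex_unlock(&rb->lock);
    return ok;
}

/* Returns false if the buffer was empty. */
bool lr_get(locked_ring *rb, double *out) {
    bool ok = false;
    pthread_mutex_lock(&rb->lock);
    if (rb->count > 0) {
        *out = rb->data[rb->read_idx];
        rb->read_idx = (rb->read_idx + 1) % RB_SIZE;
        rb->count--;
        ok = true;
    }
    pthread_mutex_unlock(&rb->lock);
    return ok;
}
```

In the single-reader/single-writer case the lock can be dropped, as discussed later in this article.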

Figures 1, 2, and 3 are schematic diagrams of a ring buffer. Figure 1 shows the initial state, where both the read and write pointers point to the first block. Figure 2 shows the state after one item has been added to the ring buffer: the write pointer has moved to block 2 while the read pointer has not moved. Figure 3 shows the state after further writing and reading: two more items have been added to the ring buffer and one item has been read.

2. Example: implementation of a ring buffer

The ring buffer is one of the most widely used data structures in data-communication programs. The following code implements one:

/* ringbuf.c */
#include <stdio.h>
#include <ctype.h>

#define NMAX 8

int iput = 0; /* current write position in the ring buffer */
int iget = 0; /* current read position in the ring buffer */
int n = 0;    /* total number of elements in the ring buffer */
double buffer[NMAX];

/* Computes the next index in the ring buffer; when the end of the
   buffer is reached, it circles back to the head.
   Valid index range of the ring buffer: 0 to (NMAX - 1). */
int addring(int i)
{
    return (i + 1) == NMAX ? 0 : i + 1;
}

/* Take an element from the ring buffer. */
double get(void)
{
    int pos;
    if (n > 0) {
        pos = iget;
        iget = addring(iget);
        n--;
        return buffer[pos];
    } else {
        printf("Buffer is empty\n");
        return 0.0;
    }
}

/* Insert an element into the ring buffer. */
void put(double z)
{
    if (n < NMAX) {
        buffer[iput] = z;
        iput = addring(iput);
        n++;
    } else
        printf("Buffer is full\n");
}

int main(void)
{
    char opera[5];
    double z;
    do {
        printf("Please input p|g|e? ");
        scanf("%s", opera);
        switch (tolower(opera[0])) {
        case 'p': /* put */
            printf("Please input a float number? ");
            scanf("%lf", &z);
            put(z);
            break;
        case 'g': /* get */
            z = get();
            printf("%8.2f from buffer\n", z);
            break;
        case 'e':
            printf("end\n");
            break;
        default:
            printf("%s - operation command error!\n", opera);
        } /* end switch */
    } while (tolower(opera[0]) != 'e');
    return 0;
}

In the CAN communication card's device driver, to strengthen the card's communication ability and improve communication efficiency, a two-level buffer structure is used in accordance with the characteristics of CAN: a transceiver buffer facing the CAN communication card directly, and a receive-frame buffer facing system calls directly.

The transceiver buffer in communication code generally uses a ring queue (also called a FIFO queue). A ring buffer lets reads and writes execute concurrently; the reading and writing processes can access the buffer with the producer-consumer model, which makes the cache convenient to use and manage. However, a plain ring buffer does not execute very efficiently: before each byte is read, it must check whether the buffer is empty, and when the tail pointer is moved it must handle the wrap-around (when the pointer reaches the end of the buffer memory, it must be redirected to the buffer's first address); the head pointer needs the same wrap-around handling when it moves. Most of the code therefore deals with rare edge cases, and only a small part does genuinely useful work: the so-called 80/20 relationship in software engineering.

Combined with the actual situation of CAN communication, this design improves the ring queue, which can greatly raise the efficiency of sending and receiving data. The receive and transmit buffers on the CAN communication card hold exactly one CAN frame at a time, and according to the CAN communication protocol, the CAN controller's transmit data consists of a 1-byte identifier, one byte of RTR and DLC bits, and an 8-byte data region, 10 bytes in total; the receive buffer is similar, also 10 bytes of registers. So the CAN controller's data consists of short, fixed-length frames (the data region may hold fewer than 8 bytes).

As a result, allocating memory in 10-byte blocks is much more convenient: whenever the buffer needs memory, 10 bytes are allocated directly, and because those 10 bytes sit at linear addresses, no wrap-around handling is needed inside a block. More importantly, when writing data into the buffer, we only need to check once whether a free block exists and obtain the pointer to its start, which removes repeated conditional checks and greatly improves the program's execution efficiency. Likewise, reading from the buffer queue takes a whole 10-byte block at a time, which also reduces repeated condition tests. The data structure below, called block_ring_t, serves in the CAN card driver as the buffer for sending and receiving data:

typedef struct {
    long signature;
    unsigned char *head_p;
    unsigned char *tail_p;
    unsigned char *begin_p;
    unsigned char *end_p;
    unsigned char buffer[BLOCK_RING_BUFFER_SIZE];
    int usedbytes;
} block_ring_t;
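A minimal sketch of how whole 10-byte frames could be written to and read from such a structure follows. The function names and the simplified struct layout (head/tail pointers plus usedbytes, without the driver's signature and begin/end fields) are assumptions for illustration, not the actual driver code:

```c
#include <string.h>
#include <stdbool.h>

#define BLOCK_SIZE 10                 /* one CAN frame: id + RTR/DLC + 8 data bytes */
#define NUM_BLOCKS 16
#define BLOCK_RING_BUFFER_SIZE (BLOCK_SIZE * NUM_BLOCKS)

/* Simplified block ring: head_p/tail_p walk the buffer in whole
   10-byte steps, so no wrap-around handling is needed inside a block. */
typedef struct {
    unsigned char buffer[BLOCK_RING_BUFFER_SIZE];
    unsigned char *head_p;   /* next block to read  */
    unsigned char *tail_p;   /* next block to write */
    int usedbytes;           /* occupied bytes; 0 = empty, SIZE = full */
} block_ring;

void br_init(block_ring *r) {
    r->head_p = r->tail_p = r->buffer;
    r->usedbytes = 0;
}

/* Copy one whole frame in: one full test per frame, not per byte. */
bool br_put(block_ring *r, const unsigned char frame[BLOCK_SIZE]) {
    if (r->usedbytes == BLOCK_RING_BUFFER_SIZE)
        return false;                        /* full */
    memcpy(r->tail_p, frame, BLOCK_SIZE);
    r->tail_p += BLOCK_SIZE;
    if (r->tail_p == r->buffer + BLOCK_RING_BUFFER_SIZE)
        r->tail_p = r->buffer;               /* wrap only between blocks */
    r->usedbytes += BLOCK_SIZE;
    return true;
}

/* Copy one whole frame out: one empty test per frame. */
bool br_get(block_ring *r, unsigned char frame[BLOCK_SIZE]) {
    if (r->usedbytes == 0)
        return false;                        /* empty */
    memcpy(frame, r->head_p, BLOCK_SIZE);
    r->head_p += BLOCK_SIZE;
    if (r->head_p == r->buffer + BLOCK_RING_BUFFER_SIZE)
        r->head_p = r->buffer;
    r->usedbytes -= BLOCK_SIZE;
    return true;
}
```

Because a frame never straddles the end of the buffer, the per-byte wrap-around test of the generic ring queue disappears, which is exactly the efficiency gain the text describes.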

This data structure adds a member, usedbytes, to the generic ring queue; it records how many bytes of the buffer are currently occupied. With usedbytes, judging whether the buffer is full or empty becomes easy: when usedbytes == 0 the buffer is empty, and when usedbytes == BLOCK_RING_BUFFER_SIZE the buffer is full.

In addition to the transceiver buffer there is also a receive-frame buffer. The receive-frame queue manages the data frames obtained through the Hilon A protocol. Multiple data frames may be in flight at once: according to the CAN bus remote communication protocol, a high-priority message preempts the bus, so a low-priority data frame sent in several pieces may be interrupted by a high-priority one. When packets belonging to multiple data frames arrive interleaved in this way, a receive queue is needed to manage the frames being assembled at the same time. When a new packet arrives, the driver uses addr (the mailing address), mode (the communication mode), and index (the packet's sequence number) to decide whether it starts a new data frame. If so, a new frame_node is opened; otherwise, if a corresponding frame node already exists, the data is appended to the end of that frame. When inserting the data, the sequence number of the received packet is checked, and the packet's data is discarded if it is incorrect. Each time a new frame_node is created, memory space is requested from the frame_queue; when the frame_queue is full, the longest-waiting node (the first frame received but not yet completed) is released and the pointer to that node is reused. When a system call reads a received frame, the node's space is freed so that the device driver can reuse it.

Ring buffer: circular buffer queue learning

In one project, a FIFO buffer queue was needed between threads: one thread adds data to the queue and another takes the data out (the classic producer-consumer problem). I first considered the STL vector container, but frequently deleting the front element causes memory movement and reduces efficiency. Using a linked list as the queue instead requires frequently allocating and releasing node memory. So I implemented a FIFO queue of fixed size that simply reads an array circularly.

Reads and writes of the queue are synchronized with the outside by a reader-writer guard (a separate RWGuard class; see another article).

Tailored to the project's needs, this is a simple ring buffer queue, simpler than an STL vector.

PS: This was my first use of templates. It turns out the class template definition must be placed in the .h file, or there will be link errors.

template <class _Type>
class CShareQueue
{
public:
    CShareQueue();
    CShareQueue(unsigned int bufsize);
    virtual ~CShareQueue();

    _Type pop_front();
    BOOL push_back(_Type item);

    // Return the capacity. Warning: requires external control for data consistency
    unsigned int capacity() {
        return m_capacity;
    }

    // Return the current element count. Warning: requires external control for data consistency
    unsigned int size() {
        return m_size;
    }

    // Is the queue full? Warning: requires external control for data consistency
    BOOL IsFull() {
        return (m_size >= m_capacity);
    }

    BOOL IsEmpty() {
        return (m_size == 0);
    }

protected:
    UINT m_head;
    UINT m_tail;
    UINT m_size;
    UINT m_capacity;
    _Type *pBuf;
};

template <class _Type>
CShareQueue<_Type>::CShareQueue() : m_head(0), m_tail(0), m_size(0)
{
    pBuf = new _Type[512]; // default capacity 512
    m_capacity = 512;
}

template <class _Type>
CShareQueue<_Type>::CShareQueue(unsigned int bufsize) : m_head(0), m_tail(0), m_size(0)
{
    // Note: the original source also rejected over-large sizes here,
    // but the upper bound was garbled; only the lower bound is kept.
    if (bufsize < 1)
    {
        pBuf = new _Type[512];
        m_capacity = 512;
    }
    else
    {
        pBuf = new _Type[bufsize];
        m_capacity = bufsize;
    }
}

template <class _Type>
CShareQueue<_Type>::~CShareQueue()
{
    delete[] pBuf;
    pBuf = NULL;
    m_head = m_tail = m_size = m_capacity = 0;
}

// Pop an element from the front
template <class _Type>
_Type CShareQueue<_Type>::pop_front()
{
    if (IsEmpty())
    {
        // The original returned NULL; a value-initialized _Type is
        // safer when _Type is not a pointer type.
        return _Type();
    }

    _Type itemTmp;
    itemTmp = pBuf[m_head];
    m_head = (m_head + 1) % m_capacity;
    --m_size;

    return itemTmp;
}

// Push an element at the tail
template <class _Type>
BOOL CShareQueue<_Type>::push_back(_Type item)
{
    if (IsFull())
    {
        return FALSE;
    }

    pBuf[m_tail] = item;
    m_tail = (m_tail + 1) % m_capacity;
    ++m_size;

    return TRUE;
}


http://hi.baidu.com/zkheartboy/blog/item/f162b20fdbf250eeab6457be.html

A universal class for implementing ring buffers: http://hi.baidu.com/broland/blog/item/6a6ddf813f3425c69123d956.html

http://hi.baidu.com/uc100200/blog/item/c6d670543df4544fd00906ac.html

http://hi.baidu.com/282280072/blog/item/9927685090cbb9928d543075.html

★ Ring Buffer Introduction

The ring buffer is a data structure commonly used in the producer-consumer model. The producer puts data in at one end of the array and the consumer removes it from the other end; when the end of the array is reached, the producer wraps back to the array's head.

If there is only one producer and one consumer, lock-free access to the ring buffer can be achieved. Only the producer accesses and modifies the write index, so as long as the writer stores the new value into the buffer before updating the index, the reader will always see a consistent data structure. Similarly, only the consumer accesses and modifies the read index.
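The single-producer/single-consumer case can be sketched with C11 atomics. The names are illustrative, and the release/acquire pairing is one way to realize the "store the value before updating the index" rule stated above; the article itself does not prescribe a particular memory-ordering scheme:

```c
#include <stdatomic.h>
#include <stdbool.h>

#define SPSC_SIZE 8u   /* power of two keeps the wrap cheap */

/* Single-producer/single-consumer ring: only the producer writes
   `tail`, only the consumer writes `head`, so no lock is needed.
   The indexes run freely; the difference tail - head is the count,
   which also sidesteps the usual empty-vs-full ambiguity. */
typedef struct {
    int data[SPSC_SIZE];
    atomic_uint head;   /* consumer's read index  */
    atomic_uint tail;   /* producer's write index */
} spsc_ring;

/* Producer side: publish the element before advancing the index. */
bool spsc_push(spsc_ring *r, int v) {
    unsigned tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (tail - head == SPSC_SIZE)
        return false;                      /* full */
    r->data[tail % SPSC_SIZE] = v;
    /* release: the data store above becomes visible before the index bump */
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return true;
}

/* Consumer side: read the index with acquire, then the element. */
bool spsc_pop(spsc_ring *r, int *out) {
    unsigned head = atomic_load_explicit(&r->head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (tail == head)
        return false;                      /* empty */
    *out = r->data[head % SPSC_SIZE];
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return true;
}
```

With more than one producer or consumer this scheme breaks down, and the mutex-protected variant from earlier in the article applies instead.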

Principle diagram of the ring buffer implementation

As the figure shows, when the read and write pointers are equal the buffer is empty, and when the write pointer laps around and comes up just behind the read pointer the buffer is full.

★ Ring Buffer Internal Structure

◇ Similar external interface

Before introducing the ring buffer, let's first review an ordinary queue. An ordinary queue has a write end and a read end. When the queue is empty, the read end cannot take data; when the queue is full (it has reached its maximum size), the write end cannot add data.

To its users, a ring buffer looks the same as a queue buffer. It also has a write end (for push) and a read end (for pop), as well as buffer "full" and "empty" states. Switching from a queue buffer to a ring buffer is therefore a smooth transition for users.

◇ different internal structure

Although the external interfaces of the two are similar, their internal structures and operating mechanisms are very different. There is no need to be verbose here about the internal structure of a queue; let's focus on the internal structure of the ring buffer.

We can imagine the read end of the ring buffer (hereafter R) and the write end (hereafter W) as two people chasing each other around a stadium track (R chases W). When R catches up with W, the buffer is empty; when W laps R (W has run one full circle more than R), the buffer is full.

To make the image concrete, here is a slightly adapted illustration:

As can be seen from the illustration above, all push and pop operations of a ring buffer take place within a fixed storage space. With a queue buffer, pushing may allocate storage for the new element, and popping may free the storage of the discarded element. Compared with the queue approach, the ring approach thus avoids the per-element allocation and release of storage. This is a major advantage of the ring buffer.

★ The realization of ring buffer

If you already have a ready-made ring buffer available to use and are not interested in its internal implementation, you can skip this section.

◇ array mode vs linked list mode

The inside of a ring buffer can be implemented either on top of an array (here meaning a contiguous storage space) or on top of a linked list.

An array is physically a one-dimensional, contiguous, linear structure, and its storage can be allocated once at initialization; this is the advantage of the array approach. But to simulate a ring with an array, you have to logically join the array's head and tail. When traversing the array sequentially, the tail element (the last one) needs special treatment: accessing the element after the tail returns to the head element (element 0). As shown in the following illustration:

The linked-list approach is exactly the opposite of the array approach. A list needs no special handling for the end-to-head connection, but it is more cumbersome to initialize, and in some cases (such as the IPC use mentioned later) it is less convenient to use.

◇ Read and write operation

The ring buffer maintains two indexes, corresponding to the write end (W) and the read end (R). When writing (push), first make sure the ring is not full, then copy the data into the element at W, and finally advance W to the next element. When reading (pop), first make sure the ring is not empty, then return the element at R, and finally advance R to the next element.

◇ judge "Empty" and "full"

The operations above are not complicated, but there is one small nuisance: when the ring is empty and when the ring is full, R and W point to the same position, which makes it impossible to tell "empty" from "full". There are generally two ways to solve this.

Option 1: always keep one element slot unused

When the ring is empty, R and W coincide. The ring is considered full as soon as W, running ahead of R, comes within one element's distance of R. When each element occupies a lot of storage space, this approach looks rather crude (it wastes one element's worth of space).

Option 2: Maintain additional variables

If you do not like the above approach, you can solve it with an extra variable instead. For example, an integer (>= 0) can record the number of elements currently stored in the ring. When R and W coincide, this variable tells you whether the state is "empty" or "full".
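Option 1 can be sketched in a few lines. The names and the fixed size are illustrative:

```c
#define RING_SIZE 8  /* usable capacity is RING_SIZE - 1 under option 1 */

/* Option 1: sacrifice one slot so that "empty" and "full" can be
   distinguished from the two indexes alone, with no extra counter. */
int ring_is_empty(unsigned r, unsigned w) {
    return r == w;                        /* reader caught the writer */
}

int ring_is_full(unsigned r, unsigned w) {
    return (w + 1) % RING_SIZE == r;      /* writer is one slot behind reader */
}
```

Option 2 replaces these predicates with tests on the element counter (count == 0 for empty, count == RING_SIZE for full), at the cost of maintaining one more variable on every push and pop.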

◇ Storage of elements

Because the whole point of a ring buffer is to reduce storage allocation, the element type kept in the buffer should be chosen carefully: store value types rather than pointer (reference) types whenever possible. Pointer-typed data implies allocating and releasing storage elsewhere (such as heap memory), which undercuts the benefit of the ring buffer.

★ Application Occasions

We have just introduced the internal implementation mechanism of the ring buffer. Following the usual practice of earlier posts, let's look at its use across threads and across processes.

If your programming language and development libraries include a ready-made, mature ring buffer, it is strongly recommended that you use the ready-made library: do not reinvent the wheel, and consider writing your own only when no ready-made one can be found. If you are practicing purely in your spare time, that is a different story.

◇ for concurrent Threads

Like a queue buffer used across threads, a ring buffer used across threads must also consider thread safety. Unless the ring-buffer library you use has already taken care of thread safety for you, you will have to handle it yourself. Ring buffers are used a lot in threaded code and there is plenty of related material online; a few options are briefly introduced below.

For C++ programmers, the circular_buffer template provided by Boost, first introduced in the Boost 1.35 release, is highly recommended. Given Boost's standing in the C++ community, you should be able to use this template with confidence.

For C programmers, you can look at the open-source project circbuf, but the project is GPL-licensed, not very active, and has only one developer, so use it with care; it is recommended only as a reference.

For C# programmers, you can refer to an example on CodeProject.

◇ for concurrent processes

There seem to be few ready-made libraries for ring buffers between processes; here you will have to roll your own.

IPC mechanisms suitable for an inter-process ring buffer commonly include shared memory and files. With both, the ring buffer is usually implemented as an array: the program allocates a fixed-length storage space in advance, and the concrete read and write operations, the "empty"/"full" judgments, and the element storage can all follow what was described earlier.
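A rough, POSIX-only sketch of the shared-memory variant is shown below: the ring is laid out as a plain struct inside a shared segment that each process maps into its own address space. The segment name and all function names are illustrative, error handling is trimmed, and (as noted later in the article) cross-process mutual exclusion still has to be added on top, for example with a process-shared mutex or semaphore:

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/ring_demo"   /* illustrative segment name */
#define RING_CAP 64

/* Array-based ring placed directly in a shared-memory segment.
   Each cooperating process maps the same segment and sees the
   same head/tail indexes and data array. */
typedef struct {
    unsigned head;              /* next slot to read  */
    unsigned tail;              /* next slot to write */
    int data[RING_CAP];
} shm_ring;

/* Create (or open) the segment and map the ring into this process. */
shm_ring *shm_ring_open(void) {
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, sizeof(shm_ring)) < 0) {
        close(fd);
        return NULL;
    }
    void *p = mmap(NULL, sizeof(shm_ring), PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    close(fd);                  /* the mapping keeps the segment alive */
    return p == MAP_FAILED ? NULL : (shm_ring *)p;
}
```

Because the segment has a fixed size chosen up front, the array form of the ring fits shared memory naturally, whereas a linked list of separately allocated nodes would not survive the address-space boundary between processes.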

Shared memory performs very well and suits scenarios with heavy data traffic. However, some languages, such as Java, do not support shared memory, so this approach is somewhat limited in systems developed cooperatively in multiple languages.

Files are extremely well supported by programming languages; almost all of them can manipulate files. However, files may be limited by disk I/O performance, so the file approach is not well suited to fast data transfer; but for scenarios where each "data unit" is large, the file approach is worth considering.

For a ring buffer between processes, the problems of synchronization and mutual exclusion between the processes must also be considered.
