The circular buffer (ring buffer), a concept that comes (as far as I know) from the Linux kernel, provides a lock-free way of resolving contention in one special situation: when there is exactly one producer and exactly one consumer. In every other case a lock is still required. It corresponds to the following definition in the Linux kernel:
struct kfifo {
    unsigned char *buffer;
    unsigned int size;
    unsigned int in;
    unsigned int out;
    spinlock_t *lock;
};
Of course there are corresponding operation functions for it, but they are not today's focus and are not covered here; we only need to understand the concept.
About the fields: buffer points to the memory that holds the data, size is the size of that buffer, in is the write index, out is the read index, and lock is a spinlock attached to struct kfifo (so the "no locks" claim does not hold here) that prevents multiple processes from accessing the structure concurrently. When in == out the buffer is empty; when (in - out) == size the buffer is full.
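To make the in/out arithmetic concrete, here is a user-space sketch written in the spirit of the kernel's __kfifo_put. It is an illustration, not the kernel source: the spinlock is omitted because one producer and one consumer do not need it, and size is assumed to be a power of two so that in & (size - 1) performs the wrap.
#include <algorithm>
#include <cstring>

// Simplified user-space mirror of the kernel structure; the spinlock is
// dropped because the single-producer / single-consumer case does not need it.
struct kfifo_sketch {
    unsigned char *buffer;
    unsigned int   size;   // assumed to be a power of two
    unsigned int   in;     // total bytes ever written
    unsigned int   out;    // total bytes ever read
};

// Enqueue up to len bytes and return how many were actually copied.
unsigned int kfifo_put_sketch(kfifo_sketch *fifo,
                              const unsigned char *src, unsigned int len)
{
    // free space is size - (in - out); in == out means empty,
    // (in - out) == size means full
    len = std::min(len, fifo->size - fifo->in + fifo->out);

    // first chunk: from the current write index up to the physical end
    unsigned int l = std::min(len, fifo->size - (fifo->in & (fifo->size - 1)));
    std::memcpy(fifo->buffer + (fifo->in & (fifo->size - 1)), src, l);

    // second chunk (only when the write wraps): the rest goes to the beginning
    std::memcpy(fifo->buffer, src + l, len - l);

    fifo->in += len;   // in and out only grow; unsigned wrap keeps in - out correct
    return len;
}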
Note: the structure keeps separate read and write indices. Once the first batch of data (blue) has been written, the second batch (red) continues from the current write position, and when the end of the buffer (buffer_size) is reached, writing wraps back to the beginning of the buffer.
To illustrate this, here is a simple simulated implementation as a C++ class:
/*
 * =====================================================================================
 *  Filename:  ring_buffer_class.h      Version:  1.0
 *  Created:   2013-11-28 13:08:04      Compiler: clang
 *  Author:    sim szm, xianszm007@gmail.com
 * =====================================================================================
 */
#include <iostream>
#include <cstring>   // memcpy

class ring_buffer {
public:
    ring_buffer(void* buf, unsigned int buffer_size);
    void buffer_data(const void* data, unsigned int& len);   // copy data into the buffer
    void get_data(void* outdata, unsigned int& len);         // copy data out of the buffer
    void skip_data(unsigned int& len);                       // discard buffered data
    inline unsigned int free_space();
    inline unsigned int buffered_bytes();
private:
    void flush_state();
    // Naming note: read_ptr is where incoming data is read *into* the buffer,
    // write_ptr is where buffered data is written *out* to the caller.
    unsigned char *read_ptr, *write_ptr;
    unsigned char *end_pos;
    unsigned char *buffer;
    int max_read, max_write, buffer_data_;
};

ring_buffer::ring_buffer(void* buf, unsigned int buffer_size) {
    buffer = (unsigned char*)buf;
    end_pos = buffer + buffer_size;
    read_ptr = write_ptr = buffer;
    max_read = buffer_size;
    max_write = buffer_data_ = 0;
    flush_state();
}

void ring_buffer::buffer_data(const void* data, unsigned int& len) {
    if (len > (unsigned int)max_read)
        len = (unsigned int)max_read;        // clamp to the contiguous free space
    memcpy(read_ptr, data, len);
    read_ptr += len;
    buffer_data_ += len;
    flush_state();
}

void ring_buffer::get_data(void* outdata, unsigned int& len) {
    if (len > (unsigned int)max_write)
        len = (unsigned int)max_write;       // clamp to the contiguous readable bytes
    memcpy(outdata, write_ptr, len);
    write_ptr += len;
    buffer_data_ -= len;
    flush_state();
}

void ring_buffer::skip_data(unsigned int& len) {
    unsigned int requested_skip = len;
    for (int i = 0; i < 2; ++i) {            // the skip may wrap around, so do it twice
        int skip = (int)len;
        if (skip > max_write)
            skip = max_write;
        write_ptr += skip;
        buffer_data_ -= skip;
        len -= skip;
        flush_state();
    }
    len = requested_skip - len;              // report how many bytes were actually skipped
}

inline unsigned int ring_buffer::free_space()     { return (unsigned int)max_read; }
inline unsigned int ring_buffer::buffered_bytes() { return (unsigned int)buffer_data_; }

void ring_buffer::flush_state() {
    if (write_ptr == end_pos)                // wrap the pointers back to the start
        write_ptr = buffer;
    if (read_ptr == end_pos)
        read_ptr = buffer;
    if (read_ptr == write_ptr) {
        if (buffer_data_ > 0) {              // pointers meet with data buffered: full
            max_read = 0;
            max_write = end_pos - write_ptr;
        } else {                             // pointers meet with nothing buffered: empty
            max_read = end_pos - read_ptr;
            max_write = 0;
        }
    } else if (read_ptr > write_ptr) {
        max_read = end_pos - read_ptr;
        max_write = read_ptr - write_ptr;
    } else {
        max_read = write_ptr - read_ptr;
        max_write = end_pos - write_ptr;
    }
}
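A minimal usage sketch of the class above (the buffer sizes and contents are just illustrative); the len arguments are passed by reference and updated to report how many bytes were actually transferred:
#include <cstdio>
#include "ring_buffer_class.h"   // the ring_buffer class shown above

int main() {
    unsigned char storage[16];                 // caller-owned backing store
    ring_buffer rb(storage, sizeof(storage));

    const char msg[] = "hello ring";
    unsigned int len = sizeof(msg);            // 11 bytes, including the trailing '\0'
    rb.buffer_data(msg, len);                  // len is clamped to the free space
    std::printf("wrote %u bytes, %u buffered, %u free\n",
                len, rb.buffered_bytes(), rb.free_space());

    char out[16] = {0};
    unsigned int want = sizeof(out);
    rb.get_data(out, want);                    // want now holds the bytes actually read
    std::printf("read %u bytes: %s\n", want, out);
    return 0;
}
Note that free_space() reports only the contiguous run up to the physical end of the backing store; a request larger than that run is clamped, and the caller is expected to issue another call after flush_state() has wrapped the pointer.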
That is all there is to say about the ring buffer itself; now consider one of its applications, in log processing. Program logs provide detailed information about the state of an application before a failure, and over a period of operation a program keeps generating large amounts of trace and debug data and writing it to text files on disk. Effective logging at that volume requires a great deal of disk space, and in a multithreaded environment the space required multiplies. Regular log handling therefore runs into problems such as the availability of disk space and the slowness of disk I/O when writing data to a file; continuous writes to disk can significantly degrade the program's performance and make it run slowly. Typically the space problem is addressed with a log-rotation policy: the log is kept in several files, and each file is truncated and overwritten once it reaches a predefined number of bytes.
To overcome the space problem and minimize disk I/O, a program can instead record its trace data in memory and dump it only when requested. This circular, in-memory buffer is the cyclic buffer described above: it keeps the relevant data in memory rather than writing it to a file on disk every time, and the data can be dumped to disk when needed, for example when a user requests a dump of the in-memory data, when the program detects an error, or when the program crashes because of an illegal operation or a received signal. Circular-buffer logging thus consists of a fixed-size memory buffer that the process uses to store its log records.
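As a sketch of that idea (an illustration with made-up names, not the code of any particular logger), an in-memory circular log can append records into a fixed number of slots, overwrite the oldest ones when it wraps, and touch the disk only when a dump is requested:
#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

// Minimal sketch of a fixed-size, in-memory circular log.
// Old records are overwritten; nothing touches the disk until dump() is called.
class circular_log {
public:
    explicit circular_log(std::size_t capacity) : slots_(capacity), next_(0), count_(0) {}

    void append(const std::string& line) {
        slots_[next_ % slots_.size()] = line;   // overwrite the oldest record
        ++next_;
        if (count_ < slots_.size()) ++count_;
    }

    // Write the retained records, oldest first, e.g. on user request or on crash.
    void dump(std::FILE* out) const {
        std::size_t first = next_ - count_;
        for (std::size_t i = 0; i < count_; ++i)
            std::fprintf(out, "%s\n", slots_[(first + i) % slots_.size()].c_str());
    }

private:
    std::vector<std::string> slots_;
    std::size_t next_;    // total records ever appended
    std::size_t count_;   // records currently retained (<= capacity)
};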
Of course, most of the programs we face now are multithreaded. If logging relies on a traditional locking mechanism to serialize access to the log file, the threads spend most of their time acquiring and releasing the lock, so a circular buffer is a good alternative. By letting each thread write data to its own block of memory, synchronization problems can be avoided entirely; when a user's request to dump the data arrives, each thread takes a lock and dumps its records to a central location. Alternatively, allocate one large global block of memory and divide it into smaller slots, each of which is used by a single thread for logging. Each thread can read and write only its own slot, not the entire buffer. When a thread tries to write data for the first time, it looks for an empty slot and marks it as busy: once it obtains a particular slot, the corresponding bit in a bitmap that tracks slot usage is set to 1, and when the thread exits the bit is reset to 0. A global list must also be maintained of the slot numbers currently in use and of the threads using them.
Note, however, that when a thread dies its slot is not released immediately, and its thread ID may be reused and assigned a new slot before a garbage-collector thread frees the old one. A new thread should therefore check the global list and reuse the same slot if it was used by a previous instance of that thread ID. And because the garbage-collector thread and the writer threads may try to modify the global list at the same time, some kind of locking mechanism is still required there.
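The slot scheme could be sketched as follows (a hedged illustration with invented names, not code from the article): an atomic bitmap marks which slots are busy, and a mutex-protected global map records which thread owns which slot, so that a reaper thread and the writer threads can both update it safely:
#include <atomic>
#include <cstdint>
#include <mutex>
#include <thread>
#include <unordered_map>

// Hedged sketch of per-thread slot allocation for a shared logging buffer:
// a bitmap tracks which slots are busy, and a mutex-protected registry maps
// slots to the thread that owns them (all names here are illustrative).
constexpr int kSlots = 64;

std::atomic<std::uint64_t> slot_bitmap{0};            // bit i == 1  =>  slot i is busy
std::mutex registry_lock;
std::unordered_map<int, std::thread::id> slot_owner;  // global list of used slots

// Claim a free slot and record the owning thread; returns -1 if all are busy.
int claim_slot() {
    for (int i = 0; i < kSlots; ++i) {
        std::uint64_t mask = std::uint64_t{1} << i;
        // set the bit only if it was previously clear
        if ((slot_bitmap.fetch_or(mask) & mask) == 0) {
            std::lock_guard<std::mutex> g(registry_lock);
            slot_owner[i] = std::this_thread::get_id();
            return i;
        }
    }
    return -1;
}

// Release a slot when the thread exits; a reaper thread could call this for
// threads that died without cleaning up, which is why the registry needs its
// own lock.
void release_slot(int slot) {
    std::lock_guard<std::mutex> g(registry_lock);
    slot_owner.erase(slot);
    slot_bitmap.fetch_and(~(std::uint64_t{1} << slot));
}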