A simple memory pool (C implementation)

It is well known that frequently allocating and freeing memory consumes system resources and easily causes memory fragmentation. So I wrote a simple memory pool implementation — the simpler the better, because a complicated pool ends up slower than calling malloc directly. This pool keeps memory blocks of a fixed size on linked lists: taking a block from the pool and returning it to the pool are stack (LIFO) operations on the free list. No lock is used internally; if you need thread safety, it is recommended to lock around the pool calls from the outside.

In a simple test of 100,000 calls, the memory pool was roughly 30–50% faster than allocating and freeing memory directly. But that is without a lock: with a pthread_mutex around each call, the pool is about as fast as direct allocation, and sometimes a bit slower. (Test environment: dual-core CPU, FreeBSD 7.)

Code implementation:
struct Memblock
{
    int used;                     /* 1 = in use, 0 = idle */
    void *data;                   /* the buffer handed to the caller */
    struct Memblock *next;        /* free-list link */
    struct Memblock *createnext;  /* link through all created blocks */
};

struct Mempool
{
    int size;                          /* number of Memblocks created */
    int unused;                        /* number of idle Memblocks */
    int datasize;                      /* size of each data buffer (memblock.data) */
    struct Memblock *free_linkhead;    /* head of the idle Memblock list */
    struct Memblock *create_linkhead;  /* head of the list of all created Memblocks;
                                          used at teardown so that blocks not yet
                                          returned are still freed */
};

/* Release callback, invoked on each Memblock.data; in the simple case
   you can just pass free directly. */
typedef void (*Free_callback)(void *);

struct Mempool *Mempool_init(int initialsize, int datasize);         /* initialize a Mempool */
void Mempool_dealloc(struct Mempool *pool, Free_callback callback);  /* release a Mempool */
struct Memblock *Mempool_get(struct Mempool *pool);                  /* get a Memblock */
void Mempool_release(struct Mempool *pool, struct Memblock *block);  /* return a Memblock */

/* ********************************
 * Mempool
 * ***************************** */

/* malloc a new Memblock and add it to the created list */
static struct Memblock *Mempool_allocblock(struct Mempool *pool);

/* ------------------ Implementation ------------------ */
struct Mempool *
Mempool_init(int initialsize, int datasize)
{
    struct Mempool *pool = malloc(sizeof(struct Mempool));
    pool->size = 0;
    pool->unused = 0;
    pool->datasize = datasize;
    pool->free_linkhead = NULL;
    pool->create_linkhead = NULL;

    /* Pre-allocate initialsize memory blocks. */
    int i;
    for (i = 0; i < initialsize; i++) {
        struct Memblock *block = Mempool_allocblock(pool);
        Mempool_release(pool, block);
    }
    return (pool);
}

void
Mempool_dealloc(struct Mempool *pool, Free_callback callback)
{
    struct Memblock *block = NULL;
    L_debug("%s: size(%d), unused(%d)", __func__, pool->size, pool->unused);
    /* Free all created Memblocks. */
    while (pool->create_linkhead != NULL) {
        block = pool->create_linkhead;
        pool->create_linkhead = pool->create_linkhead->createnext;
        /* Perform the free callback on the data buffer. */
        if (callback) {
            (*callback)(block->data);
        }
        free(block);
    }
    free(pool);
}

static struct Memblock *
Mempool_allocblock(struct Mempool *pool)
{
    struct Memblock *block = malloc(sizeof(struct Memblock));
    block->data = malloc(pool->datasize);  /* not sizeof(pool->datasize)! */
    block->next = NULL;
    block->used = 1;  /* marked as in use */

    /* Push onto the list of all created Memblocks. */
    block->createnext = pool->create_linkhead;
    pool->create_linkhead = block;

    pool->size++;
    return (block);
}

void
Mempool_release(struct Mempool *pool, struct Memblock *block)
{
    if (block == NULL) {
        L_warn("%s: release a null!", __func__);
        return;
    }
    if (block->used != 1) {
        L_warn("%s: used != 1", __func__);
        return;
    }
    /* Push the returned block onto the free-list head. */
    block->used = 0;  /* now idle */
    block->next = pool->free_linkhead;
    pool->free_linkhead = block;
    pool->unused++;  /* one more idle block */
}

struct Memblock *
Mempool_get(struct Mempool *pool)
{
    struct Memblock *block = NULL;
    if (pool->free_linkhead) {
        /* Pop a block off the free-list head. */
        block = pool->free_linkhead;
        pool->free_linkhead = pool->free_linkhead->next;
        block->next = NULL;
        block->used = 1;  /* marked as in use */
        pool->unused--;   /* one fewer idle block */
    }
    else {
        /* No idle block available: create one. */
        block = Mempool_allocblock(pool);
    }
    return (block);
}
The implementation above is really more a free list than a memory pool. It is not very convenient to use — the memory you get back is a member of a block structure, each request to the system heap is a separate small allocation, and byte alignment is not considered. So let's look at a second memory pool. It is based on the fixed-size memory pool described in the book *C++ Application Performance Optimization*, with some changes for C; if you are interested, go read it — the book has the most detailed treatment.

The principle of this pool: the pool is a doubly linked list of N MemBlocks, and each MemBlock consists of a header plus M fixed-length MemChunks. A MemChunk is what you actually request from the pool.

Consider the following cases:

1. After the pool is initialized, its MemBlock list head is NULL.
2. On the first request for a MemChunk, the pool allocates one block of (MemBlock header) + chunksize * initsize bytes from the system heap, according to initsize and chunksize. It initializes the header fields and writes into the first 4 bytes of each chunk the index of the next available chunk in the MemBlock; because the chunks are fixed length, a chunk's address is easily computed from its index and the chunk length. After the MemBlock is created, the first chunk (call it A) is returned to the caller, and the header's first field is set to the value in A's first 4 bytes (that is, the index of the next available chunk). The new block is added to the head of the list.
3. On the next request for a MemChunk, the list is traversed to find a block with a free chunk, which is then handled as on the first request. If the block still has spare free chunks afterwards, it is moved to the head of the list to speed up the traversal on the next request. If the whole list is traversed and no block with a free chunk is found, a new block is requested from the system heap and added to the list head.
4. To return a MemChunk (again call it A) to the pool, traverse the MemBlock list and find the block containing A by A's address. Compute A's index from its address; store the block's first index into A's first 4 bytes; set first to A's index (the usual linked-stack operation on chunks). Finally, move A's block to the list head (since it now has an idle chunk) to speed up the next chunk request — the list is traversed only once.

The book has one more step: if all of a block's chunks are idle, the block is released back to the system heap. I did not do this; I intend to write a separate cleanup operation instead. That is roughly the principle. For compatibility with 64-bit machines, chunks and blocks are aligned to 8 bytes. In the code, MemHeap plays the role of Mempool — heap is just the name I chose. The code that follows has more detailed comments on the implementation.

Looking back at the principle of this pool, the obvious advantages are reduced memory fragmentation and byte alignment. But there is an equally obvious problem: if the pool holds a large number of MemBlocks (thousands), traversing the list to find a block becomes a performance bottleneck. Requesting a chunk is the better case, since some internal optimizations help; returning a chunk is slower because the list must be searched, and in the worst case every MemBlock is visited.

Some search optimization is possible, and you could drop the doubly linked list and do it another way. The simplest optimization is one commonly used in game particle systems: keep blocks that have free chunks on one list and blocks with no free chunks on another, and adjust the allocation logic accordingly — that may buy some speed.
