Memory pool principle and code for fixed memory block size

Source: Internet
Author: User

Some scenarios require a large number of fixed-size memory blocks; allocating each one with malloc is very inefficient, which makes them a good fit for a memory pool.

Here is the full source of a memory pool with a fixed memory block size. Note: memory is not obtained through malloc but through the codebase's own torch::heapmalloc; a simple modification can swap in another allocator.


Read the code first, then read the explanation.

#include <cstdint>
#include <cstring>
#include <list>

/**
 * Fixed-size memory pool.
 * Note:
 *  - Only for cases where a large number of fixed-size memory blocks are required.
 */
template <size_t _Size>
class FixedMemoryPool
{
public:
    enum { _ChunkCount = 1024 * 768 / _Size };

    FixedMemoryPool();
    ~FixedMemoryPool();
    FixedMemoryPool(const FixedMemoryPool &);
    FixedMemoryPool &operator=(const FixedMemoryPool &) = delete;

    /**
     * Get a memory block from the pool.
     * Note:
     *  - It is best to put the block back into the pool afterwards (to improve performance).
     *  - Do not free it with ::free(); the pool frees all of its blocks uniformly.
     */
    void *Alloc();

    /**
     * Put a memory block back into the pool.
     * Note: must be a block previously obtained from this pool.
     */
    void Free(void *mem);

    /**
     * Empty the memory pool.
     * Note: after this call, all blocks obtained via Alloc() are freed.
     */
    void Clear();

    /** Get the total pool capacity (in bytes). */
    size_t GetCapacity();

    /** Get the size of one memory block (in bytes). */
    size_t GetItemSize();

private:
    struct Chunk;
    Chunk *MakeChunk();

private:
    union ChunkItem
    {
        ChunkItem *Next;
        uint8_t    Mem[_Size];
    };

    struct Chunk
    {
        ChunkItem ItemArray[_ChunkCount];
    };

    ChunkItem *         m_RootItem;
    std::list<Chunk *>  m_ChunkList;
};

template <size_t _Size>
FixedMemoryPool<_Size>::FixedMemoryPool()
{
    m_RootItem = nullptr;
    Chunk *chunk = this->MakeChunk();
    if (!chunk)
        return;
    m_ChunkList.push_back(chunk);
    m_RootItem = chunk->ItemArray;
}

template <size_t _Size>
FixedMemoryPool<_Size>::~FixedMemoryPool()
{
    this->Clear();
}

template <size_t _Size>
FixedMemoryPool<_Size>::FixedMemoryPool(const FixedMemoryPool &)
{
    m_RootItem = nullptr;
    Chunk *chunk = this->MakeChunk();
    if (!chunk)
        return;
    m_ChunkList.push_back(chunk);
    m_RootItem = chunk->ItemArray;
}

template <size_t _Size>
void *FixedMemoryPool<_Size>::Alloc()
{
    if (!m_RootItem)
    {
        Chunk *chunk = this->MakeChunk();
        if (!chunk)
            return nullptr;
        m_ChunkList.push_back(chunk);
        m_RootItem = chunk->ItemArray;
    }
    void *item = m_RootItem;
    m_RootItem = m_RootItem->Next;
    memset(item, 0, _Size);
    return item;
}

template <size_t _Size>
void FixedMemoryPool<_Size>::Free(void *mem)
{
    if (!mem)
        return;
    ChunkItem *item = static_cast<ChunkItem *>(mem);
    item->Next = m_RootItem;
    m_RootItem = item;
}

template <size_t _Size>
void FixedMemoryPool<_Size>::Clear()
{
    for (Chunk *chunk : m_ChunkList)
    {
        torch::heapfree(chunk);
    }
    m_ChunkList.clear();
}

template <size_t _Size>
size_t FixedMemoryPool<_Size>::GetCapacity()
{
    return m_ChunkList.size() * _ChunkCount * _Size;
}

template <size_t _Size>
size_t FixedMemoryPool<_Size>::GetItemSize()
{
    return _Size;
}

template <size_t _Size>
typename FixedMemoryPool<_Size>::Chunk *FixedMemoryPool<_Size>::MakeChunk()
{
    Chunk *chunk = (Chunk *)torch::heapmalloc(sizeof(Chunk));
    if (!chunk)
        return nullptr;
    for (int i = 0; i < _ChunkCount - 1; i++)
    {
        chunk->ItemArray[i].Next = &(chunk->ItemArray[i + 1]);
    }
    chunk->ItemArray[_ChunkCount - 1].Next = nullptr;
    return chunk;
}

The first thing is to understand the structure of this memory pool:

    ChunkList
    [
        Chunk[ChunkItem, ... ChunkItem],
        Chunk[ChunkItem, ... ChunkItem],
        ...
        Chunk[ChunkItem, ... ChunkItem]
    ]

ChunkItem is very important. It is a union with two members:

    union ChunkItem {
        ChunkItem *Next;       // points to the next ChunkItem
        uint8_t    Mem[_Size]; // _Size is the template parameter
    };

    struct Chunk { ChunkItem ItemArray[_ChunkCount]; };

While a ChunkItem sits in the pool, only its Next field is used; Next strings all the ChunkItems in a chunk together into a linked list.

At this point someone may ask: a chunk is already a ChunkItem array, so why bother chaining the items into a linked list with Next?

There is a deliberate reason for this. Above all it is more flexible, and the flexibility matters mostly for reclaiming blocks: when a block is returned, its position is unknown (which index in the array, which chunk), so it is simply pushed onto the head of the free list and nothing else needs to be tracked. The array and the linked list coincide only right after a chunk is initialized; after some Allocs and Frees, the list order no longer matches the array order.

The Mem field is simpler: its size is controlled by the template parameter _Size, and (as long as _Size is greater than 4, the size of the Next pointer on a 32-bit platform) it determines the size of the whole ChunkItem. The ChunkItem itself is the memory block handed to the user, and its size is the same _Size the user passed as the template parameter.

    ChunkItem *         m_RootItem;
    std::list<Chunk *>  m_ChunkList;

First, m_RootItem: it is simply a pointer to the next available ChunkItem, which makes Alloc very simple:

    void *item = m_RootItem;
    m_RootItem = m_RootItem->Next;
    memset(item, 0, _Size);
    return item;

m_ChunkList is the record of all chunks, used mainly for releasing memory. Because the number of blocks in one chunk is fixed, a new chunk is constructed whenever the pool runs out of blocks, and every chunk is saved into m_ChunkList. Releasing everything then becomes:

    for (Chunk *chunk : m_ChunkList)
    {
        torch::heapfree(chunk);
    }
    m_ChunkList.clear();

Now for the more critical Alloc implementation:

template <size_t _Size>
void *FixedMemoryPool<_Size>::Alloc()
{
    if (!m_RootItem) // m_RootItem is null: no blocks available, construct a new chunk
    {
        Chunk *chunk = this->MakeChunk(); // construct a chunk; see below for details
        if (!chunk)
            return nullptr;
        m_ChunkList.push_back(chunk);     // record the new chunk in the list, otherwise it would leak
        m_RootItem = chunk->ItemArray;    // point m_RootItem at the first element of the new chunk's ItemArray
    }
    // return m_RootItem directly, then point m_RootItem at the next item
    void *item = m_RootItem;
    m_RootItem = m_RootItem->Next;
    memset(item, 0, _Size);
    return item;
}

Details of MakeChunk:

template <size_t _Size>
typename FixedMemoryPool<_Size>::Chunk *FixedMemoryPool<_Size>::MakeChunk()
{
    // request one Chunk's worth of memory; struct Chunk { ChunkItem ItemArray[_ChunkCount]; };
    Chunk *chunk = (Chunk *)torch::heapmalloc(sizeof(Chunk));
    if (!chunk)
        return nullptr;
    for (int i = 0; i < _ChunkCount - 1; i++)
    {
        // initialize the list structure, pointing each item at the next ChunkItem
        chunk->ItemArray[i].Next = &(chunk->ItemArray[i + 1]);
    }
    // Attention! The last item must point to null, so that when it is handed
    // out, m_RootItem becomes null.
    chunk->ItemArray[_ChunkCount - 1].Next = nullptr;
    return chunk;
}

Free is simple:

template <size_t _Size>
void FixedMemoryPool<_Size>::Free(void *mem)
{
    if (!mem)
        return;
    ChunkItem *item = static_cast<ChunkItem *>(mem);
    item->Next = m_RootItem;  // put the block directly at the list head
    m_RootItem = item;
}

A fixed-size memory pool implementation is not complex, but it is very useful: in the common case (when MakeChunk is not triggered), Alloc and Free each just move one pointer to complete the allocation or release, which is very efficient.

Only the core algorithm of the memory pool is described here, and the rest is relatively simple.
