Memory Pool Technology

This article has been migrated to: http://cpp.winxgui.com/cn:dive-into-memory-pool

Overview

Memory pool (mempool) technology is widely used, yet a Google search turned up no detailed article on its principles, so I would like to contribute one. In addition, this article explains how boost::pool differs from the classic mempool, and describes how mempool is applied in SGI-STL/STLport.

 

Classic memory pool technology

The classic memory pool is a technique for allocating large numbers of small objects of the same size, and it can greatly accelerate memory allocation/release. Below we explain the mystery in detail.

The classic memory pool involves only two constants: MemBlockSize and ItemSize (the size of one small object; it cannot be smaller than a pointer, i.e. no smaller than 4 bytes on a 32-bit platform), plus two pointer variables, MemBlockHeader and FreeNodeHeader. At the start, both pointers are NULL.

class MemPool
{
private:
    const int m_nMemBlockSize;
    const int m_nItemSize;

    struct _FreeNode {
        _FreeNode* pPrev;
        BYTE data[m_nItemSize - sizeof(_FreeNode*)];
    };

    struct _MemBlock {
        _MemBlock* pPrev;
        _FreeNode data[m_nMemBlockSize / m_nItemSize];
    };

    _MemBlock* m_pMemBlockHeader;
    _FreeNode* m_pFreeNodeHeader;

public:
    MemPool(int nItemSize, int nMemBlockSize = 2048)
        : m_nItemSize(nItemSize), m_nMemBlockSize(nMemBlockSize),
          m_pMemBlockHeader(NULL), m_pFreeNodeHeader(NULL)
    {
    }
};

The pointer m_pMemBlockHeader chains all allocated memory blocks into a linked list, so that they can all be released later. The pointer m_pFreeNodeHeader chains all free memory nodes (FreeNode) into another list.

This involves two key concepts: the memory block (MemBlock) and the free memory node (FreeNode). A memory block has a fixed size of MemBlockSize bytes (not counting the pointer used to chain blocks into a list). When a block is first allocated, it is carved into MemBlockSize/ItemSize memory nodes of ItemSize bytes each (the size of one small object). All of these nodes start out free and are chained into the free list. Looking at the allocation/release code makes the purpose of this layout clear. The allocation code is as follows:

void* MemPool::malloc()    // no parameters
{
    if (m_pFreeNodeHeader == NULL)
    {
        const int nCount = m_nMemBlockSize / m_nItemSize;
        _MemBlock* pNewBlock = new _MemBlock;
        pNewBlock->data[0].pPrev = NULL;
        for (int i = 1; i < nCount; ++i)
            pNewBlock->data[i].pPrev = &pNewBlock->data[i - 1];
        m_pFreeNodeHeader = &pNewBlock->data[nCount - 1];
        pNewBlock->pPrev = m_pMemBlockHeader;
        m_pMemBlockHeader = pNewBlock;
    }
    void* pFreeNode = m_pFreeNodeHeader;
    m_pFreeNodeHeader = m_pFreeNodeHeader->pPrev;
    return pFreeNode;
}

Allocation falls into two cases:

  • The free node list (FreeNodeList) is not empty.
    In this case, allocation is just popping the next node off the list.
  • Otherwise, a new memory block (MemBlock) must be allocated.
    The newly allocated MemBlock is carved into nodes, which are chained together.
    This is where the main overhead of the mempool technique lies.
The release code is as follows:

void MemPool::Free(void* p)
{
    _FreeNode* pNode = (_FreeNode*)p;
    pNode->pPrev = m_pFreeNodeHeader;
    m_pFreeNodeHeader = pNode;
}
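The fragments above are pseudocode (the struct array sizes depend on runtime members, so they cannot compile). Under that caveat, a minimal compilable sketch of the same design could look like this; it handles the runtime node size with raw byte arithmetic instead of the fixed-size structs, and the class/method names mirror the article's rather than any real library:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Compilable sketch of the classic pool described above: one linked list
// of raw blocks (for eventual release) and one linked list of free nodes.
class MemPool {
public:
    explicit MemPool(std::size_t itemSize, std::size_t blockSize = 2048)
        : m_itemSize(itemSize < sizeof(void*) ? sizeof(void*) : itemSize),
          m_blockSize(blockSize), m_blocks(NULL), m_free(NULL) {}

    ~MemPool() {                       // walk the block list and release all blocks
        while (m_blocks != NULL) {
            char* prev = *reinterpret_cast<char**>(m_blocks);
            std::free(m_blocks);
            m_blocks = prev;
        }
    }

    void* Malloc() {
        if (m_free == NULL) {          // free list empty: carve a new block
            std::size_t count = m_blockSize / m_itemSize;
            char* block = static_cast<char*>(std::malloc(sizeof(char*) + m_blockSize));
            *reinterpret_cast<char**>(block) = m_blocks;   // chain into block list
            m_blocks = block;
            char* nodes = block + sizeof(char*);
            for (std::size_t i = 0; i < count; ++i) {      // chain the fresh nodes
                void* node = nodes + i * m_itemSize;
                *static_cast<void**>(node) = m_free;
                m_free = node;
            }
        }
        void* p = m_free;              // pop the head of the free list
        m_free = *static_cast<void**>(m_free);
        return p;
    }

    void Free(void* p) {               // O(1): push onto the free list
        *static_cast<void**>(p) = m_free;
        m_free = p;
    }

private:
    std::size_t m_itemSize, m_blockSize;
    char* m_blocks;   // singly linked list of raw blocks
    void* m_free;     // singly linked list of free nodes
};
```

Because Free pushes onto the head of the free list, the most recently released node is the first one handed back out, which is good for cache locality.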

 

The release process is extremely simple: just push the node onto the head of the free node list (FreeNodeList).

Performance analysis

Mempool allocates and releases memory extremely fast (second only to AutoFreeAlloc). In most cases allocation is O(1); the main overhead occurs only when FreeNodeList is empty and a new MemBlock must be created. Release is always O(1).
boost::pool

boost::pool is a variant of the classic memory pool. The main changes are:

  • MemBlock is no longer a fixed size (MemBlockSize). Instead, the 1st request allocates m_nItemSize * 32 bytes, the 2nd m_nItemSize * 64, the 3rd m_nItemSize * 128, and so on. Rather than a fixed MemBlockSize, a prediction model of the user's memory demand is used (yes, this is the same model std::vector uses to grow its storage). This is an incremental refinement.
  • An ordered_free(void* p) function is added.

    The difference between ordered_free and free is that free pushes the released node onto the head of the free node list (FreeNodeList), while ordered_free assumes FreeNodeList is sorted by address and therefore walks FreeNodeList to insert the released node at the proper position.

    As we have seen, free is O(1) and very fast. Note, however, that ordered_free is a costly operation with O(N) complexity, where N is the length of FreeNodeList. In a system with frequent allocations/releases, N can easily become large. The boost documentation states this clearly: http://www.boost.org/libs/pool/doc/interfaces/pool.html
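The doubling growth model in the first point can be sketched as a simple sequence (32 nodes, then 64, then 128, ...); `next_block_nodes` below is an illustrative name, not a boost API:

```cpp
#include <cassert>
#include <cstddef>

// Sketch of boost::pool-style block growth: the i-th block request
// (1-based) reserves room for 32 * 2^(i-1) nodes, so every new block
// doubles the previous one, mirroring std::vector-style growth.
std::size_t next_block_nodes(std::size_t requestNo) {
    std::size_t nodes = 32;                // first block: 32 nodes
    for (std::size_t i = 1; i < requestNo; ++i)
        nodes *= 2;                        // double on each later request
    return nodes;
}
```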

Note: do not assume that ordered_free is a pointless addition on boost's part. The reason for it will become clear later, when we discuss boost::object_pool.
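The O(N) work behind an ordered free boils down to an address-ordered insert into a singly linked free list. A sketch of that insert (with an illustrative `ordered_insert` helper, not boost's actual internals):

```cpp
#include <cassert>
#include <cstddef>

// Sketch of the O(N) core of an ordered free: walk an address-sorted
// free list and splice the released node in at the right position.
struct Node { Node* next; };

Node* ordered_insert(Node* head, Node* p) {
    if (head == NULL || p < head) {        // smallest address: new head
        p->next = head;
        return p;
    }
    Node* cur = head;
    while (cur->next != NULL && cur->next < p)   // the O(N) walk
        cur = cur->next;
    p->next = cur->next;                   // splice p in after cur
    cur->next = p;
    return head;
}
```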

General-purpose allocators based on memory pool technology

SGI-STL takes the memory pool technique further and uses it to implement its most fundamental allocator. The general idea is to create 16 mempools: requests of <= 8 bytes are served by mempool 0, <= 16 bytes by mempool 1, <= 24 bytes by mempool 2, and so on. Requests larger than 128 bytes fall through to plain malloc.

Note: the code above is pseudocode (struct _FreeNode and _MemBlock as written cannot compile, since their array sizes are not compile-time constants), and error handling is omitted.
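The 16-pool size-class mapping described above can be sketched as two small helpers. The names mirror, but are not copied from, SGI-STL's internal round-up and free-list-index macros:

```cpp
#include <cassert>
#include <cstddef>

// Sketch of the SGI-STL allocator's size-class mapping: requests are
// rounded up to a multiple of 8 and mapped onto one of 16 free lists;
// anything above 128 bytes goes to plain malloc instead.
const std::size_t ALIGN     = 8;
const std::size_t MAX_BYTES = 128;   // 16 lists * 8 bytes

std::size_t round_up(std::size_t bytes) {
    return (bytes + ALIGN - 1) & ~(ALIGN - 1);
}

std::size_t freelist_index(std::size_t bytes) {
    return (bytes + ALIGN - 1) / ALIGN - 1;   // 0..15 for 1..128 bytes
}
```

For example, a 20-byte request is rounded up to 24 and served by list 2, exactly as described above.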
