Introduction to memory pool technology


I came across this introduction to memory pool technology and benefited a lot from it.

Original post address: http://www.ibm.com/developerworks/cn/linux/l-cn-ppp/index6.html

6.1 The performance optimization principle of custom memory pools

As mentioned above, the reader has already learned the difference between the "heap" and the "stack". In programming practice, it is almost inevitable to make heavy use of memory on the heap. For example, a program that maintains a linked list must allocate or release a certain amount of memory from the heap each time a node is added or deleted; when a dynamic array grows beyond its current capacity, new memory must also be allocated on the heap.

6.1.1 Shortcomings of the default memory management functions

Using the default memory management functions new/delete or malloc/free to allocate and release memory on the heap incurs some additional overhead.

When the system receives a request to allocate memory of a certain size, it first searches its internally maintained free-block table, using some algorithm (for example, first fit, which returns the first free block no smaller than the requested size; best fit, which returns the most closely matching free block; or worst fit, which returns the largest free block) to find a suitable free memory block. If the chosen free block is much larger than requested, it must be split into an allocated part and a smaller free block. The system then updates the free-block table to complete one allocation. Similarly, when memory is released, the system adds the released block back to the free-block table, merging adjacent free blocks into larger ones when possible.
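The first-fit strategy described above can be sketched in a few lines. This is an illustrative toy, not the actual system allocator: FreeBlock, firstFitAlloc, and the offset-based bookkeeping are all invented for this example.

```cpp
#include <cassert>
#include <cstddef>
#include <list>

// One entry of a hypothetical free-block table: a region [offset, offset+size).
struct FreeBlock { std::size_t offset; std::size_t size; };

// First fit: return the offset of the first free block at least 'want' bytes
// large, splitting off the remainder; return -1 if no block is big enough.
long firstFitAlloc(std::list<FreeBlock>& freeList, std::size_t want) {
    for (std::list<FreeBlock>::iterator it = freeList.begin();
         it != freeList.end(); ++it) {
        if (it->size >= want) {
            long result = static_cast<long>(it->offset);
            if (it->size == want) {
                freeList.erase(it);     // exact fit: drop the table entry
            } else {
                it->offset += want;     // split: shrink the entry in place
                it->size   -= want;
            }
            return result;
        }
    }
    return -1;                          // no free block large enough
}
```

Note how even this toy must scan the table and update it on every call; the real allocator additionally merges neighbors on release, which is exactly the overhead a memory pool avoids.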

The default memory management functions must also consider multi-threaded applications, locking on every allocation and release, which further increases the overhead.

It can be seen that if an application allocates and releases memory on the heap frequently, it suffers a performance loss. In addition, this leaves the system with a large number of memory fragments, reducing memory utilization.

The default memory allocation and release algorithms certainly take performance into account. However, as general-purpose memory management algorithms they must do extra work to cope with a wider range of more complex situations. For a specific application, a custom memory pool tailored to its particular allocation and release pattern can achieve much better performance.

6.1.2 definition and classification of memory pools

The idea behind a custom memory pool is indicated by the word "pool": the application uses the system's allocation calls once to pre-request a chunk of memory of appropriate size as a memory pool, and afterwards performs allocation and release from that pool by itself. The system's memory allocation functions need to be called again only when the pool must grow dynamically; at all other times, memory operations are entirely under the application's control.

Custom memory pools come in different types depending on the application scenario.

From the perspective of thread safety, memory pools can be divided into single-threaded and multi-threaded pools. A single-threaded pool is used by only one thread over its entire lifetime, so no mutual exclusion is needed. A multi-threaded pool may be shared by several threads and must therefore lock on every allocation and release. Relatively speaking, single-threaded pools offer higher performance, while multi-threaded pools have a wider range of application.

By allocation-unit size, memory pools can be divided into fixed-size and variable-size pools. In a fixed-size pool, the size of each memory unit the application allocates from the pool is determined in advance and never changes; a variable-size pool allows the size of each allocation to vary as needed, giving it a wider range of application at the cost of lower performance than the fixed-size pool.

6.1.3 example of how the memory pool works

The following uses a fixed-size memory pool as an example to describe how a memory pool works, as shown in Figure 6-1.

Figure 6-1 A fixed memory pool

A fixed memory pool is composed of a series of fixed memory blocks, each of which contains a fixed number and size of memory units.

As shown in Figure 6-1, this memory pool contains four memory blocks in total. When the pool is first created, only one memory block is requested from the system, and the returned pointer serves as the head pointer of the whole pool. Later, as the application keeps consuming memory and the pool decides it needs to grow, it requests a new memory block from the system and links all the blocks together with pointers. To the operating system, it has simply handed the application several equal-sized memory blocks; because the size is fixed, allocation is relatively fast. To the application, its pool has reserved a certain amount of memory, part of which is still unused.
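The block-list growth just described can be sketched as follows. This is a hypothetical miniature: Block and growPool are invented names, and a real pool would request a block header plus allocation units rather than a bare node.

```cpp
#include <cassert>
#include <cstddef>

// A bare stand-in for a memory block: only the link matters here.
struct Block { Block* pNext; };

// When the pool needs to grow, it requests one more block from the
// system and links it in front of the blocks it already owns.
Block* growPool(Block* head) {
    Block* b = new Block;   // "apply to the system" for one more block
    b->pNext = head;        // chain it onto the existing list
    return b;               // the new block becomes the list head
}
```

Each call to growPool is one system allocation; all the units inside that block are then handed out by the pool without touching the system again.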

For example, zooming in on the fourth memory block, we see that it consists of a block header holding some pool-management information and three memory pool units of equal size. Units 1 and 3 are free; unit 2 has been allocated. When the application needs to allocate a unit from the pool, it simply walks the block headers until it quickly finds a block that still has a free unit. From that block's header it locates the address of the first free unit, returns that address, and marks the next free unit. When the application releases a pool unit, it simply marks the unit as free again in the corresponding block header.

It can be seen that, compared with the system memory management, the memory pool operates very quickly and has the following advantages in performance optimization.

(1) For special cases such as the frequent allocation and release of fixed-size memory objects, no complex allocation algorithm or multi-thread protection is needed, and there is no free-block table to maintain, so very high performance can be achieved.

(2) Because a contiguous region of memory is reserved as a pool block, the locality of the program is improved to some extent, which benefits performance.

(3) It is easier to control page-boundary alignment and memory byte alignment, and there is no memory fragmentation.

6.2 Implementation instance of a memory pool

This section analyzes the implementation of a memory pool actually used in a large application and describes its usage and working principles in detail. It is a memory pool for single-threaded environments with a fixed allocation-unit size, typically used to allocate memory for class objects or structs that are created dynamically, frequently, and possibly many times during execution.

This section first presents the memory pool's data structure declarations and diagrams, then describes its principles and behavior. Next, the implementation details are explained one by one. Finally, it shows how to apply the memory pool in an actual program and compares its performance with that of a program using the ordinary memory functions.

6.2.1 Internal Structure

The declaration of the memory pool class MemoryPool is as follows:

class MemoryPool
{
private:
    MemoryBlock*    pBlock;
    USHORT          nUnitSize;
    USHORT          nInitSize;
    USHORT          nGrowSize;

public:
    MemoryPool( USHORT nUnitSize,
                USHORT nInitSize = 1024,
                USHORT nGrowSize = 256 );
    ~MemoryPool();

    void*           Alloc();
    void            Free( void* p );
};
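To show how such a pool is typically wired into an application, here is a hedged usage sketch: a class overloads operator new/delete so that its instances are allocated from a pool. StubPool, g_pool, and Node are invented for illustration; StubPool merely counts calls and forwards to the global heap, standing in for the real MemoryPool whose Alloc and Free are analyzed later in this section.

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// Invented stand-in with a MemoryPool-like interface, so the sketch
// is self-contained; it forwards to the heap and counts the calls.
struct StubPool {
    int allocs = 0, frees = 0;
    void* Alloc(std::size_t n) { ++allocs; return ::operator new(n); }
    void  Free(void* p)        { ++frees;  ::operator delete(p); }
};

static StubPool g_pool;     // one pool per object type (assumed convention)

// A hypothetical list node whose instances come from the pool: the
// class-level operator new/delete route all `new Node`/`delete` calls
// through the pool instead of the default heap functions.
struct Node {
    int   value;
    Node* next;
    static void* operator new(std::size_t n) { return g_pool.Alloc(n); }
    static void  operator delete(void* p)    { g_pool.Free(p); }
};
```

With this wiring, client code writes plain `new Node` and `delete`, and the pooling stays entirely inside the class.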

MemoryBlock is the structure of the header attached to each memory block the pool requests to satisfy allocations; it describes the usage information of the memory block associated with it:

struct MemoryBlock
{
    USHORT          nSize;
    USHORT          nFree;
    USHORT          nFirst;
    USHORT          nDummyAlign1;
    MemoryBlock*    pNext;
    char            aData[1];

    static void* operator new(size_t, USHORT nTypes, USHORT nUnitSize)
    {
        return ::operator new(sizeof(MemoryBlock) + nTypes * nUnitSize);
    }
    static void  operator delete(void *p, size_t)
    {
        ::operator delete (p);
    }

    MemoryBlock (USHORT nTypes = 1, USHORT nUnitSize = 0);
    ~MemoryBlock() {}
};

The data structure of this memory pool is shown in Figure 6-2.

Figure 6-2 Data Structure of the Memory Pool

6.2.2 Overall mechanism

The overall mechanism of this memory pool is as follows.

(1) At run time, a MemoryPool may own several memory blocks that serve allocation requests. Each block is a large contiguous region obtained from the process heap, consisting of one MemoryBlock struct followed by multiple allocatable memory units. All blocks form a linked list; pBlock in MemoryPool is the head of this list, and for each block, the pNext member of the MemoryBlock struct at its start points to the block that follows it.

(2) Each memory block consists of two parts: a MemoryBlock struct and multiple memory allocation units. The allocation units are of fixed size (represented by nUnitSize in MemoryPool). The MemoryBlock struct does not track the allocated units; it tracks only the units that are still free, through two important members: nFree, the number of free units remaining in the block, and nFirst, the index of the next unit available for allocation. The first two bytes of each free unit (that is, one USHORT) hold the index of the free unit that follows it. In this way, through their own first two bytes, all the free units in a MemoryBlock are chained together.

(3) When a new memory request arrives, MemoryPool walks the MemoryBlock list via pBlock until it finds a block that still has a free unit (by checking whether the block's nFree is greater than 0). If such a block is found, the pool reads its nFirst value, the index of the first free unit available in that block. Since all units have the same size, the start of that unit can be located by offsetting the index times the unit size; this address is the memory that will satisfy the request. Before returning it, however, the pool first reads the first two bytes at that position (which hold the index of the next free unit) and stores them into the block's nFirst, so that the next request will be served from that unit; it also decrements the block's nFree by one. Only then is the start address of the previously located unit returned to the caller.

(4) If no free unit is found in the existing blocks (either when memory is requested for the first time, or when every unit in every existing block has been allocated), MemoryPool requests a new memory block from the process heap. This block comprises a MemoryBlock struct plus n adjacent allocation units, where n is nInitSize or nGrowSize in the MemoryPool. Once obtained, the block is not handed out immediately; it must first be initialized. Initialization sets nSize to the total size of all allocation units (note that this does not include the size of the MemoryBlock struct); sets nFree to n-1 (n-1 rather than n, because the new block was requested precisely to satisfy a pending request, so one free unit is handed out right away; setting n-1 now means it need not be decremented after that unit is given out); and sets nFirst to 1 (recall that nFirst is the index of the next allocatable free unit; it is 1 for the same reason nFree is n-1: the unit numbered 0 is handed out immediately, and setting nFirst to 1 now means it need not be changed afterwards). The more important task of the MemoryBlock constructor is to chain together all the free units after unit 0. As mentioned above, the first two bytes of each free unit store the index of the next free unit, and because every unit has the same size, the product of a unit's index and the unit size (MemoryPool's nUnitSize) serves as its locating offset. The only remaining question is where the starting address is: the answer is the aData[1] member of MemoryBlock.
Because aData[1] is actually part of the MemoryBlock struct (its last byte), that last byte in essence also serves as part of the first allocation unit. And since the whole memory block is a MemoryBlock struct plus a whole number of allocation units, this means the last byte of the memory block goes unused; in Figure 6-2 this byte is marked with a small solid-black region at the end of each of the two blocks. Once the start of the allocation units is determined, chaining the free units is straightforward: starting from the aData position, every nUnitSize bytes the first two bytes of that unit are set to the index of the free unit after it. Since all units are free at the start, each unit's stored index is simply its own index plus 1. After initialization, the start address of the block's first allocation unit is returned; as shown, that address is aData.

(5) When a unit is released, it is returned not to the process heap but to the MemoryPool. The pool knows the unit's start address, so it walks its block list to determine whether that address falls within some block's address range. If it falls within none of them, the released unit does not belong to this pool. If it falls within some block's range, the pool pushes the released unit onto the head of that block's free-unit list (maintained in its MemoryBlock) and increments the block's nFree by 1. After the unit is reclaimed, for the sake of resource utilization and later performance, the pool continues with a further check: if all of the block's units are now free, the block is removed from the pool and returned as a whole to the process heap; if the block still contains allocated units, it cannot be returned to the heap, but since one unit has just been returned to it, meaning it has a free unit ready for the next allocation, the block is moved to the head of the pool's block list. That way, when the next request arrives and the pool walks its block list looking for a free unit, this block, which is guaranteed to have one, is found on the first try, reducing the pool's traversal time.

In summary, each memory pool (MemoryPool) maintains a singly linked list of memory blocks, and each memory block consists of a block header struct (MemoryBlock) that maintains the block's usage information, plus multiple allocation units. The header struct further maintains a "linked list" of all the block's free allocation units. This list is linked not by "a pointer to the next free unit" but by "the index of the next free unit", a value stored in the first two bytes of each free unit. In addition, the first allocation unit does not start at the first address after the MemoryBlock struct but "inside" the struct, at its last byte aData (perhaps not exactly the last byte, because of alignment); that is, the allocation units are deliberately offset. Because the space after aData is exactly an integral number of allocation units, this offset leaves the last byte of the memory block unused. One reason for this design is portability across platforms with different alignment requirements: when memory for a MemoryBlock is requested, the allocator may return more memory than the total size of its declared members, with the last few bytes serving as padding, so that aData marks the start of the first allocation unit and the scheme works under different alignments.
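The index-linked free list summarized above can be demonstrated in isolation. DemoBlock is an invented miniature, not the article's MemoryBlock: it keeps its units in a std::vector but, like the real pool, threads the free list through each unit's first two bytes and pops from it on allocation.

```cpp
#include <cassert>
#include <cstring>
#include <vector>

typedef unsigned short USHORT;

// A toy fixed-size block: nFirst is the index of the next free unit,
// and each free unit's first two bytes store the index after it.
struct DemoBlock {
    USHORT nFree, nFirst, unitSize;
    std::vector<char> data;              // nUnits units of unitSize bytes

    DemoBlock(USHORT nUnits, USHORT uSize)
        : nFree(nUnits), nFirst(0), unitSize(uSize), data(nUnits * uSize) {
        // Chain the free list: unit i points at unit i+1. (The last
        // unit's stored index is never followed, since nFree hits 0 first.)
        for (USHORT i = 0; i < nUnits; ++i) {
            USHORT next = i + 1;
            std::memcpy(&data[i * unitSize], &next, sizeof(USHORT));
        }
    }

    // Pop the head of the free list and return its address.
    char* alloc() {
        if (!nFree) return 0;
        char* p = &data[nFirst * unitSize];
        std::memcpy(&nFirst, p, sizeof(USHORT));  // follow the stored index
        --nFree;
        return p;
    }
};
```

memcpy is used instead of casting to USHORT* so the sketch is free of alignment concerns on any platform.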

6.2.3 detail analysis

With the above overall impression, this section now analyzes the implementation details carefully.

(1) The constructor of MemoryPool is as follows:

MemoryPool::MemoryPool( USHORT _nUnitSize,
                        USHORT _nInitSize, USHORT _nGrowSize )
{
    pBlock      = NULL;             ①
    nInitSize   = _nInitSize;       ②
    nGrowSize   = _nGrowSize;       ③

    if ( _nUnitSize > 4 )
        nUnitSize = (_nUnitSize + (MEMPOOL_ALIGNMENT-1)) & ~(MEMPOOL_ALIGNMENT-1);  ④
    else if ( _nUnitSize <= 2 )
        nUnitSize = 2;              ⑤
    else
        nUnitSize = 4;
}

As shown at ①, when a MemoryPool is created it does not immediately create a memory block; the block list starts out empty.

② and ③ set, respectively, "the number of allocation units in the first block created" and "the number of allocation units in each subsequently created block". These two values are specified as parameters when the MemoryPool is created and remain unchanged for the lifetime of the MemoryPool object.

The remaining code sets nUnitSize based on the passed-in _nUnitSize parameter, with two factors to consider. First, as noted above, when a unit is free its first two bytes store "the index of the next free unit", so every allocation unit must be "at least" two bytes; this is the reason for the assignment at ⑤. Second, for _nUnitSize greater than 4 bytes, ④ rounds it up to the smallest multiple of MEMPOOL_ALIGNMENT that is not less than _nUnitSize (assuming MEMPOOL_ALIGNMENT is a power of two). For example, with _nUnitSize equal to 11: if MEMPOOL_ALIGNMENT is 8, nUnitSize becomes 16; if MEMPOOL_ALIGNMENT is 4, nUnitSize becomes 12; if MEMPOOL_ALIGNMENT is 2, nUnitSize becomes 12; and so on.
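The rounding at ④ and the minimum sizes at ⑤ can be checked standalone. MEMPOOL_ALIGNMENT is assumed to be 8 here (the constant's actual value is not given in this excerpt); the mask trick requires a power of two.

```cpp
#include <cassert>

typedef unsigned short USHORT;

// Assumed value; the constructor only requires a power of two.
const USHORT MEMPOOL_ALIGNMENT = 8;

// The unit-size policy from the constructor, extracted as a function:
// round sizes above 4 up to a multiple of MEMPOOL_ALIGNMENT, and clamp
// tiny sizes to 2 or 4 (a unit needs 2 bytes for its free-list index).
USHORT roundUnitSize(USHORT n) {
    if (n > 4)
        return (n + (MEMPOOL_ALIGNMENT - 1)) & ~(MEMPOOL_ALIGNMENT - 1);
    else if (n <= 2)
        return 2;
    else
        return 4;
}
```

Adding MEMPOOL_ALIGNMENT-1 and masking off the low bits is the standard branch-free round-up; it works only because the alignment is a power of two.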

(2) When a memory request arrives at the MemoryPool:

void* MemoryPool::Alloc()
{
    if ( !pBlock )          ①
    {
        ……
    }

    MemoryBlock* pMyBlock = pBlock;
    while (pMyBlock && !pMyBlock->nFree )   ②
        pMyBlock = pMyBlock->pNext;

    if ( pMyBlock )         ③
    {
        char* pFree = pMyBlock->aData + (pMyBlock->nFirst * nUnitSize);
        pMyBlock->nFirst = *((USHORT*)pFree);

        pMyBlock->nFree--;
        return (void*)pFree;
    }
    else                    ④
    {
        if ( !nGrowSize )
            return NULL;

        pMyBlock = new(nGrowSize, nUnitSize) MemoryBlock(nGrowSize, nUnitSize);
        if ( !pMyBlock )
            return NULL;

        pMyBlock->pNext = pBlock;
        pBlock = pMyBlock;

        return (void*)(pMyBlock->aData);
    }
}

A MemoryPool satisfies a memory request in four steps.

① First, check whether the pool's current block list is empty. If it is, this is the first allocation request: a block with nInitSize allocation units is requested from the process heap and initialized (initialization mainly sets up the MemoryBlock struct members and builds the initial free-unit list; the code is analyzed in detail below). If the block is successfully obtained and initialized, its first allocation unit is returned to the caller. That first unit begins at the last byte of the MemoryBlock struct.

② When the pool already has memory blocks (that is, the block list is not empty), traverse the list looking for a block with a "free allocation unit".

③ If a block with a free unit is found, locate that unit: take the last-byte position of the block's MemoryBlock struct as the starting position and the pool's nUnitSize as the step. pMyBlock->nFirst is the head of the block's free-unit list, and the first two bytes of the unit it designates (the unit just located) store the index of the next free unit. After locating the unit, the block's bookkeeping must be updated: the value of those first two bytes is copied into pMyBlock->nFirst, making it the new head of the free-unit list, that is, the index of the next unit to hand out (meaningful while nFree remains greater than zero), and nFree is decremented. Then the address of the located unit is returned to the caller. Note that because the unit is now allocated, and the block does not track allocated units, the information in its first two bytes is no longer needed. Seen another way, once the unit is handed out the pool neither knows nor cares what the caller does with the memory; the unit's contents are meaningless to the caller, who will almost certainly overwrite them, erasing the first two bytes along with everything else.
Thus, each unit carries no extra maintenance overhead for the sake of linking: its own first two bytes are used directly. After allocation they are free for the caller to use; in the free state they store the maintenance information, namely the index of the next free unit. This is a good example of using memory effectively.

④ is reached when the traversal at ② fails to find a block with a free unit. A new block must then be requested from the process heap. Since this is not the first block requested, it contains nGrowSize allocation units rather than nInitSize. As at ①, the new block is initialized, inserted at the head of the pool's block list, and its first allocation unit is returned to the caller. The new block is inserted at the head of the list because it still has many free units available for allocation (unless nGrowSize equals 1, which should be unlikely, since the whole point of a memory pool is to obtain a large piece of memory from the process heap at once to serve many subsequent requests); placing it at the head shortens the block traversal at ② for the next request.

The MemoryPool of Figure 6-2 can be used to illustrate the MemoryPool::Alloc process. Figure 6-3 shows the internal state of the MemoryPool at a certain point in time.

Figure 6-3 Internal state of the MemoryPool at a certain point in time

Because the pool's block list is not empty, the list is traversed. The first block has a free unit, so allocation happens from the first block. Its nFirst is inspected; suppose its value is m. Then pBlock->aData + (pBlock->nFirst * nUnitSize) locates the start of the free unit numbered m (call it pFree). Before pFree is returned, the block's bookkeeping must be updated: first nFree is decremented, and then the value of the first two bytes starting at pFree is read (note: not just the single byte at aData shown as k in the figure, but the USHORT formed by that byte and the one following it; do not misread the figure). Suppose that value is k; then pBlock's nFirst is set to k, and pFree is returned. The resulting MemoryPool structure is shown in Figure 6-4.

Figure 6-4 Structure of the MemoryPool after the allocation

We can see that the first available unit (number m) is now shown as allocated, and pBlock's nFirst now points to the index that was stored in unit m, namely k.

(3) When the MemoryPool reclaims memory:

void MemoryPool::Free( void* pFree )
{
    ……

    MemoryBlock* pMyBlock = pBlock;

    while ( ((ULONG)pMyBlock->aData > (ULONG)pFree) ||
            ((ULONG)pFree >= ((ULONG)pMyBlock->aData + pMyBlock->nSize)) )  ①
    {
        ……
    }

    pMyBlock->nFree++;                      ②
    *((USHORT*)pFree) = pMyBlock->nFirst;   ③
    pMyBlock->nFirst = (USHORT)(((ULONG)pFree - (ULONG)(pMyBlock->aData)) / nUnitSize);  ④

    if ( pMyBlock->nFree * nUnitSize == pMyBlock->nSize )   ⑤
    {
        ……
    }
    else
    {
        ……
    }
}

As mentioned above, when an allocation unit is reclaimed, the whole memory block may be returned to the process heap, or the block containing the reclaimed unit may be moved to the head of the pool's block list. Both operations modify the list structure, which requires knowing the preceding block in the list.

① Traverse the pool's block list to find which block's address range contains the unit being reclaimed (pointed to by pFree), comparing pointer values to decide.

When execution reaches ②, pMyBlock is the block containing the unit to be reclaimed, pointed to by pFree (of course, the case where pMyBlock is NULL should also be checked: it means pFree does not belong to this pool and therefore cannot be returned to it; you can add that check yourself). The block's nFree is then incremented by 1, indicating that the block has one more free unit.

③ updates the block's free-unit list: the first two bytes of the reclaimed unit are set to the index of the block's current first allocatable free unit.

④ sets pMyBlock's nFirst to the index of the reclaimed unit, computed as the difference between the unit's start address and pMyBlock's aData, divided by the step size (nUnitSize).

In essence, steps ③ and ④ are what "truly reclaim" the unit. Note that these two steps make the reclaimed unit the block's next allocatable free unit, placing it at the head of the free-unit list. The unit's memory address never changes: an allocation unit has the same address whether allocated or free. What changes is its state (allocated vs. free) and, when free, its position in the free-unit list.
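Steps ②, ③, and ④ can be shown in isolation. recycleUnit and the bare buffer are invented for this sketch, and memcpy is used instead of the USHORT* cast to sidestep alignment concerns; the logic is the same push-onto-the-free-list as in Free above.

```cpp
#include <cassert>
#include <cstring>

typedef unsigned short USHORT;

// 'data' stands for a block's aData region holding units of unitSize
// bytes; nFirst is the free-list head and nFree the free-unit count.
void recycleUnit(char* data, USHORT unitSize,
                 USHORT& nFirst, USHORT& nFree, char* pFree) {
    std::memcpy(pFree, &nFirst, sizeof(USHORT));      // step ③: old head
                                                      // goes into the unit
    nFirst = (USHORT)((pFree - data) / unitSize);     // step ④: unit becomes
                                                      // the new list head
    ++nFree;                                          // step ②: one more free
}
```

After the call, the reclaimed unit is the first one the next allocation will hand out, giving the LIFO, stack-like behavior the text describes.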

⑤ checks whether all units of the block containing the reclaimed unit are free, and whether the block is at the head of the block list. If so, the whole block is returned to the process heap and the block list structure is adjusted.

Note that to determine whether all units of a block are free, the pool does not traverse the units; it simply checks whether nFree multiplied by nUnitSize equals nSize. nSize is the total size of the block's allocation units, excluding the MemoryBlock struct at its head. This gives a fast check: the conclusion follows from nFree and nUnitSize alone, with no need to traverse and count the units that are in the free state.

Also note that nFree cannot simply be compared against nInitSize or nGrowSize here. This is because the first block allocated (with nInitSize units) may have been moved toward the back of the list, or may even, at some point when all its units were free, have been returned to the process heap. That is, at reclamation time it cannot be known whether a given block holds nInitSize or nGrowSize units, so comparing nFree against either value cannot determine whether all of a block's units are free.

Continuing from the pool state after the allocation above, suppose the last allocated unit of the first memory block now needs to be reclaimed (suppose its index is m and the pFree pointer points to it), as shown in Figure 6-5.

It is easy to see that nFirst changes from 0 to m: the block's next unit to allocate is now unit m rather than unit 0 (the most recently reclaimed unit is allocated first; in this respect the process resembles a stack, except that "push" here means "reclaim" and "pop" means "allocate"). Correspondingly, m's "next free unit" is marked 0, the block's previous "next unit to allocate", which again shows that the most recently reclaimed unit is inserted at the head of the block's free-unit list. And of course, nFree is incremented by 1.

Figure 6-5 memory pool status after allocation

The state after this processing is shown in Figure 6-6.

Note that although pFree has been "reclaimed", it still points to unit m. The reclamation overwrote the unit's first two bytes, but the rest of its contents are unchanged. Moreover, from the point of view of the whole process's memory, unit m is still "valid": it was returned to the memory pool, not to the process heap, so the program can still access it through pFree. But this is a very dangerous operation: the unit's first two bytes were overwritten during reclamation, and the unit may soon be handed out again by the pool. Accessing the unit through pFree after reclamation is therefore an error: reads return wrong data, and writes may corrupt data elsewhere in the program, so extreme care is required.

Next, the pool needs to check how much of the block is in use and where the block sits in the block linked list. If the parts indicated by the ellipsis "……" are still allocated, that is, if nFree multiplied by nUnitSize does not equal nSize, then because this block is not at the head of the linked list, it needs to be moved to the head, as shown in Figure 6-7.

Figure 6-7 MemoryBlock movement caused by recycling

If instead the parts indicated by the ellipsis "……" are all free allocation units, that is, if nFree multiplied by nUnitSize equals nSize, then the whole block is free. Because this block is not at the head of the linked list, the entire block is returned to the process heap. The structure of the memory pool after the block is recycled is shown in Figure 6-8.

Figure 6-8 Structure of the memory pool after recovery
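The recycle steps just described can be sketched with a minimal, self-contained fixed-size block. SimpleBlock, blockCreate, blockAlloc, blockFree, and kUnitSize are illustrative names, not the book's code, and memcpy stands in for the two-byte pointer writes:

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Minimal sketch of the mechanism described above. Each free unit stores
// the index of the next free unit in its first two bytes, so the free
// list lives inside the block itself and costs no extra memory.
struct SimpleBlock {
    uint16_t nSize;   // usable bytes in aData
    uint16_t nFree;   // number of free units
    uint16_t nFirst;  // index of the next unit to hand out
    unsigned char aData[1];
};

const uint16_t kUnitSize = 32;  // arbitrary fixed unit size for the sketch

SimpleBlock* blockCreate(uint16_t nUnits) {
    SimpleBlock* p = static_cast<SimpleBlock*>(
        std::malloc(sizeof(SimpleBlock) + nUnits * kUnitSize));
    p->nSize  = nUnits * kUnitSize;
    p->nFree  = nUnits;
    p->nFirst = 0;
    unsigned char* pData = p->aData;
    for (uint16_t i = 1; i < nUnits; ++i) {  // link unit 0 -> 1 -> ...;
        std::memcpy(pData, &i, sizeof i);    // the last unit stays unlinked
        pData += kUnitSize;
    }
    return p;
}

void* blockAlloc(SimpleBlock* p) {
    if (p->nFree == 0) return 0;
    unsigned char* pUnit = p->aData + p->nFirst * kUnitSize;
    std::memcpy(&p->nFirst, pUnit, sizeof p->nFirst);  // pop the free list
    --p->nFree;
    return pUnit;
}

void blockFree(SimpleBlock* p, void* pv) {
    unsigned char* pUnit = static_cast<unsigned char*>(pv);
    uint16_t m = static_cast<uint16_t>((pUnit - p->aData) / kUnitSize);
    std::memcpy(pUnit, &p->nFirst, sizeof p->nFirst);  // old head becomes
    p->nFirst = m;                                     // our successor; the
    ++p->nFree;                                        // recycled unit is
}                                                      // the new head
```

blockFree performs exactly the three updates described above: the old nFirst is written into the recycled unit's first two bytes, nFirst is set to the recycled unit's number m, and nFree is incremented.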

A memory block is initialized immediately after it is applied for, mainly to build its initial free-unit linked list. The detailed code is as follows:

MemoryBlock::MemoryBlock (USHORT nTypes, USHORT nUnitSize)
    : nSize  (nTypes * nUnitSize),
      nFree  (nTypes - 1),                       // ④
      nFirst (1),                                // ⑤
      pNext  (0)
{
    char* pData = aData;                         // ①
    for (USHORT i = 1; i < nTypes; i++)          // ②
    {
        *reinterpret_cast<USHORT*>(pData) = i;   // ③
        pData += nUnitSize;
    }
}

Here, pData starts at aData, that is, at unit 0. In the loop at ②, however, i starts from 1, and at ③ the first two bytes of the unit pData points to are set to i. In other words, the first two bytes of unit 0 are set to 1, the first two bytes of unit 1 are set to 2, and so on, until the first two bytes of unit (nTypes-2) are set to (nTypes-1). This means that when a memory block is first created, its free-unit linked list starts at unit 0 and links the units one after another, with the second-to-last unit pointing to the last one.

Also note that in the initialization list, nFree is initialized to nTypes-1 (not nTypes) and nFirst to 1 (not 0). This is because unit 0 is handed out as soon as the block is constructed: a new block is only ever created in response to an allocation request, so its first unit is allocated on the spot. Note also that the last unit's first two bytes are not set at the beginning, because initially the last unit has no "next free unit" within the block. As the earlier example shows, once the last unit has been allocated and then recycled, its first two bytes are set.

Figure 6-9 shows the status of a memory block after initialization.

Figure 6-9 Status of a memory block after Initialization
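As a sanity check of the loop at ② and ③, the same linking can be replayed on a plain buffer. initLinks and linkAt are illustrative stand-ins (memcpy replaces the reinterpret_cast to sidestep alignment concerns), assuming two-byte USHORT links as in the constructor above:

```cpp
#include <cassert>
#include <cstring>

typedef unsigned short USHORT;

// Re-runs the constructor's linking loop on a plain buffer standing in
// for aData, to check the claim that after initialization unit i's first
// two bytes hold i + 1, while the last unit is left unwritten.
void initLinks(unsigned char* aData, USHORT nTypes, USHORT nUnitSize) {
    unsigned char* pData = aData;
    for (USHORT i = 1; i < nTypes; i++) {
        std::memcpy(pData, &i, sizeof i);  // same effect as the
        pData += nUnitSize;                // reinterpret_cast at ③
    }
}

// Reads back the two-byte link stored at the start of unit i.
USHORT linkAt(const unsigned char* aData, USHORT i, USHORT nUnitSize) {
    USHORT v;
    std::memcpy(&v, aData + i * nUnitSize, sizeof v);
    return v;
}
```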

When the memory pool is destructed, all the memory blocks in the pool need to be returned to the process heap:

MemoryPool::~MemoryPool()
{
    MemoryBlock* pMyBlock = pBlock;
    while ( pMyBlock )
    {
        ……
    }
}
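The elided loop body can be sketched as follows. This is a hedged reconstruction (destroyBlockList, the minimal struct, and the malloc/free pairing are assumptions, since the book's MemoryBlock may use its own allocation operators), but the save-next, free, advance pattern is the essential part:

```cpp
#include <cassert>
#include <cstdlib>

// Hypothetical minimal block header, just enough to show the teardown.
struct MemoryBlock {
    MemoryBlock* pNext;
};

// Sketch of what the elided destructor body likely does: save pNext
// before releasing the current block, free the block, then advance.
// Returns the number of blocks freed so the walk can be checked.
int destroyBlockList(MemoryBlock* pBlock) {
    int nFreed = 0;
    MemoryBlock* pMyBlock = pBlock;
    while (pMyBlock) {
        MemoryBlock* pNext = pMyBlock->pNext;
        std::free(pMyBlock);  // return the block to the process heap
        pMyBlock = pNext;
        ++nFreed;
    }
    return nFreed;
}
```

Saving pNext before freeing matters: reading pMyBlock->pNext after the free would touch released memory.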

6.2.4 usage

After analyzing the internal principles of the memory pool, this section describes how to use it. As the analysis above shows, the memory pool exposes two external interface functions: Alloc and Free. Alloc returns an allocation unit of the fixed size requested, and Free recycles the allocation unit referred to by the passed-in pointer back into the pool. The allocation parameters are specified through the MemoryPool constructor: the allocation unit size, the number of allocation units contained in the first memory block the pool applies for, and the number of allocation units contained in each subsequently applied block.

To sum up, when you need to improve the allocation/recycling efficiency of the objects of some key class, you can consider carving the space for all objects of that class out of one memory pool; when an object is destroyed, its space is simply returned to the pool. To implement "all objects of a class are allocated from the same memory pool object", declare a static memory pool object for the class. To make all its objects take memory from this pool instead of the default process heap, overload operator new for the class; correspondingly, because recycling targets the pool rather than the process heap, overload operator delete as well. Inside operator new, the pool's Alloc function serves the memory request of every such object; inside operator delete, the pool's Free is called to release the object's storage.
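A minimal sketch of this pattern, assuming a simplified stand-in pool (FixedPool with a std::vector-based free list and the Node class are illustrative, not the chapter's MemoryPool):

```cpp
#include <cassert>
#include <cstddef>
#include <new>
#include <vector>

// Simplified stand-in for the chapter's MemoryPool. Alloc hands back a
// recycled slot when one exists; Free recycles a slot into the pool
// instead of returning it to the heap.
class FixedPool {
public:
    explicit FixedPool(std::size_t unitSize) : unitSize_(unitSize) {}
    void* Alloc() {
        if (!freeList_.empty()) {
            void* p = freeList_.back();
            freeList_.pop_back();
            return p;
        }
        return ::operator new(unitSize_);  // grow only when the pool is empty
    }
    void Free(void* p) { freeList_.push_back(p); }  // recycle, don't release
private:
    std::size_t unitSize_;
    std::vector<void*> freeList_;
};

// The pattern described above: one static pool per class, plus overloaded
// operator new/delete, so every Node is carved out of the pool.
class Node {
public:
    int value;
    Node* next;
    static void* operator new(std::size_t) { return pool_.Alloc(); }
    static void operator delete(void* p) { pool_.Free(p); }
private:
    static FixedPool pool_;
};
FixedPool Node::pool_(sizeof(Node));
```

Because operator new and operator delete are static members of the class, every `new Node` and `delete` in the program automatically goes through the pool, with no change at the call sites.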

6.2.5 Performance Comparison

To measure the effect of the memory pool, a small test program was run: with the memory pool mechanism it took 297 ms, and without it 625 ms, a speed increase of 52.48%. The improvement has two main causes. First, apart from the occasional application for and release of whole memory blocks from the process heap, the vast majority of allocations and deallocations are carried out by the memory pool within already-applied blocks rather than dealing directly with the process heap, and dealing directly with the process heap is a very time-consuming operation. Second, this is a memory pool for a single-threaded environment: as can be seen, the pool's Alloc and Free operations contain no thread-protection measures, so the locking overhead of the default allocator is avoided. Consequently, if class A uses this pool, all objects of class A must be created and destroyed in the same thread. However, if class A uses one pool and class B uses another, the thread used by class A need not be the same as the thread used by class B.
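A micro-benchmark along these lines could look as follows. This is a hypothetical sketch, not the book's test program; the 297 ms and 625 ms figures, which give a speed-up of (625 - 297) / 625 = 52.48%, are the book's own measurements and depend on its machine:

```cpp
#include <cassert>
#include <chrono>

// Times n allocate/release pairs with plain new/delete against a trivial
// recycling free list of the kind built above. Obj and the function names
// are illustrative; absolute numbers vary by machine and compiler.
struct Obj { char payload[64]; Obj* next; };

long long timeHeapPairs(int n) {
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i) { Obj* p = new Obj; delete p; }
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0)
        .count();
}

long long timePooledPairs(int n) {
    Obj* freeList = 0;  // recycled objects, LIFO
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i) {
        Obj* p;
        if (freeList) { p = freeList; freeList = freeList->next; }
        else          { p = new Obj; }        // heap only on a cold start
        p->next = freeList;                   // "free": push onto the list
        freeList = p;
    }
    auto t1 = std::chrono::steady_clock::now();
    while (freeList) { Obj* p = freeList; freeList = p->next; delete p; }
    return std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0)
        .count();
}
```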

In addition, as discussed in Chapter 1, because the memory pool technique places objects of the same type in adjacent memory areas, and programs often traverse objects of the same type, fewer page faults should occur while the program runs; this, however, can only be verified in a real, complex application environment.

 


6.3 Summary of this Chapter

The application and release of memory can have a large impact on an application's overall performance, and in many cases even become its bottleneck. The way to eliminate such a bottleneck is usually to provide a memory pool suited to the actual memory usage pattern. A memory pool improves performance mainly because it can exploit certain "features" of the application's actual memory usage: for example, some memory is applied for and released entirely within one thread, or objects of certain types are created and destroyed far more frequently than others. A custom memory pool tailored to such features can eliminate operations in the system's default memory mechanism that are unnecessary in the actual scenario, and thus improve the application's overall performance.

Figure 6-6 memory pool status before processing

