We have already introduced the boost::pool component in the memory pool (mempool) technical details. From the perspective of memory management, boost::pool is a traditional mempool component, albeit with some improvements (though only performance improvements). boost::object_pool is different: it fits well with the idea I emphasized in C++ Memory Management Change. It can be argued that boost::object_pool is a typical GC allocator component.
I have raised the concept of the GC allocator many times. It still needs to be stressed that a GC allocator is simply an allocator with garbage-collection capability. We introduced this concept in C++ Memory Management Change (1), although the term GC allocator was not used explicitly there.
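To make the idea concrete, here is a minimal sketch of a GC allocator for a single type T (all names are hypothetical and purely illustrative; this is not boost code): destroy() can be called manually, and whatever the user forgets is reclaimed automatically when the allocator itself is destroyed.

#include <algorithm>
#include <vector>

// Minimal, hypothetical sketch of the GC allocator idea for one type T:
// manual destroy() is optional; the destructor reclaims whatever is left.
template <class T>
class gc_allocator_sketch
{
public:
    ~gc_allocator_sketch()
    {
        for (T* p : live_)                 // automatic reclamation of leftovers
            delete p;
    }

    T* construct()
    {
        T* p = new T();
        live_.push_back(p);                // remember it until it is destroyed
        return p;
    }

    void destroy(T* p)
    {
        // manual release: forget about the object, then destroy it now
        live_.erase(std::remove(live_.begin(), live_.end(), p), live_.end());
        delete p;
    }

private:
    std::vector<T*> live_;
};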
boost::object_pool's Memory Management Concept
The great thing about boost::object_pool is that it acknowledges, at the library level, that programmers make mistakes in memory management and that it is hard for a programmer to guarantee that memory never leaks. boost::object_pool allows you to forget to free memory. Let's look at an example:
#include <boost/pool/object_pool.hpp>

class X { … };

void func()
{
    boost::object_pool<X> alloc;
    X* obj1 = alloc.construct();
    X* obj2 = alloc.construct();
    alloc.destroy(obj2);       // obj2 is released manually; obj1 never is
}
If boost::object_pool were just an ordinary allocator, this code would obviously be problematic, because the destructor of obj1 is never executed and its memory is never released.
But this code is perfectly fine. Yes, the destructor of obj1 does get executed, and the requested memory is freed. That is, boost::object_pool supports automatic memory reclamation (through the object_pool::~object_pool destructor) while still allowing you to release memory manually (by explicitly calling object_pool::destroy). This conforms to the GC allocator idea.
Note: strictly speaking, this is object management rather than memory management, and the allocation and release of memory are more precisely the construction and destruction of objects. But here we do not deliberately distinguish between the two.
boost::object_pool and AutoFreeAlloc
As we know, AutoFreeAlloc does not support manual release; it can only free all of its memory at once when the AutoFreeAlloc object is destroyed. So can we conclude that boost::object_pool is more complete than AutoFreeAlloc?
In fact, neither boost::object_pool nor AutoFreeAlloc is a GC allocator in the complete sense. Because AutoFreeAlloc can only release everything at once, it applies only to specific scenarios. However, while AutoFreeAlloc is not universally applicable, it is a general-purpose allocator in the sense that it can allocate objects of any type. boost::object_pool, on the other hand, can only manage objects of a single type, so it is not a general-purpose allocator, and its limitations are actually greater.
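For contrast, here is an equally hypothetical sketch of an allocator in the AutoFreeAlloc style (the names below are illustrative, not the real AutoFreeAlloc interface): it can serve any type because it deals only in raw bytes, but there is no per-object release; everything goes away together when the allocator is destroyed.

#include <cstddef>
#include <vector>

// Hypothetical sketch in the AutoFreeAlloc style (not the real interface):
// allocation works for any type, but memory can only be released all at once.
class auto_free_alloc_sketch
{
public:
    ~auto_free_alloc_sketch() { clear(); }

    void* allocate(std::size_t bytes)
    {
        void* p = ::operator new(bytes);
        blocks_.push_back(p);          // remembered only so clear() can free it
        return p;
    }

    void clear()                       // the one and only release path
    {
        for (std::size_t i = blocks_.size(); i-- > 0; )
            ::operator delete(blocks_[i]);
        blocks_.clear();
    }

private:
    std::vector<void*> blocks_;
};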
Implementation Details of boost::object_pool
By now we should have a general grasp of boost::object_pool. Let's dig into the implementation details of object_pool.
When we introduced the boost::pool component in the memory pool (mempool) technical details, we deliberately reminded everyone to pay attention to the pool::ordered_malloc/ordered_free functions. In fact, boost::object_pool's malloc/construct and free/destroy call pool::ordered_malloc and pool::ordered_free, not pool::malloc and pool::free.
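Condensed from the Boost sources (details differ across Boost versions, and the real class also guards the malloc/free names against macro substitution), the forwarding looks roughly like this:

#include <boost/pool/pool.hpp>
#include <new>

// Condensed sketch of the forwarding in boost/pool/object_pool.hpp.
template <typename T,
          typename UserAllocator = boost::default_user_allocator_new_delete>
class object_pool_sketch : protected boost::pool<UserAllocator>
{
public:
    object_pool_sketch() : boost::pool<UserAllocator>(sizeof(T)) {}

    T* malloc()
    { return static_cast<T*>(store().ordered_malloc()); }   // not pool::malloc

    void free(T* chunk)
    { store().ordered_free(chunk); }                         // not pool::free

    T* construct()
    {
        T* ret = this->malloc();
        if (ret == 0)
            return ret;
        try { new (ret) T(); }             // placement-new on the raw node
        catch (...) { this->free(ret); throw; }
        return ret;
    }

    void destroy(T* chunk)
    {
        chunk->~T();                       // run the destructor, then
        this->free(chunk);                 // hand the node back as a freenode
    }

private:
    boost::pool<UserAllocator>& store() { return *this; }
};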
Let's explain why.
The key is that object_pool must support two modes at once: manual release of memory and automatic reclamation (with automatic execution of destructors). If there were no automatic destruction, an ordinary mempool would be enough and there would be no need for ordered_free. Since automatic reclamation coexists with manual release, it becomes necessary to distinguish which nodes (node) in a memory block (memblock) are free nodes (freenode) and which are still in use. For nodes whose memory has already been released, the object's destructor obviously must not be invoked again.
Let's look at the implementation of the object_pool::~object_pool function:
template <typename T, typename UserAllocator>
object_pool<T, UserAllocator>::~object_pool()
{
    // handle trivial case
    if (!this->list.valid())
        return;

    details::PODptr<size_type> iter = this->list;
    details::PODptr<size_type> next = iter;

    // Start 'freed_iter' at beginning of free list
    void * freed_iter = this->first;

    const size_type partition_size = this->alloc_size();

    do
    {
        // increment next
        next = next.next();

        // delete all contained objects that aren't freed

        // Iterate 'i' through all chunks in the memory block
        for (char * i = iter.begin(); i != iter.end(); i += partition_size)
        {
            // If this chunk is free
            if (i == freed_iter)
            {
                // Increment freed_iter to point to next in the free list
                freed_iter = nextof(freed_iter);

                // Continue searching chunks in the memory block
                continue;
            }

            // This chunk is not free (allocated), so call its destructor
            static_cast<T *>(static_cast<void *>(i))->~T();
            // and continue searching chunks in the memory block
        }

        // free storage
        UserAllocator::free(iter.begin());

        // increment iter
        iter = next;
    } while (iter.valid());

    // Make the block list empty so that the inherited destructor doesn't
    // try to free it again.
    this->list.invalidate();
}
This code is not difficult to understand: object_pool traverses all of the allocated memory blocks (memblock) and all of the nodes (node) within them; if a node does not appear in the free-node list (freenodelist), it is a node the user never released manually, and its destructor needs to be called.
Now you can see the point: ordered_malloc keeps the memblocks in memblocklist ordered, and ordered_free keeps all the freenodes in freenodelist ordered. With memblocklist and freenodelist both ordered, it becomes fast to detect whether a node is free or in use (this is essentially a set-intersection process; I suggest you look at std::set_intersection, which is defined in the STL <algorithm> header).
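As a self-contained illustration of that intersection idea (plain ints stand in for node addresses here), two ordered sequences let you tell free nodes from used ones in a single merge-style pass:

#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

// Illustration of the set-intersection idea behind the ordered lists: with
// both sequences sorted, one linear pass separates free nodes from used ones.
int main()
{
    std::vector<int> all_nodes  = {100, 116, 132, 148, 164}; // every node in a memblock
    std::vector<int> free_nodes = {116, 164};                // the ordered freenode list

    // Nodes present in both sequences are free; this is exactly what
    // std::set_intersection computes, and it requires the ordering that
    // ordered_malloc/ordered_free maintain.
    std::vector<int> still_free;
    std::set_intersection(all_nodes.begin(), all_nodes.end(),
                          free_nodes.begin(), free_nodes.end(),
                          std::back_inserter(still_free));

    // Nodes present only in all_nodes are still in use: these are the ones
    // whose destructors ~object_pool has to call.
    std::vector<int> in_use;
    std::set_difference(all_nodes.begin(), all_nodes.end(),
                        free_nodes.begin(), free_nodes.end(),
                        std::back_inserter(in_use));

    for (int node : in_use)
        std::cout << "destroy node at " << node << '\n';     // 100, 132, 148
}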