Shared Windows C++ Library Memory Pool


Contents

  1. Memory Pool Introduction
  2. Memory Pool Improvements and Techniques
  3. How to work with STL containers
  4. Performance Testing
  5. How to Use

 

Memory Pool Introduction:

There are many good articles about memory pools (here, and here...). On today's PC hardware the benefit of a memory pool is less obvious, because the operating system already manages memory quite well. So why do we still need one? Two reasons: 1. To reduce the burden of memory management and improve performance (for example, reserve enough memory before running a memory-intensive algorithm, use it, and then release it all at once). 2. To reduce page faults (caused by swapping between virtual and physical memory) and memory fragmentation, which also improves performance.

 

This article describes the improvements made to the memory pool and the techniques it uses:

The memory pool component is adapted from the SGI STL allocator. For the underlying principle, refer to the book STL Source Code Analysis, or to the reference here, which describes it with a single diagram.
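For readers who skip the references, here is a minimal sketch of the free-list idea the SGI allocator is built on; the names and details are illustrative only, not the library's actual code.

#include <cstddef>

// Each small block doubles as a linked-list node while it is free; once it is
// handed to the user, the same bytes hold user data.
union FreeListNode
{
    FreeListNode* next;    // valid while the block sits on a free list
    char          data[1]; // the user's view once the block is allocated
};

// Requests are rounded up to a multiple of 8 bytes and served from an array of
// free lists covering 8, 16, 24, ... up to the pool's upper limit (128 bytes in
// the original SGI allocator; 256 by default in this library). Larger requests
// go straight to the underlying allocation policy.
inline size_t roundUp(size_t bytes)       { return (bytes + 7) & ~size_t(7); }
inline size_t freeListIndex(size_t bytes) { return (bytes + 7) / 8 - 1; }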

 

On this basis, I have added several important improvements (a rough sketch of how they fit together follows this list):

  • A template parameter indicating whether the pool is used from multiple threads. It determines whether a lock is taken and whether the shared state is declared volatile.
  • A template parameter for the upper limit of the block size served from the pool. The default value is 256.
  • A template parameter for the memory allocation policy. Three policies are provided: VirtualAlloc, HeapAlloc, and malloc.
  • Tracking of the memory requested from the system, so that it can all be released centrally.
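Putting these parameters together, the pool's interface might look roughly like the following. This is a hypothetical declaration for illustration only; the actual signature in the source may differ, and the allocation policy types are the ones defined in the next section.

template <
    bool   MultiThreaded,                      // take a lock and use volatile when true
    size_t MaxBytes    = 256,                  // largest block size served from the pool's free lists
    class  AllocTraits = MallocAllocateTraits  // VirtualAllocateTraits / HeapAllocateTraits / MallocAllocateTraits
>
class SGIMemoryPool
{
public:
    static void* allocate(size_t size);            // pooled when size <= MaxBytes, otherwise forwarded to AllocTraits
    static void  deallocate(void* p, size_t size);
};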

This section describes how to select a memory allocation policy.

 

// Memory allocation policies on Win32

struct VirtualAllocateTraits
{
    static void* allocate(size_t size)
    {
        // Keep the pages in physical memory and do not allow them to be
        // swapped out to the disk page file.
        void* p = ::VirtualAlloc(NULL, size,
                                 MEM_RESERVE | MEM_COMMIT | MEM_TOP_DOWN,
                                 PAGE_EXECUTE_READWRITE);
        ::VirtualLock(p, size);

        return p;
    }

    static void deallocate(void* p, size_t size)
    {
        ::VirtualUnlock(p, size);
        ::VirtualFree(p, 0, MEM_RELEASE); // dwSize must be 0 with MEM_RELEASE
    }
};

struct HeapAllocateTraits
{
    static HANDLE getHeap()
    {
        static HANDLE heap = NULL;

        if (heap == NULL)
        {
            heap = ::HeapCreate(0, 0, 0);

            // Enable the low-fragmentation heap
            ULONG uHeapFragValue = 2;
            ::HeapSetInformation(heap, HeapCompatibilityInformation,
                                 &uHeapFragValue, sizeof(ULONG));
        }

        return heap;
    }

    static void* allocate(size_t size)
    {
        return ::HeapAlloc(getHeap(), HEAP_ZERO_MEMORY, size);
    }

    static void deallocate(void* p, size_t /* size */)
    {
        ::HeapFree(getHeap(), 0, p);
    }
};

struct MallocAllocateTraits
{
    static void* allocate(size_t size)
    {
        return std::malloc(size);
    }

    static void deallocate(void* p, size_t)
    {
        std::free(p);
    }
};
By default, MallocAllocateTraits is used to allocate memory. If you need finer control over the memory, for example page-locked memory for asynchronous file reads, or locked buffers at the network layer, you can choose VirtualAllocateTraits or write your own policy; it only has to satisfy the allocate and deallocate interface constraints.
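For example, a custom policy only needs to expose the same two static functions. Here is a hypothetical one, not part of the library, that commits pages with VirtualAlloc but does not lock them into physical memory:

// Hypothetical custom policy: any type with static allocate/deallocate of this
// shape can be plugged into the pool in place of the built-in policies.
struct PlainVirtualAllocateTraits
{
    static void* allocate(size_t size)
    {
        return ::VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    }

    static void deallocate(void* p, size_t /* size */)
    {
        ::VirtualFree(p, 0, MEM_RELEASE); // dwSize must be 0 when MEM_RELEASE is used
    }
};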
How to work with STL containers: STL introduced the concept of the Allocator, which separates the data structures from memory allocation and release; in other words, memory management is independent of the container implementation, which was a great step forward. See the discussions (here, here, here). To make the memory pool conform to the Allocator interface, an adapter, SGIAllocator, is required. Example:
typedef std::vector<int, SGIAllocator<int, false>> SVector;
typedef std::list<int, SGIAllocator<int, false, 1024>> SList;
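For readers curious what such an adapter looks like, here is a minimal sketch that satisfies the C++11 allocator requirements. The names and details are illustrative; the library's real SGIAllocator presumably forwards to the memory pool, whereas this sketch forwards to the MallocAllocateTraits policy defined above.

#include <cstddef>

template <typename T, bool MultiThreaded, size_t MaxBytes = 256>
class PoolAllocatorSketch
{
public:
    typedef T value_type;

    PoolAllocatorSketch() {}
    template <typename U>
    PoolAllocatorSketch(const PoolAllocatorSketch<U, MultiThreaded, MaxBytes>&) {}

    T* allocate(size_t n)
    {
        // Route the container's request to the pooling layer; the malloc
        // policy stands in for the real pool here.
        return static_cast<T*>(MallocAllocateTraits::allocate(n * sizeof(T)));
    }

    void deallocate(T* p, size_t n)
    {
        MallocAllocateTraits::deallocate(p, n * sizeof(T));
    }
};

template <typename T, typename U, bool MT, size_t MB>
bool operator==(const PoolAllocatorSketch<T, MT, MB>&, const PoolAllocatorSketch<U, MT, MB>&) { return true; }
template <typename T, typename U, bool MT, size_t MB>
bool operator!=(const PoolAllocatorSketch<T, MT, MB>&, const PoolAllocatorSketch<U, MT, MB>&) { return false; }

A container would then be declared just like the typedefs above, e.g. std::vector<int, PoolAllocatorSketch<int, false> >.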

Performance test:

Plain new/delete, std::allocator, and boost::pool were used as reference points (Loki was not tested). Each test runs a fixed loop of 10,000 iterations and is repeated for four allocation sizes: 8 bytes, 64 bytes, 1 KB, and 4 KB. Each configuration is run 5 times and the results are averaged. This is part of the test case:
for (DWORD i = 0; i != dwCount; ++i)
{
    arr[i] = new char[dwSize];
    memset(arr[i], i, dwSize);

    // Free every other allocation immediately to interleave allocations and frees.
    if (i % 2 == 0)
        delete[] arr[i];
}

// Release the remaining allocations.
for (DWORD i = 0; i != dwCount; ++i)
{
    if (i % 2 != 0)
        delete[] arr[i];
}
The performance comparison is shown below, with plain new/delete as the baseline. The smaller the allocations and the more frequently they are allocated and freed, the greater the memory pool's advantage. As the allocation size grows, the gap between the pool and the standard allocator narrows. For a server program, though, managing memory through a pool is still the encouraged approach. The numbers largely speak for themselves.
Memory Pool usage:
// SGIMemoryPool with the malloc allocation policy, single-threaded (no locking)
typedef Allocator<char, SGIMemoryPool<false, dwSize> > SGIAllocT;
SGIAllocT alloc2;

// Time the same workload through the pool-backed allocator
dwLast = 0;
{
    QPerformanceTimer perf(dwLast);

    for (DWORD i = 0; i != dwCount; ++i)
    {
        arr[i] = alloc2.allocate(dwSize);
        memset(arr[i], i, dwSize);

        if (i % 2 == 0)
            alloc2.deallocate(arr[i], dwSize);
    }

    for (DWORD i = 0; i != dwCount; ++i)
    {
        if (i % 2 != 0)
            alloc2.deallocate(arr[i], dwSize);
    }
}
cout << "SGIAllocT: " << dwLast << endl;
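If the pool is shared across threads, the first template parameter is set to true so that the internal free lists are protected by a lock. A hypothetical variant of the typedef above:

// Locked variant for use from multiple threads (illustrative; same naming as above).
typedef Allocator<char, SGIMemoryPool<true, dwSize> > SGIAllocMT;
SGIAllocMT allocMT; // allocate/deallocate can now be called from several threads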

 

Disadvantages:

This memory pool does not yet provide an interface for releasing part of its memory back to the system. Sorry ~

You are welcome to download it and try it out. If you have better suggestions, please let me know and I will improve it. Thank you!

 

Download:

For now it is not hosted on Google Code; please download it from the old CSDN location. Click here
