glibc uses ptmalloc as its memory allocator. I have read many tutorials on how ptmalloc manages memory; the clearest ones are linked below, and anyone who wants the full story should go read them. What follows is only my own rough summary, not suitable as a learning reference (all links are at the end).
- Chunks allocated via brk form the heap, which can only be shrunk linearly from the top. Freeing a chunk in the middle does not return it to the OS; instead the chunk is linked into the bins/fast bins containers.
- mmap allocates memory by mapping pages directly from the kernel. When such memory is freed, it is returned to the OS immediately.
- Whether a given memory request is served by brk or by mmap is decided by glibc's allocation policy.
- A threshold governs this policy. By default, requests smaller than 128 KB are served by brk, and requests of 128 KB or more by mmap.
- Modern glibc implementations (I have not investigated exactly which version introduced this) tune this threshold dynamically. By default on a 64-bit system it can grow from 128 KB up to 32 MB. The adjustment strategy can roughly be summed up as: when glibc sees a freed mmap'd block larger than the current threshold, say 256 KB, it raises the threshold to that size, and so on up to 32 MB.
- This threshold can also be set manually; see the links below for details.
- I wrote a small program to verify the points above, and they hold. Roughly, the test uses a double-ended container (std::deque) to hold chunks of a chosen size; on command it either appends a chunk at the back, pops a chunk from the back, or pops a chunk from the front, while I watch the process's memory usage. For small chunks, popping from the back releases memory to the OS, but popping from the front does not; if the chunks are large enough, memory is released whichever end they are popped from.
glibc combines these two mechanisms deliberately. Talking to the underlying system is expensive, and if every small allocation and deallocation went straight to the kernel, the program would effectively be making system calls constantly, which obviously hurts performance. Keeping small chunks in a brk-managed heap amounts to implementing a cache: freed blocks are accumulated and returned to the system together. To be fair, this design is smart.
However, it is not smart enough in every case. Because the implementation is relatively simple, it only maintains a pointer to the top of the heap, so memory can only be returned to the system from the top down. Imagine that a live block sits at the top of the heap while all the memory below it has been freed. Can that memory below be returned to the system? Unfortunately, the design rules it out. This is the "hole" problem.
Moreover, this design is unfriendly to programs whose workloads inherently allocate and free small chunks frequently. 3D software like ours is a typical case: a huge geometry is actually made up of thousands of small facets, each one small but enormous in number. So our software runs into the strange-looking problem of "memory has been freed, but not returned to the system". The best remedy is to design a dedicated memory pool suited to the software early on: request one large contiguous block of memory, carve it up manually for the many facets, and return it to the system in batches as appropriate. In short, rolling your own memory management is always the flexible option, built to the needs of the project.
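As one possible shape for such a pool, here is a minimal bump-pointer arena sketch (the class name `Arena` and its interface are my invention, not our software's actual design). The key property: the backing block is large enough that glibc serves it via mmap, so one `free()` returns everything to the OS at once, with no hole problem:

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

// A bump-pointer arena: one large allocation, carved by hand into many
// small records (e.g. facets). Individual records are never freed;
// destroying the arena returns the whole block at once.
class Arena {
public:
    explicit Arena(std::size_t bytes)
        : base_(static_cast<char*>(std::malloc(bytes))), size_(bytes), used_(0) {
        if (!base_) throw std::bad_alloc();
    }
    ~Arena() { std::free(base_); }  // one free() for the entire pool

    Arena(const Arena&) = delete;
    Arena& operator=(const Arena&) = delete;

    // Hand out `bytes` from the block, or nullptr when the pool is exhausted.
    void* allocate(std::size_t bytes, std::size_t align = alignof(std::max_align_t)) {
        std::size_t p = (used_ + align - 1) & ~(align - 1);  // round up to alignment
        if (p + bytes > size_) return nullptr;
        used_ = p + bytes;
        return base_ + p;
    }

    std::size_t used() const { return used_; }

private:
    char*       base_;
    std::size_t size_;
    std::size_t used_;
};
```

Usage would look like `Arena arena(4 << 20);` followed by placement-new of each facet into `arena.allocate(sizeof(Facet))`. Per-facet freeing is deliberately given up in exchange for batch release.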
In other words, although glibc's memory management scheme is somewhat rigid, it does expose ways to adjust the relevant thresholds. We cannot change how it manages memory, but through these knobs we can at least decide "how big counts as big and how small counts as small", and "how much must accumulate before it is returned".