Go Manual memory allocation


2013-10-27


When using Go, sometimes you want to manage memory yourself, so I decided to write a manual memory management package. Mostly just something to practice on while bored ...

Overall design

Two-level allocation. Larger memory is allocated in pages of 4 KB each; an allocated chunk can only be 1 page, 2 pages, 3 pages, and so on. Smaller memory is allocated with the buddy algorithm combined with an allocation pool. The buddy algorithm mainly makes recycling convenient, and a free list is maintained for each of the various odd sizes.

The memory management algorithm is basically similar to the one Go itself uses, except that it does not introduce garbage collection mark information and uses the buddy algorithm to allocate small objects.
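As a rough sketch of how the two levels might fit together (the poolAlloc and pageAlloc helpers here are hypothetical placeholders for the small-object pool and page allocator described below, not names from the actual package):

```go
package main

import "fmt"

// pageSize is the page granularity from the design above: 4 KB per page.
const pageSize = 4096

// Placeholder back-ends standing in for the two levels. A real
// implementation would call into the buddy-backed small-object pool
// and the page allocator described in the following sections.
func poolAlloc(size int) []byte  { return make([]byte, size) }
func pageAlloc(pages int) []byte { return make([]byte, pages*pageSize) }

// alloc routes a request: anything up to one page goes to the
// small-object pool, larger requests are rounded up to whole pages.
func alloc(size int) []byte {
	if size <= pageSize {
		return poolAlloc(size)
	}
	pages := (size + pageSize - 1) / pageSize
	return pageAlloc(pages)
}

func main() {
	fmt.Println(len(alloc(66)))    // small object, served by the pool
	fmt.Println(len(alloc(10000))) // 3 pages = 12288 bytes
}
```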

Buddy algorithm

The memory spent on management nodes depends on the minimum and maximum unit sizes. If the smallest unit is too small, too much memory goes to management nodes (the number of management nodes equals the number of minimum-size units). If the largest unit is too large, a wider integer is needed to record sizes, and each management node gets bigger.

Each block of memory managed by the buddy algorithm is chosen to be 4 KB; larger memory is handled by the page allocator. The minimum unit size is chosen to be 32 B, so that node sizes can be stored in 8 bits (8 bits can represent up to 255 units, and 4 KB contains 128 units of 32 B).

The management nodes are uint8, and a page has 64 of them, so managing a 4 KB page takes only 64 B. Very economical!

```go
size [64]uint8
```
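For reference, here is a minimal sketch of a classic array-backed buddy allocator over one 4 KB block with 32 B units, each node's size stored in a uint8. It follows the common "largest free run per tree node" layout with 2×128−1 nodes, which is an assumption on my part and not necessarily the 64-node layout used above (uses Go 1.21's built-in max):

```go
package main

import "fmt"

const (
	minUnit = 32                // smallest allocatable unit, in bytes
	blockSz = 4096              // one buddy-managed block
	leaves  = blockSz / minUnit // 128 minimum units per block
)

// buddy keeps, for every node of a complete binary tree over the 128 units,
// the largest free run (in units) below that node; uint8 is enough since 128 fits.
type buddy struct {
	longest [2*leaves - 1]uint8
}

func newBuddy() *buddy {
	b := &buddy{}
	size := leaves * 2
	for i := range b.longest {
		if i&(i+1) == 0 { // i+1 is a power of two: next tree level
			size /= 2
		}
		b.longest[i] = uint8(size)
	}
	return b
}

// roundUp returns the smallest power of two >= n.
func roundUp(n int) int {
	p := 1
	for p < n {
		p <<= 1
	}
	return p
}

// alloc reserves size bytes and returns a byte offset inside the block,
// or -1 if no free run is big enough.
func (b *buddy) alloc(size int) int {
	units := roundUp((size + minUnit - 1) / minUnit)
	if int(b.longest[0]) < units {
		return -1
	}
	idx, nodeUnits := 0, leaves
	for nodeUnits != units { // descend into a child with enough room
		if int(b.longest[2*idx+1]) >= units {
			idx = 2*idx + 1
		} else {
			idx = 2*idx + 2
		}
		nodeUnits /= 2
	}
	b.longest[idx] = 0 // this subtree is now fully used
	off := (idx+1)*nodeUnits - leaves
	for idx > 0 { // refresh the free-run maxima up to the root
		idx = (idx - 1) / 2
		b.longest[idx] = max(b.longest[2*idx+1], b.longest[2*idx+2])
	}
	return off * minUnit
}

// free releases the allocation at byte offset off, merging buddies back
// together whenever both halves of a node become free.
func (b *buddy) free(off int) {
	idx, nodeUnits := off/minUnit+leaves-1, 1
	for ; b.longest[idx] != 0; idx = (idx - 1) / 2 {
		nodeUnits *= 2
		if idx == 0 {
			return
		}
	}
	b.longest[idx] = uint8(nodeUnits)
	for idx > 0 {
		idx = (idx - 1) / 2
		nodeUnits *= 2
		l, r := b.longest[2*idx+1], b.longest[2*idx+2]
		if int(l)+int(r) == nodeUnits {
			b.longest[idx] = uint8(nodeUnits) // both halves free: merge
		} else {
			b.longest[idx] = max(l, r)
		}
	}
}

func main() {
	b := newBuddy()
	a := b.alloc(66) // rounded up to 128 B
	c := b.alloc(32)
	fmt.Println(a, c)
	b.free(a)
	b.free(c)
	fmt.Println(b.longest[0]) // 128: the whole block is free again
}
```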

An improved Buddy algorithm

I would have liked a plain buddy allocator, but its applicability is not great: it can only allocate sizes of 32 B, 64 B, 128 B, 256 B, 512 B, 1024 B, 2048 B, and 4096 B. The basic buddy algorithm has a very serious drawback: internal fragmentation. For example, a request for 66 B of memory gets a 128 B block. Too wasteful!

So I made some changes and interleaved buddy allocators with different minimum units. For example, if the base allocator's minimum unit size is 16, the sizes it is responsible for are:

16 32 64 128 256 512 1024 2048

If I choose another buddy allocator with a minimum unit of 24, the sizes it is responsible for are:

24 48 96 192 384 768 1536 3072

Put together, memory utilization is much better. By picking buddy allocators with several different minimum unit sizes, the interleaved size classes make up for the internal fragmentation caused by rounding everything up to a power of two. I calculated that once minimum units up to 56 are in use, the memory waste ratio is no more than 1:1.125, probably a little over 1.1111.
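A quick way to sanity-check that figure: merge the size classes of buddy allocators whose minimum units are the ones listed in the table further down, and look at the largest gap between adjacent classes. Reading the "waste ratio" as the adjacent-class ratio and treating 64 B as the cutoff for the small sizes are my own assumptions:

```go
package main

import (
	"fmt"
	"sort"
)

// classesFor lists the size classes served by one buddy allocator with
// the given minimum unit: unit, 2*unit, 4*unit, ... up to the 4 KB page.
func classesFor(unit int) []int {
	var out []int
	for s := unit; s <= 4096; s *= 2 {
		out = append(out, s)
	}
	return out
}

func main() {
	// Minimum units taken from the table in the post.
	units := []int{16, 24, 40, 56, 72, 88, 104, 120, 168}

	seen := map[int]bool{}
	for _, u := range units {
		for _, s := range classesFor(u) {
			seen[s] = true
		}
	}
	var classes []int
	for s := range seen {
		classes = append(classes, s)
	}
	sort.Ints(classes)
	fmt.Println("merged size classes:", classes)

	// Worst-case internal fragmentation is bounded by the largest gap
	// between adjacent classes; above 64 B it never exceeds 9/8.
	worst := 1.0
	for i := 0; i+1 < len(classes); i++ {
		if classes[i] < 64 {
			continue
		}
		if r := float64(classes[i+1]) / float64(classes[i]); r > worst {
			worst = r
		}
	}
	fmt.Printf("worst adjacent-class ratio above 64 B: %.4f\n", worst) // 1.1250
}
```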

But doing so leads to another problem: external fragmentation between the various buddy allocators. Their blocks must add up to exactly 4096 B, or space is wasted.

16 32 64 128 256 512 1024 2048
24 48 96 192 384 768 1536 3072     (768 + 1280)    A
40 80 160 320 640 1280 2560        (640 + 1408)    B
56 112 224 448 896 1792 3584       (896 + 1152)    C
72 144 288 576 1152 2304           (1152 + 896)    D
88 176 352 704 1408 2816           (704 + 1344)  (1408 + 640)    E
104 208 416 832 1664 3328          (1664 + 384)    F
120 240 480 960 1920 3840          ()               G
168 336 672 1344 2688                               H

So I carefully designed a few combinations that add up to exactly 4096 B.

Combination: 1408 1408 1280        E E B
Combination: 1536 1280 1280        A B B
Combination: 896 1152 896 1152     C D C D
Combination: 768 1280 768 1280     A B A B
Combination: 1408 1344 1344

In fact, allocators up to E are enough, that is, the first three combinations. So the selected buddy allocators are:

B 1280
E 1408
A 1536
D 1152
C 896
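A quick check that these block sizes really do tile a 4096 B page using the first three combinations listed above:

```go
package main

import "fmt"

func main() {
	// Block size picked for each buddy allocator, from the list above.
	blocks := map[string]int{
		"A": 1536, "B": 1280, "C": 896, "D": 1152, "E": 1408,
	}

	// The first three combinations from the post; each should sum to
	// exactly one 4096 B page.
	combos := [][]string{
		{"E", "E", "B"},
		{"A", "B", "B"},
		{"C", "D", "C", "D"},
	}
	for _, combo := range combos {
		sum := 0
		for _, name := range combo {
			sum += blocks[name]
		}
		fmt.Println(combo, "=", sum) // each prints 4096
	}
}
```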

Small object Allocation Pool

The buddy algorithm is really used more to make merging convenient when memory is recycled. To speed up allocation, an allocation pool is maintained with a free list for each size class; allocation takes directly from the list, and recycling checks whether a merge is needed.

A large buddy-managed block is cut up into many small objects that are hung onto the free list.
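A minimal sketch of such a pool for a single size class. The 64 B object size and the 1280 B block it carves up are just examples, and refill stands in for actually grabbing a block from the buddy allocator:

```go
package main

import "fmt"

const objSize = 64 // one size class, as an example

// pool keeps a free list of fixed-size objects for one size class.
type pool struct {
	free [][]byte // free list: objects ready for reuse
}

// refill carves a 1280 B block (a placeholder for a buddy-allocated
// block) into objSize pieces and pushes them onto the free list.
func (p *pool) refill() {
	block := make([]byte, 1280)
	for off := 0; off+objSize <= len(block); off += objSize {
		p.free = append(p.free, block[off:off+objSize])
	}
}

// alloc pops an object off the free list, refilling it when empty.
func (p *pool) alloc() []byte {
	if len(p.free) == 0 {
		p.refill()
	}
	obj := p.free[len(p.free)-1]
	p.free = p.free[:len(p.free)-1]
	return obj
}

// release puts an object back on the free list. A fuller version would
// also decide here whether the underlying buddy block can be merged.
func (p *pool) release(obj []byte) {
	p.free = append(p.free, obj)
}

func main() {
	var p pool
	a := p.alloc()
	b := p.alloc()
	fmt.Println(len(a), len(b), "free objects left:", len(p.free))
	p.release(a)
	p.release(b)
	fmt.Println("free objects after release:", len(p.free))
}
```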

Page Allocator

This works the same way as in the Go runtime: each allocation hands out some number of whole pages, and on recycling a bitmap is used to merge adjacent free pages.
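A minimal sketch of a page allocator along those lines, assuming a fixed arena of 256 pages tracked by a plain boolean array as the bitmap; a real allocator would presumably request memory from the OS and pack the bitmap into actual bits:

```go
package main

import "fmt"

const numPages = 256 // sketch: a fixed arena of 256 pages (1 MB at 4 KB each)

// pageAlloc hands out runs of whole pages, tracking them with a bitmap
// of used pages. Freeing just clears bits, so adjacent free pages merge
// automatically the next time the bitmap is scanned.
type pageAlloc struct {
	used [numPages]bool // true = page is allocated
}

// alloc finds n consecutive free pages and returns the index of the
// first one, or -1 when no free run is long enough.
func (p *pageAlloc) alloc(n int) int {
	run := 0
	for i := 0; i < numPages; i++ {
		if p.used[i] {
			run = 0
			continue
		}
		run++
		if run == n {
			start := i - n + 1
			for j := start; j <= i; j++ {
				p.used[j] = true
			}
			return start
		}
	}
	return -1
}

// free releases n pages starting at page index start.
func (p *pageAlloc) free(start, n int) {
	for j := start; j < start+n; j++ {
		p.used[j] = false
	}
}

func main() {
	var p pageAlloc
	a := p.alloc(3) // pages 0-2
	b := p.alloc(2) // pages 3-4
	fmt.Println(a, b)
	p.free(a, 3)
	fmt.Println(p.alloc(3)) // reuses pages 0-2 after the free
}
```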

Well, enough nonsense; on to the code.
