Deep understanding of memory allocation in the Go language

Preface: This is the first article since the column launched. The following articles will dig into Go language internals: memory management, GC, concurrent programming, and more.

One, Memory model overview

First, let's define several concepts:
(1) Cache (mcache): thread-private. Every allocation first queries the cache; for small objects, if free memory can be found here, no lock is taken. Here is the structure of the cache (omitting GC-related fields and others):

type mcache struct {
    alloc     [numSpanClasses]*mspan // spans used for allocation
    spanclass spanClass              // size class and noscan (uint8)
}

Reading the source, we can see that the cache holds an array indexed from 0 to N; each element points to a linked list whose nodes represent memory blocks. Blocks on the same list are of equal size, and different lists hold different sizes (the size is looked up in the class_to_size array in sizeclass.go, using the span class value as the index). Next, let's look at the allocation logic here. We can find mallocgc in malloc.go (source not pasted); the logic is actually quite simple and the source comments are very clear. It mainly distinguishes between small objects and large objects (small objects are further divided into tiny and small). A large object is allocated directly from the heap, while a small object picks the list of memory blocks matching its size class and takes a free node from that list. The handling of tiny objects is interesting: a tiny object cannot contain pointers, because several tiny objects are packed into one block and cannot be scanned individually by the garbage collector.
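To make the tiny/small/large split concrete, here is a minimal sketch of the dispatch just described. It is not the runtime's code: the 16-byte and 32 KB thresholds mirror the runtime's maxTinySize and maxSmallSize constants, and the function only labels which path an allocation of a given size would take.

const (
    maxTinySize  = 16       // tiny-object cut-off, in bytes
    maxSmallSize = 32 << 10 // small-object cut-off: 32 KB
)

func allocKind(size uintptr, hasPointers bool) string {
    switch {
    case size > maxSmallSize:
        return "large: allocated directly from the heap"
    case size <= maxTinySize && !hasPointers:
        return "tiny: packed together with other tiny objects in one block"
    default:
        return "small: served from the mcache free list of its size class"
    }
}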

(2) Central (mcentral): shared between threads. If the cache cannot find free memory, it requests a batch of small-object memory from central and puts it into the local cache; this process needs to take a lock. (Note: a central manages whole spans, i.e. runs of pages.) The structure is as follows (again, only the memory-related fields):

type mcentral struct {
    spanclass spanClass
    nonempty  mSpanList // list of spans that still have free memory
    empty     mSpanList // list of spans with no free memory
}

Going back to mallocgc in malloc.go, we find that if the cache does not have enough memory, it calls the refill function in mcache.go, which in turn calls cacheSpan in mcentral.go; the central for the corresponding size class then hands back a suitable span of memory. At this layer, the granularity of memory management is the span.
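Here is a toy model of that refill path, using simplified stand-in types rather than the real mcache/mcentral: the thread-local cache hands out blocks without locking, and only when a size class runs dry does it take the central's lock to pull over a small batch.

import "sync"

type toyCentral struct {
    mu   sync.Mutex
    free [][]byte // free blocks of one size class, shared by all threads
}

type toyCache struct {
    free [][]byte // thread-local free blocks: no lock needed to hand these out
}

func (c *toyCache) alloc(central *toyCentral) []byte {
    if len(c.free) == 0 { // local cache exhausted: refill from central
        central.mu.Lock()
        n := len(central.free)
        if n > 4 {
            n = 4 // take a small batch, not the whole central list
        }
        c.free = append(c.free, central.free[:n]...)
        central.free = central.free[n:]
        central.mu.Unlock()
    }
    if len(c.free) == 0 {
        return nil // central is empty too: the real code would go on to the heap
    }
    b := c.free[len(c.free)-1]
    c.free = c.free[:len(c.free)-1]
    return b
}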

(3) Heap (mheap): shared between threads. If there is no free memory left in central, memory is requested from the heap, and this process also needs a lock. Here is the structure definition (listing several important memory-related fields):

type mheap struct {
    free      [_MaxMHeapList]mSpanList // array of free-span lists for spans of at most 127 pages
    freelarge mTreap                   // spans of more than 127 pages are tracked in a treap instead
    allspans  []*mspan                 // records every span that has been allocated
    spans     []*mspan                 // maps page numbers in the arena to their mspan
    bitmap        uintptr // points to one byte past the end of the bitmap
    bitmap_mapped uintptr
    // a few more arena-related fields are not listed here
}

At this layer, the unit of requested memory is the page; the pages obtained from the heap are contiguous and are managed by a span. This logic can be seen in the alloc_m function of mheap.go. When central requests memory from the heap, the most suitable span is chosen according to the number of pages needed. (The original post reproduces a detailed diagram of the mcache/mcentral/mheap hierarchy at this point.)
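The "most suitable span by page count" idea can be sketched like this (not the runtime's allocator, just the lookup): scanning the per-page-count free lists upward from the requested size returns the smallest span that is big enough, and anything that cannot be satisfied there would fall through to the treap of large spans. The real allocator also splits off and returns any excess pages of the chosen span; that step is omitted here.

const _MaxMHeapList = 128 // the heap keeps per-page-count free lists below this bound

type toySpan struct{ npages int }

// free[i] holds spans of exactly i pages, like mheap.free.
func allocSpan(free *[_MaxMHeapList][]*toySpan, npages int) *toySpan {
    for i := npages; i < _MaxMHeapList; i++ {
        if n := len(free[i]); n > 0 {
            s := free[i][n-1]
            free[i] = free[i][:n-1]
            return s // smallest available span with at least npages pages
        }
    }
    return nil // would fall back to freelarge (the treap) or grow the heap
}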

Two, Memory allocator components: mspan and fixalloc

These two are the basic building blocks of the memory allocator; we run into them all the time when reading the code, so let's explain them separately.
(1) mspan
This is the object used to manage pages; it covers a contiguous run of pages. The structure is defined as follows:

type mspan struct {
    next *mspan     // next span in list, or nil if none
    prev *mspan     // previous span in list, or nil if none
    list *mSpanList // scheduled for removal after 1.9, not covered here
    startAddr uintptr // start address of the span (its first page)
    npages    uintptr // number of pages held by this span
}

The above lists a few important fields. We can see that the next and prev pointers of the span structure are used to build a doubly linked list; in essence, a span exists only to manage a contiguous run of pages, which is relatively simple.
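A sketch of such a doubly linked list (simplified stand-in types, not the runtime's mSpanList) shows why it is handy here: a span can be inserted at the front or unlinked from anywhere in constant time, which is exactly what the free lists need.

type spanNode struct {
    next, prev *spanNode
    npages     uintptr
}

type spanList struct{ first *spanNode }

func (l *spanList) insert(s *spanNode) {
    s.next, s.prev = l.first, nil
    if l.first != nil {
        l.first.prev = s
    }
    l.first = s
}

func (l *spanList) remove(s *spanNode) {
    if s.prev != nil {
        s.prev.next = s.next
    } else {
        l.first = s.next
    }
    if s.next != nil {
        s.next.prev = s.prev
    }
    s.next, s.prev = nil, nil
}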
(2) fixalloc
This is a fixed-size allocator used to manage two specific kinds of objects, mcache and mspan. Its structure is defined as follows:
type fixalloc struct {
    size   uintptr
    first  func(arg, p unsafe.Pointer) // called first time p is returned
    arg    unsafe.Pointer
    list   *mlink
    chunk  uintptr
    nchunk uint32
    inuse  uintptr // in-use bytes now
    stat   *uint64
    zero   bool // zero allocations
}

list is a free list whose nodes are fixed-size memory blocks (for cachealloc the block size is sizeof(mcache), for spanalloc it is sizeof(mspan)). Next, look at the alloc function in mfixalloc.go; the logic is roughly as follows: when fixalloc is asked for an mcache or an mspan, it first checks whether the free list is empty. If it is not empty, it pops a block off the list and returns it; if it is empty, it checks whether the current chunk still has enough free memory and carves the block out of the chunk, grabbing a new chunk first when needed.
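That logic can be sketched roughly as below. This is a simplification, not mfixalloc.go itself (the real code carves raw memory with unsafe pointers); the 16 KB chunk size is just an illustrative value.

const chunkSize = 16 << 10 // illustrative chunk size

type fixAlloc struct {
    size  int      // fixed block size, e.g. sizeof(mcache) or sizeof(mspan)
    list  [][]byte // free list of recycled blocks
    chunk []byte   // unused remainder of the current chunk
}

func (f *fixAlloc) alloc() []byte {
    if n := len(f.list); n > 0 { // free list not empty: reuse a block
        b := f.list[n-1]
        f.list = f.list[:n-1]
        return b
    }
    if len(f.chunk) < f.size { // chunk exhausted: grab a fresh one
        f.chunk = make([]byte, chunkSize)
    }
    b := f.chunk[:f.size]
    f.chunk = f.chunk[f.size:]
    return b
}

func (f *fixAlloc) free(b []byte) { f.list = append(f.list, b) } // recycle a block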

Three, summary

This took two days to write and is still quite rough; I believe that with further study I will learn more, and I will come back and revise it then.
A senior colleague once said that reading source code should not stop at the level of just reading it: think about what you gained from the reading, and think about how you would design it if it were up to you.
My first gain is a better ability to read code; the next is learning the treap data structure; and beyond that, the subtle ideas the authors weighed when designing this memory model, such as how to compute the size class quickly and how to avoid false sharing. Problems like false sharing are very well hidden, yet very important; a small sketch of the usual padding trick follows below.
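As a small illustration of the false-sharing countermeasure (the general idea, not the runtime's exact layout): when per-class structures sit next to each other in one array and are updated by different threads, padding each entry out to a full cache line keeps one thread's writes from invalidating its neighbour's line. The 64-byte line size below is an assumption.

type paddedCounter struct {
    n uint64       // hot field, frequently updated by one thread
    _ [64 - 8]byte // pad the struct out to one 64-byte cache line
}

var perClass [8]paddedCounter // each entry now occupies its own cache line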
