Linux memory model


I. Preface

The Linux kernel supports three memory models: flat memory, discontiguous memory, and sparse memory. A memory model describes, from the CPU's point of view, how physical memory is laid out across the physical address space, and correspondingly how the Linux kernel manages that physical memory. One caveat: this article focuses on shared-memory systems, that is, systems in which all CPUs share a single physical address space.

The article is organized as follows: chapter II defines the basic terminology needed to discuss memory models clearly, chapter III describes how the three memory models work, and the last chapter analyzes the code. The code is taken from the 4.4.6 kernel; for architecture-specific code we use ARM64.

II. Terminology related to memory models

1. What is a page frame?

One of the most important jobs of an operating system is managing the resources in the computer system, and memory is among the most important of those resources. In Linux, physical memory is managed in units of pages; the exact page size depends on the hardware and the kernel configuration, with 4K being the classic setting. Physical memory is therefore divided into page-sized regions, and each page-sized region of physical memory is called a page frame. The kernel creates a struct page data structure for every physical page frame to track how that physical page is being used: does it hold the kernel's text segment? A page table of some process? A file cache page? Or is it free? ...

Each page frame is described by exactly one struct page, and the kernel defines the page_to_pfn and pfn_to_page macros to convert between page frame numbers and struct page pointers. How this conversion is performed depends on the memory model; we describe it for the three memory models of the Linux kernel in chapter III.

2. What is a PFN?

For a computer system, the physical address space runs from 0 up to the maximum physical address the system can support. On an ARM system with 32-bit physical addresses, the physical address space is 4G; on an ARM64 system supporting 48 physical address bits, it is 256T. Of course, such a large physical address space is not all used for memory: some of it is I/O space (although some CPU architectures have their own independent I/O address space). The physical address range occupied by memory is therefore a finite interval and cannot cover the entire physical address space. However, as memory keeps growing, the 4G physical address space of 32-bit systems can no longer satisfy memory requirements, which gives rise to the concept of high memory; we will not expand on it here.

PFN is the abbreviation of page frame number. As described above, physical memory is divided into page-sized regions (page frames), and each page frame is given a number: that number is the PFN. If physical memory starts at address 0, then the page frame with PFN 0 is the one beginning at physical address 0. If physical memory starts at address X, the first page frame's number is (X >> PAGE_SHIFT).
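As a minimal sketch of this relationship (assuming the classic 4K page size mentioned above; the helper name is ours, not the kernel's), the PFN is simply the physical address with the in-page offset shifted away:

```c
#include <stdint.h>

#define PAGE_SHIFT 12                      /* 4K pages: the classic setting */

/* A PFN is the physical address with the in-page offset shifted away. */
static inline uint64_t phys_to_pfn(uint64_t phys_addr)
{
    return phys_addr >> PAGE_SHIFT;
}
```

So if memory starts at physical address X, the first page frame has number X >> PAGE_SHIFT, exactly as stated above.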

3. What is NUMA?

There are two options when designing the memory architecture of a multiprocessor system. One is UMA (Uniform Memory Access): all processors in the system share a single, uniform physical memory space, and the access time to any memory address is the same regardless of which processor initiates the access. NUMA (Non-Uniform Memory Access) differs from UMA in that the time to access a memory address depends on the relative position of the memory and the processor: a processor on a given node accesses its local memory faster than remote memory on other nodes.

III. The three memory models in the Linux kernel

1. What is the flat memory model?

If, from the point of view of any processor in the system, the physical address space it sees when accessing physical memory is continuous, with no holes, then the memory model of that system is flat memory. Under this model, managing physical memory is relatively simple: each physical page frame is abstracted by a struct page, and there is a single array of struct page (mem_map) in which each entry corresponds to one physical page frame. With flat memory, the relationship between the PFN and the mem_map array index is linear (offset by a constant; if memory starts at physical address 0, the PFN is the array index). Converting from a PFN to the corresponding struct page is therefore very easy, and vice versa; see the definitions of page_to_pfn and pfn_to_page. In addition, the flat memory model has only one node (struct pglist_data), so that the same mechanism as the discontiguous memory model can be used. The picture below depicts flat memory:
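The linear relationship can be sketched with a toy mem_map (the array size and offset value here are made up for illustration; only the names mem_map and the pfn/page macros mirror the kernel):

```c
#define TOY_PFN_OFFSET 0x80000UL   /* hypothetical: RAM starts at 0x80000000 with 4K pages */

struct page { unsigned long flags; };   /* reduced stand-in for the kernel's struct page */

static struct page mem_map[1024];       /* toy mem_map covering 1024 page frames */

/* Linear relation between PFN and mem_map index, as in flat memory. */
static inline struct page *toy_pfn_to_page(unsigned long pfn)
{
    return mem_map + (pfn - TOY_PFN_OFFSET);
}

static inline unsigned long toy_page_to_pfn(struct page *page)
{
    return (unsigned long)(page - mem_map) + TOY_PFN_OFFSET;
}
```

Both directions are a single addition or subtraction, which is why flat memory is the cheapest model to manage.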

It should be emphasized that the memory occupied by the struct page array lies in the directly mapped region, so the operating system does not need to set up page tables for it separately.

2. What is the discontiguous memory model?

If the physical address space a CPU sees when accessing physical memory has holes and is discontinuous, then the memory model of that computer system is discontiguous memory. Generally, NUMA systems choose the discontiguous memory model, but the two concepts are actually different: NUMA is about the positional relationship between memory and processors and is, strictly speaking, unrelated to the memory model. It only means that memory and processors on the same node are more tightly coupled (faster access), and therefore multiple nodes must be managed. Discontiguous memory is essentially an extension of the flat memory model: the entire physical memory address space consists of large chunks of memory with holes in between, and each contiguous chunk of memory address space belongs to one node (within a single node, the memory model is flat). The picture below depicts discontiguous memory:

Thus, in this memory model there are multiple node descriptors (struct pglist_data), and the NODE_DATA macro returns the struct pglist_data of a given node. The physical memory managed by each node is described by the node_mem_map member of its struct pglist_data (conceptually similar to mem_map in flat memory). Converting a PFN to the corresponding struct page is now a little more complicated: we first derive the node ID from the PFN, use that ID to find the node's pglist_data, and then find the node's page array, after which the method is the same as for flat memory.
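The lookup just described can be sketched like this (two toy nodes with a hole between them; the node sizes, PFN ranges, and helper names are all hypothetical):

```c
struct page { unsigned long flags; };

struct pglist_data {
    struct page   *node_mem_map;        /* page array for this node */
    unsigned long  node_start_pfn;      /* first PFN covered by this node */
    unsigned long  node_spanned_pages;  /* number of PFNs in the node */
};

static struct page node0_pages[256], node1_pages[256];

/* Two nodes with a large hole between them. */
static struct pglist_data nodes[2] = {
    { node0_pages, 0x00000UL, 256 },
    { node1_pages, 0x80000UL, 256 },
};

/* Stand-in for arch_pfn_to_nid(): find which node a PFN falls into. */
static int toy_pfn_to_nid(unsigned long pfn)
{
    for (int nid = 0; nid < 2; nid++)
        if (pfn >= nodes[nid].node_start_pfn &&
            pfn <  nodes[nid].node_start_pfn + nodes[nid].node_spanned_pages)
            return nid;
    return -1;                          /* PFN lies in a hole */
}

/* PFN -> node ID -> pglist_data -> node_mem_map (assumes the PFN is valid). */
static struct page *toy_pfn_to_page(unsigned long pfn)
{
    struct pglist_data *pgdat = &nodes[toy_pfn_to_nid(pfn)];
    return pgdat->node_mem_map + (pfn - pgdat->node_start_pfn);
}
```

Within each node the conversion is still the flat-memory linear arithmetic; only the extra PFN-to-node step is new.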

3. What is the sparse memory model?

Memory models have gone through an evolution. At first, flat memory abstracted a contiguous address space with a single mem_map[]. With NUMA, the whole discontinuous space was divided into a number of nodes, each node covering a contiguous range of memory addresses, which meant the original single mem_map[] became several mem_map[] arrays. Everything seemed perfect, but the arrival of memory hotplug made this perfect design imperfect: even the mem_map[] within a single node can become discontinuous. In fact, once sparse memory appeared, the discontiguous memory model became much less important; in principle sparse memory can eventually replace discontiguous memory entirely. That replacement is still in progress, and the 4.4 kernel still offers all three memory models to choose from.

Why can sparse memory eventually replace discontiguous memory? Under the sparse memory model, the contiguous address space is divided into sections (for example, of 1G each), and each section can be hotplugged individually, so sparse memory slices the memory address space more finely and supports more discrete, discontiguous memory. Moreover, before sparse memory existed, the concepts of NUMA and discontiguous memory were always entangled: NUMA never stipulated anything about the continuity of its memory, and a discontiguous memory system was not necessarily a NUMA system, yet both configurations were multi-node. With sparse memory we can finally separate continuity from the NUMA concept: a NUMA system can use flat memory or sparse memory, and a sparse memory system can be NUMA or UMA.

The following picture illustrates how sparse memory manages page frames (configured with SPARSEMEM_EXTREME):

(Note: each mem_section pointer points to one page of memory, and one such page holds several struct mem_section data units.)

The entire contiguous physical address space is divided into sections, and within each section the memory is contiguous (that is, flat), so the mem_map page array is attached to the section structure (struct mem_section) rather than to the node structure (struct pglist_data). Of course, whatever the memory model, the correspondence between PFN and struct page must be handled; sparse memory simply adds the section concept in between, turning the conversion into PFN <---> section <---> page.

Let's first look at how to convert from a PFN to a struct page. The kernel statically defines an array of mem_section pointers. A section usually contains many pages, so the PFN must be right-shifted to obtain the section number, and the section number is then used as an index into the mem_section pointer array to find the section data structure corresponding to that PFN. Once the section is found, the corresponding struct page can be located via its section_mem_map. Incidentally, in the beginning sparse memory used a one-dimensional array of memory_section structures (not an array of pointers), which is a very wasteful implementation for especially sparse (CONFIG_SPARSEMEM_EXTREME) systems. In addition, saving pointers is convenient for hotplug: a NULL pointer means the section does not exist. The picture above depicts the case of the one-dimensional mem_section pointer array (configured with SPARSEMEM_EXTREME); for non-SPARSEMEM_EXTREME configurations the concept is similar, and you can read the code yourself.
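A minimal sketch of this lookup, with made-up toy sizes (16 pages per section, 8 sections; note that the real kernel encodes section_mem_map so it can be indexed by the raw PFN directly, while this toy subtracts the in-section offset explicitly):

```c
#include <stddef.h>

#define PFN_SECTION_SHIFT 4                          /* toy: 16 pages per section */
#define PAGES_PER_SECTION (1UL << PFN_SECTION_SHIFT)
#define NR_SECTIONS       8

struct page { unsigned long flags; };
struct mem_section { struct page *section_mem_map; };

/* One present section (number 2) and its page array. */
static struct page sec2_pages[PAGES_PER_SECTION];
static struct mem_section sec2 = { sec2_pages };

/* SPARSEMEM_EXTREME-style pointer array: NULL means "section absent". */
static struct mem_section *mem_sections[NR_SECTIONS] = { [2] = &sec2 };

/* PFN -> page: shift to get the section number, index the pointer array,
 * then use the in-section offset. */
static struct page *toy_pfn_to_page(unsigned long pfn)
{
    struct mem_section *sec = mem_sections[pfn >> PFN_SECTION_SHIFT];
    if (!sec)
        return NULL;                                 /* hole in physical memory */
    return sec->section_mem_map + (pfn & (PAGES_PER_SECTION - 1));
}
```

The NULL check is exactly what makes hotplug convenient: removing a section just clears its pointer.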

Converting from a struct page back to a PFN is a bit more troublesome. The PFN actually consists of two parts: a section index, and the offset of the page within that section. We must first obtain the section index from the struct page; with it we find the corresponding memory_section, and knowing the memory_section we know its section_mem_map, and therefore the offset of the page within that section, from which the PFN can finally be assembled. For the page-to-section-index conversion, sparse memory has two schemes. Let's first look at the classic one, in which the section index is saved in page->flags (configured with SECTION_IN_PAGE_FLAGS). The biggest problem with this approach is that the number of bits in page->flags may not be sufficient, because the flags field already carries a lot of information: the various page flags, the node ID, the zone ID, and now a section ID as well, so a consistent layout cannot be achieved across different architectures. Is there a general-purpose algorithm that works everywhere? That is CONFIG_SPARSEMEM_VMEMMAP. The specific algorithm is illustrated below:

(The picture above is slightly inaccurate: vmemmap points at the first struct page array only when PHYS_OFFSET equals 0; in general there should be an offset. But I'm too lazy to fix the picture, haha.)
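The classic SECTION_IN_PAGE_FLAGS scheme described above can be sketched as a simple bitfield packed into page->flags (the bit positions and widths here are hypothetical, not the kernel's actual layout):

```c
/* Toy layout: section index stored in the upper bits of page->flags,
 * alongside other flag bits (all field positions here are hypothetical). */
#define SECTIONS_WIDTH    8                               /* toy: up to 256 sections */
#define SECTIONS_PGSHIFT  24                              /* toy bit position in flags */
#define SECTIONS_MASK     ((1UL << SECTIONS_WIDTH) - 1)

struct page { unsigned long flags; };

static void set_page_section(struct page *pg, unsigned long sec)
{
    pg->flags &= ~(SECTIONS_MASK << SECTIONS_PGSHIFT);    /* clear old index */
    pg->flags |= (sec & SECTIONS_MASK) << SECTIONS_PGSHIFT;
}

static unsigned long page_to_section(const struct page *pg)
{
    return (pg->flags >> SECTIONS_PGSHIFT) & SECTIONS_MASK;
}

/* Round-trip helper: encode a section index into flags that already
 * carry other bits, then decode it again. */
static unsigned long roundtrip_section(unsigned long sec)
{
    struct page pg = { 0xffUL };          /* pretend some page flags are set */
    set_page_section(&pg, sec);
    return page_to_section(&pg);
}
```

The pain point is visible in the defines: every field packed this way consumes bits of flags, and the bits left over differ from architecture to architecture.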

For the classic sparse memory model, the struct page array of a section occupies memory in the directly mapped region, whose page tables are set up at initialization time, so the allocated page frames already have virtual addresses. With SPARSEMEM_VMEMMAP, however, virtual addresses are allocated from the start: beginning at VMEMMAP_START there is a contiguous virtual address space in which every page has a corresponding struct page, although at first these are only virtual addresses with no physical pages behind them. Therefore, once a section is found, the virtual address of the corresponding struct page is immediately known; but a physical page frame must still be allocated and page tables set up for it, so for this kind of sparse memory the overhead is slightly larger (there is an extra mapping-establishment step).

IV. Code analysis

Our code analysis is based mainly on include/asm-generic/memory_model.h.

1. Flat memory. The code is as follows:

#define __pfn_to_page(pfn)	(mem_map + ((pfn) - ARCH_PFN_OFFSET))
#define __page_to_pfn(page)	((unsigned long)((page) - mem_map) + ARCH_PFN_OFFSET)

As the code shows, the relation between the PFN and the index into the struct page array (mem_map) is linear, offset by the constant ARCH_PFN_OFFSET, which is architecture related. For ARM64 it is defined in the arch/arm64/include/asm/memory.h file, and its definition is of course related to the physical address space occupied by memory (that is, to the definition of PHYS_OFFSET).

2. Discontiguous memory model. The code is as follows:

#define __pfn_to_page(pfn)			\
({	unsigned long __pfn = (pfn);		\
	unsigned long __nid = arch_pfn_to_nid(__pfn);  \
	NODE_DATA(__nid)->node_mem_map + arch_local_page_offset(__pfn, __nid); \
})

#define __page_to_pfn(pg)						\
({	const struct page *__pg = (pg);					\
	struct pglist_data *__pgdat = NODE_DATA(page_to_nid(__pg));	\
	(unsigned long)(__pg - __pgdat->node_mem_map) +			\
	 __pgdat->node_start_pfn;					\
})

The discontiguous memory model needs the node ID; once the node ID is found, everything proceeds just as in the flat memory model. So in the definition of __pfn_to_page, the PFN is first converted to a node ID via arch_pfn_to_nid, and the node's pglist_data is found through the NODE_DATA macro; its node_start_pfn records the first page frame number of that node, which yields the offset of the corresponding struct page within node_mem_map. __page_to_pfn is similar and is left for you to analyze yourself.

3. Sparse memory model. We will skip the code for the classic algorithm and instead look at the code for the SPARSEMEM_VMEMMAP configuration, as follows:

#define __pfn_to_page(pfn)	(vmemmap + (pfn))
#define __page_to_pfn(page)	(unsigned long)((page) - vmemmap)

Simple and clear: the PFN is just the index into vmemmap, the struct page array. For ARM64, vmemmap is defined as follows:

#define vmemmap		((struct page *)VMEMMAP_START - \
			 SECTION_ALIGN_DOWN(memstart_addr >> PAGE_SHIFT))

There is no doubt that we need to set aside a region of the virtual address space to hold the struct page array covering every page in the entire span of physical memory; that is what the definition of VMEMMAP_START is for.
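The bias in the ARM64 definition can be sketched as follows (the window size, RAM base, and helper names are hypothetical; the bias is computed through uintptr_t to keep the toy well defined, whereas the kernel does raw pointer arithmetic):

```c
#include <stdint.h>

#define PAGE_SHIFT    12
#define MEMSTART_ADDR 0x80000000UL      /* hypothetical physical RAM base */

struct page { unsigned long flags; };

/* Stands in for the virtual window starting at VMEMMAP_START. */
static struct page backing[4096];

/* Mirrors the idea of the ARM64 definition: bias the window downwards by
 * (memstart_addr >> PAGE_SHIFT) struct-page slots, so that adding a raw
 * PFN lands on the right entry even though RAM does not start at 0. */
static struct page *toy_vmemmap(void)
{
    uintptr_t base = (uintptr_t)backing;
    return (struct page *)(base -
        (MEMSTART_ADDR >> PAGE_SHIFT) * sizeof(struct page));
}

/* __pfn_to_page under SPARSEMEM_VMEMMAP is literally vmemmap + pfn. */
static struct page *toy_pfn_to_page(unsigned long pfn)
{
    return toy_vmemmap() + pfn;
}
```

This is exactly why the __pfn_to_page macro above can be a single addition: all of the offset bookkeeping is folded into the vmemmap constant itself.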

