Linux Memory Management Models


I. Preface

The Linux kernel supports three memory models: the flat memory model, the discontiguous memory model, and the sparse memory model. The corresponding kernel configuration options are CONFIG_FLATMEM, CONFIG_DISCONTIGMEM, and CONFIG_SPARSEMEM, defined in include/asm-generic/memory_model.h. A memory model describes how physical memory is laid out, and mainly concerns the conversion between PFNs and struct page structures.

II. Terminology related to memory models

1. What is a page frame?

In Linux, the memory addresses that applications see and use are what we usually call virtual addresses, while the addresses the kernel actually manages for physical memory are physical addresses. An application issues virtual addresses; the MMU translates them into physical addresses, and the CPU accesses physical memory through those physical addresses. Physical memory is managed in units of pages; the page size depends on the hardware and the kernel configuration, with 4 KB being the classic setting. Physical memory is therefore divided into page-sized regions, and each such region is called a page frame. For every physical page frame, the kernel keeps a struct page data structure that tracks how that page frame is used: is it holding the kernel text segment, a page table of some process, a file cache page, or is it free? Page frames and struct page structures correspond one to one, and the kernel defines the macros page_to_pfn and pfn_to_page to convert between a page frame number and its struct page.

2. What is PFN?

For a computer system, the physical address space runs from 0 up to the maximum address the system can support. On an ARM system with 32-bit physical addresses, the physical address space is 4 GB; on an ARM64 system that supports 48 physical address bits, it is 256 TB. Of course, such a large physical address space is not entirely used for memory; some of it is I/O space. The physical address space occupied by memory is therefore a finite interval and cannot cover the entire physical address space.

PFN is the abbreviation of "page frame number". As described above, physical memory is divided into page-sized regions called page frames, and each page frame is numbered; that number is the PFN. If physical memory starts at address 0, then the page frame beginning at physical address 0 has PFN 0. If physical memory starts at address X, the number of the first page frame is (X >> PAGE_SHIFT).

3. What is NUMA?

There are two options when designing the memory architecture of a multiprocessor system. One is UMA (Uniform Memory Access): all processors in the system share a single, consistent physical memory space, and the access time to any memory address is the same no matter which processor initiates the access. NUMA (Non-Uniform Memory Access) differs from UMA in that the access time to a memory address depends on the relative position of the memory (DIMM) and the processor; for a processor on a given node, accessing local memory is faster than accessing remote memory.

III. The three memory models in the Linux kernel

1. What is the flat memory model?

If, from the point of view of any processor in the system, the physical address space it accesses is continuous, with no holes, then the memory model of that system is flat memory. Under this model, the management of physical memory is relatively simple. Each physical page frame is described by a struct page, so there is a struct page array (mem_map) in which each entry corresponds to one physical page frame. With flat memory, the relationship between a PFN and the mem_map array index is linear (they differ by a fixed offset; if memory starts at physical address 0, the PFN equals the array index). Converting from a PFN to the corresponding struct page is therefore trivial, and vice versa; see the definitions of page_to_pfn and pfn_to_page. In addition, the flat memory model has only one node (struct pglist_data), so that the same mechanism can be shared with the discontiguous memory model. The picture below depicts flat memory:

It should be emphasized that the memory occupied by the struct page array is located in the directly mapped region, so the operating system does not need to create page tables for it.

2. What is the discontiguous memory model?

If the physical address space the CPU accesses contains holes and is discontinuous, then the system's memory model is discontiguous memory. In general, computer systems with a NUMA architecture choose the discontiguous memory model, but the two concepts are actually different. NUMA emphasizes the positional relationship between memory and processors and in fact says nothing about the memory model, except that memory and processors on the same node are more tightly coupled (faster access), which requires multi-node management. Discontiguous memory is essentially an extension of the flat memory model: the entire physical address space consists of large chunks of memory with holes in between, and each contiguous memory address space belongs to a node (within a single node, the memory model is flat). The picture below depicts discontiguous memory:

Thus, in this memory model there are multiple node descriptors (struct pglist_data), and the macro NODE_DATA returns the struct pglist_data of a given node. The physical memory managed by each node is described by the node_mem_map member of its struct pglist_data (similar in concept to mem_map in flat memory). The conversion from a PFN to the corresponding struct page is now a little more complicated: we first derive the node ID from the PFN, use that ID to find the struct pglist_data, and then look up the node's page array, after which the method is the same as with flat memory.

3. What is the sparse memory model?

The memory model went through an evolution. At first, flat memory abstracted a contiguous address space (mem_map[]). With NUMA, the entire discontinuous space was divided into several nodes, each a contiguous memory address space, which means the original single mem_map[] became several mem_map[] arrays. Everything seemed perfect, but the advent of memory hotplug made the once-perfect design imperfect, because even the mem_map[] within a single node can become discontinuous. In fact, since the appearance of sparse memory, the discontiguous memory model has become much less important; sparse memory is supposed to eventually replace it. That replacement is still in progress: the 4.4 kernel still offers all three memory models to choose from.

Why can sparse memory eventually replace discontiguous memory? Under the sparse memory model, the contiguous address space is divided into sections (for example, 1 GB each), each of which can be hot-plugged, so sparse memory slices the memory address space more finely and supports more discrete, discontiguous memory. In addition, before sparse memory appeared, NUMA and discontiguous memory were always entangled: NUMA does not stipulate anything about the continuity of its memory, and a discontiguous memory system is not necessarily a NUMA system, yet both configurations are multi-node. With sparse memory, we can finally decouple continuity from the concept of NUMA: a NUMA system can use flat memory or sparse memory, and a sparse memory system can be NUMA or UMA.

The following picture illustrates how sparse memory manages page frames (configured with SPARSEMEM_EXTREME):

(Note: each mem_section pointer should point to one page, and one page holds several struct mem_section data units.)

The entire physical address space is divided into sections, within each of which the memory is contiguous (that is, flat memory), so the page array (mem_map) is attached to the section structure (struct mem_section) rather than to the node structure (struct pglist_data). Of course, whatever the memory model, the correspondence between PFNs and struct page must be handled; sparse memory merely adds a section concept, turning the conversion into PFN <---> section <---> page.

Let us first look at how to convert from a PFN to a struct page. The kernel statically defines an array of mem_section pointers. A section usually contains multiple pages, so converting a PFN to a section number means shifting the PFN right by some number of bits; that section number is then used as an index into the mem_section pointer array to find the section data structure corresponding to the PFN. Once the section is found, the corresponding struct page can be located through its section_mem_map. Incidentally, sparse memory initially used a one-dimensional array of memory_section structures (not an array of pointers), which is very wasteful on particularly sparse (CONFIG_SPARSEMEM_EXTREME) systems. The pointer array is also convenient for hotplug: a NULL pointer means the section does not exist. The picture above depicts a one-dimensional array of mem_section pointers (configured with SPARSEMEM_EXTREME).

Converting from a struct page to a PFN is a bit more of a hassle. A PFN is really split into two parts: a section index and the page offset within that section. We first obtain the section index from the struct page, which yields the corresponding mem_section; from the mem_section we know the section_mem_map, which in turn gives the page's offset within the section, and the two parts can finally be combined into the PFN. For the page-to-section-index conversion, sparse memory has two schemes. The classic scheme stores the section ID in page->flags (configured with SECTION_IN_PAGE_FLAGS). The biggest problem with this approach is that the bits in page->flags may not suffice, because the flags field already carries too much information: the various page flags, the node ID, the zone ID, and now a section ID as well, and the bit layout cannot be made consistent across different architectures. Is there a general algorithm that works the same everywhere? That is CONFIG_SPARSEMEM_VMEMMAP.

With the classic sparse memory model, the struct page array for a section occupies memory in the directly mapped region; its page tables are set up at initialization time, and a virtual address is assigned along with each allocated page frame. With SPARSEMEM_VMEMMAP, however, the virtual addresses are allocated from the start: vmemmap is the beginning of a contiguous virtual address space in which every page has a corresponding struct page — at first only a virtual address, with no physical address behind it. Therefore, once a section is found, the virtual address of its struct page array is immediately known; a physical page frame must still be allocated for it and page tables set up, so for this kind of sparse memory the overhead is slightly larger (there is an extra mapping-establishment step).
