Linux Kernel Memory Management Architecture


The memory management subsystem is perhaps the most complex subsystem in the Linux kernel. It has many responsibilities — page mapping, page allocation, page reclaim, page swapping, hot and cold pages, emergency pages, page fragmentation management, page caching, page statistics, and more — and it also faces demanding performance requirements. This article looks at memory management from three angles: the memory management hardware architecture, the partitioning of the address space, and the memory management software architecture, and attempts a macro-level summary of both the hardware and software sides.

Memory Management Hardware Architecture

Because memory management is a core function of the kernel, its performance has been optimized not only in software but also through a great deal of hardware design. The memory hierarchy of current mainstream processors reflects this.

As the hierarchy shows, the hardware provides three optimized paths for reading and writing memory.

1) First, the L1 cache supports virtual-address indexing, so the virtual address (VA) issued by the CPU can be used to look up the L1 cache directly, without first being translated into a physical address (PA), which improves cache lookup efficiency. Of course, indexing a cache by VA has drawbacks, such as aliasing and security issues, which the CPU must compensate for with special design; see "Computer Architecture: A Quantitative Approach" for the details.

2) If the L1 cache misses, an address translation is required to convert the VA to a PA. Linux manages memory mappings through page tables, but the page tables themselves live in memory; if every translation had to walk them in memory, efficiency would be very low. The CPU therefore accelerates address translation with a TLB hardware unit, which caches recently used VA-to-PA translations.
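
The translation that the TLB accelerates can be sketched as a software page-table walk. Below is a minimal sketch of a toy two-level scheme with 4KB pages, a 10-bit directory index, and a 10-bit table index — the names and structures are hypothetical illustrations, not the kernel's actual pgd/pte types:

```c
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define PT_ENTRIES 1024u

/* Toy two-level page table: a directory of pointers to tables of
 * physical frame numbers.  A frame number of 0 means "not mapped". */
typedef struct {
    uint32_t *tables[PT_ENTRIES];
} toy_mm;

/* Walk the tables: VA -> PA, or -1 on a (simulated) page fault. */
static int64_t toy_translate(const toy_mm *mm, uint32_t va)
{
    uint32_t pde = va >> 22;                   /* top 10 bits  */
    uint32_t pte = (va >> PAGE_SHIFT) & 0x3FF; /* next 10 bits */
    uint32_t off = va & (PAGE_SIZE - 1);       /* page offset  */

    if (!mm->tables[pde] || !mm->tables[pde][pte])
        return -1;
    return ((int64_t)mm->tables[pde][pte] << PAGE_SHIFT) | off;
}

/* Map one virtual page to a physical frame number. */
static void toy_map(toy_mm *mm, uint32_t va, uint32_t pfn)
{
    uint32_t pde = va >> 22;
    if (!mm->tables[pde])
        mm->tables[pde] = calloc(PT_ENTRIES, sizeof(uint32_t));
    mm->tables[pde][(va >> PAGE_SHIFT) & 0x3FF] = pfn;
}
```

Every memory access in principle requires such a walk; the TLB's job is to cache the (pde, pte) lookup result so the tables in memory are rarely touched.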

3) After the PA is obtained, the L2 cache is searched for the data. The L2 cache is generally an order of magnitude larger than the L1 cache, so its hit rate is higher. A hit avoids a memory access and improves access efficiency.
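
How a physically addressed cache locates a line comes down to index/tag arithmetic on the PA. A toy direct-mapped cache with 64-byte lines and 1024 sets (hypothetical parameters; real L2 caches are set-associative):

```c
#include <stdint.h>

#define LINE_SHIFT 6       /* 64-byte cache lines      */
#define SET_SHIFT  10      /* log2(1024) sets          */
#define SETS       (1u << SET_SHIFT)

typedef struct {
    uint64_t tags[SETS];
    int      valid[SETS];
} toy_cache;

/* Split a physical address into set index and tag. */
static uint32_t cache_index(uint64_t pa) { return (pa >> LINE_SHIFT) % SETS; }
static uint64_t cache_tag(uint64_t pa)   { return pa >> LINE_SHIFT >> SET_SHIFT; }

/* Look up PA; on a miss, install the line.  Returns 1 on hit, 0 on miss. */
static int cache_access(toy_cache *c, uint64_t pa)
{
    uint32_t idx = cache_index(pa);
    uint64_t tag = cache_tag(pa);
    int hit = c->valid[idx] && c->tags[idx] == tag;

    c->tags[idx]  = tag;
    c->valid[idx] = 1;
    return hit;
}
```

Two accesses that fall within the same 64-byte line hit the same set and tag, which is why spatial locality pays off at every cache level.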

It can be seen that, to optimize memory access efficiency, modern processors introduce hardware modules such as multi-level caches and the TLB (as in the hardware block diagram of an 8-core MIPS processor). Each of these modules contains a great deal of design detail that this article will not go into; interested readers can consult "Computer Architecture: A Quantitative Approach" and similar books.

Memory-Mapped Address Space Partitioning

Depending on how memory is used and in what scenarios, the kernel divides the memory-mapped address space into several parts, each with its own start and end addresses, allocation interfaces, and usage scenarios. A common 32-bit address space is partitioned as follows.

  • DMA dynamically allocated address space: Some DMA devices cannot access all of memory because of their limited addressing capability. For example, early ISA devices could only perform DMA within a 24-bit address space and so could only reach the first 16MB of memory. The kernel therefore sets aside a region for DMA allocations, the DMA zone. Allocations from it are requested through the kmalloc interface with the GFP_DMA flag.
  • Directly mapped dynamically allocated address space: For access efficiency and other reasons, the kernel maps memory with a simple linear mapping. But given a 32-bit CPU's addressing capability (a 4GB address space) and a kernel address space starting at 3GB, the kernel's address space is scarce: once physical memory approaches 1GB, not all of it can be direct-mapped. The part that cannot be mapped directly forms the highmem zone. The region between the DMA zone and the highmem zone is the normal zone, which is used primarily for the kernel's dynamic memory allocation; allocations from it are requested through the kmalloc interface.
  • High-memory dynamically allocated address space: High-memory allocation returns memory that is virtually contiguous but physically discontiguous. It is generally used by dynamically loaded kernel modules and drivers: because the kernel may run for a long time, physical memory becomes fragmented, so requesting a large physically contiguous range of pages is difficult and prone to failure. High-memory allocation provides several interfaces depending on the caller's needs:
    • vmalloc: the caller specifies the allocation size; the pages and the virtual address are assigned implicitly;
    • vmap: the caller specifies an array of pages; the virtual address is assigned implicitly;
    • ioremap: the caller specifies a physical address and size; the virtual address is assigned implicitly.
  • Persistent mapping address space: A kernel context switch is accompanied by TLB flushing, which can degrade performance, yet some modules that use high memory also have high performance requirements. The persistent mapping space is not flushed on kernel context switches, so mapping high memory through it is more efficient. Mappings are requested through the kmap interface. The difference between kmap and vmap is that vmap can map a set of pages — discontiguous pages behind a contiguous virtual range — while kmap maps only a single page into the virtual address space. kmap is mainly used in modules with higher performance requirements for high-memory access, such as the FS and net subsystems.
  • Fixed mapping address space: The problem with persistent mapping is that it may sleep, so it cannot be used in contexts that must not block, such as interrupt context or spinlock critical sections. To solve this, the kernel also sets aside the fixed mappings, whose interfaces do not sleep. The fixed mapping space is mapped through the kmap_atomic interface. kmap_atomic is used much like kmap and serves modules such as MM, FS, and net that access high memory with high performance requirements and cannot sleep.
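
The zone split above can be illustrated with a simplified model of how an allocation request is steered to a zone by its flags. This is a sketch only — the names are invented stand-ins, and the real kernel's gfp_t handling and zone fallback lists are considerably more involved:

```c
/* Simplified 32-bit layout from the text: a 16MB DMA zone, a
 * direct-mapped "normal" zone up to ~896MB, and highmem above. */
enum toy_zone { TOY_ZONE_DMA, TOY_ZONE_NORMAL, TOY_ZONE_HIGH };

#define TOY_GFP_DMA     0x1u  /* caller needs DMA-able memory  */
#define TOY_GFP_HIGHMEM 0x2u  /* caller can take highmem pages */

/* Pick a zone from the request flags: DMA requests must come from
 * the DMA zone; highmem-tolerant requests prefer highmem so the
 * scarce direct-mapped zone is preserved; everything else uses
 * the normal zone. */
static enum toy_zone toy_pick_zone(unsigned flags)
{
    if (flags & TOY_GFP_DMA)
        return TOY_ZONE_DMA;
    if (flags & TOY_GFP_HIGHMEM)
        return TOY_ZONE_HIGH;
    return TOY_ZONE_NORMAL;
}

/* Which zone does a physical address fall in, under this layout? */
static enum toy_zone toy_addr_zone(unsigned long pa)
{
    if (pa < 16ul * 1024 * 1024)
        return TOY_ZONE_DMA;
    if (pa < 896ul * 1024 * 1024)
        return TOY_ZONE_NORMAL;
    return TOY_ZONE_HIGH;
}
```

The 896MB boundary is the conventional x86-32 split; other architectures draw the line elsewhere, which is exactly why the kernel hides the layout behind uniform allocation interfaces.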

Different CPU architectures partition the address space differently, but to keep those architectural differences invisible to outside modules, the allocation interfaces for each part of the memory address space have consistent semantics across architectures.

Because a 64-bit CPU generally has no need for high memory (though it can still be supported), its address space partitioning differs considerably from a 32-bit CPU's; the MIPS64 kernel address space partitioning is one example.

Memory Management Software Architecture

The core of kernel memory management is the allocation and reclaim of memory, which is handled by two systems: page management and object management. The page management system is a two-level hierarchy and the object management system a three-level hierarchy; in both, the allocation cost and the negative impact on the CPU cache and TLB increase from top to bottom.

    • Page management hierarchy: a two-level structure consisting of the per-CPU hot and cold page cache and the buddy system. It is responsible for the caching, allocation, and reclaim of memory pages.
    • Object management hierarchy: a three-level structure consisting of the per-CPU cache, the slab cache, and the buddy system. It is responsible for the caching, allocation, and reclaim of objects, where an object is a block of memory smaller than a page.

Memory release follows the same hierarchy in reverse. When an object is freed, it is first released to the per-CPU cache, then to the slab cache, and finally back to the buddy system.
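
The object path above can be sketched as a small free-list cache sitting in front of a backing allocator. Here plain malloc/free stand in for the buddy system, and a single bounded free list stands in for the per-CPU and slab layers — the real slab allocator is far richer:

```c
#include <stdlib.h>

#define CACHE_DEPTH 8   /* how many freed objects we park for reuse */

/* Freed objects are linked through their own first bytes. */
typedef struct toy_obj { struct toy_obj *next; } toy_obj;

typedef struct {
    toy_obj *freelist;
    int      cached;    /* objects currently parked          */
    size_t   obj_size;  /* must be >= sizeof(toy_obj)        */
} toy_objcache;

static void *toy_alloc(toy_objcache *c)
{
    if (c->freelist) {              /* fast path: reuse a cached object */
        toy_obj *o = c->freelist;
        c->freelist = o->next;
        c->cached--;
        return o;
    }
    return malloc(c->obj_size);     /* slow path: go to the backing allocator */
}

static void toy_free(toy_objcache *c, void *p)
{
    if (c->cached < CACHE_DEPTH) {  /* park the object for reuse */
        toy_obj *o = p;
        o->next = c->freelist;
        c->freelist = o;
        c->cached++;
    } else {
        free(p);                    /* cache full: release downstream */
    }
}
```

The fast path touches only the cache-hot free list, which is the whole point of layering caches above the buddy system: most allocations and frees never reach it.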

The block diagram has three main modules: the buddy system, the slab allocator, and the per-CPU (hot and cold page) cache.
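
The buddy system's split/merge discipline rests on simple page-frame-number arithmetic: a block of order n (2^n pages) and its buddy differ only in bit n of the starting PFN. A sketch of that arithmetic, with the free-list bookkeeping around it omitted:

```c
/* Smallest order whose block (2^order pages) covers `pages` pages. */
static unsigned buddy_order(unsigned pages)
{
    unsigned order = 0;
    while ((1u << order) < pages)
        order++;
    return order;
}

/* PFN of the buddy of the order-`order` block starting at `pfn`:
 * flip bit `order`. */
static unsigned long buddy_pfn(unsigned long pfn, unsigned order)
{
    return pfn ^ (1ul << order);
}

/* After merging a block with its buddy, the combined (order+1)
 * block starts at the lower of the two PFNs: clear bit `order`. */
static unsigned long merged_pfn(unsigned long pfn, unsigned order)
{
    return pfn & ~(1ul << order);
}
```

Splitting runs the same arithmetic the other way: a free block of order n+1 yields two order-n buddies, one at the original PFN and one at buddy_pfn of it. This XOR trick is what lets the kernel find merge partners in O(1) when pages are freed.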
