Distribution of the Linux kernel's physical memory space


The Linux kernel and device drivers run in kernel space, while applications run in user space. The two cannot exchange data simply by passing pointers, because Linux uses virtual memory: user-space data may have been swapped out, so when kernel code dereferences a user-space pointer, the corresponding data may not be present in memory.
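For example, a character device driver's write handler copies the user buffer into kernel memory with copy_from_user() instead of dereferencing the user pointer directly. The sketch below is a minimal, hypothetical handler (demo_write is not a real kernel function) illustrating the pattern:

    #include <linux/fs.h>
    #include <linux/slab.h>
    #include <linux/uaccess.h>

    /* Hypothetical write handler: copy the user buffer into kernel space. */
    static ssize_t demo_write(struct file *filp, const char __user *ubuf,
                              size_t count, loff_t *ppos)
    {
            char *kbuf = kmalloc(count, GFP_KERNEL);

            if (!kbuf)
                    return -ENOMEM;

            /* copy_from_user() safely handles user pages that are not present:
             * it faults them in, or returns the number of bytes not copied. */
            if (copy_from_user(kbuf, ubuf, count)) {
                    kfree(kbuf);
                    return -EFAULT;
            }

            /* ... operate on kbuf entirely in kernel space ... */

            kfree(kbuf);
            return count;
    }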

Linux Kernel address space partitioning

Typically, the 32-bit Linux address space is split so that 0~3 GB belongs to user space and 3~4 GB to kernel space. Note that this is the 32-bit split; the 64-bit kernel divides its address space differently.

1. x86 physical address space layout

The space at the top of the physical address space is occupied by the I/O memory mappings (MMIO) of PCI devices; their size and layout are determined by the PCI specification. The 640 KB ~ 1 MB range is occupied by the BIOS and the VGA adapter.

When the Linux system initializes, it creates a page descriptor (struct page) for each physical page frame, according to the actual amount of physical memory; all of these descriptors together form the mem_map array.
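As a rough sketch (assuming a kernel-module context), the helpers pfn_to_page() and page_to_pfn() convert between a physical page frame number and its struct page descriptor in mem_map:

    #include <linux/mm.h>

    /* Illustrative only: look up the struct page for a page frame number. */
    static void inspect_pfn(unsigned long pfn)
    {
            struct page *page;

            if (!pfn_valid(pfn))        /* the frame may not exist or be managed */
                    return;

            page = pfn_to_page(pfn);    /* with a flat memory model this is an
                                           index into the mem_map array */
            pr_info("pfn %lu -> struct page %p -> pfn %lu\n",
                    pfn, page, page_to_pfn(page));
    }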

In addition, for different purposes, the Linux kernel divides all physical pages into three memory management zones: ZONE_DMA, ZONE_NORMAL, and ZONE_HIGHMEM.

ZONE_DMA covers 0 ~ 16 MB; the physical pages in this zone are reserved for DMA by I/O devices. DMA pages must be managed separately because DMA accesses memory with physical addresses, bypasses the MMU, and needs a contiguous buffer. To guarantee a physically contiguous buffer, a dedicated region is carved out of the physical address space for DMA.
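As an illustration, a driver that needs such a buffer does not pick pages out of ZONE_DMA by hand; it asks the allocator for DMA-capable, physically contiguous memory. A minimal sketch (the struct device pointer is assumed to come from the driver's probe routine):

    #include <linux/dma-mapping.h>
    #include <linux/slab.h>

    /* Illustrative: obtain a physically contiguous buffer usable for DMA. */
    static void *get_dma_buffer(struct device *dev, dma_addr_t *bus_addr)
    {
            /* Preferred interface: the DMA API chooses a suitable zone and
             * returns both the CPU virtual address and the bus address. */
            void *cpu_addr = dma_alloc_coherent(dev, 4096, bus_addr, GFP_KERNEL);

            /* Older style: request ZONE_DMA pages from the buddy allocator:
             *     void *buf = kmalloc(4096, GFP_KERNEL | GFP_DMA);          */

            return cpu_addr;    /* release later with dma_free_coherent() */
    }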

ZONE_NORMAL covers 16 MB ~ 896 MB; the physical pages in this zone can be used by the kernel directly.

ZONE_HIGHMEM covers 896 MB to the end of physical memory; this is high memory, which the kernel cannot use directly.

2. Layout of the Linux kernel virtual address space

Below the kernel image, 16 MB of kernel space is reserved for DMA operations. The 128 MB at the top of kernel space consists of three parts: the vmalloc area, the persistent kernel mapping area, and the temporary kernel mapping area.
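Of these, the vmalloc area is the one drivers meet most often: vmalloc() returns memory that is contiguous in this region of kernel virtual addresses but may be assembled from scattered physical pages. A minimal sketch contrasting it with kmalloc():

    #include <linux/slab.h>
    #include <linux/vmalloc.h>

    /* Illustrative: vmalloc memory is only virtually contiguous; kmalloc
     * memory comes from the directly mapped region and is physically
     * contiguous as well. */
    static void vmalloc_example(void)
    {
            void *v = vmalloc(1 << 20);           /* 1 MB in the vmalloc area */
            void *k = kmalloc(4096, GFP_KERNEL);  /* from the direct mapping  */

            if (v)
                    vfree(v);
            if (k)
                    kfree(k);
    }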

Because ZONE_NORMAL is mapped directly into the kernel's linear address space, the kernel places frequently used data, such as kernel code, the GDT, IDT, PGD, and the mem_map array, in ZONE_NORMAL. Infrequently used data, such as user data and page tables (PTs), is placed in ZONE_HIGHMEM instead, and a mapping (via kmap()) is established only when that data actually needs to be accessed. Similarly, when the kernel accesses an I/O device's storage space, it uses ioremap() to map the MMIO region at the top of the physical address space into the vmalloc area of kernel space, and tears the mapping down when it is finished.
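A minimal sketch of both mappings mentioned above, kmap() for a high-memory page and ioremap() for device MMIO; the MMIO physical address and length are made-up example values:

    #include <linux/highmem.h>
    #include <linux/io.h>

    /* Illustrative: map a ZONE_HIGHMEM page with kmap(), and a device's MMIO
     * range with ioremap(); both mappings are torn down after use. */
    static void mapping_example(struct page *high_page)
    {
            void *vaddr;
            void __iomem *regs;

            vaddr = kmap(high_page);             /* map the highmem page */
            /* ... access the highmem page through vaddr ... */
            kunmap(high_page);

            regs = ioremap(0xfebf0000, 0x1000);  /* MMIO into the vmalloc area */
            if (regs) {
                    /* ... readl()/writel() through regs ... */
                    iounmap(regs);
            }
    }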

3. Layout of the Linux user virtual address space

The code segment of a user process typically starts at virtual address 0x08048000; leaving the lowest addresses unmapped makes it easy to catch null-pointer dereferences. Above the code segment come the data segment, the uninitialized data (BSS) segment, the heap, the stack, and the program arguments and environment variables.
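This layout is easy to observe from user space. The small program below prints one address from each region (on a 32-bit, non-PIE build the code address is near 0x08048000; modern PIE builds randomize it):

    #include <stdio.h>
    #include <stdlib.h>

    int initialized_global = 1;   /* data segment             */
    int uninitialized_global;     /* uninitialized data (BSS) */

    int main(void)
    {
            int stack_var = 0;               /* stack */
            void *heap_var = malloc(16);     /* heap  */

            printf("code : %p\n", (void *)main);
            printf("data : %p\n", (void *)&initialized_global);
            printf("bss  : %p\n", (void *)&uninitialized_global);
            printf("heap : %p\n", heap_var);
            printf("stack: %p\n", (void *)&stack_var);

            free(heap_var);
            return 0;
    }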

4. Relationship between Linux virtual addresses and physical addresses

Linux divides the 4 GB linear address space into two parts: 0 ~ 3 GB for user space and 3 ~ 4 GB for kernel space.

Because paging is enabled, the kernel must first establish a mapping before it can access physical memory through virtual addresses. It is obviously impossible to map the entire physical address space into the 1 GB of kernel linear address space. The kernel therefore maps the 0 ~ 896 MB physical range directly into its own linear address space, so the physical pages in ZONE_DMA and ZONE_NORMAL can be accessed at any time. The remaining 128 MB of kernel linear address space is not enough to map all of ZONE_HIGHMEM, so Linux uses dynamic mapping: physical pages from ZONE_HIGHMEM are mapped into the last 128 MB of kernel space, and each mapping is released after use so that other physical pages can be mapped there. This costs some efficiency, but it allows the kernel to reach the whole physical address space.
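For the directly mapped 0 ~ 896 MB range, converting between a kernel virtual address and its physical address is just an offset by PAGE_OFFSET (0xC0000000 with the 3G/1G split), exposed as virt_to_phys()/__pa(). A minimal sketch, assuming a kernel-module context:

    #include <linux/io.h>
    #include <linux/printk.h>
    #include <linux/slab.h>

    /* Illustrative: memory from ZONE_NORMAL lives in the direct mapping, so
     * its physical address is simply (virtual address - PAGE_OFFSET). */
    static void direct_map_example(void)
    {
            void *kaddr = kmalloc(128, GFP_KERNEL);

            if (kaddr) {
                    phys_addr_t phys = virt_to_phys(kaddr);

                    pr_info("virt %p -> phys %pa\n", kaddr, &phys);
                    kfree(kaddr);
            }
    }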

5. Understanding /proc/buddyinfo

cat /proc/buddyinfo produces output like the following:

Node 0, zone DMA 0 4 5 4 4 3 ...

Node 0, zone Normal 1 0 0 1 101 8 ...

Node 0, zone HighMem 2 0 0 1 1 0 ...

Here Node is the node number in a NUMA system; this machine has only Node 0. zone is the memory zone within each node, typically the three zones DMA, Normal, and HighMem. The remaining columns give the number of free blocks of each order in the buddy system. For example, in the column with index 2 of the zone DMA row (counting columns from 0), the value 5 means there are 5 free blocks of 2^2 contiguous pages, i.e. 5*2^2 free pages, or 5*2^2*PAGE_SIZE bytes of memory.

The calculation method is:

free memory = (value in the column) * 2^(column index) * PAGE_SIZE, where column indexes are counted from 0; so for the first column, the free memory is (value) * 2^0 * PAGE_SIZE.
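A worked example applying this formula to the zone DMA row shown above (a minimal userspace sketch; the page size is assumed to be 4096 bytes):

    #include <stdio.h>

    int main(void)
    {
            unsigned long counts[] = { 0, 4, 5, 4, 4, 3 };  /* columns 0..5     */
            unsigned long page_size = 4096;                 /* getconf PAGESIZE */
            unsigned long long bytes = 0;

            /* Column 2 alone contributes 5 * 2^2 * 4096 = 81920 bytes. */
            for (int order = 0; order < 6; order++)
                    bytes += counts[order] * (1UL << order) * page_size;

            printf("free memory in this zone: %llu bytes\n", bytes);
            return 0;
    }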

Questions:

1. Does user space (a user process) have the concept of high memory?

User processes have no concept of high memory; high memory exists only in kernel space. A user process can address at most 3 GB (its user-space portion of the virtual address space), while kernel code can access all physical memory.

2. Is there high memory in the 64-bit kernel?

Currently there is no high memory in the 64-bit Linux kernel, because the 64-bit kernel can directly map far more memory (over 512 GB). High memory would only appear if a machine had more physical memory than the kernel address space can map.

3. How much physical memory can a user process access? How much can kernel code access?

On a 32-bit system, a user process can access at most 3 GB, while kernel code can access all physical memory.

On a 64-bit system, a user process can access more than 512 GB, and kernel code can access all physical memory.

4. How is high memory related to physical, logical, and linear addresses?

High memory is a concept tied only to the kernel's logical (linear) address layout; it has no direct relationship with physical addresses.

5. Why not give all the address space to the kernel?

If all the address space were given to the kernel, how would user processes use memory? And how would we guarantee that kernel memory use and user-process memory use do not conflict?

Because the total virtual address space of all user processes is much larger than the available physical memory, only the most frequently used parts are associated with physical page frames. This is not a problem, because most programs only touch a small fraction of the memory they could address.

When data on disk is mapped into a process's virtual address space, the kernel must provide a data structure that associates a region of the virtual address space with the location of the corresponding data. For example, when a text file is mapped, the mapped virtual memory area must be associated with the region of the filesystem's disk where the file's contents are stored.

Naturally, this is a simplified picture, because file data on disk is usually not stored contiguously but is scattered across many small areas. The kernel uses the address_space data structure to provide a set of methods for reading data from the backing store, for example from the filesystem. address_space thus forms an auxiliary layer that presents the mapped data as a contiguous linear region to the memory management subsystem.
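A minimal userspace sketch of such a mapping: mmap() associates a file region with the process address space without reading any data up front; the file's address_space supplies the pages when they are first touched (the file name is just an example):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
            int fd = open("/etc/hostname", O_RDONLY);
            struct stat st;
            char *p;

            if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0)
                    return 1;

            /* Nothing is read here; the mapping only sets up the association. */
            p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
            if (p == MAP_FAILED)
                    return 1;

            fwrite(p, 1, st.st_size, stdout);   /* first access faults pages in */

            munmap(p, st.st_size);
            close(fd);
            return 0;
    }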

Allocating and filling pages only when they are needed is called demand paging. It relies on interaction between the processor and the kernel and uses several data structures.

The process is as follows:

To translate an address in a virtual memory space into a physical address, the CPU takes two steps (on x86, for example):

First, given a logical address (which is really an offset within a segment; this must be understood!), the CPU's segmentation unit converts the logical address into a linear address.

Second, the CPU's paging unit converts the linear address into the final physical address.

Performing two conversions like this is cumbersome and, strictly speaking, unnecessary, since a linear address space could be given to the process directly. The redundancy exists because Intel keeps the architecture fully backward compatible with segmentation.
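A small worked example of the second step (on Linux the first step is trivial, because all segment bases are 0, so the logical offset already equals the linear address). With two-level x86 paging, a 32-bit linear address is split into a page directory index, a page table index, and a page offset:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint32_t linear = 0x08048123;                /* example address      */
            uint32_t dir    = linear >> 22;              /* bits 31..22: PGD idx */
            uint32_t table  = (linear >> 12) & 0x3ff;    /* bits 21..12: PT idx  */
            uint32_t offset = linear & 0xfff;            /* bits 11..0: offset   */

            printf("dir=%u table=%u offset=0x%x\n", dir, table, offset);
            return 0;
    }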

    • The process tries to access a memory address in its user address space. The linear address obtained above is used to look up the page table and determine the physical address, but the page table entry cannot supply one (there is no associated page in physical memory).
    • The processor then raises a page fault exception, which is delivered to the kernel.
    • The kernel examines the data structures describing the faulting region of the process address space, finds the appropriate backing store, or determines that the access was genuinely invalid (unmapped or not permitted).
    • A physical memory page is allocated, and the required data is read in from the backing store to fill it.
    • With the help of the page table, the physical page is incorporated into the user process's address space, and the application resumes execution.

These actions are transparent to the user process; in other words, the process does not notice whether a page was already in physical memory or had to be loaded via demand paging.
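Demand paging is easy to observe from user space: an anonymous mmap() only reserves address space, and each first touch of a page raises a minor page fault, which shows up in the process's ru_minflt counter. A minimal sketch:

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/resource.h>

    int main(void)
    {
            size_t len = 64 * 4096;
            char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            struct rusage before, after;

            if (p == MAP_FAILED)
                    return 1;

            getrusage(RUSAGE_SELF, &before);
            for (size_t i = 0; i < len; i += 4096)
                    p[i] = 1;                  /* first touch -> minor fault */
            getrusage(RUSAGE_SELF, &after);

            printf("minor faults while touching pages: %ld\n",
                   after.ru_minflt - before.ru_minflt);

            munmap(p, len);
            return 0;
    }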

The following issues may need to be addressed throughout the process:

1) How the system detects that a page required by the process is not in main memory (the page table mechanism);
2) When a page fault occurs, how the missing page is brought into main memory (the page fault interrupt mechanism);
3) When a page must be replaced, which strategy is used to choose the page to evict (the page replacement algorithm).

Page Table mechanism

Status (present) bit: indicates whether the page is currently in memory (0 or 1);
Access bit: records whether, or how often, the page has been accessed recently (used when choosing pages to swap out);
Modify (dirty) bit: indicates whether the page has been modified while in memory;
External storage address: records where the page lives on external storage, i.e. the block number on disk rather than in memory.
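The sketch below is not kernel code; it merely collects the fields listed above into one structure to make their roles concrete (on x86 the present, accessed, and dirty bits are bits 0, 5, and 6 of a real page table entry):

    #include <stdint.h>

    /* Conceptual page table entry, mirroring the fields described above. */
    struct pte_info {
            uint32_t present  : 1;   /* status bit: page is in memory           */
            uint32_t accessed : 1;   /* access bit: referenced recently         */
            uint32_t dirty    : 1;   /* modify bit: written since being loaded  */
            uint32_t frame;          /* physical frame number when present      */
            uint64_t swap_location;  /* backing-store block number when swapped */
    };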

Page fault interrupt mechanism

When the program executes, it first consults the page table; when the status bit shows that the page is not in main memory, a page fault occurs. The fault is handled much like an ordinary interrupt:
Save the context (CPU state);
Handle the interrupt (the interrupt handler loads the missing page);
Restore the context and return to the faulting instruction to continue execution.
