MMU Paging principle

The paging mechanism of a modern processor often looks very complex, especially at first glance. To support flexible and efficient memory management by the operating system, the paging mechanism has indeed become fairly elaborate; its underlying algorithm, however, is quite concise and clear. As I understand it, the memory paging mechanism grew from a simple idea into its real form in just a few steps:

1. Virtual memory space. So that each process can have its own separate memory space, the CPU performs an address translation on every memory access: PA = f(VA), mapping a virtual address to a physical (real) address.

2. Optimize time efficiency. Memory accesses are extremely frequent, so the mapping function f must be simple, otherwise it would seriously hurt performance. A simple lookup-table algorithm (the page table) is therefore used, analogous to an array subscript selecting an element.

3. Optimize space efficiency. Mapping every address individually is clearly unacceptable, so the mapping is done at a fixed granularity (4 KB pages); the relative offset within a page is left unchanged (the low 12 bits pass through untranslated).

4. Optimize space efficiency again. A single flat page table still wastes a lot of memory (4 MB per process on 32-bit x86), so the remaining 20 address bits are split into two parts (10 bits vs. 10 bits) and the method of step 3 is applied again, forming a two-level page table. If that is still not enough (as on 64-bit systems), the address can be subdivided further.

5. Optimize time efficiency again. The page tables are too large to fit inside the processor and can only be stored in RAM. An address translation would then require several extra memory accesses, which is unacceptable for performance, so a cache-like mechanism (the TLB) keeps recently translated addresses. A small sketch of the whole lookup appears after this list.
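
To make the steps concrete, here is a minimal sketch in C of the translation PA = f(VA) with a two-level table and a tiny TLB, assuming the 32-bit 10+10+12 split from the table below. The structures, field names, and the 64-entry TLB size are invented for illustration; real page-table entries also carry present, permission, and dirty bits, and physical memory is not normally addressable through plain pointers the way this toy model pretends.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical, simplified model of a 32-bit two-level walk (10+10+12 split).
 * An entry here is just a base address, with 0 meaning "not mapped", and
 * physical memory is assumed to be directly dereferenceable. */

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)            /* 4 KB pages */

typedef struct {
    uint32_t *pgd;                               /* top-level table, 1024 entries */
} address_space;

/* A tiny direct-mapped "TLB": caches recent translations by index. */
typedef struct { uint32_t vpn; uint32_t pfn; bool valid; } tlb_entry;
static tlb_entry tlb[64];

/* Translate a virtual address; returns false if the page is not mapped. */
static bool translate(address_space *as, uint32_t va, uint32_t *pa)
{
    uint32_t vpn    = va >> PAGE_SHIFT;          /* 20-bit virtual page number */
    uint32_t offset = va & (PAGE_SIZE - 1);      /* low 12 bits pass through   */

    /* Step 5: try the TLB first to avoid walking the tables in memory. */
    tlb_entry *e = &tlb[vpn % 64];
    if (e->valid && e->vpn == vpn) {
        *pa = (e->pfn << PAGE_SHIFT) | offset;
        return true;
    }

    /* Steps 2-4: two table lookups, indexed by the top 10 and middle 10 bits. */
    uint32_t pgd_index = (va >> 22) & 0x3FF;
    uint32_t pte_index = (va >> 12) & 0x3FF;

    uint32_t pte_table_base = as->pgd[pgd_index];
    if (pte_table_base == 0)
        return false;                            /* no second-level table */

    uint32_t *pte_table = (uint32_t *)(uintptr_t)pte_table_base;
    uint32_t page_base  = pte_table[pte_index];
    if (page_base == 0)
        return false;                            /* page not present */

    /* Refill the TLB and compose the physical address: PA = f(VA). */
    e->vpn = vpn; e->pfn = page_base >> PAGE_SHIFT; e->valid = true;
    *pa = page_base | offset;
    return true;
}

The point of the sketch is that, stripped of hardware details, the whole mechanism is two array lookups plus a cached shortcut, which is exactly the "lookup-table mapping" described above.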

After these few steps, the complex paging mechanism of the MMU emerges. If we look through the appearance to the essence, then no matter how complex it seems, it is really nothing more than a lookup-table mapping. Whether it is the two-level page table of a 32-bit system or the three- or four-level page table of a 64-bit system, the abstract process can be represented as an array-to-element mapping: PA = f[VA].

Overall, paged address mapping amounts to nothing more than cutting the virtual address into several parts: the last part is the page offset, and the remaining n parts index an n-level page table. Different processors split the address differently, and the same processor may support several different splits; for example, x86 processors with PAE can address more than 4 GB of memory. The table below summarizes the virtual-address split for common processors (a short decoding example follows the table):

Number  Processor       Page size  Address bits  Page-table levels  Virtual address split
1       x86             4 KB       32            2                  10+10+12
2       x86 (extended)  4 MB       32            1                  10+22
3       x86 (PAE)       4 KB       36            3                  2+9+9+12
4       x86 (PAE ext)   2 MB       36            2                  2+9+21
5       Alpha           8 KB       43            3                  10+10+10+13
6       IA-64           4 KB       39            3                  9+9+9+12
7       PPC64           4 KB       41            3                  10+10+9+12
8       SH64            4 KB       41            3                  10+10+9+12
9       x86-64          4 KB       48            4                  9+9+9+9+12

Note: rows 1 and 2 can coexist, rows 3 and 4 can coexist, 5 with
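
As an illustration of how one of these splits is applied, the following short C snippet decodes a 48-bit x86-64 virtual address into the 9+9+9+9+12 fields from row 9 of the table. The example address is arbitrary, and the PML4/PDPT/PD/PT names are just the conventional labels for the four levels; nothing here is taken from any particular kernel's code.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t va = 0x00007f1234567abcULL;      /* arbitrary example address */

    uint64_t pml4 = (va >> 39) & 0x1FF;       /* bits 47..39: level-4 index */
    uint64_t pdpt = (va >> 30) & 0x1FF;       /* bits 38..30: level-3 index */
    uint64_t pd   = (va >> 21) & 0x1FF;       /* bits 29..21: level-2 index */
    uint64_t pt   = (va >> 12) & 0x1FF;       /* bits 20..12: level-1 index */
    uint64_t off  = va & 0xFFF;               /* bits 11..0 : page offset   */

    printf("PML4=%llu PDPT=%llu PD=%llu PT=%llu offset=0x%llx\n",
           (unsigned long long)pml4, (unsigned long long)pdpt,
           (unsigned long long)pd,   (unsigned long long)pt,
           (unsigned long long)off);
    return 0;
}

The other rows of the table differ only in the shift amounts and field widths; the decoding pattern itself stays the same.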
