"Computer operating System" summary three (memory management)

Source: Internet
Author: User
Memory management includes memory-space management and virtual memory management.

Memory-space management covers the basic concepts, swapping and overlays, contiguous allocation, and the non-contiguous allocation methods (paging management, segmentation management, and combined segment-page management).

Virtual memory management includes the concept of virtual memory, request (demand) paging management, page replacement algorithms, page allocation policies, working sets, and thrashing.
3.1 Concept of memory management
Memory management is one of the most important and complex parts of operating system design. Although computer hardware has been developing rapidly and memory capacity keeps growing, it is still impossible to place all the programs and data required by every user process and by the system into main memory at once, so the operating system must partition and allocate memory space rationally and efficiently. This partitioning and dynamic allocation of memory by the operating system is what is meant by memory management.

Effective memory management is vital in the design of multiprogramming systems: it not only makes memory convenient for users to use and raises memory utilization, but also logically expands memory through virtual-memory techniques.

Memory management provides the following functions:
- Allocation and reclamation of memory space: the operating system takes charge of allocating and managing main memory, freeing programmers from the chores of storage allocation and improving programming efficiency.
- Address translation: in a multiprogramming environment, the logical addresses in a program generally cannot coincide with the physical addresses in memory, so storage management must provide address translation, turning logical addresses into the corresponding physical addresses.
- Expansion of memory space: memory is logically enlarged using virtual storage techniques or automatic overlay techniques.
- Storage protection: each job is guaranteed to run only within its own storage space.
Before getting into specific memory management schemes, one needs to understand the fundamentals of how a process comes to run.

Program loading and linking
Creating a process begins with loading its program and data into memory. Turning a user source program into a program that can execute in memory usually takes the following steps:
- Compilation: the compiler translates the user source code into a number of object modules.
- Linking: the linker combines the compiled object modules with the library functions they need to form a complete load module.
- Loading: the loader loads the load module into memory for execution.
The three-step process is shown in Figure 3-1.



Figure 3-1 Processing steps for a user program
There are three ways to link a program:
- Static linking: before the program runs, the object modules and the library functions they need are linked into a single complete executable, which is never taken apart afterwards.
- Load-time dynamic linking: the object modules produced by compilation are linked as they are loaded into memory; loading and linking proceed side by side.
- Run-time dynamic linking: linking of some object modules is deferred until they are actually needed during execution. The advantages are that modules are easy to modify and update, and easy to share.
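As a concrete illustration of run-time dynamic linking, here is a minimal sketch using the POSIX dlopen/dlsym interface, which binds a symbol only when the running program first needs it. The library name libm.so.6 and the symbol cos are illustrative choices, not anything prescribed by the text:

```c
#include <stdio.h>
#include <dlfcn.h>   /* POSIX dynamic linking: dlopen, dlsym, dlclose */

int main(void) {
    /* The math library is not linked at build time; it is bound here,
       at run time, only when this code path actually executes. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) { fprintf(stderr, "%s\n", dlerror()); dlclose(handle); return 1; }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);   /* the module can be unmapped when no longer needed */
    return 0;
}
```

Build with something like `gcc demo.c -ldl`; the linker records no dependency on the math library, which is exactly the point.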
There are likewise three ways of loading a module into memory:

1) Absolute loading. If, at compile time, it is known where the program will reside in memory, the compiler produces object code with absolute addresses. The absolute loader places the program and data into memory exactly according to the addresses in the load module. Since the logical addresses in the program are identical to the actual memory addresses, the addresses in the program and data need not be modified.

Absolute loading suits only single-programming environments. The absolute addresses used in the program can be fixed at compile or assembly time, or assigned directly by the programmer. Typically, however, the program uses symbolic addresses, which are converted into absolute addresses at compile or assembly time.

2) Relocatable loading. In a multiprogramming environment, the addresses of an object module all start from 0, and the other addresses in the program are relative to that starting address; relocatable loading should then be used. The module is loaded into an appropriate place in memory according to the current memory situation. The process of modifying the instructions and data of the target program at load time is called relocation; because the address transformation is usually done once, at load time, it is also called static relocation, as shown in Figure 3-2(a).


Figure 3-2 Types of relocation
Static relocation has the characteristic that a job must be allocated all of the memory space it requires when it is loaded; if there is not enough memory, the job cannot be loaded. Furthermore, once a job has entered memory it cannot move within memory during its entire run, nor can it request additional memory space.

3) Dynamic run-time loading, also called dynamic relocation. If a program may be moved while in memory, dynamic loading is required. After the load module is brought into memory, the loader does not immediately convert its relative addresses into absolute addresses, but defers the conversion until the program actually executes; all addresses brought into memory therefore remain relative addresses. This approach needs the support of a relocation register, as shown in Figure 3-2(b).

Dynamic relocation has these characteristics: a program can be assigned to discontiguous storage areas; only part of a program's code need be loaded before it can run, and memory can then be allocated dynamically as needed during execution; program segments are easy to share; and the user can be offered an address space much larger than main storage.

Logical address space and physical address space
After compilation, each object module is addressed starting from unit 0; these are the relative (or logical) addresses of the module. When the linker combines the modules into a complete executable target program, it arranges them in order, and their relative addresses form a unified logical address space, again starting from unit 0. User programs and programmers need only know logical addresses; the concrete mechanisms of memory management are completely transparent to them and concern only the system programmer. Different processes may have identical logical addresses, because the same logical addresses can be mapped to different locations in main memory.

The physical address space is the collection of physical storage units in memory; it is the final destination of address translation, and at run time a process fetches instructions and accesses data in main memory through physical addresses. When the loader loads executable code into memory, it must translate logical addresses into physical addresses, a process known as address relocation.

Memory protection
Before allocating memory, the operating system must be protected from user processes, and user processes must be protected from one another. This protection can be achieved with a relocation register and a boundary-address (limit) register. The relocation register holds the smallest physical address, and the limit register holds the largest permissible logical address value. Every logical address must be less than the value in the limit register: the memory management hardware dynamically compares each logical address with the limit register, and if the address is not out of bounds, it is added to the value of the relocation register to yield the physical address, which then goes to the memory unit, as shown in Figure 3-3.

When the CPU scheduler selects a process for execution, the dispatcher loads the relocation register and the limit register with the proper values. Every logical address is then checked against these two registers, guaranteeing that the operating system and other users' programs and data are unaffected by this process's execution.
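A minimal sketch of what this register pair does on every memory reference. The register values and the trap-by-abort behaviour are illustrative assumptions, not any particular processor's design:

```c
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical values the dispatcher would load for one process. */
static uint32_t relocation_reg = 0x00340000; /* smallest physical address */
static uint32_t limit_reg      = 0x00010000; /* size of the logical space */

/* Done by hardware on every memory reference in a real system. */
uint32_t translate(uint32_t logical) {
    if (logical >= limit_reg) {              /* bounds check first        */
        fprintf(stderr, "addressing error: trap to the OS\n");
        exit(1);
    }
    return logical + relocation_reg;         /* in bounds: add the base   */
}

int main(void) {
    printf("0x%08X\n", translate(0x1234));   /* -> 0x00341234             */
    printf("0x%08X\n", translate(0x20000));  /* beyond the limit: traps   */
    return 0;
}
```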



Figure 3-3 Hardware support for relocation and boundary address registers
3.2 Memory overlays and memory swapping
Overlaying and swapping are two techniques for expanding memory in a multiprogramming environment.

Memory overlays
In early computer systems main memory was very small; even though main memory held only one user program, it still frequently happened that the storage space could not fit the user process. This contradiction can be resolved with the overlay technique.

The basic idea of overlays is that, since a program does not need to access all of its parts and data at every moment (especially a large program), the user space can be divided into one fixed area and several overlay areas. The frequently active portion is placed in the fixed area, and the rest is divided into segments by their call relationships. Segments about to be accessed are placed in an overlay area while the other segments stay in external storage; the system brings them into the overlay area before they are called, replacing the segments previously there.

The overlay technique breaks the restriction that all of a process's information must be loaded into main memory before it can run, but it does not help when the amount of code the program needs at the same time is larger than main memory.

Memory swapping
The basic idea of swapping is to move a program that is in a waiting state (or has been preempted under CPU scheduling) from memory to secondary storage to free up memory space; this is called swapping out. Moving a process that is ready to compete for the CPU from secondary storage into memory is called swapping in. Medium-term (intermediate) scheduling is an application of the swapping technique.

For example, suppose a multiprogramming environment uses round-robin (time-slice) CPU scheduling. When a time slice expires, the memory manager swaps out the process that has just run and swaps another process into the space just freed. Meanwhile the CPU scheduler can give time slices to other processes already in memory. Each process, on using up its time slice, is exchanged with another. Ideally, the memory manager swaps fast enough that there is always a process in memory ready to execute.

There are a few issues to note about swapping (a back-of-the-envelope estimate of swap time follows this list):
- Swapping requires backing storage, usually a fast disk, large enough and offering direct access to the memory images.
- To use the CPU effectively, each process should execute for longer than it takes to swap it; the dominant component of swap time is transfer time, which is proportional to the amount of memory swapped.
- Before a process is swapped out, it must be confirmed to be completely idle.
- Swap space is usually allocated as a whole block of disk, independent of the file system, so it can be used very quickly.
- Swapping is typically triggered when many processes are running and memory space is tight, and pauses when the system load drops.
- Plain swapping is not much used today, but variants of the swapping policy still serve in many systems, such as UNIX.
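To make the "transfer time dominates" point concrete, here is a rough estimate in code; the process image size and disk transfer rate are assumed figures, not from the text:

```c
#include <stdio.h>

int main(void) {
    double image_mb  = 100.0;  /* assumed size of the process image       */
    double disk_mb_s = 50.0;   /* assumed sustained disk transfer rate    */

    double one_way_s = image_mb / disk_mb_s;   /* write the image to disk */
    double total_s   = 2.0 * one_way_s;        /* swap out plus swap in   */

    /* For swapping to pay off, the process should then run for
       noticeably longer than total_s before being swapped again. */
    printf("one-way transfer: %.1f s, full exchange: %.1f s\n",
           one_way_s, total_s);
    return 0;
}
```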
Swapping takes place mainly between different processes (or jobs), whereas overlays operate within a single program or process. Because the overlay technique requires declaring an overlay structure among the program segments, it is not transparent to users and programmers. Modern operating systems therefore resolve the contradiction of main memory being unable to hold user programs through virtual memory; the overlay technique has passed into history, while swapping retains strong vitality in modern operating systems.

3.3 Contiguous memory allocation management
Contiguous allocation means allocating a contiguous region of memory space to a user program. It mainly includes single contiguous allocation, fixed partition allocation, and dynamic partition allocation.

Single contiguous allocation
Here memory is divided into a system area and a user area. The system area serves only the operating system and usually sits in low memory; the user area, i.e., all memory outside the system area, is given over to the user. No memory protection is needed in this scheme.

The advantages of this approach are simplicity, no external fragmentation, and no need for extra technical support, and the overlay technique can be applied on top of it. The disadvantages are that it suits only single-user, single-tasking operating systems, has internal fragmentation, and achieves very low memory utilization.

Fixed partition allocation
Fixed partition allocation is the simplest memory management scheme usable with multiprogramming: the user memory space is divided into several fixed-size regions, each holding exactly one job. Whenever a partition is free, a job of suitable size is chosen from the backlog job queue in external storage and loaded into it, and so on in a loop.



Figure 3-4 Two methods of fixed partition allocation
Fixed partition allocation offers two ways of dividing the partitions, as shown in Figure 3-4:
- Equal partition sizes: suitable when one computer controls several identical objects, but lacking in flexibility.
- Unequal partition sizes: memory is divided into many small partitions, a moderate number of medium partitions, and a few large partitions.
To facilitate memory allocation, the partitions are typically queued by size and a partition description table is set up, each entry of which records a partition's starting address, size, and state (allocated or not), as shown in Figure 3-5(a). When a user program is to be loaded, the table is searched for a free partition of adequate size; if one is found, it is assigned and marked "allocated", and if none is found, the memory allocation is refused. The resulting storage allocation is shown in Figure 3-5(b). A sketch of such a table in code follows.
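A minimal sketch of a fixed partition description table and its lookup; the particular partition sizes, the 32 KB OS area, and the linear search are illustrative assumptions:

```c
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

struct partition {
    uint32_t start;      /* starting physical address */
    uint32_t size;       /* partition size in bytes   */
    bool     allocated;  /* state: allocated or free  */
};

/* Assumed layout: unequal fixed partitions above a 32 KB system area. */
static struct partition table[] = {
    { 0x08000, 16 * 1024, false },
    { 0x0C000, 16 * 1024, false },
    { 0x10000, 32 * 1024, false },
    { 0x18000, 64 * 1024, false },
};

/* Find the first free partition large enough; -1 means refuse the job. */
int allocate(uint32_t job_size) {
    for (int i = 0; i < (int)(sizeof table / sizeof table[0]); i++) {
        if (!table[i].allocated && table[i].size >= job_size) {
            table[i].allocated = true;
            return i;   /* unused tail of the partition = internal fragment */
        }
    }
    return -1;
}

int main(void) {
    int p = allocate(20 * 1024);   /* needs the 32 KB partition */
    printf("job placed in partition %d\n", p);
    return 0;
}
```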

This partitioning method presents two problems. First, a program may be too large to fit into any partition, forcing the user to resort to overlays to make use of memory. Second, main memory utilization is low: when a program is smaller than its fixed partition, it still occupies the whole partition, wasting space inside the partition. This waste is called internal fragmentation.

Fixed partitioning is the simplest storage allocation usable in multiprogramming designs. It has no external fragmentation, but it cannot let several processes share one main memory area, so storage utilization is low. Fixed partition allocation is rarely used in today's general-purpose operating systems, but it still plays a part in some control systems that manage multiple identical objects.



Figure 3-5 Fixed partition description table and memory allocation

Dynamic partition allocation
Dynamic partition allocation, also called variable partition allocation, partitions memory dynamically. Memory is not partitioned in advance; instead, a partition is created dynamically when a process is loaded, sized to exactly fit the process's needs. Hence the size and number of partitions in the system are variable.



Figure 3-6 Dynamic Partitioning
As shown in Figure 3-6, the system has 64MB of memory, of which the low 8MB is reserved for the operating system and the rest is available to users. The first three processes are started and given the space they need, leaving only 4MB of memory, so process 4 cannot be loaded. At some point no process in memory is ready and the CPU idles, so the operating system swaps out process 2 and swaps in process 4. Because process 4 is smaller than process 2, this leaves a 6MB hole in main storage. Later the CPU idles again while main memory cannot accommodate process 2, so the operating system swaps out process 1 and swaps process 2 back in.

Dynamic partitioning works well at the start, but with time it leaves many small blocks scattered through memory. As time goes on, more and more fragments accumulate (the final 4MB and the middle 6MB in Figure 3-6; as processes are swapped in and out, ever smaller holes are likely to appear), and memory utilization falls. These small holes are called external fragmentation: the wasted space lies outside every partition, in contrast with the internal fragmentation of fixed partitioning. External fragmentation can be overcome by compaction, in which the operating system moves and consolidates processes from time to time. This, however, requires the support of dynamic relocation registers and is relatively time-consuming. Compaction is rather like the disk defragmenter in Windows systems, except that the latter compacts external storage space.

When a process is to be loaded or swapped into main memory and several free blocks are large enough, the operating system must decide which block to allocate; this is the allocation policy of dynamic partitioning. Consider the following algorithms (a sketch of the first two appears after this list):
- First fit: free partitions are chained in order of increasing address. The allocator searches sequentially and takes the first free partition large enough.
- Best fit: free partitions are chained in order of increasing capacity; the allocator takes the first free partition that satisfies the request, i.e., the smallest adequate one.
- Worst fit, also called largest fit: free partitions are chained in order of decreasing capacity; the allocator takes the first free partition that satisfies the request, i.e., the largest one.
- Next fit, also called circular first fit: evolved from first fit; the difference is that each allocation search continues from where the previous search left off.
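A compact sketch of first fit and best fit over a free list; the array representation of holes and the way oversized holes are split are assumptions for illustration:

```c
#include <stdio.h>
#include <stdint.h>

struct hole { uint32_t start, size; };   /* one free partition */

/* Free list in order of increasing address, as first fit expects. */
static struct hole holes[] = {
    { 0x08000, 12 * 1024 },
    { 0x20000, 40 * 1024 },
    { 0x60000, 20 * 1024 },
};
enum { NHOLES = sizeof holes / sizeof holes[0] };

/* First fit: first hole big enough, scanning by address. */
int first_fit(uint32_t need) {
    for (int i = 0; i < NHOLES; i++)
        if (holes[i].size >= need) return i;
    return -1;
}

/* Best fit: smallest hole that still satisfies the request. */
int best_fit(uint32_t need) {
    int best = -1;
    for (int i = 0; i < NHOLES; i++)
        if (holes[i].size >= need &&
            (best < 0 || holes[i].size < holes[best].size))
            best = i;
    return best;
}

/* Carve the request out of the chosen hole; the remainder stays free. */
uint32_t take(int i, uint32_t need) {
    uint32_t addr = holes[i].start;
    holes[i].start += need;
    holes[i].size  -= need;   /* a tiny remainder = external fragmentation */
    return addr;
}

int main(void) {
    uint32_t need = 16 * 1024;
    int f = first_fit(need), b = best_fit(need);
    printf("first fit -> hole %d, best fit -> hole %d\n", f, b); /* 1 and 2 */
    if (f >= 0) printf("allocated at 0x%X\n", take(f, need));
    return 0;
}
```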
Of these methods, the first fit algorithm is not only the simplest but usually also the best and fastest. The initial version of UNIX used first fit to allocate memory to processes, implemented with an array data structure rather than a linked list. First fit does, however, leave many small free partitions in the low-address part of memory, and every allocation must step over them, raising the cost of the search.

The next fit algorithm tries to solve this problem, but in practice it often causes allocations to land at the high end of memory (in a circular scan, regions near the front that are freed after use do not take part in allocation until the scan wraps around), splitting the large block at the end into small fragments. Its results are usually worse than first fit's.

The best fit algorithm, although called "best", often performs poorly: each allocation leaves behind the smallest possible remainder, tiny blocks of memory that are hard to use, so it produces the most external fragmentation.

The worst fit algorithm, in contrast to best fit, always picks the largest available block, which would seem to make the leftover pieces more usable; but it quickly exhausts the large contiguous blocks of memory, so large requests soon cannot be met, and its performance is therefore very poor.

Knuth and Shore ran simulations of memory utilization under the first three methods, and the results show the following:

First fit may do better than best fit, and both are certainly better than worst fit. Note also that, in implementation, best fit and worst fit must sort or traverse the available blocks on every allocation, whereas first fit and next fit merely search; on reclamation, when the freed block adjoins an existing free block (the three adjacency cases are somewhat involved), the blocks must be merged. Implementations manage the free blocks with arrays or linked lists. Besides memory utilization, this algorithmic overhead is another factor in operating system design.

Table 3-1 Comparison of three memory partition management methods

Method | Number of jobs | Internal fragmentation | External fragmentation | Hardware support | Free-space management | Fragmentation remedy | Remedy for insufficient space | Raising the number of jobs
Single contiguous allocation | 1 | Yes | No | Boundary-address register, out-of-bounds check mechanism | -- | -- | Overlays | Swapping
Multiprogrammed fixed contiguous allocation | <= n (user space divided into n blocks) | Yes | No | Upper- and lower-bound registers with out-of-bounds check; or base register, length register, and dynamic address translation mechanism | -- | -- | Overlays | Swapping
Multiprogrammed variable contiguous allocation | -- | No | Yes | Same as above | Array or linked list | Compaction | Overlays | Swapping

The three memory partition management methods above share a common feature: the user processes (or jobs) are stored contiguously in main memory. Table 3-1 compares and summarizes them.

3.4 Non-contiguous memory allocation management
Non-contiguous allocation allows a program to be loaded into memory partitions that are not adjacent. Depending on whether the partition size is fixed, it divides into paging storage management and segmentation storage management.

Paging storage management divides further into basic paging and request (demand) paging, according to whether all pages of a job must be loaded into memory before it runs. Basic paging storage management is described below.

Basic paging storage management
Fixed partitioning produces internal fragmentation and dynamic partitioning produces external fragmentation, and both use memory inefficiently. Hoping to use memory while avoiding fragmentation leads to the idea of paging: main memory is divided into blocks of equal, fixed, and relatively small size, which serve as its basic unit. Each process is likewise divided into such blocks, and as the process executes it requests block-sized space in main memory block by block.

Formally, the paging approach derives from fixed partitioning with all partitions equal, so paging management produces no external fragmentation. But there are essential differences: blocks are much smaller than partitions, and a process, divided into blocks, requests free main memory block by block as it runs. A process therefore leaves a fragment only in the last, incomplete block it requests; internal fragmentation still arises, but the fragments are small relative to the process, each process wasting on average only half a block (this is also called in-page fragmentation).

1) Basic concepts of paging storage
① Pages and page size. A block of a process is called a page, and a block of memory is called a page frame. External storage is divided in the same units, directly called blocks. A process requests main memory space as it executes, i.e., each of its pages is allocated an available page frame in main memory, giving a one-to-one correspondence between pages and page frames.

To facilitate address translation, the page size should be an integer power of 2. The page size should also be moderate. If pages are too small, a process has too many of them, the page table becomes too long and occupies a lot of memory, the cost of hardware address translation rises, and page swap-in/swap-out efficiency falls. If pages are too large, in-page fragmentation grows and memory is wasted. The page size is therefore a compromise between space efficiency and time efficiency.

② Address structure. The logical address structure of paging storage management is shown in Figure 3-7.



Figure 3-7 The address structure of the paging storage management
The address consists of two parts: the first part is the page number P, the second the in-page offset W. The address length is 32 bits, of which bits 0-11 are the in-page address, giving a page size of 4KB, and bits 12-31 are the page number, so the address space allows up to 2^20 pages.
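Because the page size is a power of 2, splitting an address needs only a shift and a mask. A tiny sketch under the 32-bit, 4KB-page layout above; the sample address is arbitrary:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t addr = 0x00ABC123;   /* an arbitrary logical address */
    uint32_t page = addr >> 12;   /* bits 12-31: page number P    */
    uint32_t off  = addr & 0xFFF; /* bits 0-11:  in-page offset W */
    printf("P = 0x%05X, W = 0x%03X\n", page, off);  /* 0x00ABC, 0x123 */
    return 0;
}
```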

③ Page table. So that the physical block corresponding to each page of a process can be found in memory, the system establishes one page table per process, recording the physical block number in memory that corresponds to each page. The page table is generally stored in memory.

Once the page table is configured, a running process finds the physical block of each page in memory by consulting the table. Thus the role of the page table is to implement the address mapping from page numbers to physical block numbers, as shown in Figure 3-8.



Figure 3-8 The role of the page table

2) Basic address translation mechanism
The task of the address translation mechanism is to convert logical addresses into physical addresses in memory; the translation is done with the help of the page table. Figure 3-9 shows the address translation mechanism of a paging storage management system.



Figure 3-9 Address transformation mechanism for paging storage management
A page-table register (PTR) is usually provided in the system, holding the page table's start address F in memory and the page table length M. While a process is not running, its page table start address and length are kept in its process control block; when the process is dispatched, they are loaded into the page-table register. Let the page size be L. Logical address A is translated into physical address E as follows (a code sketch follows the worked example below):
- Compute the page number P = A / L and the in-page offset W = A % L.
- Compare P with the page table length M; if P >= M, raise an out-of-bounds interrupt, otherwise continue.
- The page table entry for page P sits at address F + P x (page table entry length); read out its content b, the physical block number.
- Compute E = b x L + W, and use the resulting physical address E to access memory.
The entire address transformation process is done automatically by the hardware.

For example, with page size L = 1KB and page 2 residing in physical block b = 8, the physical address E of logical address A = 2500 is computed as: P = 2500 / 1K = 2, W = 2500 % 1K = 452; looking up the page table gives block number 8 for page 2; so E = 8 x 1024 + 452 = 8644.
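A minimal sketch of this translation procedure. Only the mapping of page 2 to block 8 comes from the worked example; the other table entries, the array representation, and the trap handling are assumptions:

```c
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define L 1024u   /* page size from the example: 1 KB */

static uint32_t page_table[] = { 5, 3, 8, 11 };  /* page 2 -> block 8 */
static uint32_t M = sizeof page_table / sizeof page_table[0]; /* length */

uint32_t translate(uint32_t A) {
    uint32_t P = A / L;            /* page number    */
    uint32_t W = A % L;            /* in-page offset */
    if (P >= M) {                  /* compare against page table length */
        fprintf(stderr, "out-of-bounds interrupt: P=%u\n", P);
        exit(1);
    }
    uint32_t b = page_table[P];    /* physical block number */
    return b * L + W;              /* physical address E    */
}

int main(void) {
    printf("E = %u\n", translate(2500));   /* prints E = 8644 */
    return 0;
}
```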

Two main problems of paging management are discussed next: first, every memory access requires a logical-to-physical translation, so the translation process must be fast or memory access speed suffers; second, each process carries a page table to store the mapping, and the page table must not be too large or memory utilization falls.

3) Address translation mechanism with a fast table
The translation process above shows that if the page table resides entirely in memory, accessing a single datum or instruction takes at least two memory accesses: one to read the page table and determine the physical address, and a second to fetch the datum or instruction at that address. Clearly this is half as fast as ordinary instruction execution.

To speed up address translation, a small high-speed cache, the fast table, also called associative memory or the translation lookaside buffer (TLB), is added to the translation mechanism to hold the few page table entries accessed most recently. The page table in main memory is then often called the slow table. The translation mechanism with a fast table is shown in Figure 3-10.



Figure 3-10 An address transformation mechanism with a fast table
In a paging mechanism with a fast table, address translation proceeds as follows. After the CPU produces a logical address, the hardware extracts the page number and presents it to the TLB, comparing it with all page numbers in the fast table at once. If a match is found, the needed page table entry is in the fast table, so the corresponding page frame number is taken from it directly and concatenated with the in-page offset to form the physical address; the datum is then reached with a single memory access. If no match is found, the page table in main memory must be consulted, and after the entry is read, it is also stored into the fast table so that later accesses can hit; if the fast table is full, an old entry must be evicted according to some replacement algorithm.
Note: some processors are designed to probe the fast and slow tables simultaneously; if the fast-table lookup succeeds, the slow-table lookup is abandoned.
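A toy sketch of a TLB lookup in front of the page table; the tiny table sizes and the round-robin eviction are assumptions for illustration, not any real processor's policy:

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT 12
#define TLB_SIZE   4

struct tlb_entry { uint32_t page, frame; bool valid; };
static struct tlb_entry tlb[TLB_SIZE];
static int next_victim;              /* round-robin eviction pointer */

static uint32_t page_table[256];     /* the "slow table" in memory   */

uint32_t translate(uint32_t vaddr) {
    uint32_t page = vaddr >> PAGE_SHIFT;
    uint32_t off  = vaddr & ((1u << PAGE_SHIFT) - 1);

    for (int i = 0; i < TLB_SIZE; i++)       /* hit: one memory access total */
        if (tlb[i].valid && tlb[i].page == page)
            return (tlb[i].frame << PAGE_SHIFT) | off;

    uint32_t frame = page_table[page];       /* miss: consult the slow table */
    tlb[next_victim] = (struct tlb_entry){ page, frame, true };
    next_victim = (next_victim + 1) % TLB_SIZE;
    return (frame << PAGE_SHIFT) | off;
}

int main(void) {
    page_table[2] = 8;
    printf("0x%08X\n", translate(0x2123));   /* miss, then fills the TLB */
    printf("0x%08X\n", translate(0x2FFF));   /* same page: TLB hit       */
    return 0;
}
```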

The fast-table hit ratio can average above 90%, so the speed loss caused by paging drops below 10%. The effectiveness of the fast table rests on the famous principle of locality, discussed in detail under virtual memory.

4) Two-level page tables
Now the second problem. With paging, a process need not bring all of its pages into page frames when it executes; it suffices to bring in the page table that holds the mapping. But the size of that page table still has to be considered. For example, with a 32-bit logical address space, a 4KB page size, and 4B page table entries, mapping a process's entire logical address space takes 2^20, about one million, page table entries; a 4MB page table per process is clearly impractical. Even without mapping the whole logical address space, a process with a moderately large address space still has an oversized table. For a 40MB process, the page table occupies 40KB, and keeping all of its entries in memory takes 10 page frames. The process itself spans about 10,000 pages, yet actual execution needs only a few dozen of them resident to run; insisting that the 10 pages of page table all be resident is out of proportion to those few dozen process pages and certainly lowers memory utilization. On the other hand, those 10 page-table pages need not all be kept in memory at the same time, because in most cases the entries needed for the current mapping fall within the same page of the page table.
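Spelling out the arithmetic behind these figures in a short sketch; the page size, entry size, and the 40MB process are the numbers used in the text:

```c
#include <stdio.h>

int main(void) {
    unsigned long long page  = 4096;        /* 4 KB page size            */
    unsigned long long pte   = 4;           /* 4 B page table entry      */
    unsigned long long space = 1ULL << 32;  /* full 32-bit address space */

    unsigned long long entries = space / page;       /* 2^20 entries */
    printf("full map: %llu entries, %llu MB of page table\n",
           entries, entries * pte >> 20);            /* 1048576, 4 MB */

    unsigned long long proc = 40ULL << 20;           /* the 40 MB process */
    unsigned long long pt   = (proc / page) * pte;   /* its page table    */
    printf("40 MB process: %llu KB of page table = %llu page frames\n",
           pt >> 10, pt / page);                     /* 40 KB, 10 frames  */
    return 0;
}
```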

Extending the idea of page-table mapping one step further gives two-level paging: the 10 pages holding the page table are themselves mapped by a newly created top-level page table that records where each page of the page table lives. Only 10 entries are needed to map those 10 pages, so a top-level table of one page is more than enough (a page can hold 2^10 = 1024 entries). When the process executes, only this single page, the top-level table, need be brought into memory; the process's page table and the process itself can be brought in gradually during later execution as needed.

Figure 3-11 shows the address translation flow of the hardware paging used in the Intel 80x86 processor series. On a 32-bit system, the entire 32-bit logical address space comprises 2^20 (4GB/4KB) pages. The page table entries for these pages fill 2^10 pages, which are in turn indexed by a top-level page table of 2^10 entries, exactly one page in size; hence a two-level page table.



Figure 3-11 Hardware Paging address translation
For example, consider process paging on a 32-bit system. Suppose the kernel has assigned a running process the logical address space 0x20000000 to 0x2003FFFF; this space consists of 64 pages. While the process runs, we do not need to know the physical page-frame addresses of all of these pages; most of them are not in main memory anyway. Here we look only at how the hardware computes the physical address of the page frame when the process touches a page. Suppose the process needs to read the byte at logical address 0x20021406; it is decomposed as follows:
Logical Address: 0x20021406 (0010 0000 0000 0010 0001 0100 0000 0110 B)
Top-level page table field: 0x80 (0010 0000 00 B)
Second-level page table field: 0x21 (00 0010 0001 B)
In-page offset field: 0x406 (0100 0000 0110 B)

The top-level field 0x80 selects entry 0x80 of the top-level page table, which points to the second-level page table covering this process's pages; the second-level field 0x21 then selects entry 0x21 of that second-level table, which points to the page frame containing the desired page; finally, the in-page offset field 0x406 reads the byte at offset 0x406 within that page frame.
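The same decomposition in code, under the 10+10+12 split described above; the printed field values match the worked example:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t addr = 0x20021406;           /* the example logical address  */

    uint32_t top = addr >> 22;            /* bits 22-31: top-level index  */
    uint32_t mid = (addr >> 12) & 0x3FF;  /* bits 12-21: second-level idx */
    uint32_t off = addr & 0xFFF;          /* bits 0-11:  in-page offset   */

    /* Prints top = 0x080, mid = 0x021, off = 0x406. */
    printf("top = 0x%03X, mid = 0x%03X, off = 0x%03X\n", top, mid, off);
    return 0;
}
```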

This is a fairly practical example for a 32-bit system. Working through a seemingly more complex example deepens understanding, and readers are encouraged to carry out the conversion themselves.

The goal of a multi-level page table is to build an index so that memory is not wasted storing useless page table entries, and so that page table entries need not be searched blindly and sequentially; the constraint on the index is that the top-level page table must not exceed one page in size. On 64-bit operating systems, the division of the page table has to be reconsidered. This is a common topic in many textbooks and tutorials, but many of them give an incorrect analysis, so beware.

Assume the page size is still 4KB, so the offset field is 12 bits, and assume the page table entry size is 8B. Each page frame can then hold only 4KB / 8B = 512 page table entries, no longer 2^10, so each page-table index field is 9 bits. Continuing to split in the same way, 64 = 12 + 9 + 9 + 9 + 9 + 9 + 7, so six levels of paging would be needed to index a full 64-bit space. Many books still analyze this with 4B page table entries; they too arrive at six levels, but the analysis is plainly wrong. Table 3-2 gives the paging levels of two actual 64-bit operating systems (note: neither uses all 64 address bits, but because of address byte alignment in the design, 8B page table entries are still used). Once the splits in Table 3-2 are understood, multi-level paging should be completely clear.
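A quick sketch of that level computation; the inputs are the 4KB page and 8B entry assumed in the text, and the loop simply peels off 9 index bits per level:

```c
#include <stdio.h>

int main(void) {
    int addr_bits   = 64;
    int offset_bits = 12;   /* log2(4096): 4 KB pages          */
    int level_bits  = 9;    /* log2(4096 / 8): 8 B entries     */

    int remaining = addr_bits - offset_bits;
    int levels    = 0;
    while (remaining > 0) {          /* one level per 9 index bits   */
        levels++;
        remaining -= level_bits;     /* last level may be narrower   */
    }
    printf("levels needed: %d\n", levels);  /* 52 bits / 9 -> 6 levels */
    return 0;
}
```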

Table 3-2 Paging splits of two actual systems

Platform | Page size | Address bits used | Paging levels | Split
Alpha | 8KB | 43 | 3 | 13+10+10+10
x86_64 | 4KB | 48 | 4 | 12+9+9+9+9
Basic segmented storage management
Paging management is designed from the computer's point of view, to raise memory utilization and improve the machine's performance; paging is realized through hardware mechanisms and is completely transparent to the user.
