Linux Memory Management: Virtual Memory Summary


I was asked about virtual memory in an interview and my answer felt inadequate, so I am summarizing it again here.

What we programmers want from memory: private, fast, and unlimited in capacity.

The reality: one shared piece of physical memory, limited in capacity and limited in speed, which is why the CPU needs multiple levels of cache.

Exposing physical memory directly to processes is a problem: if every byte of physical memory can be addressed by any process, the operating system and other processes are easily corrupted.

Workaround 1

Use a base register and a limit (boundary) register.

The former holds the starting physical address of the process, the latter holds the length of its address range; together they make each process's addresses private.
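
As a minimal sketch of this scheme (a toy model, not real hardware; the register values below are made up), every address the process issues is checked against the limit register and then offset by the base register:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy model of base/limit relocation: every address issued by the
 * running process is checked against the limit register and then
 * offset by the base register.  Values are purely illustrative. */
typedef struct {
    uint32_t base;   /* start of the process's region in physical memory */
    uint32_t limit;  /* length of that region in bytes */
} relocation_regs;

static uint32_t translate(relocation_regs r, uint32_t vaddr) {
    if (vaddr >= r.limit) {               /* outside the process's range */
        fprintf(stderr, "protection fault: 0x%x >= limit 0x%x\n",
                vaddr, r.limit);
        exit(1);
    }
    return r.base + vaddr;                /* relocate into physical memory */
}

int main(void) {
    relocation_regs r = { .base = 0x40000, .limit = 0x10000 }; /* 64 KiB region */
    printf("virtual 0x1234 -> physical 0x%x\n", translate(r, 0x1234));
    translate(r, 0x20000);                /* beyond the limit: faults */
    return 0;
}
```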

Swapping whole processes in and out still relies on these two registers; it leaves holes in memory, and memory compaction (moving process address spaces around to close the holes) consumes a lot of CPU time.

Virtual Memory:

Core idea: each process has its own virtual address space, which is divided into pages that are mapped onto physical memory; not all of the pages have to be in memory for the program to run. When the program references an address in its address space, say a variable on the heap, the virtual address is not put on the address bus directly. Instead it is sent to the MMU, which performs the required mapping in hardware, and the translated address is what goes onto the bus. If the MMU finds that the requested data is not currently in RAM, a page fault is raised, the CPU traps into the kernel, the missing page is loaded into physical memory, the MMU's mapping (the page table) is updated, and the faulting instruction is re-executed.

If the MMU is not enabled, the address issued by the CPU goes directly onto the bus and is seen by the physical memory chips.

The MMU is therefore transparent to the program, and ordinary address translation does not require trapping into the kernel; it is done entirely in hardware.

So the essence of virtual memory is a new abstraction, the address space, which abstracts physical memory in much the same way that a process abstracts the CPU. The implementation chops the virtual address space into pages and maps each page either to a page frame in physical memory or, temporarily, to nothing at all (for example, when the page has been swapped out).
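
To make the mechanism concrete, here is a small Linux-only sketch (the sizes are my own, purely illustrative): it maps 256 anonymous pages with mmap and watches the process's minor page-fault counter from getrusage grow as the pages are touched for the first time. These are "minor" faults, resolved without disk I/O, but the flow is the one described above: the access traps, the kernel wires up a physical frame and fixes the page table, and the instruction is retried.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <unistd.h>

/* Number of minor page faults this process has taken so far. */
static long minor_faults(void) {
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

int main(void) {
    long page = sysconf(_SC_PAGESIZE);
    size_t len = 256 * (size_t)page;           /* 256 pages of anonymous memory */

    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    long before = minor_faults();
    printf("after mmap, before touching: %ld minor faults\n", before);

    /* Touch every page: each first touch makes the kernel allocate a
     * physical frame and update the page table (one minor fault). */
    for (size_t off = 0; off < len; off += page)
        p[off] = 1;

    long after = minor_faults();
    printf("after touching %zu pages: %ld minor faults (+%ld)\n",
           len / page, after, after - before);

    munmap(p, len);
    return 0;
}
```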

A picture explains it:


[Figure illustrating the page-to-page-frame mapping (original image unavailable)]

For Linux, these swapped-out pages end up on disk. The swap space is normally a separate partition, the swap partition, which has its own format: you use fdisk to set the partition type to 82 and mkswap to initialize it before it can be used. What actually goes to the swap partition is anonymous memory such as heap and stack variables; pages backed by open files, including program code, do not need to, since they can simply be dropped and re-read from the file. Swapping a page out is called pageout (swap out), and bringing it back in is pagein (swap in).

Swap is what allows Linux to overcommit memory. Because disk I/O is orders of magnitude slower than memory, falling back on a disk-based swap partition inevitably reduces system performance. Running without that margin is also dangerous, though: once memory is completely full and the operating system cannot find free page frames to do its necessary work, it starts killing processes (the OOM killer), which makes the affected services unavailable.
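
Here is a small sketch of what overcommit looks like from user space, assuming the default Linux policy (/proc/sys/vm/overcommit_memory set to 0) and a 64-bit machine; the sizes are arbitrary. The 64 GiB anonymous mapping usually succeeds even on a machine with far less RAM plus swap, because nothing real is committed until a page is written; touching all of it would eventually exhaust RAM and swap and invoke the OOM killer.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* Ask for 64 GiB of anonymous memory -- far more than most machines
     * have as RAM + swap.  With the default overcommit policy Linux will
     * usually grant this, because no physical frame or swap slot is
     * reserved until a page is actually written. */
    size_t len = 64UL * 1024 * 1024 * 1024;
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");        /* likely if overcommit is disabled */
        return 1;
    }
    printf("got %zu GiB of virtual address space at %p\n",
           len >> 30, (void *)p);

    /* Touch only the first 16 MiB: only these pages consume real memory. */
    memset(p, 0, 16 * 1024 * 1024);
    printf("touched 16 MiB; the rest is a promise the kernel may not keep\n");

    /* Touching all 64 GiB would eventually exhaust RAM + swap and the
     * OOM killer would terminate this (or some other) process. */
    munmap(p, len);
    return 0;
}
```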

Use the free command

[Screenshot of free output (original image unavailable)]

There are two important columns here: buffers and cached.

The cache (cached) holds data that has been read, so that a later read that hits the cache (finds the data it needs) does not have to go to the hard disk; only a miss reads from the disk. Cached data is organized by how often it is read: the most frequently read content is kept where it can be found fastest, while content that is no longer read drifts toward the back until it is evicted.
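
A rough way to watch the read cache in action, assuming a writable /tmp and using posix_fadvise(POSIX_FADV_DONTNEED) to drop the file from the page cache (the file name and size are made up, and timings vary by machine): the first read after dropping the cache has to hit the disk, while the second is served from memory.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

/* Read the whole file and return the elapsed wall-clock time in ms. */
static double timed_read(const char *path) {
    char buf[1 << 16];
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); exit(1); }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    while (read(fd, buf, sizeof buf) > 0)
        ;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    close(fd);
    return (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
}

int main(void) {
    const char *path = "/tmp/cache_demo.dat";   /* hypothetical scratch file */

    /* Create a 64 MiB test file. */
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    static char chunk[1 << 20];
    memset(chunk, 'x', sizeof chunk);
    for (int i = 0; i < 64; i++)
        write(fd, chunk, sizeof chunk);
    fsync(fd);

    /* Ask the kernel to drop this file's (now clean) pages from the
     * page cache, so the first timed read has to go to the disk. */
    posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    close(fd);

    printf("cold read : %.1f ms\n", timed_read(path));  /* from disk */
    printf("warm read : %.1f ms\n", timed_read(path));  /* from page cache */

    unlink(path);
    return 0;
}
```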

The buffer (buffers) is designed around disk writes: scattered write operations are gathered together, which reduces fragmentation and repeated seeking of the disk head and thereby improves system performance. Linux has a daemon that periodically flushes the buffered content (that is, writes it to disk), and you can also flush it manually with the sync command. For example: with an ext2 USB stick, if I cp a 3 MB MP3 onto it, the stick's activity light does not blink at first; after a while (or after typing sync) the light starts to blink. Buffers are likewise flushed when a device is unmounted, which is why unmounting sometimes takes a few seconds. So once buffers and cached are taken into account, the memory actually available is larger than the bare free column suggests, as in the free output above.
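
The same write-buffering effect can be seen from code, assuming a writable scratch path (here a hypothetical /tmp/buffer_demo.dat): write() returns as soon as the data is sitting in the kernel's buffers, while fsync(), the per-file counterpart of the sync command, blocks until the data has actually reached the disk.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

/* Milliseconds elapsed since a monotonic timestamp. */
static double ms_since(struct timespec t0) {
    struct timespec t1;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
}

int main(void) {
    /* Hypothetical scratch file; point it at a slow device (e.g. a USB
     * stick mount) to make the difference dramatic. */
    const char *path = "/tmp/buffer_demo.dat";
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    static char mp3[3 * 1024 * 1024];          /* stand-in for the 3 MB MP3 */
    memset(mp3, 0xAB, sizeof mp3);

    struct timespec t0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    write(fd, mp3, sizeof mp3);                /* lands in kernel buffers */
    printf("write() returned after %.1f ms\n", ms_since(t0));

    clock_gettime(CLOCK_MONOTONIC, &t0);
    fsync(fd);                                 /* force it out to the disk */
    printf("fsync() returned after %.1f ms\n", ms_since(t0));

    close(fd);
    unlink(path);
    return 0;
}
```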

In short, the cache compensates for a producer that is slower than the consumer (reads from a slow disk), while the buffer compensates for a producer that is faster than the consumer (writes to a slow disk).

Virtual memory also comes up in another place: the top command.

[Screenshot of top output (original image unavailable)]

The VIRT and RES columns correspond to the virtual memory discussed above.

VIRT: virtual memory usage.

The amount of virtual memory the process has "requested", including the libraries, code, and data mapped by the process.

If a process requests 100 MB of memory but actually uses only 10 MB, VIRT still grows by the full 100 MB, not by the actual usage.

RES: resident memory usage, i.e. the resident set size: the physical pages the process actually occupies.

At the time this raised a puzzle: supposedly VIRT = RES + SWAP, but that clearly does not hold here, since VIRT is about 4 GB while SWAP is 0 and RES is about 400 MB. (A likely part of the answer is that VIRT also counts mappings that have never been touched or that are backed by files, which occupy neither RAM nor swap.) This deserves further study.
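
A small sketch of the VIRT/RES gap, reading the kernel's own accounting from /proc/self/status (VmSize corresponds roughly to top's VIRT and VmRSS to RES; the 100 MB / 10 MB sizes are from the example above): reserving the memory raises VmSize immediately, but VmRSS only grows for the pages that are actually touched, which is one reason VIRT can far exceed RES + SWAP.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Print the VmSize (~VIRT) and VmRSS (~RES) lines from /proc/self/status. */
static void show(const char *label) {
    FILE *f = fopen("/proc/self/status", "r");
    if (!f) { perror("fopen"); return; }
    char line[256];
    printf("-- %s --\n", label);
    while (fgets(line, sizeof line, f))
        if (strncmp(line, "VmSize:", 7) == 0 || strncmp(line, "VmRSS:", 6) == 0)
            fputs(line, stdout);
    fclose(f);
}

int main(void) {
    show("at start");

    /* "Request" 100 MB, as in the example above. */
    size_t len = 100 * 1024 * 1024;
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    show("after reserving 100 MB (VmSize jumps, VmRSS barely moves)");

    /* Actually use only 10 MB of it. */
    memset(p, 0, 10 * 1024 * 1024);
    show("after touching 10 MB (VmRSS grows by ~10 MB)");

    munmap(p, len);
    return 0;
}
```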


