Linux Memory Management Mechanism


On Linux you often find that very little memory is free, as if all of it were occupied by the system; on the surface memory seems insufficient, but it is not. This is a deliberate feature of Linux memory management, and one point where it differs from Windows. No matter how much physical memory there is, Linux tries to make full use of it, reading data that programs ask for from the hard disk into memory and exploiting the high speed of memory reads and writes to improve data access performance. Windows, by contrast, allocates memory to an application only when that application asks for it and does not take full advantage of a large memory space. In other words, every additional piece of physical memory is something Linux can fully exploit, whereas to Windows it remains little more than an installed device, even if you add 8 GB or more.

This feature of Linux mainly consists of taking part of the otherwise free physical memory and using it as cache and buffers, which improves data access performance.

The page cache is the main disk cache implemented by the Linux kernel. It is used primarily to reduce I/O operations on the disk: by caching data from the disk in physical memory, accesses to the disk become accesses to physical memory.

The value of a disk cache is twofold. First, accessing the disk is much slower than accessing memory, so serving data from memory is faster than serving it from disk. Second, once data has been accessed, it is very likely to be accessed again in the near future.
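As a rough illustration of both points, the following commands (a minimal sketch; /tmp/bigfile stands for any large file that has not been read recently) time a first read that has to go to the disk and a second read that is served from the page cache:

time cat /tmp/bigfile > /dev/null     # first read: the data comes from the disk
time cat /tmp/bigfile > /dev/null     # second read: the data comes from the page cache and is noticeably faster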

Here's a look at the Linux memory management mechanism:

1. Physical memory and virtual memory

We know that reading and writing data in physical memory is much faster than reading and writing it on a hard disk, so ideally all reads and writes would happen in memory. Memory is limited, however, and this leads to the concepts of physical memory and virtual memory.

Physical memory is the real memory provided by the system hardware. Relative to physical memory, Linux also has the concept of virtual memory: a strategy devised to cope with a shortage of physical memory, it uses disk space to simulate a block of logical memory. The disk space used as virtual memory is called swap space.

As an extension of physical memory, Linux uses the virtual memory of the swap partition when physical memory runs low. In more detail, the kernel writes blocks of memory that are temporarily unused out to swap space; the physical memory they occupied is thereby freed and can be used for other purposes, and when the original contents are needed again they are read back from swap space into physical memory.

Linux memory management uses a paging mechanism. To keep physical memory fully utilized, the kernel automatically swaps infrequently used blocks of data out to virtual memory and keeps the frequently used information in physical memory.
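To see how much swap space is configured and how actively pages are moving in and out, standard tools such as swapon, /proc/swaps and vmstat can be used, for example:

swapon -s            # or: cat /proc/swaps   lists swap devices/files and how much of each is in use
free -k              # the Swap line shows total, used and free swap space in KB
vmstat 1 5           # the si/so columns show pages swapped in/out per second over five one-second samples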

To learn more about Linux memory operating mechanisms, you need to know the following:

    1. The Linux system performs paging from time to time so as to keep as much physical memory free as possible. Even when nothing is demanding memory, Linux may swap out memory pages that are temporarily unused; this avoids having to wait for the swap at the moment the memory is actually needed.

    2. Linux swaps pages out conditionally; not every page is moved to virtual memory when it is not in use. Based on a least-recently-used style algorithm, the kernel swaps out only pages that are used infrequently. This is why you sometimes see the following phenomenon: there is still plenty of physical memory, yet quite a lot of swap space is also in use. This is not surprising. For example, while a process that needs a very large amount of memory is running, some infrequently used pages are swapped out to make room; when that memory-hungry process later exits and releases a lot of memory, the pages that were swapped out are not automatically swapped back into physical memory unless they are actually needed. At that moment physical memory is largely idle while swap space is still in use, which is exactly the phenomenon just described. There is nothing to worry about; just know what is going on. (The relevant tunable, vm.swappiness, is sketched after this list.)

    3. When pages in swap space are needed, they are first swapped back into physical memory; if there is not enough physical memory to hold them, they are immediately swapped out again. If this continues, virtual memory may end up without enough space to store these pages, which can eventually lead to symptoms such as an apparently hung system and failing services. Although Linux can recover by itself after a while, the recovered system is largely unusable.
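How eagerly the kernel swaps pages out, rather than shrinking the cache, is controlled by the vm.swappiness tunable (0-100, commonly 60 by default). A minimal sketch of inspecting and adjusting it:

cat /proc/sys/vm/swappiness      # current value; higher means the kernel swaps more readily
sysctl vm.swappiness             # the same value read through sysctl
sysctl -w vm.swappiness=10       # as root: prefer dropping cache over swapping (effective until reboot)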

Therefore, it is very important to plan and size Linux memory and swap space sensibly.

2. Memory monitoring

For a Linux system administrator, monitoring memory usage is very important. Monitoring helps you understand the state of memory, for example whether memory consumption is normal or memory is running short. The most commonly used commands for monitoring memory are free, top, and so on. Below is the free output of one system:


# free
             total       used       free     shared    buffers     cached
Mem:       3894036    3473544     420492          0      72972    1332348
-/+ buffers/cache:    2068224    1825812
Swap:      4095992     906036    3189956


The meaning of each field:

First line:

Total: Overall size of physical memory

Used: The amount of physical memory already in use

Free: idle physical memory size

Shared: The amount of memory that multiple processes share

buffers/cached: Size of the disk cache


Second line (Mem): physical memory usage.

Third line (-/+ buffers/cache): memory usage with the disk cache (buffers and cached) taken into account.

Fourth line (Swap): swap space usage.


The memory state reported by free can be viewed from two angles: from the kernel's point of view, and from the application layer's point of view.

Viewing memory from the kernel's perspective

From the kernel's point of view, free memory is memory the kernel can hand out right now, with no additional work required; it is the free value in the Mem line of the free output above. You can see that this system has 3894036 KB of physical memory, of which only 420492 KB, a little over 400 MB, is free. We can check this with a simple calculation:

3894036 - 3473544 = 420492

That is, subtracting the used physical memory from the total physical memory gives the free physical memory. Note that this free value of 420492 KB does not include memory in the buffers or cached states.
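The numbers that free reports come from /proc/meminfo, so the kernel-view figures can be cross-checked directly; a quick sketch:

grep -E '^(MemTotal|MemFree|Buffers|Cached|SwapTotal|SwapFree):' /proc/meminfo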

If you conclude from this that the system has too little free memory, you are mistaken. In fact, the kernel is in full control of memory use: when Linux needs memory, or as the system keeps running, it converts memory in the buffers and cached states back into free memory for the system to use.
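If you want to watch this reclaim happen immediately instead of waiting for memory pressure, the kernel exposes /proc/sys/vm/drop_caches as a testing aid. A sketch, to be run as root and meant for observation only (it is not a tuning knob; the kernel reclaims cache automatically when it needs to):

sync                                 # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches    # drop the page cache plus dentries and inodes
free -k                              # buffers and cached shrink and the free value grows accordingly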

Viewing memory from the application layer's perspective

This is the amount of memory that applications running on Linux can use, which is what the -/+ buffers/cache line of the free output shows. You can see that, from the applications' point of view, this system has used 2068224 KB and still has 1825812 KB free. Continuing the calculation:

420492 + (72972 + 1332348) = 1825812

This equation shows that the physical memory available to applications is the free value of the Mem line plus the buffers and cached values; in other words, it is larger than the kernel-view free value by exactly the size of buffers plus cached. For applications, the memory occupied by buffers/cached counts as available, because buffers/cached exists to improve the performance of file reads; when applications need memory, buffers/cached is quickly reclaimed for them to use.
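Here is a minimal sketch of the same calculation done directly from /proc/meminfo; on recent kernels the MemAvailable field gives a better estimate of application-available memory, since not all of the cache can actually be dropped:

awk '/^MemFree:|^Buffers:|^Cached:/ {sum += $2} END {print sum " kB free + buffers + cached"}' /proc/meminfo
grep ^MemAvailable: /proc/meminfo    # the kernel's own estimate, where the field is available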

Similarities and differences between buffers and cached

On Linux, when an application needs to read data from a file, the operating system first allocates some memory, reads the data from the disk into that memory, and then hands the data to the application; when data needs to be written to a file, the operating system first allocates memory to receive the user data and then writes the data from memory to disk. However, if a large amount of data has to be read from disk into memory, or written from memory to disk, the system's read and write performance becomes very low, because both reading from and writing to disk are time-consuming and resource-intensive operations. It is for this situation that Linux introduced the buffers and cached mechanisms.

Buffers and cached are both held in memory and store files, and file attribute information, that the system has opened. When the operating system needs to read a file, it first looks in the buffers and cached memory areas; if the data is found there, it is returned to the application directly, and only if it is not found is the data read from disk. This is the operating system's caching mechanism, and caching greatly improves performance. What the buffers and cached areas hold, however, is different.

buffers is used to buffer block devices; it holds file system metadata and tracks pages that are in flight, whereas cached is the page cache used to cache the contents of files. Put more plainly: buffers mainly stores things such as directory contents, file attributes, and permissions, while cached stores the files and programs we have recently opened.

To verify that this conclusion is correct, open a very large file with vi and watch how the cached value changes, then open the same file with vi again and compare how fast the two opens feel. Isn't the second open noticeably faster than the first?
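The same experiment can be made more precise by watching the Cached value in /proc/meminfo around the read (a sketch; /path/to/bigfile is whatever large file you choose):

grep ^Cached: /proc/meminfo          # note the Cached value
cat /path/to/bigfile > /dev/null     # read the file once
grep ^Cached: /proc/meminfo          # Cached grows by roughly the size of the file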

Then execute the following command:


find / -name "*.conf"

Watch whether the buffers value changes, then repeat the find command and compare how quickly the results are displayed the second time.
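Again, watching /proc/meminfo makes the effect easy to see (a sketch):

grep ^Buffers: /proc/meminfo                # note the Buffers value
find / -name "*.conf" > /dev/null 2>&1      # walk the filesystem; metadata ends up in buffers
grep ^Buffers: /proc/meminfo                # Buffers has grown, and a repeated find returns much faster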

The memory behaviour of the Linux operating system is largely designed around the needs of servers. For example, the system's caching mechanism caches file data in cached; Linux always tries to cache more data and information, so that the next time the data is needed it can be served directly from memory without a lengthy disk operation. This design improves the overall performance of the system.


This article is from the "IT commune" blog; please keep this source: http://guangpu.blog.51cto.com/3002132/1548862
