Introduction to CentOS Memory Management Mechanism


I often encounter users who are new to Linux asking why so much of the memory appears to be occupied.

In Linux, we often find that very little memory is reported as free; it looks as if the system has used it all up and memory is running short. In fact, this is an excellent feature of Linux memory management, and it is where Linux differs from Windows memory management. No matter how large the physical memory is, Linux makes full use of it: data that programs read from the hard disk is kept in memory, using the high speed of memory reads and writes to improve data access performance. Windows, by contrast, allocates memory to applications only when they need it, so a large amount of installed memory cannot be fully exploited. In other words, every additional piece of physical memory is put to work by Linux, realizing the full benefit of the hardware investment, whereas under Windows even an extra 8 GB or more may serve as little more than decoration.

This feature of Linux works mainly by setting aside part of the idle physical memory as cache and buffers to improve data access performance.

The page cache is the main disk cache implemented by the Linux kernel. It is used chiefly to reduce disk I/O: data on the disk is cached in physical memory, so that accesses to the disk become accesses to physical memory instead.

The value of this disk cache lies in two facts: first, access to the disk is far slower than access to memory, so serving data from memory is much faster than reading it from disk again; second, data that has been accessed once is very likely to be accessed again within a short period of time.
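
As a quick, hedged illustration (the field names come from /proc/meminfo; the actual values depend entirely on the system), you can see how large the page cache and buffers currently are:

    # Page cache and buffer sizes as reported by the kernel (values in KB)
    grep -E '^(Buffers|Cached)' /proc/meminfo

    # The same information appears in the buffers and cached columns of free
    free -k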

The following describes the Linux memory management mechanism:

I. Physical memory and virtual memory

We know that reading and writing data in physical memory is much faster than reading and writing it on the hard disk, so we would like all data to be read and written in memory. Memory, however, is limited, and this is where the concepts of physical memory and virtual memory come in.

Physical memory is the real memory provided by the system hardware. Alongside physical memory, Linux also has the concept of virtual memory. Virtual memory is a strategy for coping with a shortage of physical memory: it is logical memory built on top of disk space, and the disk space used as virtual memory is called the swap space.

Linux uses the virtual memory in the swap partition as an extension of physical memory when physical memory is insufficient. In more detail, the kernel writes blocks of memory that are not needed for the moment out to the swap space, which frees that physical memory for other purposes. When the original contents are needed again, they are read back from the swap space into physical memory.
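
To see how swap is set up and how much of it is in use on a given machine (a small sketch; device names and sizes will of course differ), the following commands are commonly used:

    # List the active swap areas: device or file, size, and how much is used
    swapon -s

    # Physical memory and swap usage side by side, in megabytes
    free -m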

Linux memory management uses a paged access mechanism. To ensure that physical memory is fully utilized, the kernel, when appropriate, automatically swaps infrequently used blocks of data out of physical memory into virtual memory and keeps the frequently used information in physical memory.

To learn more about the Linux memory running mechanism, you need to know the following aspects:

1. The Linux system swaps pages out from time to time in order to keep as much physical memory free as possible. Even when there is no immediate need for memory, Linux will swap out pages that are temporarily not being used, which avoids having to wait for the swap at the moment the memory is actually required.

2. Page swapping in Linux is conditional: not every unused page is moved to virtual memory. The kernel uses a "least recently used" (LRU) style algorithm and only swaps out pages that have not been used for some time. Sometimes you will see that there is still plenty of free physical memory while the swap space is also in use. This is not surprising. For example, a process that needs a great deal of memory may push some rarely used pages out to swap; when that process later ends and releases its memory, the pages that were swapped out are not automatically brought back into physical memory unless they are needed, so at that point the system has a lot of idle physical memory and, at the same time, some swap in use. There is nothing to worry about here; just understand what is going on.

3. Pages in the swap space are swapped back into physical memory when they are needed. If there is not enough physical memory to hold them, they are swapped out again almost immediately; if there is then not enough room in the swap space to store them either, the system ends up thrashing, which shows up as apparent hangs and service failures in Linux. Linux may recover by itself after a while, but a system that has reached this state is essentially unusable.

Therefore, reasonable planning and design of Linux memory usage is very important.
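
One knob for such planning is the kernel's swap tendency, controlled by vm.swappiness (0-100; higher values make the kernel more willing to swap). A minimal sketch, assuming a typical CentOS default of 60 and a wish to keep more data in physical memory:

    # Show the current swap tendency (the default is usually 60)
    cat /proc/sys/vm/swappiness

    # Lower it for the running system; takes effect immediately but is lost on reboot
    sysctl -w vm.swappiness=10

    # Make the setting persistent across reboots
    echo "vm.swappiness = 10" >> /etc/sysctl.conf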

II. Memory monitoring

For a Linux system administrator, monitoring memory usage is very important. It helps you understand the state of memory on the system, for example whether usage is normal and whether memory is running short. The most commonly used commands for monitoring memory are free and top. Below is the free output from one system:

 
 
    [root@linuxeye ~]# free
                 total       used       free     shared    buffers     cached
    Mem:       3894036    3473544     420492          0      72972    1332348
    -/+ buffers/cache:    2068224    1825812
    Swap:      4095992     906036    3189956

Meaning of each option:

The first line:

Total: total physical memory size

Used: used physical memory size

Free: idle physical memory size

Shared: memory size shared by multiple processes

Buffers, cached: memory used as buffers and as the disk cache

Line 2 (Mem): physical memory usage

Line 3 (-/+ buffers/cache): memory usage with buffers and cache subtracted from "used" and added to "free", i.e. seen from the application's point of view

Line 4 (Swap): usage of the swap space

The memory status output by the free command can be viewed from two perspectives: one is from the kernel perspective and the other is from the application layer perspective.

View the memory status from the kernel perspective

That is, the memory the kernel can allocate directly, with no further work required; this corresponds to the Mem line (the second line) of the free output above. It shows that the system has 3894036 KB of physical memory, of which only 420492 KB, a little more than 400 MB, is free. Let's do the calculation:

3894036 - 3473544 = 420492

In other words, total physical memory minus used physical memory gives the free physical memory. Note that this free value of 420492 KB does not include memory in the buffers and cached states.

If you conclude from this that the system has too little free memory, you would be mistaken. In fact, the kernel is in full control of memory usage: whenever Linux needs memory, or as the system runs, it can turn memory in the buffers and cached states back into free memory for the system to use.
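
You can watch this reclaiming happen. The sketch below is for demonstration only (dropping caches on a busy production machine simply discards useful cache and temporarily hurts performance); it asks the kernel, as root, to release the page cache together with the dentry and inode caches, after which free reports far more free memory:

    # Write dirty pages out to disk first so nothing is lost
    sync

    # 3 = drop the page cache plus dentry and inode caches (1 = page cache only)
    echo 3 > /proc/sys/vm/drop_caches

    # The free column of the Mem line should now be much larger,
    # and buffers/cached correspondingly smaller
    free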

System memory usage from the application layer perspective

That is, the amount of memory actually available to the applications running on Linux; this corresponds to the -/+ buffers/cache line (the third line) of the free output. From this point of view the system is using only 2068224 KB, and the free memory is as much as 1825812 KB. Let's continue the calculation:

420492 + (72972 + 1332348) = 1825812

This equation shows that the physical memory available to applications is the free value of the Mem line plus the buffers and cached values. In other words, this available figure includes the buffers and cached memory: as far as applications are concerned, the memory occupied by buffers/cached is usable, because buffers/cached exist only to improve file access performance, and when an application needs memory, buffers/cached are quickly reclaimed and handed to the application.
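
If you want to compute this application-level figure straight from the free output, a small sketch such as the following works on systems whose free still prints the old buffers and cached columns (newer versions of procps report an "available" column instead, which serves the same purpose):

    # Add the free, buffers and cached columns of the Mem line (all values in KB)
    free -k | awk '/^Mem:/ {print "available to applications:", $4 + $6 + $7, "KB"}'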

Similarities and differences between buffers and cached

In Linux, when an application needs to read data from a file, the operating system first allocates some memory, reads the data from disk into that memory, and then passes the data on to the application. When the application needs to write data to a file, the operating system first allocates memory to receive the user data, and then writes the data from memory out to the disk. If a large amount of data has to be read from disk into memory, or written from memory to disk, read/write performance becomes very poor, because both reading from and writing to disk are time- and resource-consuming operations. To address this, Linux introduces the buffers and cached mechanisms.

Both buffers and cached live in memory and hold files that the system has opened together with their attribute information. When the operating system needs to read a file, it first looks in the buffers and cached areas of memory; if the data is found there, it is returned directly to the application, and only if it is not found is it read from disk. This is the operating system's caching mechanism, and it greatly improves performance. What buffers and cached hold, however, is not the same.

Buffers caches block devices: it records file system metadata and tracks in-flight pages, whereas cached caches the files themselves. Speaking more broadly, buffers mainly stores things such as directory contents, file attributes, and permissions, while cached directly remembers the files and programs we have opened.

To verify this conclusion, open a very large file with vi and watch how the cached value changes, then quit and open the same file with vi a second time and compare how long the two opens take. Is the second open faster than the first?
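
A non-interactive version of the same experiment might look like this (a sketch only; /var/log/messages is just a stand-in for any reasonably large file on your system):

    # Page cache size before reading the file
    grep ^Cached /proc/meminfo

    # First read: the data has to come from disk, so this is comparatively slow
    time cat /var/log/messages > /dev/null

    # Second read: the data is now in the page cache, so this should be much faster
    time cat /var/log/messages > /dev/null

    # Cached should have grown by roughly the size of the file
    grep ^Cached /proc/meminfo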

Run the following command:

 
 
    find / -name "*.conf"

Check whether the buffers value has changed, then run the find command again and see whether it completes noticeably faster the second time.
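
Roughly, the before/after check could look like this (output is thrown away so that only the traversal cost is measured; the Buffers value is the one expected to grow):

    # Buffers before the first run
    grep ^Buffers /proc/meminfo

    # First run: directory metadata has to be read from disk
    time find / -name "*.conf" > /dev/null 2>&1

    # Buffers should have grown, and a second, cache-served run should finish faster
    grep ^Buffers /proc/meminfo
    time find / -name "*.conf" > /dev/null 2>&1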

The way the Linux operating system handles memory is designed largely with server needs in mind. For example, the system's caching mechanism keeps frequently used files and data in cached: Linux always tries to cache more data and information, so that the next time they are needed they can be fetched directly from memory rather than through a lengthy disk operation. This design improves the overall performance of the system.

Reference: http://ixdba.blog.51cto.com/2895551/541355
