Can the cache in Linux memory really be reclaimed?

Source: Internet
Author: User

On Linux systems, we often use the free command to check how the system's memory is being used. On a RHEL 6 system, free typically shows something like this:

# free
             total       used       free     shared    buffers     cached
Mem:     132256952   72571772   59685180          0    1762632   53034704
-/+ buffers/cache:   17774436  114482516
Swap:      2101192        508    2100684

The default unit here is KB; my server has 128G of RAM, so the numbers look large. Almost everyone who has ever used Linux has run this command, yet the more familiar it looks, the smaller the proportion of people who really understand its output. Broadly, understanding of this output falls into three levels:

    1. Doesn't understand it. The first reaction of such people is: "My God, so much memory is used, more than 70G, but I'm hardly running any big programs! Why? Linux really eats memory!"
    2. Thinks they understand it well. Such people will typically assess it like this: "Well, to my professional eye, only about 17G of memory is actually in use, and there is plenty of memory still available. The buffers/cache figure is fairly large, which means some processes have been reading and writing files, but that doesn't matter; this memory is only borrowed while it is idle."
    3. Really understands it. This kind of person's reaction sounds like the least understanding of Linux: "That's what free shows, OK, noted." What? You're asking me whether this memory is enough? Of course I don't know! How would I know how your programs are written?

Judging from the technical documentation currently on the web, the vast majority of people who know a bit of Linux are at the second level. The general belief is that the memory occupied by buffers and cached can be released as free space when memory pressure is high. But is that really the case? Before demonstrating, let's briefly introduce what buffers and cached mean.

What is Buffer/cache?

Buffer and cache are two terms used widely in computing, and their meanings differ with context. In Linux memory management, buffer refers to the buffer cache and cache refers to the page cache. Historically, the buffer cache was used as the write cache for I/O devices while the page cache was used as their read cache, where the I/O devices were mainly block device files and regular files on file systems. Today, however, the distinction is different. In the current kernel, the page cache is, as its name implies, a cache of memory pages: any memory managed in units of pages can use the page cache as its cache. Of course, not all memory is managed in pages; some is managed in blocks, and memory that needs caching at that granularity uses the buffer cache (from this point of view, would "block cache" be a better name for it?). Blocks do not all have a fixed length: the block size on a system is determined mainly by the block device in use, whereas the page size on x86, whether 32-bit or 64-bit, is 4KB.

Once you understand the difference between these two cache systems, you can understand exactly what each of them is used for.

What is the page cache?

The page cache is used mainly to cache file data on file systems, especially when a process reads from or writes to files. If you think about it, mmap, the system call that maps files into memory, naturally uses the page cache as well. In the current implementation the page cache also caches other file types, so in practice the page cache is responsible for most of the caching of block device files too.

What is the buffer cache?

The buffer cache is designed to cache blocks of data when the system reads from or writes to a block device. This means certain operations on blocks use the buffer cache, for example when we format a file system. In general the two cache systems are used together: when we write to a file, the contents of the page cache change, and the buffer cache marks which buffers within the page differ, recording which ones were modified. That way, when the kernel later writes back the dirty data (writeback), it does not have to write the whole page back, only the modified parts.

How do I recycle the cache?

The Linux kernel triggers memory reclaim when memory is about to run out, freeing memory for memory-hungry processes. In general, most of the memory freed by this operation comes from buffer/cache, particularly the cache, since it usually occupies more space. The cache exists only to speed up file reads and writes while memory is plentiful, so under heavy memory pressure it is of course reasonable to empty the cache and hand the space back to processes as free memory. So in general, the belief that buffer/cache space can be released is correct.

But the job of clearing the cache is not without cost. Understand what the cache does: cached data must be consistent with the data in the corresponding file before the cache can be released. So cache clearing is generally accompanied by high system IO, because the kernel compares the data in the cache with the data in the corresponding file on disk, and anything inconsistent must be written back before the memory can be reclaimed.

Besides letting the system reclaim cache when memory runs out, we can also trigger cache clearing manually through the following file:

# cat /proc/sys/vm/drop_caches
1

The method is:

echo 1 > /proc/sys/vm/drop_caches

This file accepts the values 1, 2 and 3, with the following meanings:

echo 1 > /proc/sys/vm/drop_caches: clears the page cache.

echo 2 > /proc/sys/vm/drop_caches: clears reclaimable slab objects, including the dentry cache and the inode cache. The slab allocator is the mechanism the kernel uses to manage memory for its own data structures, and many kernel caches are built on top of it.

echo 3 > /proc/sys/vm/drop_caches: clears both the page cache and the slab-cache objects.

Note that drop_caches can only drop clean caches; dirty pages must be written back first, so running sync beforehand lets more memory be freed.

Can the cache always be reclaimed?

We have analyzed how cache can be reclaimed; is there cache that cannot be? Of course there is. Let's look at the first case:

tmpfs

As many know, Linux provides a "temporary" file system called tmpfs, which can turn part of memory into a file system so that memory space can be used as files in a directory. Most Linux systems today have a tmpfs mounted at /dev/shm, which is exactly such a thing. Of course, we can also create a tmpfs of our own by hand, like this:

# mkdir /tmp/tmpfs
# mount -t tmpfs -o size=20G none /tmp/tmpfs/
# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1             10325000   3529604   6270916  37% /
/dev/sda3             20646064   9595940  10001360  49% /usr/local
/dev/mapper/vg-data  103212320  26244284  71725156  27% /data
tmpfs                 66128476  14709004  51419472  23% /dev/shm
none                  20971520         0  20971520   0% /tmp/tmpfs

So we have created a new tmpfs with a 20G limit, and we can create files totalling up to 20G inside /tmp/tmpfs. If a file we create there really occupies memory, which part of memory should its data occupy? Given what the page cache does, since tmpfs is a file system, its data should naturally be managed in page cache space. Let's try it and see:

# free -g
             total       used       free     shared    buffers     cached
Mem:           126         35         91          0          1         19
-/+ buffers/cache:         15        111
Swap:            2          0          2
# dd if=/dev/zero of=/tmp/tmpfs/testfile bs=1G count=13
13+0 records in
13+0 records out
13958643712 bytes (14 GB) copied, 9.49858 s, 1.5 GB/s
# free -g
             total       used       free     shared    buffers     cached
Mem:           126         48         77          0          1         32
-/+ buffers/cache:         15        110
Swap:            2          0          2

We created a 13G file in the tmpfs directory. Comparing the free output before and after, we find that cached grew by 13G: the file really is held in memory, and the kernel counts it as cache. Now look at the line we care about, -/+ buffers/cache. In this state free still tells us that 110G of memory is available, but is there really that much? We can trigger memory reclaim by hand and see how much memory can actually be freed right now:

# echo 3 > /proc/sys/vm/drop_caches
# free -g
             total       used       free     shared    buffers     cached
Mem:           126         43         82          0          0         29
-/+ buffers/cache:         14        111
Swap:            2          0          2

As you can see, the space occupied by cached is not fully released as we might have imagined: 13G of it is still occupied by the file in /tmp/tmpfs. (My system also has other caches that cannot be freed, which account for the other 16G.) So when will the cache space occupied by tmpfs be released? Only when the file is deleted. If the file is not deleted, then no matter how close to exhaustion memory gets, the kernel will never delete files in tmpfs on your behalf to free cache space.

# rm /tmp/tmpfs/testfile
# free -g
             total       used       free     shared    buffers     cached
Mem:           126         30         95          0          0         16
-/+ buffers/cache:         14        111
Swap:            2          0          2

This is the first kind of cache we have found that cannot be reclaimed. There are other cases as well, for example:

Shared memory

Shared memory is a common inter-process communication (IPC) mechanism that the system provides, but this kind of communication cannot be exercised directly from the shell, so we need a small test program. (Because this platform limits article length, the original listing is in the blog version of this article.)

The program is very simple: it allocates a bit less than 2G of shared memory, then forks a child that initializes that shared memory; the parent waits for the child to finish initializing, prints the contents of the shared memory, and exits. But before exiting, it does not delete the shared memory segment. Let's look at memory usage before and after the program runs:

# free -g
             total       used       free     shared    buffers     cached
Mem:           126         30         95          0          0         16
-/+ buffers/cache:         14        111
Swap:            2          0          2
# ./shm
shmid:294918
shmsize:2145386496
shmid:294918
shmsize:-4194304
hello!
# free -g
             total       used       free     shared    buffers     cached
Mem:           126         32         93          0          0         18
-/+ buffers/cache:         14        111
Swap:            2          0          2

The cached space rose from 16G to 18G. So can this cache be reclaimed? Let's keep testing:

# echo 3 > /proc/sys/vm/drop_caches
# free -g
             total       used       free     shared    buffers     cached
Mem:           126         32         93          0          0         18
-/+ buffers/cache:         14        111
Swap:            2          0          2

The result: still not reclaimable. As you can see, this shared memory sits in the cache for as long as it exists, even when nobody is using it, until it is deleted. Deleting it can be done in two ways: in a program, with shmctl(IPC_RMID); or from the shell, with the ipcrm command. Let's delete it and see:

# ipcs -m

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00005feb 0          root       666        12000      4
0x00005fe7 32769      root       666        524288     2
0x00005fe8 65538      root       666        2097152    2
0x00038c0e 131075     root       777        2072       1
0x00038c14 163844     root       777        5603392    0
0x00038c09 196613     root       777        221248     0
0x00000000 294918     root                  2145386496 0

# ipcrm -m 294918
# ipcs -m

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00005feb 0          root       666        12000      4
0x00005fe7 32769      root       666        524288     2
0x00005fe8 65538      root       666        2097152    2
0x00038c0e 131075     root       777        2072       1
0x00038c14 163844     root       777        5603392    0
0x00038c09 196613     root       777        221248     0

# free -g
             total       used       free     shared    buffers     cached
Mem:           126         30         95          0          0         16
-/+ buffers/cache:         14        111
Swap:            2          0          2

After the shared memory segment is removed, the cache is released normally. This behavior follows the same logic as tmpfs: when implementing the memory storage for the XSI IPC mechanisms, shared memory (shm), message queues (msg) and semaphore arrays (sem), the kernel uses tmpfs. That is why the behavior of shared memory resembles that of tmpfs. Of course, shm generally occupies the most memory of the three, which is why we emphasize shared memory here. And speaking of shared memory, Linux gives us yet another way to share memory:

mmap

mmap() is a very important system call, though you would not guess that from the description of mmap alone. Literally, mmap maps a file into a process's virtual address space, after which the file's contents can be manipulated through memory operations. In practice, though, this call is used far more widely. When malloc allocates memory, small allocations are handled by the kernel through sbrk, while large allocations use mmap. When the exec family of system calls runs a program, since this essentially means loading an executable file into memory for execution, the kernel naturally handles it with mmap as well. Here we consider only one case: when mmap is used to request shared memory, does it occupy the cache the same way shmget() does?

Again, we need a simple test program:

# cat mmap.c
#include <stdlib.h>
#include <stdio.h>
#include <strings.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

#define MEMSIZE (1024*1024*1023*2)
#define MPFILE "./mmapfile"

int main()
{
    void *ptr;
    int fd;

    fd = open(MPFILE, O_RDWR);
    if (fd < 0) {
        perror("open()");
        exit(1);
    }

    /* Note: MAP_ANON makes the kernel ignore fd and back this
       MAP_SHARED mapping with anonymous (tmpfs) memory. */
    ptr = mmap(NULL, MEMSIZE, PROT_READ|PROT_WRITE,
               MAP_SHARED|MAP_ANON, fd, 0);
    if (ptr == MAP_FAILED) {   /* mmap signals errors with MAP_FAILED, not NULL */
        perror("mmap()");
        exit(1);
    }

    printf("%p\n", ptr);
    bzero(ptr, MEMSIZE);
    sleep(100);
    munmap(ptr, MEMSIZE);
    close(fd);
    exit(1);
}

This time we dispense with the parent and child: a single process requests a 2G mmap shared mapping, initializes the space, and then sleeps for 100 seconds before unmapping it. During those 100 seconds we can check the system's memory usage and see which space the mapping consumed. Of course, before running it, create the 2G file ./mmapfile. The results are as follows:

# dd if=/dev/zero of=mmapfile bs=1G count=2
# echo 3 > /proc/sys/vm/drop_caches
# free -g
             total       used       free     shared    buffers     cached
Mem:           126         30         95          0          0         16
-/+ buffers/cache:         14        111
Swap:            2          0          2

Then execute the test program:

# ./mmap &
[1] 19157
0x7f1ae3635000
# free -g
             total       used       free     shared    buffers     cached
Mem:           126         32         93          0          0         18
-/+ buffers/cache:         14        111
Swap:            2          0          2
# echo 3 > /proc/sys/vm/drop_caches
# free -g
             total       used       free     shared    buffers     cached
Mem:           126         32         93          0          0         18
-/+ buffers/cache:         14        111
Swap:            2          0          2

We can see that while the program is running, cached stays at 18G, 2G higher than before, and at this point the cache still cannot be reclaimed. We then wait 100 seconds for the program to finish.

[1]+  Exit 1                  ./mmap
# free -g
             total       used       free     shared    buffers     cached
Mem:           126         30         95          0          0         16
-/+ buffers/cache:         14        111
Swap:            2          0          2

After the program exits, the space occupied by cached is freed. So we can see that memory requested with mmap and the MAP_SHARED flag is also kept by the kernel in the cache, and cannot be released until the process unmaps it or exits. In fact, memory requested with mmap MAP_SHARED is likewise implemented via tmpfs inside the kernel. From this we can further infer that the file-backed mappings of shared libraries also live in the page cache, and those pages cannot be dropped while the libraries remain mapped.

Finally

Through three test cases we have found that the cache in Linux system memory is not released as free space in every situation, and it should also be clear that even when the cache can be released, it is not cost-free for the system. To sum up, we should remember these points:

    1. When cache that buffers file data is released, IO rises; that is the cost the cache pays back for having sped up file access.
    2. Files stored in tmpfs occupy cache space, and that cache will not be freed automatically unless the files are deleted.
    3. Shared memory allocated with shmget occupies cache space, and unless the segment is removed with ipcrm or shmctl(IPC_RMID), the associated cache space will not be freed automatically.
    4. Memory mapped with mmap and MAP_SHARED occupies cache space, and unless the process munmaps the mapping (or exits), the associated cache space will not be freed automatically.
    5. In fact, both shmget shared memory and mmap MAP_SHARED memory are implemented in the kernel on top of tmpfs, and tmpfs's storage is the cache.

Once we understand all this, I hope everyone's understanding of the free command can reach the third level we described. We should see that memory usage is not a simple matter, and that the cache cannot always be treated as free space. To judge properly whether the memory on a system is sufficient, much more detailed knowledge is needed, along with more careful judgments about how the relevant workloads are implemented. Our experiments here ran in a CentOS 6 environment; free on other Linux versions may present things differently, and you can investigate the reasons yourself.

Of course, these are not all the cases in which cache cannot be released. So, in your own application scenarios, are there caches that cannot be released?
