Can the cache in Linux memory really be recycled?

Source: Internet
Author: User

Do you really know the free command of Linux?

In Linux systems, we often use the free command to look at the state of the system's memory usage. On a RHEL 6 system, the output of the free command looks something like this:
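On such a machine the output looks roughly like the following (a sketch with invented numbers for a 128G host; your values will differ):

```shell
free
#              total       used       free     shared    buffers     cached
# Mem:     132036424   74952924   57083500          0    1412108   56245260
# -/+ buffers/cache:  17295556  114740868
# Swap:      4194296          0    4194296
```

The -/+ buffers/cache line subtracts buffers and cached from used and adds them to free, which is where the "plenty of memory left" reading comes from.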

The default display unit here is KB. My server has 128G of memory, so the numbers look relatively large. Nearly everyone who has ever used Linux has run this command, but the more common a command like this is, the fewer people (I mean the smaller the proportion) seem to really understand it.

In general, understanding of this command's output can be divided into several levels:

  1. Doesn't know. The first reaction of such a person is: my god, so much memory is used, more than 70G, yet I'm hardly running any big programs! Why? Linux really eats memory!

  2. Thinks they know it well. Such a person sizes it up and says: well, by my professional eye, only about 17G of memory is used, and there is plenty of memory still available. Buffers/cache takes up quite a lot, which means some processes in the system have been reading and writing files, but that doesn't matter; that memory is only borrowed while it is idle.

  3. Really knows. The reaction of this kind of person makes them seem to understand Linux the least: "That's what free shows. OK, got it." What? You're asking me whether this memory is enough? Of course I don't know! How would I know how your programs are written?

Judging from the technical documentation currently available on the Web, I believe the vast majority of people who know a bit about Linux are at the second level. It is generally believed that the memory occupied by buffers and cached can be released as free space when memory pressure is high.

But is that really the case?

Before we demonstrate anything on this topic, let's briefly introduce what buffers and cached mean:

What is Buffer/cache?

Buffer and cache are two terms used all over computer technology, and they can mean different things in different contexts.

In Linux memory management, buffer refers to the Linux buffer cache, and cache refers to the Linux page cache.

Historically, the buffer cache was used as a write cache for IO devices, while the page cache was used as a read cache, where IO devices mainly means block device files and regular files on filesystems.

But today their meanings have changed.

In the current kernel, the page cache is, as the name implies, a cache of memory pages: any memory that is managed in units of pages can use the page cache as its cache.

Of course, not all memory is managed in pages; some is managed in blocks, and when that memory needs caching, the buffer cache is used.

(From this point of view, wouldn't buffer cache be better renamed block cache?) Blocks are not all of a fixed length, though: the block size on a system is determined mainly by the block device in use, while pages are 4KB on x86, whether 32-bit or 64-bit.

By understanding the differences between these two sets of caching systems, you can understand exactly what they can be used to do.

What is page cache?

The page cache is used mainly to cache file data on filesystems, especially when a process performs read/write operations on files.

If you think about it, mmap, the system call that maps files into process memory, should naturally use the page cache too, shouldn't it?

In current implementations, the page cache is also used to cache other file types, including block device files, so in fact the page cache handles most of the caching work for block device files.

What is buffer cache?

The buffer cache is designed to be used by systems that cache data in units of blocks when reading and writing block devices.

This means that certain operations on blocks are cached with the buffer cache, for example when we format a filesystem.

In general, the two cache systems work together. For example, when we write to a file, the contents of the page cache are changed, and the buffer cache can be used to mark the page as consisting of several buffers and to record which buffers were modified.

This way, when the kernel later writes back (writeback) the dirty data, it does not have to write the whole page back; it only writes back the modified parts.

How do I recycle the cache?

The Linux kernel triggers memory reclaim when memory is about to run out, freeing memory for processes that urgently need it.

In general, the main memory released by this operation comes from buffers/cache, and especially from the cache, which usually occupies the most space. Since the cache exists only to speed up file reads and writes while memory is plentiful, under heavy memory pressure it naturally has to be emptied and handed back as free space to the processes that need it.

So in general we think of buffers/cache space as releasable, and this understanding is correct.

But clearing the cache is not without cost. If you understand what the cache is for, you know that the cached data must be made consistent with the data in the corresponding files before the cache can be released.

So cache cleanup usually comes with high system IO, because the kernel must compare the data in the cache against the data in the corresponding files on disk, and anything inconsistent must be written back before the memory can be reclaimed.

Besides waiting for the system to run out of memory, we can also trigger cache cleanup manually through the file /proc/sys/vm/drop_caches.

This file can be set to the values 1, 2 or 3, which mean the following:

echo 1 >/proc/sys/vm/drop_caches: frees the page cache.

echo 2 >/proc/sys/vm/drop_caches: frees reclaimable objects in the slab allocator (including the dentry cache and the inode cache). The slab allocator is the kernel's mechanism for managing small memory objects, and many kernel caches, such as dentries and inodes, are implemented with it.

echo 3 >/proc/sys/vm/drop_caches: frees both the page cache and the slab objects.
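To actually run this, a minimal sketch looks like the following (running sync first writes dirty pages back, so more of the cache is clean and can really be dropped; writing to drop_caches requires root):

```shell
# flush dirty pages first, so more of the cache is clean and droppable
sync
# drop page cache plus slab objects (dentries, inodes); writing needs root,
# so skip quietly when we do not have permission
[ -w /proc/sys/vm/drop_caches ] && echo 3 > /proc/sys/vm/drop_caches
# check the result
free -k
```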

Can the cache be recycled?

We have seen that the cache can be reclaimed, but is there cache that cannot be reclaimed? Of course there is. Let's look at the first case:

Tmpfs

As we know, Linux provides a "temporary" filesystem called tmpfs, which turns part of the memory space into a filesystem so that memory can be used as directories and files.

Most Linux systems today have a tmpfs mounted at /dev/shm, which is exactly such a thing. Of course, we can also create a tmpfs of our own by hand, like this:
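A sketch of the commands (the mount point /tmp/tmpfs follows the text; mounting requires root, so the sketch degrades gracefully without it):

```shell
mkdir -p /tmp/tmpfs
# cap the filesystem at 20G of memory; mounting needs root privileges
[ "$(id -u)" -eq 0 ] && mount -t tmpfs -o size=20G none /tmp/tmpfs || true
# the 13G test file used in the experiment below would then be created like:
#   dd if=/dev/zero of=/tmp/tmpfs/testfile bs=1M count=13312
df -h /tmp/tmpfs
```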

This creates a new tmpfs of 20G, and we can create up to 20G of files under /tmp/tmpfs.

If the files we create actually occupy memory, which part of memory space should the data occupy?

Given what the page cache does, this is a filesystem, so it should naturally be managed through the page cache. Let's verify that:

We created a 13G file in the tmpfs directory, and by comparing free output before and after, we find that cached grew by 13G, showing that the file really does live in memory and that the kernel uses the cache as its storage.

Then look at the indicator we care about: the -/+ buffers/cache line.

We find that in this situation free still tells us that 110G of memory is available. But is there really that much? We can trigger memory reclaim manually and see how much memory can actually be reclaimed now:

As you can see, the space occupied by cached was not completely released as we imagined: 13G of it is still occupied by the files in /tmp/tmpfs. Of course, my system also has other caches that cannot be freed, occupying the remaining 16G of cache space.

So when will the cache space occupied by tmpfs be released? When the files are deleted. If you do not delete them, then no matter how close memory comes to running out, the kernel will never delete files in tmpfs on your behalf to free that cache space.

This is the first type of cache that we have analyzed that cannot be recycled. There are other situations, such as:

Shared memory

Shared memory is a common inter-process communication (IPC) mechanism that the system provides us. But this kind of communication cannot be requested and used directly from the shell, so we need a simple test program, with code as follows:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <strings.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>

#define MEMSIZE 2048*1024*1023

int main()
{
    int shmid;
    char *ptr;
    pid_t pid;
    struct shmid_ds buf;
    int ret;

    /* request a shared memory segment of a bit under 2G */
    shmid = shmget(IPC_PRIVATE, MEMSIZE, 0600);
    if (shmid < 0) {
        perror("shmget()");
        exit(1);
    }

    ret = shmctl(shmid, IPC_STAT, &buf);
    if (ret < 0) {
        perror("shmctl()");
        exit(1);
    }
    printf("shmid: %d\n", shmid);
    printf("shmsize: %lu\n", (unsigned long)buf.shm_segsz);

    buf.shm_segsz *= 2;
    ret = shmctl(shmid, IPC_SET, &buf);
    if (ret < 0) {
        perror("shmctl()");
        exit(1);
    }

    ret = shmctl(shmid, IPC_STAT, &buf);
    if (ret < 0) {
        perror("shmctl()");
        exit(1);
    }
    printf("shmid: %d\n", shmid);
    printf("shmsize: %lu\n", (unsigned long)buf.shm_segsz);

    pid = fork();
    if (pid < 0) {
        perror("fork()");
        exit(1);
    }
    if (pid == 0) {
        /* child: attach and initialize the whole segment */
        ptr = shmat(shmid, NULL, 0);
        if (ptr == (void *)-1) {
            perror("shmat()");
            exit(1);
        }
        bzero(ptr, MEMSIZE);
        strcpy(ptr, "Hello!");
        exit(0);
    } else {
        /* parent: wait for the child, then print the shared contents */
        wait(NULL);
        ptr = shmat(shmid, NULL, 0);
        if (ptr == (void *)-1) {
            perror("shmat()");
            exit(1);
        }
        puts(ptr);
        exit(0);
    }
}

The program is very simple: it requests a shared memory segment a bit under 2G, then forks a child process to initialize the shared memory; the parent waits until the child has finished initializing, outputs the contents of the shared memory, and exits. However, the shared memory is not deleted before exit.

Let's take a look at the memory usage before and after the program executes:

The cached space rose from 16G to 18G. So can this cache be reclaimed? Let's continue testing:

The result: still not reclaimable. As you can see, this shared memory sits in the cache long-term even when nobody is using it, until it is deleted. There are two ways to delete it:

    1. Call shmctl() with IPC_RMID in a program

    2. Use the ipcrm command

Let's try to delete it:

After the shared memory is deleted, the cache is released normally. This behavior is similar to the logic of tmpfs.

The kernel in fact uses tmpfs when implementing the in-memory storage for the XSI IPC mechanisms: shared memory (shm), message queues (msg) and semaphore arrays (sem). That is why the behavior of shared memory is so similar to tmpfs.

Of course, shm generally occupies the most memory, which is why we emphasize shared memory here. Speaking of shared memory, Linux also gives us another way of sharing memory:

Mmap

mmap() is a very important system call, though you cannot tell that from the description of mmap alone. Literally, mmap maps a file into a process's virtual memory address space, after which the file's contents can be manipulated by manipulating memory. But the uses of this call are actually very broad.

When malloc requests memory, the kernel uses brk for small allocations and mmap for large ones. When a system call in the exec family runs an executable, since this essentially loads an executable file into memory for execution, the kernel naturally handles it with mmap as well.

Here we consider only one case: when mmap is used to request shared memory, does it occupy the cache in the same way as shmget()?

Again, we need a simple test program:

#include <stdio.h>
#include <stdlib.h>
#include <strings.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

#define MEMSIZE 1024*1024*1023*2
#define MPFILE "./mmapfile"

int main()
{
    void *ptr;
    int fd;

    fd = open(MPFILE, O_RDWR);
    if (fd < 0) {
        perror("open()");
        exit(1);
    }

    /* note: with MAP_ANONYMOUS the fd is ignored, so this is anonymous
       shared memory rather than a mapping of the file's contents */
    ptr = mmap(NULL, MEMSIZE, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_ANONYMOUS, fd, 0);
    if (ptr == MAP_FAILED) {
        perror("mmap()");
        exit(1);
    }
    printf("%p\n", ptr);

    bzero(ptr, MEMSIZE);
    sleep(100);

    munmap(ptr, MEMSIZE);
    close(fd);
    exit(0);
}

This time, no parent and child: a single process requests 2G of shared memory with mmap, initializes the space, waits 100 seconds, and then releases the mapping. So during those 100 seconds of sleep we can check the system's memory usage and see which space it used.

Of course, before doing that, create the 2G file ./mmapfile. The result looks like this:
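One way to prepare that file (an assumed method; any way of producing a 2G ./mmapfile works, and truncate makes it sparse, so nothing near 2G is actually written to disk):

```shell
# create a sparse 2G backing file for the mmap test program
truncate -s 2G ./mmapfile
ls -lsh ./mmapfile
```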

Then execute the test program:


We can see that while the program is running, cached stays at 18G, which is 2G higher than before, and this cache still cannot be reclaimed. Then we wait 100 seconds for the program to finish.

After the program exits, the space occupied by the cached is freed.

So we can see that memory requested with mmap and the MAP_SHARED flag is also stored by the kernel in the cache, and this cache cannot be released until the process releases the associated memory.

In fact, MAP_SHARED memory requested with mmap is also implemented with tmpfs in the kernel. From this we can also infer that, since the read-only parts of shared libraries are managed in memory in this mmap MAP_SHARED way, they too occupy cache space and cannot be released.

Finally

Through three test examples, we have found that the cache in Linux system memory is not released as free space in all cases, and it is also clear that even when the cache can be released, this is not without cost to the system. To sum up the main points, we should remember the following:

  1. When the cache is released as a file cache, IO rises; this is the price the cache pays for having sped up file access.

  2. Files stored in tmpfs occupy cache space, and this cache is not freed automatically unless the files are deleted.

  3. Shared memory requested with shmget occupies cache space, and unless the shared memory is deleted with ipcrm or with shmctl IPC_RMID, the associated cache space is not freed automatically.

  4. Memory requested with mmap and the MAP_SHARED flag occupies cache space, and unless the process munmaps this memory, the associated cache space is not freed automatically.

  5. In fact, both shmget shared memory and mmap shared memory are implemented through tmpfs in the kernel layer, and the storage behind tmpfs is the cache.

Once we understand this, I hope our understanding of the free command can reach the third level we described.

We should understand that memory usage is not a simple concept: the cache cannot really all be treated as free space.

If we want to truly understand whether the memory on a system is being used appropriately, we need much more detailed knowledge and more careful judgment about how the relevant workloads are implemented.

Our experimental scenario here was a CentOS 6 environment; the free output of other Linux versions may look different, and you can work out the reasons for the differences yourself.

Of course, this article has not covered every case of cache that cannot be released. So, in your own application scenarios, have you seen other caches that cannot be released?
