Manually releasing Linux memory: /proc/sys/vm/drop_caches

Many people have doubts about Linux memory management. Newer kernels provide a new knob that addresses one of the most common questions, so it is worth documenting here. My own comments on this method are attached at the end; discussion is welcome.

When files are accessed frequently under Linux, physical memory is quickly used up, and when the programs exit the memory is not "released" in the way people expect but stays in use as cache. Many people ask about this, yet good explanations are rare, so let me go through it.

I. General situation
Let's talk about the free command:
Reference
[root@server ~]# free -m
             total       used       free     shared    buffers     cached
Mem:           249        163         86          0         10         94
-/+ buffers/cache:         58        191
Swap:          511          0        511

Where:
Reference
total    total physical memory
used     memory currently in use
free     idle memory
shared   memory shared by multiple processes
buffers  size of the buffer cache (disk block cache)
cached   size of the page cache (file cache)
-buffers/cache (used column of the second row): used - buffers - cached
+buffers/cache (free column of the second row): free + buffers + cached

Available memory = free memory + buffers + cached.

With this in mind, we can see that used is 163 MB, free is 86 MB, and buffers and cached are 10 MB and 94 MB respectively.
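As a quick illustration, the same arithmetic can be done from the shell (a minimal sketch: it reads /proc/meminfo directly; note that newer versions of free already report an "available" figure for you):
Reference
# free + buffers + cached, taken from /proc/meminfo (values are in kB)
awk '/^MemFree:|^Buffers:|^Cached:/ {sum += $2} END {printf "%.0f MB roughly available\n", sum/1024}' /proc/meminfo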
Let's take a look at the memory changes if I copy the file.
Reference
[root@server ~]# cp -r /etc ~/test/
[root@server ~]# free -m
             total       used       free     shared    buffers     cached
Mem:           249        244          4          0          8        174
-/+ buffers/cache:         62        187
Swap:          511          0        511

After I run the commands, used is 244 MB, free is 4 MB, buffers is 8 MB, and cached is 174 MB. Don't panic: this is Linux caching the data to make subsequent file reads more efficient.

To improve disk access efficiency, Linux uses several carefully designed caches. Besides the dentry cache (which speeds up the VFS translation of file path names into inodes), it relies on two main caches: the buffer cache and the page cache. The former caches reads and writes of raw disk blocks, the latter caches the contents of files (inodes). Together they significantly shorten the time spent in I/O system calls such as read, write, and getdents.
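A simple way to see the page cache at work (a rough sketch; /var/log/messages is just a convenient example file, and the exact timings will vary):
Reference
# The first read comes from disk; the second is served from the page cache and is much faster
time cat /var/log/messages > /dev/null
time cat /var/log/messages > /dev/null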

Some people say that Linux automatically releases this memory after a while. So let's wait a bit and run free again to see whether anything has been released.
Reference
[root@server test]# free -m
             total       used       free     shared    buffers     cached
Mem:           249        244          5          0          8        174
-/+ buffers/cache:         61        188
Swap:          511          0        511

There seems to be no change. (In practice, memory management is also related to swap)

Can I manually release the memory? The answer is yes!

II. Manually releasing the cache
/proc is a virtual file system that we can read and write as a way of communicating with the kernel. In other words, modifying files under /proc adjusts the behavior of the running kernel. So we can release the caches by writing to /proc/sys/vm/drop_caches. The procedure is as follows:
Reference
[root@server test]# cat /proc/sys/vm/drop_caches
0

First, the value of /proc/sys/vm/drop_caches is 0 by default.
Reference
[root@server test]# sync

Run the sync command manually. (Description: the sync command invokes the sync subroutine. If you have to stop the system, run sync first to ensure file system integrity. sync writes all unwritten system buffers to disk, including modified inodes, delayed block I/O, and read/write memory-mapped files.)
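You can watch sync at work through the Dirty counter in /proc/meminfo, which shows how much data is still waiting to be written back (a small sketch):
Reference
grep ^Dirty /proc/meminfo   # dirty data waiting to be written back
sync
grep ^Dirty /proc/meminfo   # should now be at or close to 0 kB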
Reference
[root@server test]# echo 3 > /proc/sys/vm/drop_caches
[root@server test]# cat /proc/sys/vm/drop_caches
3

Set /proc/sys/vm/drop_caches to 3.
Reference
[root@server test]# free -m
             total       used       free     shared    buffers     cached
Mem:           249         66        182          0          0         11
-/+ buffers/cache:         55        194
Swap:          511          0        511

Run the free command again. Now used is 66 MB, free is 182 MB, buffers is 0 MB, and cached is 11 MB. The buffer cache and page cache have been effectively released.
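Incidentally, the same write can also be made through sysctl, which is just another front end to /proc/sys (a minimal sketch; requires root):
Reference
sysctl -w vm.drop_caches=3   # equivalent to echo 3 > /proc/sys/vm/drop_caches
sysctl vm.drop_caches        # read the current value back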

◎ The usage of /proc/sys/vm/drop_caches is described in the proc man page as follows:
Reference
/proc/sys/vm/drop_caches (since Linux 2.6.16)
Writing to this file causes the kernel to drop clean caches,
dentries and inodes from memory, causing that memory to become
free.

To free pagecache, use echo 1 > /proc/sys/vm/drop_caches;
to free dentries and inodes, use echo 2 > /proc/sys/vm/drop_caches;
to free pagecache, dentries and inodes, use echo 3 > /proc/sys/vm/drop_caches.

Because this is a non-destructive operation and dirty objects
are not freeable, the user should run sync first.
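Putting the man page's advice together, a typical invocation might look like the following (a minimal sketch of a hypothetical drop-caches.sh, not a production script; it assumes you are root and deliberately drops all three cache types):
Reference
#!/bin/sh
# drop-caches.sh: flush dirty data first, then drop the page cache, dentries and inodes
[ "$(id -u)" -eq 0 ] || { echo "must be run as root" >&2; exit 1; }
sync                                # write out dirty objects, as the man page advises
echo 3 > /proc/sys/vm/drop_caches   # 1 = pagecache, 2 = dentries/inodes, 3 = both
free -m                             # show the result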

III. My opinions
The text above has long been offered as an "intuitive" answer to many users' questions about Linux memory management, and to me it feels more like a concession from the kernel developers.
I have reservations about whether this value should be used, or even mentioned to users, for the following reasons:
Reference
1. As the man page notes, this value is only provided in kernels 2.6.16 and later, so older operating systems such as Hongqi (Red Flag) DC and releases earlier than RHEL 5.0, e.g. RHEL 4.x, do not have it;
2. To judge whether a system has enough memory, I would rather look at swap usage and the si/so values (see the vmstat sketch below);
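For reference, a quick way to check swap activity (a minimal sketch; the si and so columns report swap-in and swap-out rates, and consistently non-zero values suggest real memory pressure):
Reference
vmstat 1 5   # sample once per second, five times; watch the si/so columns
free -m      # and check how much swap is actually in use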

The question users usually ask is: free shows so little free memory, so why isn't memory released after my application exits?
In fact, as we all know, this is because Linux manages memory differently from Windows. A small free value does not mean memory is short; we should look at the values in the second row of the free output:
Reference
-/+ buffers/cache:         58        191

This is the available memory size of the system.
Practical experience tells us that if an application has a memory leak or overflow, swap usage will show it quickly, whereas the free value is hard to draw conclusions from.
If, on the contrary, we tell users at this point that modifying one system value will indeed "release" memory and make free larger, what will they think? Won't they conclude that the operating system itself has a problem?
So my view is: since quickly clearing the buffers and cache is not difficult for the kernel (as the operations above show), yet the kernel does not do it by default (the default value is 0), we should not change it either.
Under normal conditions an application runs stably on the system and the free value also stays at a stable level, even though it may look small.
When memory really is insufficient, when an application cannot obtain the memory it needs, or when OOM errors occur, we should analyze the application itself: is the load too high for the memory available, or is the application leaking or overflowing memory? Simply clearing the caches to force free back up only papers over the problem temporarily.
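When analyzing such cases, one thing worth checking (a small sketch; the exact message wording and log location vary by kernel and distribution) is whether the kernel's OOM killer has been triggered:
Reference
dmesg | grep -i "out of memory"   # look for OOM-killer activity in the kernel log
grep -i oom /var/log/messages     # syslog-based systems usually record it here too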

In my opinion, apart from ruling out genuine memory shortage, there are only two cases where dropping the caches makes sense: during software development, when the caches are cleared temporarily to measure an application's real memory usage; or when an application is no longer supported, has a known memory problem that cannot be fixed, and the caches must be cleared regularly as a workaround. (Unfortunately, such applications usually run on older operating system versions, where the operation above is not even available.) O(∩_∩)O Haha~
Visitor
2009/02/25
There is no need to do this. Even if memory looks full and is mostly cache, you can still run a large program without hitting "out of memory": whatever memory the program needs is simply reclaimed from the cache.

The benefit of the cache is that it avoids a lot of hard disk access. When I use Windows, the hard disk rattles like a tractor every time I open a program, but under Linux the disk stays very quiet!
Linuxing replied on 2009/02/26:
O(∩_∩)O Haha~. Unfortunately, not everyone understands this aspect of Linux, which is why I said at the end that it "feels more like a concession from the kernel developers".
However, this post is not meaningless.
