Manual release of Linux memory -- /proc/sys/vm/drop_caches



Many Linux users have lingering doubts about how Linux manages memory, and my earlier post did not seem to clear them all up. Newer kernels provide a mechanism that addresses the question directly, so I am reposting it here for reference. At the end I add my own comments on the method, and I welcome discussion.
When files are accessed frequently under Linux, physical memory is quickly used up, and when the programs finish, the memory is not returned to the "free" pool but stays in use as cache. Many people ask about this behavior, but few good explanations get offered, so let me walk through it.

First, the usual starting point is the free command:

# free -m
             total       used       free     shared    buffers     cached
Mem:           249        163         86          0         10         94
-/+ buffers/cache:         58        191
Swap:          511          0        511

Here:
total    total physical memory
used     memory currently in use
free     memory not in use
shared   memory shared by multiple processes
buffers  the buffer cache
cached   the page cache

On the -/+ buffers/cache line:
used = used - buffers - cached
free = free + buffers + cached

So the memory actually available is: available memory = free + buffers + cached.
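As a quick check, that formula can be computed straight from the free output. This is just a sketch, assuming the older two-line free -m format shown above, where free, buffers and cached are columns 4, 6 and 7 of the Mem: line:

# Sum free + buffers + cached from the Mem: line of free -m
free -m | awk '/^Mem:/ {print $4 + $6 + $7, "MB effectively available"}'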
With this foundation, we can read off that used is 163 MB, free is 86 MB, and buffers and cached are 10 MB and 94 MB respectively. Now let's see what happens to memory if I copy some files:

# cp -r /etc ~/test/
# free -m
             total       used       free     shared    buffers     cached
Mem:           249        244          4          0          8        174
-/+ buffers/cache:         62        187
Swap:          511          0        511

After my command finished, used is 244 MB, free is 4 MB, buffers is 8 MB, and cached is 174 MB. Good grief, cached ate it all. Relax, this is how Linux improves the efficiency of file reads.
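If you want to see that efficiency gain for yourself, one simple check is to time the same read twice; the second pass is served from the page cache rather than the disk. The file name here is only an example:

# First read comes from disk, second read from the page cache
time cat /var/log/messages > /dev/null
time cat /var/log/messages > /dev/null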
To improve disk access efficiency, Linux applies some careful design. Besides the dentry cache (which, for the VFS, speeds up the translation of file path names into inodes), it uses two main caching schemes: the buffer cache and the page cache. The former caches reads and writes of disk blocks; the latter caches reads and writes of file data by inode. These caches effectively shorten the time spent in I/O system calls such as read, write and getdents.
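These caches can be observed from user space. A minimal sketch, assuming a standard /proc filesystem (reading /proc/slabinfo usually requires root):

# Buffer cache and page cache sizes, in kB
grep -E '^(Buffers|Cached)' /proc/meminfo

# Dentry and inode caches live in the kernel slab allocator
grep -E 'dentry|inode_cache' /proc/slabinfo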
Then someone will say: wait a while and Linux will release the memory on its own. So after waiting a while, let's run free again and see whether anything was released:

# free -m
             total       used       free     shared    buffers     cached
Mem:           249        244          5          0          8        174
-/+ buffers/cache:         61        188
Swap:          511          0        511

Nothing seems to have changed. (In fact, memory management is also related to swap.)
So can this memory be released manually? The answer is yes!
Second, manually releasing the cache. /proc is a virtual filesystem; reading and writing it is a way of communicating with the kernel. That means we can adjust the current kernel behavior by modifying files under /proc, and in particular we can release the caches through /proc/sys/vm/drop_caches. The procedure is as follows:

# cat /proc/sys/vm/drop_caches
0

First check the value of /proc/sys/vm/drop_caches; the default is 0.

# sync

Run the sync command manually. (Description: the sync command runs the sync subroutine. If you must stop the system, run sync first to ensure filesystem integrity. Sync writes all unwritten system buffers to disk, including modified inodes, deferred block I/O, and read-write mapped files.)

# echo 3 > /proc/sys/vm/drop_caches
# cat /proc/sys/vm/drop_caches
3

Set the value of /proc/sys/vm/drop_caches to 3.

# free -m
             total       used       free     shared    buffers     cached
Mem:           249         66        182          0          0         11
-/+ buffers/cache:         55        194
Swap:          511          0        511

Then run free again: used is now 66 MB, free is 182 MB, buffers is 0 MB, and cached is 11 MB. The buffer and page caches have been released effectively.
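For repeated use, the steps above can be wrapped in a small script. This is only a sketch of the same procedure (it assumes root privileges and a kernel of 2.6.16 or later; the script name and the before/after printout are my own additions):

#!/bin/sh
# drop-caches.sh - flush dirty data, then ask the kernel to drop clean caches

echo "=== before ==="
free -m

# Write dirty pages out first; drop_caches only frees clean objects
sync

# 3 = drop the page cache plus dentries and inodes (the meanings of 1/2/3 are listed below)
echo 3 > /proc/sys/vm/drop_caches

echo "=== after ==="
free -m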
The usage of /proc/sys/vm/drop_caches is described as follows (from the proc man page):

/proc/sys/vm/drop_caches (since Linux 2.6.16)
Writing to this file causes the kernel to drop clean caches, dentries and inodes from memory, causing that memory to become free.

To free the page cache, use echo 1 > /proc/sys/vm/drop_caches; to free dentries and inodes, use echo 2 > /proc/sys/vm/drop_caches; to free the page cache, dentries and inodes, use echo 3 > /proc/sys/vm/drop_caches.
Because this is a non-destructive operation and dirty objects are not freeable, the user should run sync first.

Third, my opinion. The article above gives a fairly "intuitive" answer to the memory-management question that has puzzled many Linux users for a long time, and it feels a bit like a compromise by the kernel developers. I have reservations about using this value, or even mentioning it to users:

1. As the man page shows, this knob only exists in kernels from 2.6.16 onward, so it is not available on older systems such as Red Flag DC 5.0 or RHEL 4.x and earlier.

2. If I want to know whether system memory is sufficient, I would rather watch swap usage and the si/so values (a vmstat example follows below).

What users usually ask is: why is free so small, and why is memory not released after an application is closed? In fact, as we all know, this is because Linux manages memory differently from Windows. A small free does not mean memory has run out; you should look at the last value on the second line:

-/+ buffers/cache:         58        191

That is the amount of memory actually available to the system. Real projects tell us that if an application has a memory leak or overflow, it can be judged relatively quickly from swap usage, whereas it is hard to see from free. On the other hand, if at this point we tell users that modifying a system value "can" free memory, what will they think? Won't they feel the operating system is deliberately making trouble? So, although the kernel provides a quick way to empty the buffer and page caches, and it is not hard to do (as shown above), the kernel does not do it by default (the default value is 0), and we should not casually change it either.

In general, when applications run stably on a system, the free value settles at a steady level, even though it may look small. When memory truly runs short, applications cannot obtain memory, OOM errors appear, and so on, the right response is to analyze the applications, for example too many users causing a memory shortage, or an application leaking memory. Otherwise, emptying the caches to force up the free number may only paper over the problem temporarily.
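As a concrete way to watch the si/so figures mentioned in point 2, vmstat can report them at a fixed interval. A minimal sketch (the interval and count are arbitrary):

# Report memory and swap activity once per second, five times.
# The si/so columns show KB swapped in from / out to disk per second;
# sustained non-zero values point to real memory pressure, unlike a merely small "free".
vmstat 1 5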
In my opinion, aside from out-of-memory situations, clearing the caches is only worth considering in two cases: during software development, when you temporarily need to empty the caches to judge an application's real memory usage; or when an application is no longer supported, its memory problems are real and cannot be fixed, and you decide to empty the caches on a schedule. (Unfortunately, such applications usually run on older versions of the operating system, where the operation above is not even available.) o(∩_∩)o haha~

Transferred from

My own test.

[root@testserver ~]# uname -a
Linux testserver 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[root@testserver ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          2013       1661        352          0        223       1206
-/+ buffers/cache:        231       1782
Swap:         2047          0       2047
[root@testserver ~]# sync
[root@testserver ~]# sync
[root@testserver ~]# cat /proc/sys/vm/drop_caches
0
[root@testserver ~]# echo 3 > /proc/sys/vm/drop_caches
[root@testserver ~]# cat /proc/sys/vm/drop_caches
3
[root@testserver ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          2013        100       1913          0          0
-/+ buffers/cache:         85       1927
Swap:         2047          0       2047

Test succeeded.
