Manually Freeing Linux memory


Under Linux, when files are accessed frequently, physical memory is quickly used up, and when the programs finish, the memory is not released in the usual sense but stays occupied as cache. A lot of people seem to ask about this problem, but I have not seen a good explanation of it, so let me talk about it here.
First, the usual situation
Let's start with the free command:
Reference
[root@host ~]# free -m
             total       used       free     shared    buffers     cached
Mem:           249        163         86          0         10         94
-/+ buffers/cache:         58        191
Swap:          511          0        511
where:
Reference
total            total amount of physical memory
used             amount of memory already in use
free             amount of free memory
shared           total memory shared by multiple processes
buffers/cached   sizes of the Buffer Cache and the Page Cache (disk caches)
-buffers/cache   the memory figure used - buffers - cached
+buffers/cache   the memory figure free + buffers + cached
Available memory = free + buffers + cached.
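As a quick sanity check of these formulas, here is a minimal one-line sketch (assuming the classic free -m layout shown above, with separate buffers and cached columns; newer versions of free print a different layout) that recomputes the -/+ buffers/cache figures from the Mem: line:

[root@host ~]# free -m | awk '/^Mem:/ {printf "-/+ buffers/cache: %d used, %d free\n", $3-$6-$7, $4+$6+$7}'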
With this in mind, we can see that used is currently 163 MB, free is 86 MB, and buffers and cached are 10 MB and 94 MB respectively.
Now let's see what happens to memory if I copy some files.
Reference
[root@host ~]# cp -r /etc ~/test/
[root@host ~]# free -m
             total       used       free     shared    buffers     cached
Mem:           249        244          4          0          8        174
-/+ buffers/cache:         62        187
Swap:          511          0        511
By the time my command finished, used had reached 244 MB, free was down to 4 MB, buffers was 8 MB and cached was 174 MB. Good grief, it was all eaten by the cache. Relax: this is how Linux improves the efficiency of file reads.
To improve disk access efficiency, Linux has made some careful design choices. Besides the dentry cache (used by the VFS to speed up the translation of file path names to inodes), it uses two main caching schemes: the Buffer Cache and the Page Cache. The former caches reads and writes of disk blocks; the latter caches reads and writes of file data by inode. These caches effectively shorten the time spent in I/O system calls (such as read, write and getdents).
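If you want to see where these caches show up on a running system, a minimal sketch (assuming a kernel that exposes the usual /proc entries) is:

[root@host ~]# grep -E '^(Buffers|Cached)' /proc/meminfo
[root@host ~]# cat /proc/sys/fs/dentry-state

Buffers and Cached correspond to the buffers and cached columns of free, and /proc/sys/fs/dentry-state reports dentry cache statistics (total and unused dentries, among others).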
Some people say that Linux automatically releases the memory it has been using after a while. So let's wait a bit and run free again to see whether anything has been released.
Reference
[root@host test]# free -m
             total       used       free     shared    buffers     cached
Mem:           249        244          5          0          8        174
-/+ buffers/cache:         61        188
Swap:          511          0        511
There doesn't seem to be any change. (In fact, memory management is also related to swap.)
So can I manually release the memory? The answer is YES!
Second, manually release the cache
/proc is a virtual file system that we can use as a means of communicating with the kernel: reading and writing files under /proc adjusts the kernel's current behavior. We can therefore release the cache by writing to /proc/sys/vm/drop_caches. The operation is as follows:
Reference
[root@host test]# cat /proc/sys/vm/drop_caches
0
First, look at the value of /proc/sys/vm/drop_caches; it defaults to 0.
Reference
[root@host test]# sync
Run the sync command manually. (Note: sync flushes all unwritten system buffers to disk, including modified inodes, delayed block I/O and read-write mapped files. If you must halt the system, run sync first to ensure the integrity of the file system.)
Reference
[root@host test]# echo 3 > /proc/sys/vm/drop_caches
[root@host test]# cat /proc/sys/vm/drop_caches
3
Then set the value of /proc/sys/vm/drop_caches to 3.
Reference
[root@host test]# free -m
             total       used       free     shared    buffers     cached
Mem:           249         66        182          0          0         11
-/+ buffers/cache:         55        194
Swap:          511          0        511
Running free again, we find that used is now 66 MB, free is 182 MB, buffers is 0 MB and cached is 11 MB. The buffers and cache have been released effectively.
The usage of /proc/sys/vm/drop_caches is explained below.
Reference
/proc/sys/vm/drop_caches (since Linux 2.6.16)
Writing to this file causes the kernel to drop clean caches,
dentries and inodes from memory, causing that memory to
become free.
To free pagecache, use echo 1 > /proc/sys/vm/drop_caches;
to free dentries and inodes, use echo 2 > /proc/sys/vm/drop_caches;
to free pagecache, dentries and inodes, use
echo 3 > /proc/sys/vm/drop_caches.
Because this is a non-destructive operation and dirty objects
are not freeable, the user should run sync first.
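Putting the steps together, here is a minimal sketch of a cache-dropping helper based on the procedure above (run as root; this is only an illustration, not an official tool):

#!/bin/sh
# Drop clean page cache, dentries and inodes, as described in proc(5).
# Values for drop_caches: 1 = pagecache, 2 = dentries and inodes, 3 = both.
sync                                  # write dirty buffers to disk first
echo 3 > /proc/sys/vm/drop_caches     # drop the clean caches
free -m                               # show the result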
Third, my opinion
The article above gives a fairly "intuitive" answer to a question that many long-time Linux users have about memory management; to me it feels a bit like a compromise by the kernel development team.
I have reservations about whether this value needs to be used at all, or even mentioned to users:
Reference
1. As the man page shows, this value is only provided by kernels from 2.6.16 onward; in other words, older operating systems such as Red Flag DC 5.0 and RHEL 4.x and earlier do not have it;
2. If I want to check whether the system has enough memory, I would rather look at the swap usage and at the si/so values (see the sketch below);
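For point 2, a minimal sketch of how one might watch swap activity (si and so are the swap-in and swap-out columns reported by vmstat; the 5-second interval and the count of 5 samples are arbitrary choices):

[root@host ~]# vmstat 5 5
[root@host ~]# free -m

Roughly speaking, if si and so stay near 0 and the Swap: line of free does not keep growing, the system is not under real memory pressure, no matter how small free looks.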
The question users most often ask is: why is free so small, and why is the memory not released after the application is closed?
But in fact, as we all know, this is because Linux manages memory differently from Windows. A small free value does not mean that memory is running out; you should look at the last value on the second line of free's output:
Reference
-/+ buffers/cache:         58        191
This is the amount of memory available to the system.
Practical experience tells us that if an application has a memory leak or overflow problem, you can usually spot it fairly quickly from the swap usage, whereas it is hard to tell from the free output alone.
On the contrary, if at this point we tell users that modifying a system value "can" free memory, the side effects are considerable. What will users think? Won't they feel there is something wrong with the operating system?
So, I think that since the kernel provides a quick way to empty the buffers and cache, and it is not hard to use (as you can see above), yet the kernel does not do it by itself (the default value is 0), we should not change it casually either.
In general, when applications run stably on a system, the free value also stays at a stable level, even though it may look small.
When memory really is insufficient and applications cannot obtain the memory they need, with OOM errors and the like, you should first analyse the application side: for example, too many users causing a memory shortage, or a memory overflow in the application. Otherwise, emptying the buffers to force a larger free value may only paper over the problem temporarily.
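When you do suspect a genuine shortage, a quick check for OOM killer activity might look like the following (log file locations vary by distribution, so treat the second path as an example):

[root@host ~]# dmesg | grep -i -E 'out of memory|oom'
[root@host ~]# grep -i oom /var/log/messages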
In my opinion, leaving out-of-memory situations aside, clearing the cache makes sense only during the software development phase, when temporarily dropping the buffers helps you determine an application's real memory usage, or when an application is no longer supported, its memory behaviour really is a problem, and it cannot be fixed; only then would I consider emptying the buffers on a schedule. (Unfortunately, such applications usually run on older versions of the operating system, where the operation above is not even available.) O(∩_∩)O haha~

Zhichyu
2009/04/24 17:24
My Linux machine becomes very slow once the cache fills up the RAM, and the system does not release the cache automatically. So the saying that "the cache only improves performance and does no harm" is false!
linuxing replied on 2009/04/27 11:30
I think this depends on your actual use case. The benefit of the cache is that it reduces frequent reads and writes to the hard drive, that is, it reduces I/O, which is especially valuable for applications on servers. Conversely, if the cache needs to be updated frequently, the problem you describe will appear.
Visitor
2009/02/25 11:28
There is absolutely no need to do this. Even if memory is completely filled with cache, try running a large program: you will never hit an "out of memory" situation, because however much memory is needed will be freed from the cache.
The benefit of the cache is that it greatly reduces hard drive access. With my old hard disk, under Windows the drive rattles like a tractor every time I open a program, but under Linux it stays very quiet.
