On Linux you will often find that there is very little free memory, as if all of it were occupied by the system. On the surface it looks as though memory is running out, but in fact it is not. This is a deliberate feature of Linux memory management: no matter how much physical memory there is, Linux will make full use of it by reading data that programs request from disk into memory (buffer/cache), exploiting the high read/write speed of RAM to improve the data access performance of the system. In this respect it differs from memory management on Windows. Starting from the Linux memory management mechanism, this article briefly introduces how Linux uses memory, how to monitor memory, the differences between Linux and Windows memory management, and a major characteristic of Linux memory usage (the similarities and differences between buffer and cache).
I. Linux memory management mechanism
1. Physical memory and virtual memory
Reading and writing data in physical memory is much faster than reading and writing it on disk, so ideally all data would be read and written in memory. Memory, however, is limited, and this leads to the concepts of physical memory and virtual memory.
Physical memory is the amount of memory provided by the system hardware; it is real memory. In contrast, Linux also has the concept of virtual memory. Virtual memory is a strategy for coping with a shortage of physical memory: it uses disk space to provide a piece of virtual, logical memory, and the disk space used as virtual memory is called swap space.
As an extension of physical memory, Linux uses the virtual memory of the swap partition when physical memory is insufficient (for a quantitative analysis of this condition, see https://www.douban.com/note/349467816/). In other words, the kernel writes memory blocks that are not currently in use out to swap space, so the physical memory they occupied is freed and can be used for other purposes; when the original content is needed again, it is read back from swap space into physical memory.
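As a quick, illustrative check of how much swap space is configured and how much is currently in use (the output will of course differ from system to system), the standard tools can be used:

# list the active swap areas (device, size, used, priority)
swapon -s
# show memory and swap totals in megabytes
free -m
# the same information is available from the kernel directly
cat /proc/swaps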
Linux memory management uses a paging mechanism (see http://www.linuxeye.com/Linux/1931.html for details). To ensure that physical memory is fully utilized, the kernel automatically swaps blocks of data that are infrequently used out of physical memory into virtual memory at an appropriate time, while keeping frequently used information in physical memory.
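One simple way to watch this paging activity (shown here only as an illustration) is vmstat, whose si and so columns report how much data is swapped in from disk and swapped out to disk per second:

# print memory and swap statistics every second, five times
vmstat 1 5
# si: KB/s swapped in from disk; so: KB/s swapped out to disk
# consistently non-zero si/so values indicate real memory pressure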
To understand how Linux memory behaves at run time, you need to know the following aspects:
The Linux system performs page swapping according to its configuration in order to keep a certain amount of physical memory free. With some configurations, Linux will swap out memory pages that are temporarily unused even when nothing is currently demanding memory, which avoids having to wait for swapping at the moment memory is actually needed. The relevant setting is vm.swappiness in /etc/sysctl.conf (for how to configure it, see http://www.vcaptain.com/?id=17). In short, when swappiness is 0 the kernel uses physical memory as much as possible and only turns to the swap partition once physical memory is used up; when swappiness is 100 the kernel uses the swap partition aggressively and moves data out of memory into swap promptly. After a fresh installation the default value is 60, meaning the kernel is moderately willing to move infrequently used data out of memory into the swap partition (for the details of how this works, see https://www.douban.com/note/349467816/). A small example of inspecting and changing this parameter is shown below, after these points.

Page swapping in Linux is conditional: not every page is moved to virtual memory the moment it is unused. The kernel follows a least-recently-used policy, so only pages that have not been used for some time are swapped out. This explains a phenomenon you will sometimes see: a Linux machine has plenty of free physical memory, yet a lot of swap space is also in use. This is actually not surprising. For example, a process that consumes a large amount of memory may force some infrequently used pages out to swap; when that process later exits and releases its memory, the pages that were swapped out are not automatically brought back into physical memory unless they are actually needed. At that point physical memory is largely free while swap space is still in use, which is exactly the phenomenon just mentioned. There is no need to worry about it; just understand what is happening.

A page in swap space is brought back into physical memory only when it is used. If there is not enough physical memory to hold these pages at that moment, they are swapped out again immediately, and if there is not enough room in virtual memory to store them, the result can be a "fake crash" of Linux, misbehaving services, and so on. Although Linux can recover by itself after a while, the recovered system is basically unusable.

Allocating too much swap space wastes disk space, but allocating too little causes errors. If the system runs out of physical memory it slows down but still works; if swap space runs out, errors occur. For example, a web server spawns multiple service processes (or threads) depending on the number of incoming requests; if swap space is exhausted, new service processes cannot start and the usual "application is out of memory" error appears, which can deadlock the service. The allocation of swap space is therefore very important.
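As a minimal sketch of working with this setting (the parameter name is real; the value 10 is only an example, not a recommendation), swappiness can be inspected and changed like this:

# show the current value
cat /proc/sys/vm/swappiness
# change it for the running system only (takes effect immediately, lost on reboot)
sysctl -w vm.swappiness=10
# to make it persistent, add the line "vm.swappiness = 10" to /etc/sysctl.conf, then reload
sysctl -p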
It is therefore very important to plan and design the use of Linux memory sensibly.
II. Memory monitoring
As a Linux system administrator, monitoring memory usage is very important. Monitoring helps you understand how memory is being used, for example whether memory consumption is normal or whether memory is scarce. The most commonly used commands for monitoring memory are free, top, and so on. Below is the free output of one system (the default unit of free is KB):
[root@linuxeye ~]# free
             total       used       free     shared    buffers     cached
Mem:       3894036    3473544     420492          0      72972    1332348
-/+ buffers/cache:    2068224    1825812
Swap:      4095992     906036    3189956
We will use the names total1, used1, free1, used2 and free2 for the values in this output, where 1 and 2 refer to the first and second data rows (not counting the header row).
-------- First row of data: statistics from the kernel's point of view --------
total1: the total amount of physical memory.
used1: the total amount of physical memory that has been used (allocated), including both the memory actually in use and the memory allocated to caches (buffers and cached).
free1: physical memory that has not been allocated.
shared1: shared memory; generally not used by the system and not discussed here.
buffers1: the amount of memory the system has allocated to buffers.
cached1: the amount of memory the system has allocated to cached. The difference between buffers and cached is explained later.
-------- Second row of data: statistics from the application's point of view --------
used2: the total amount of memory actually in use.
free2: the memory currently actually available to the system, i.e. the sum of the unallocated memory and the memory allocated to buffers and cached.
From this we can write down the following equations:
total1 = used1 + free1
total1 = used2 + free2
used1 = buffers1 + cached1 + used2
free2 = buffers1 + cached1 + free1
The memory state reported by the free command above can be viewed from two angles: from the kernel's point of view, and from the application layer's point of view.
1. Viewing memory state from the kernel's point of view: this is the memory that the kernel can hand out directly right now without any additional work, i.e. the Mem line in the free output above. You can see that this system has 3894036 KB of physical memory, of which only 420492 KB (a little over 410 MB) is free. We can do the calculation total1 - used1 = free1:
3894036 - 3473544 = 420492
That is, total physical memory minus allocated physical memory gives the amount of free physical memory. Note that this free value of 420492 KB does not include the memory in the buffers and cached states. If you conclude from this that the system has too little free memory, you are mistaken: the kernel is in full control of memory usage, and when memory is needed, or gradually as the system runs, Linux turns memory in the buffers and cached states back into free memory for the system to use.
2. Viewing memory state from the application layer's point of view: this is the memory that applications running on Linux can actually use, i.e. the "-/+ buffers/cache" line of the free output above. You can see that this system has actually used only 2068224 KB, while the available memory reaches 1825812 KB. Continuing with the calculation free2 = buffers1 + cached1 + free1:
420492 + (72972 + 1332348) = 1825812
According to this equation, the physical memory available to applications is the free value of the Mem line (free1) plus the buffers1 and cached1 values; in other words, the available value includes the sizes of the buffers and cached items. For an application, the memory occupied by buffers/cached is usable, because buffers/cached exists to improve the performance of file reads, and when an application needs memory, buffers/cached is quickly reclaimed for the application to use.
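The same calculation can be scripted. The sketch below assumes the older procps free layout shown above, where the Mem: line has the columns total, used, free, shared, buffers and cached (newer versions of free print an "available" column instead, so the field positions would differ):

free1=$(free | awk '/^Mem:/ {print $4}')
buffers1=$(free | awk '/^Mem:/ {print $6}')
cached1=$(free | awk '/^Mem:/ {print $7}')
# should match the free value on the -/+ buffers/cache line
echo "free2 = $((free1 + buffers1 + cached1)) KB"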
III. Differences between Linux and Windows memory management
Linux uses physical memory first and does not release it just because it is idle, even after the memory-hungry program has been closed (that memory is kept for caching). In other words, however much memory you have, after a while it will all be filled. The advantage is that starting a program you have just run, or reading data you have just accessed, is faster, which is good for servers.
Windows always keeps a certain amount of memory free, and even when free memory exists it still lets programs use some virtual memory. The advantage is that a new program starts faster, because the system can simply hand it some free memory. Under Linux, on the other hand, memory is usually fully in use, so a piece of memory has to be reclaimed first and then given to the new program, and the new program therefore starts more slowly.
IV. buffers and cached
1. Similarities and differences
In Linux, when an application needs to read data from a file, the operating system allocates some memory, reads the data from disk into that memory, and then hands the data to the application; when data is written to a file, the operating system allocates memory to receive the user data and then writes it from memory to disk. However, if a large amount of data has to be read from disk into memory or written from memory to disk, the system's read/write performance becomes very poor, because both reading from disk and writing to disk are time-consuming and resource-consuming operations. For this situation Linux introduces the buffers and cached mechanism.
Both buffers and cached are areas of memory that hold files the system has opened and their attribute information. When the operating system needs to read a file, it first looks in the buffers and cached memory areas; if the data is found there, it is returned directly to the application, and only if it is not found is the data read from disk. This is the operating system's caching mechanism, and it greatly improves system performance. However, buffers and cached cache different kinds of content.
buffers is used to buffer block devices: it records file system metadata and keeps track of in-flight pages, whereas cached is used to cache files. Put more plainly, buffers mainly stores things such as directory contents, file attributes and permissions, while cached directly holds the files and programs we have opened.
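Both values can also be read directly from /proc/meminfo, which is where free gets them; a quick way to watch them while files are being read is, for example:

# current buffer and page cache sizes, in KB
grep -E '^(Buffers|Cached):' /proc/meminfo
# refresh the two lines every second to watch them change
watch -n 1 "grep -E '^(Buffers|Cached):' /proc/meminfo"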
To verify that this conclusion is correct, you can open a very large file with vi and watch how cached changes, then open the same file with vi again and compare how fast the two opens feel: the second open should be noticeably faster than the first. Below is a small script that prints the time taken and the changes in cached/buffers when a large file (Catalina.logaa, roughly 2 GB) is read for the first and second time:
#!/bin/bash
sync
sync
echo 3 > /proc/sys/vm/drop_caches
echo -e "---------------------- memory usage (KB) after cache release: ----------------------"
free
cached1=$(free | grep Mem: | awk '{print $7}')
buffers1=$(free | grep Mem: | awk '{print $6}')
date1=$(date +%s)
cat Catalina.logaa > 1
date2=$(date +%s)
echo -e "---------------------- first read of the large file, memory usage (KB): ----------------------"
free
cached2=$(free | grep Mem: | awk '{print $7}')
buffers2=$(free | grep Mem: | awk '{print $6}')
interval_1=$(expr ${date2} - ${date1})
cached_increment1=$(expr ${cached2} - ${cached1})
buffers_increment1=$(expr ${buffers2} - ${buffers1})
date3=$(date +%s)
cat Catalina.logaa > 1
date4=$(date +%s)
echo -e "---------------------- second read of the large file, memory usage (KB): ----------------------"
free
cached3=$(free | grep Mem: | awk '{print $7}')
buffers3=$(free | grep Mem: | awk '{print $6}')
interval_2=$(expr ${date4} - ${date3})
cached_increment2=$(expr ${cached3} - ${cached2})
buffers_increment2=$(expr ${buffers3} - ${buffers2})
echo -e "---------------------- summary: ----------------------"
echo -e "first read, cached increment: ${cached_increment1} KB"
echo -e "first read, buffers increment: ${buffers_increment1} KB"
echo -e "first read, time taken: ${interval_1} s\n"
echo -e "second read, cached increment: ${cached_increment2} KB"
echo -e "second read, buffers increment: ${buffers_increment2} KB"
echo -e "second read, time taken: ${interval_2} s"
The results are as follows (the figures may differ slightly, because there is a small time interval between the free command whose output is printed and the moment the values are assigned to the variables):
Next, execute the command find /* -name "*.conf", see whether the buffers value changes, then repeat the find command and compare how fast the two runs are. A script such as the one below can be used (note that bc is needed to handle the floating-point arithmetic; my system is CentOS 7.0 with kernel 4.3.3, and the installed package is bc-1.06.95-13.el7.x86_64):
#!/bin/bash
sync
sync
echo 3 > /proc/sys/vm/drop_caches
echo -e "---------------------- memory usage (KB) after cache release: ----------------------"
free
cached1=$(free | grep Mem: | awk '{print $7}')
buffers1=$(free | grep Mem: | awk '{print $6}')
date1=$(date +%s.%N)
find /* -name "*.conf" > 2
date2=$(date +%s.%N)
echo -e "---------------------- first query, memory usage (KB): ----------------------"
free
cached2=$(free | grep Mem: | awk '{print $7}')
buffers2=$(free | grep Mem: | awk '{print $6}')
interval_1=$(echo "scale=3; ${date2} - ${date1}" | bc)
cached_increment1=$(expr ${cached2} - ${cached1})
buffers_increment1=$(expr ${buffers2} - ${buffers1})
date3=$(date +%s.%N)
find /* -name "*.conf" > 2
date4=$(date +%s.%N)
echo -e "---------------------- second query, memory usage (KB): ----------------------"
free
cached3=$(free | grep Mem: | awk '{print $7}')
buffers3=$(free | grep Mem: | awk '{print $6}')
interval_2=$(echo "scale=3; ${date4} - ${date3}" | bc)
cached_increment2=$(expr ${cached3} - ${cached2})
buffers_increment2=$(expr ${buffers3} - ${buffers2})
echo -e "---------------------- summary: ----------------------"
echo -e "first query, cached increment: ${cached_increment1} KB"
echo -e "first query, buffers increment: ${buffers_increment1} KB"
echo -e "first query, time taken: ${interval_1} s\n"
echo -e "second query, cached increment: ${cached_increment2} KB"
echo -e "second query, buffers increment: ${buffers_increment2} KB"
echo -e "second query, time taken: ${interval_2} s"
The results are as follows (the last value is actually 0.470702440; bc drops the leading 0 when printing the result):
2. Releasing memory
/proc on a Linux system is a virtual file system; reading and writing its files is a way of communicating with the kernel, which means the kernel's current behaviour can be adjusted by modifying files under /proc. We can therefore release memory by writing to /proc/sys/vm/drop_caches. The official documentation says the following about drop_caches:
Writing to this will cause the kernel to drop clean caches, dentries and inodes from memory, causing that memory to become free.
To free pagecache:
echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
echo 3 > /proc/sys/vm/drop_caches
As this is a non-destructive operation and dirty objects are not freeable, the user should run `sync' first.
http://www.kernel.org/doc/Documentation/sysctl/vm.txt
# cat /proc/sys/vm/drop_caches
0
The default value is 0. Writing 1 drops the page cache, writing 2 drops the dentry and inode (directory tree) caches, and writing 3 drops all of them:
[root@hps103 ~]# sync
[root@hps103 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:           499        323        175          0         52        188
-/+ buffers/cache:         82        416
Swap:         2047          0       2047
[root@hps103 ~]# echo 3 > /proc/sys/vm/drop_caches
[root@hps103 ~]# free -m      # the cache is now noticeably smaller
             total       used       free     shared    buffers     cached
Mem:           499         83        415          0          1         17
-/+ buffers/cache:         64        434
Swap:         2047          0       2047
V. Summary
The way the Linux operating system handles memory at run time is largely driven by the requirements of servers. For example, the system's caching mechanism keeps frequently used files and data in cached: Linux always tries to cache more data and information, so that the next time this data is needed it can be fetched directly from memory without a slow disk operation. This design improves the overall performance of the system.
Main Reference article: http://www.linuxeye.com/Linux/1932.html