Linux/Ubuntu: viewing memory usage with free

Source: Internet
Author: User

On Linux/Ubuntu, the free command shows memory usage. Here is the output on my machine:

    $ free -m
                 total       used       free     shared    buffers     cached
    Mem:          1908       1844         64          0         56        689
    -/+ buffers/cache:       1098        810
    Swap:         3904          0       3904

At first I was shocked: I had allocated plenty of memory to this virtual machine, and with no other large programs running, how could only 64 MB be free? After looking into it, I found the explanation. Total physical memory is 1908 MB, of which 1844 MB is used; shared (memory shared by multiple processes) is 0; and the disk cache is 689 MB. The difference between used/free on the second line (Mem) and used/free on the third line (-/+ buffers/cache) is one of perspective. The second line is the OS's view: to the OS, buffers/cached count as used, so only 64 MB is free and 1844 MB is in use, including the buffers and cache held by the kernel as well as the memory used by applications (X, Oracle, etc.). The third line is the application's view: to an application, buffers/cached are effectively available, because they exist only to speed up file I/O, and the kernel reclaims them quickly whenever an application needs the memory. From the application's perspective, available memory = free + buffers + cached, i.e. 64 + 56 + 689 ≈ 810 MB in this example. Mem shows physical memory statistics; -/+ buffers/cache shows physical memory adjusted for buffers and cache; Swap shows usage of the swap partition on disk. So although the first line reports only 64 MB free, that is merely the unallocated memory, not what is actually available to programs. In what follows, names such as total1, used1, free1, used2, and free2 denote the values above, where 1 and 2 refer to the first and second data rows respectively. Total1: total physical memory.
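On newer systems (procps-ng 3.3.10 and later) free no longer prints the "-/+ buffers/cache" line, but the same application-view figure can be approximated from /proc/meminfo. This is a minimal sketch, assuming a Linux system where /proc/meminfo reports values in kB; it simply sums MemFree + Buffers + Cached:

```shell
# Approximate the application's view of available memory (the old
# "-/+ buffers/cache" free column) by summing MemFree, Buffers and
# Cached from /proc/meminfo (all reported in kB) and converting to MB.
awk '/^MemFree:|^Buffers:|^Cached:/ { sum += $2 }
     END { printf "approx available: %d MB\n", sum / 1024 }' /proc/meminfo
```

Modern kernels (3.14+) also export a MemAvailable field in /proc/meminfo, which is a better estimate because it accounts for memory that cannot actually be reclaimed.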
Used1: the total amount allocated (including buffers and cache), some of which is not actually in use. Free1: unallocated memory. Shared1: shared memory, which is generally not used by the system and is not discussed here. Buffers1: buffer memory allocated by the system but not in use. Cached1: cache memory allocated by the system but not in use (the difference between buffer and cache is described later). Used2: the total amount of buffers and cache actually in use, which is also the total memory actually in use. Free2: the sum of unused buffers, unused cache, and unallocated memory, i.e. the memory actually available to the system. The following identities can be worked out:

    total1 = used1 + free1
    total1 = used2 + free2
    used1  = buffers1 + cached1 + used2
    free2  = buffers1 + cached1 + free1

Difference between buffer and cache: a buffer is something that has yet to be "written" to disk, while a cache is something that has been "read" from disk and stored for later use. For a more detailed explanation, see "Difference Between Buffer and Cache". Shared memory is mainly used to share data between different processes in UNIX environments; it is one method of inter-process communication, and ordinary applications do not request it. I have not verified how shared memory affects the equations above; if you are interested, see "What is Shared Memory?". More on the difference between cache and buffer: a cache is a small but fast memory sitting between the CPU and main memory. Because the CPU is much faster than main memory, fetching data directly from memory makes the CPU wait. The cache holds data the CPU has just used or uses repeatedly; when the CPU needs that data again it is served directly from the cache, which reduces CPU wait time and improves system efficiency.
Cache is divided into Level 1 cache (L1 Cache) and Level 2 cache (L2 Cache). L1 Cache is integrated into the CPU; L2 Cache was usually soldered to the motherboard in the early days but is now also integrated into the CPU, with common capacities of 256 KB or 512 KB. A buffer is used to transfer data between devices with different storage speeds or different priorities. Through a buffer, processes spend less time waiting on each other, so reading from a slow device does not stall the faster device. Buffer and cache in free (both occupy memory): buffer is memory used as the buffer cache, i.e. the read/write cache for block devices; cached is memory used as the page cache, the file system's cache. If the cached value is large, many files are cached; if frequently accessed files are in the cache, disk read I/O (bi) will be very small. The cache saves data that has been read: on a re-read, if there is a hit (the required data is found), the disk is not touched; on a miss, the disk is read. The data is organized by read frequency: the most frequently read content is placed where it is fastest to reach, while content that is never read drifts down until it is evicted. Buffers are oriented toward disk writes: scattered writes are batched and performed together, reducing disk fragmentation and repeated seeks and thereby improving system performance. In Linux, a daemon flushes the buffer contents to disk at regular intervals, and you can also run the sync command to flush them manually. For example, with an ext2 USB flash drive, I cp a 3 MB MP3 file onto it, but the drive's light does not blink; after a while (or after I type sync) the light blinks as the data is written out. The buffer is also flushed when a device is unmounted, which is why unmounting can take a few seconds.
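The write-buffer behaviour described above can be exercised directly: a write to a file normally lands in the buffer cache first, and sync forces all dirty buffers out to the underlying device. This is a harmless sketch; /tmp/buffer-demo is a hypothetical scratch file name chosen for illustration:

```shell
# Write some data: it lands in the kernel's buffer cache first,
# not necessarily on disk yet.
echo "some data" > /tmp/buffer-demo

# Force all dirty buffers to be written out to disk, just like the
# periodic flusher daemon does on its own schedule.
sync

# Clean up the scratch file.
rm /tmp/buffer-demo
```

This is the same reason the USB-drive example works: the cp returns as soon as the data is buffered, and the drive's light only blinks when the kernel (or an explicit sync) flushes the buffer.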
To adjust the swap usage policy for the next boot, modify the number to the right of vm.swappiness in /etc/sysctl.conf. The value ranges from 0 to 100; the larger the number, the more inclined the kernel is to use swap. The default is 60. To summarize: both buffers and cache are data held in RAM. In short, a buffer holds data about to be written to disk, while a cache holds data read from disk. Buffers are allocated by processes and used in input queues: in a simple example, a process reads a record with multiple fields, and until all fields have been read in full, it keeps the fields read so far in a buffer. The cache is typically used for disk I/O requests: if multiple processes need to access the same file, the file is cached to speed up the next access, which improves system performance. So a high "used" figure in Linux memory statistics is normal, unlike on Windows.
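The swappiness setting mentioned above can be inspected and changed from the shell. This is a sketch assuming a standard Linux sysctl layout; the value 10 is just an illustrative choice, and the write commands need root:

```shell
# Read the current swap policy: 0-100, higher means the kernel is
# more inclined to swap; the default is 60.
cat /proc/sys/vm/swappiness

# To change it immediately (until the next boot), as root:
#   sysctl vm.swappiness=10
# To persist the change across reboots, add this line to /etc/sysctl.conf:
#   vm.swappiness=10
```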

