Using top/free on Ubuntu to check the reason for high memory usage
When I used free/top on Ubuntu to check memory usage, I was shocked: the machine has 4 GB of memory, yet only a little over 300 MB showed up as free. Checking the processes, I found no application occupying a large amount of memory. I looked into it and am sharing what I found here.

In fact, the principle can be summed up in one sentence: the used figure reported by free includes everything the kernel has allocated to buffers and cache, and much of that cached memory is not actually in use; the kernel reclaims it as soon as applications need it.

Explanation of the free result:

Mem: statistics for physical memory.
-/+ buffers/cached: the same physical-memory statistics with buffers and cache taken into account.
Swap: usage of the swap partition on the hard disk. We do not care about it here.

In the sample output discussed below (taken from a 256 MB machine), the total physical memory is 255268 kB, but the memory actually available to the system is not the 16936 kB shown as free in the first line; that figure only indicates memory that has not been allocated at all.

Let us name the values total1, used1, free1, shared1, buffers1 and cached1 for the first row and used2 and free2 for the second row, where 1 and 2 refer to the first and second rows of the statistics.

total1: total physical memory.
used1: the total amount that has been allocated, including buffers and cache; part of it is not actually in use.
free1: unallocated memory.
shared1: shared memory, which is generally not used by the system and is not discussed here.
buffers1: memory the system has allocated to buffers but which applications are not actively using.
cached1: memory the system has allocated to cache, likewise not actively in use. The difference between buffer and cache is described later.

The relationship between the two rows is used2 = used1 - buffers1 - cached1 and free2 = free1 + buffers1 + cached1, and free2 is the memory that is really available to applications (a full sample output appears after the flash-drive example below).

The difference between cache and buffer:

Cache: a high-speed cache is a small but fast memory that sits between the CPU and main memory. Because the CPU is much faster than main memory, it has to wait when it fetches data directly from memory. The cache stores data the CPU has just used or frequently reuses, so the CPU can fetch it from the cache instead, which reduces the wait time and improves system efficiency. Cache is divided into Level 1 cache (L1 Cache) and Level 2 cache (L2 Cache). L1 cache is integrated into the CPU; L2 cache was originally soldered onto the motherboard but is now also integrated into the CPU, with common sizes of 256 KB or 512 KB.

Buffer: a buffer is used to transfer data between devices with different storage speeds or different priorities. With a buffer in between, processes wait for each other less, so a fast device is not held up while data is read from a slow one. In Linux, buffers are oriented toward disk writes: scattered write operations are collected and written out together, which reduces disk fragmentation and repeated seeks and thus improves system performance. A daemon flushes the buffer contents to the device (such as the disk) at regular intervals, and you can also flush it manually with the sync command. For example, I have an ext2 USB flash drive: when I cp a 3 MB MP3 file onto it, the drive's activity light does not blink at first; after a while (or after I run sync manually), it does. The buffer is also flushed when the device is unmounted, which is why unmounting can take a few seconds.
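A minimal command-line sketch of that experiment, assuming the flash drive is mounted at /media/usb and the file is called song.mp3 (both names are placeholders; adjust them to your system):

```
cp song.mp3 /media/usb/                   # returns almost immediately; the data is still sitting in RAM buffers
grep -E 'Dirty|Writeback' /proc/meminfo   # shows how much data is currently waiting to be written out
sync                                      # force the buffered data to be flushed to the device
umount /media/usb                         # unmounting also flushes the buffer, hence the few-second delay
```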
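Returning to the free output itself, here is a worked example in the old two-row format described above (newer versions of free collapse buffers and cached into a single buff/cache column and add an available column, but the idea is the same). Only the 255268 kB total and the 16936 kB free come from the figures quoted earlier; the remaining numbers are made up for illustration, and what matters is the arithmetic between the two rows:

```
$ free
             total       used       free     shared    buffers     cached
Mem:        255268     238332      16936          0      85540      126384
-/+ buffers/cache:      26408     228860
Swap:       265000          0     265000

# used2 = used1 - buffers1 - cached1 = 238332 - 85540 - 126384 =  26408 kB really in use
# free2 = free1 + buffers1 + cached1 =  16936 + 85540 + 126384 = 228860 kB really available
```

So even though free1 looks alarmingly small, roughly 90% of the memory in this example is still available to applications, because the kernel drops buffers and cache on demand.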
If you want to change how readily the system uses swap, modify the number to the right of vm.swappiness in /etc/sysctl.conf; the new value takes effect at the next boot. The value ranges from 0 to 100: the larger the number, the more inclined the kernel is to use swap. The default value is 60. You can experiment with it (the relevant commands are sketched at the end of this article).

To sum up, buffers and cache are both data held in RAM. In short, a buffer holds data that is about to be written to disk, while the cache holds data that has been read from disk. Buffers are allocated by various processes and used for input queues: as a simple example, a process that needs to read several fields stores the fields it has already read in a buffer until all of them have arrived. The cache is mostly used for disk I/O requests: if multiple processes need to access the same file, the file is kept in the cache to speed up the next access, which improves system performance.
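A quick way to watch the cache at work, assuming some large file such as bigfile.iso is lying around (the file name is a placeholder): read it once and compare the cached figure before and after. On newer systems the column is called buff/cache, but the behaviour is the same.

```
free -k                       # note the cached value
cat bigfile.iso > /dev/null   # read the file once; its contents land in the page cache
free -k                       # cached has grown and free has shrunk, yet nothing is lost:
                              # the kernel reclaims this memory as soon as a program needs it
```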
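Finally, the vm.swappiness adjustment mentioned above, as a short sketch (the value 10 is only an example; choose what suits your workload):

```
cat /proc/sys/vm/swappiness                                 # show the current value (60 by default)
sudo sysctl -w vm.swappiness=10                             # apply a new value to the running system immediately
echo 'vm.swappiness = 10' | sudo tee -a /etc/sysctl.conf    # persist the value so it survives the next boot
```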