Linux memory accounting

The following is a detailed explanation. Linux's cache mechanism for reading and writing files differs from Windows', which is why the cached value becomes very large and stays high when large files are read and written.

While using Oracle over the past few days, we found that free memory had dropped to only about 10 MB with it running. In a cold sweat, we quickly asked the manager to buy another 1 GB of memory to install, and then found that the system could not recognize it. After more than an hour of overtime we discovered that the 386 kernel does not recognize high memory (HIGHMEM), so usable memory had been capped at 896 MB all along. With the old 1 GB of memory the difference was not noticeable; now the machine has 1.5 GB.

We hurriedly switched to kernel 2.6.12-1-686 and rebooted, and the memory was recognized. However, free showed only about 32 MB free. Shocked, we called Oracle for advice; the answer was to install the complete patch set and run on an Oracle-certified server. And which servers does Oracle certify? Red Hat Enterprise AS 3/4, which costs money and is certainly not cheap. Finally we tried not starting Oracle at all, and unexpectedly MySQL was consuming most of the memory. So what actually causes this?

I looked into the Linux memory management documentation and found that memory accounting in Linux is very different from Windows. The counters, how to view them, and what they mean are listed below.

Total mem: viewable with top and free.
Free mem: viewable with top, free, and vmstat.
Used mem: viewable with top and free.
Buffer mem: viewable with top, free, and vmstat.
Shared mem: viewable with free.
Swap mem: viewable with top.
Swap used: viewable with top and vmstat.
Cached mem: viewable with top, free, and vmstat.
Active mem: viewable with free and vmstat -a; this is the used part of the cache.
Inactive mem: viewable with free and vmstat -a; this is the free part of the cache.
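
All of these figures ultimately come from /proc/meminfo, so they can also be read there directly. Below is a minimal sketch, assuming a Linux machine with the standard field names MemTotal, MemFree, Buffers, Cached, Active, Inactive, SwapTotal and SwapFree; values are reported in kB.

    def read_meminfo(path="/proc/meminfo"):
        # Each line looks like "MemTotal: ... kB"; keep the numeric value (kB).
        info = {}
        with open(path) as f:
            for line in f:
                key, rest = line.split(":", 1)
                info[key] = int(rest.strip().split()[0])
        return info

    if __name__ == "__main__":
        m = read_meminfo()
        for key in ("MemTotal", "MemFree", "Buffers", "Cached",
                    "Active", "Inactive", "SwapTotal", "SwapFree"):
            print("%-10s %10d kB" % (key, m.get(key, 0)))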

Total mem is the memory available once the system's own share is excluded; the system itself accounts for roughly 1 MB. It is then divided between free mem and used mem. Used mem includes kernel table usage (such as the GDT), physical memory used by programs, buffer mem, and cached mem. So:

Cached mem = active mem + inactive mem
Total mem = free mem + used mem
Used mem = kernel table usage + program physical memory + buffer mem + cached mem
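
As a rough check of these identities, the sketch below recomputes used mem as MemTotal - MemFree and then subtracts Buffers and Cached to approximate the physical memory used by programs. The kernel-table share is not exported as a single /proc/meminfo field, so it is simply ignored here; note also that on newer kernels Active + Inactive includes anonymous pages, so it only approximates Cached.

    def read_meminfo(path="/proc/meminfo"):
        info = {}
        with open(path) as f:
            for line in f:
                key, rest = line.split(":", 1)
                info[key] = int(rest.strip().split()[0])   # value in kB
        return info

    if __name__ == "__main__":
        m = read_meminfo()
        used = m["MemTotal"] - m["MemFree"]                 # total mem = free mem + used mem
        program_phys = used - m["Buffers"] - m["Cached"]    # kernel tables ignored
        print("used mem               :", used, "kB")
        print("program physical (est.):", program_phys, "kB")
        print("active + inactive      :", m["Active"] + m["Inactive"],
              "kB  vs cached:", m["Cached"], "kB")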

Ignoring the kernel tables, this can be rearranged as follows:

Total program memory = swap used + program physical memory
= swap used + used mem - buffer mem - cached mem
= total mem - free mem + swap used - buffer mem - cached mem

For the system as a whole, the memory accounting balances as:
Total memory used by programs + memory that can be requested at one time = total mem + swap used

From this we can calculate:
Memory that can be requested at one time = free mem + buffer mem + cached mem (in practice slightly smaller than this value)

In other words, the memory occupied by programs, including the part that has been pushed out to swap, takes up part of that total, and what remains is the maximum that can be requested at one time. If repeated allocations push this value too low, the system starts swapping.
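
Under the same assumptions, the two derived quantities can be computed straight from /proc/meminfo, taking swap used as SwapTotal - SwapFree. A small sketch:

    def read_meminfo(path="/proc/meminfo"):
        info = {}
        with open(path) as f:
            for line in f:
                key, rest = line.split(":", 1)
                info[key] = int(rest.strip().split()[0])   # value in kB
        return info

    if __name__ == "__main__":
        m = read_meminfo()
        swap_used = m["SwapTotal"] - m["SwapFree"]
        # Total program memory = total mem - free mem + swap used - buffer mem - cached mem
        program_total = m["MemTotal"] - m["MemFree"] + swap_used - m["Buffers"] - m["Cached"]
        # Memory that can be requested at one time (an upper bound in practice)
        available_once = m["MemFree"] + m["Buffers"] + m["Cached"]
        print("total program memory        :", program_total, "kB")
        print("requestable at one time (<=):", available_once, "kB")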

First, the difference between buffer and cached. Broadly speaking, buffer holds object data structures, while cached holds unstructured block data; cached can cache any standard block device without further ceremony. This also touches on the concepts of writing and write-back, which we set aside here. Let's move on.
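
One way to see the split in practice: reading an ordinary file goes through the page cache, so the Cached figure grows while Buffers typically barely moves. A rough demo, assuming a readable large file whose path you substitute yourself (the path shown is only a placeholder):

    def meminfo_kb(key, path="/proc/meminfo"):
        # Return a single /proc/meminfo field, in kB.
        with open(path) as f:
            for line in f:
                if line.startswith(key + ":"):
                    return int(line.split()[1])
        return 0

    def read_whole_file(filename):
        with open(filename, "rb") as f:
            while f.read(1 << 20):     # read in 1 MB chunks, discarding the data
                pass

    if __name__ == "__main__":
        cached_before, buffers_before = meminfo_kb("Cached"), meminfo_kb("Buffers")
        read_whole_file("/var/log/syslog")        # placeholder: any large regular file
        print("Cached  grew by", meminfo_kb("Cached") - cached_before, "kB")
        print("Buffers grew by", meminfo_kb("Buffers") - buffers_before, "kB")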

Next, the physical memory used by programs. A program's total memory equals its share in swap plus the physical memory it uses. What, then, is the relationship between the total memory used by programs and the memory used by each individual program? This brings in the question of shared pages.

Windows has a similar concept: if two pages have the same content, only one copy needs to be kept in memory. This is the theoretical basis of dynamic link libraries / shared libraries, so the shared mem of all processes exists in only one copy. A process's Data + Stack form its data space and its code forms its code space; the sum of the two minus shared mem is its private space, which is what is usually reported as the process's memory usage. Adding up the private usage of every process plus the single copy of shared mem gives the total memory used by programs.
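
The per-process side of this can be read from /proc/<pid>/statm, whose fields (in pages) are size, resident, shared, text, lib, data and dt. A small sketch that reports a process's resident, shared, and private memory, under the assumption that "private" (resident minus shared) is the figure described above as the process's own memory usage:

    import os

    PAGE_KB = os.sysconf("SC_PAGE_SIZE") // 1024   # page size in kB

    def process_memory(pid):
        with open("/proc/%d/statm" % pid) as f:
            size, resident, shared, text, lib, data, dt = map(int, f.read().split())
        return {
            "resident_kb": resident * PAGE_KB,              # physical memory mapped in
            "shared_kb":   shared * PAGE_KB,                # pages shared with other processes
            "private_kb":  (resident - shared) * PAGE_KB,   # the process's own usage
        }

    if __name__ == "__main__":
        print(process_memory(os.getpid()))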

A big difference between cached in Linux and in Windows: the disk cache in Windows is a read/write cache queue, in which write and read-ahead operations are queued and released once they complete; it mainly smooths out read/write bottlenecks, with read-ahead raising the hit rate. In Linux, cached pages are not released after a read or write; they stay until memory runs short. Releasing them is quick in any case, since the data has already been written back and only a flag in a data structure needs to change. This design aims at a high hit rate when the same file is used repeatedly (strictly speaking, the same offset of the same block device). As a result, each block only needs to be read from disk once, and the number of physical writes is far smaller than the number of writes issued.
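
A quick way to feel this behaviour: read the same file twice and time it. The second pass is served from the page cache and is typically far faster, and the Cached figure does not shrink in between. A sketch, again with a placeholder path:

    import time

    def timed_read(filename):
        start = time.perf_counter()
        with open(filename, "rb") as f:
            while f.read(1 << 20):     # 1 MB chunks
                pass
        return time.perf_counter() - start

    if __name__ == "__main__":
        path = "/var/log/syslog"       # placeholder: any large file not yet in the cache
        print("first  read: %.3f s" % timed_read(path))
        print("second read: %.3f s" % timed_read(path))   # usually much faster, served from cache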

With the formula for the memory that can be requested at one time, you can work out where those occasional lock-ups come from.