Detailed analysis of Linux memory usage


A couple of days ago, an Oracle host with 128 GB of memory triggered kdump, but the dump came out incomplete because the /var directory ran out of space. Red Hat's recommendation is that the space reserved for the crash dump should be larger than the memory size, so we did a simple calculation. The kdump a host generates contains the information that is actually in use in memory: although the host has 128 GB of memory, top showed only about 7 GB actually in use, while free -m showed around 80 GB used. From the DBA's point of view, that memory was pre-allocated to the SGA, which also sounds reasonable. I remembered an analysis of this written by a Taobao engineer, so this post follows that article to do the calculation.

Where did the memory that Linux used go? From that analysis we learn that memory is mainly consumed in three ways: 1. process consumption; 2. slab consumption; 3. pagetable consumption.

Because it is not convenient to experiment directly on the production Oracle host, the tests below use the cloud host that runs this blog as an example.

First, view total used memory

[root@91it ~]# free -m
             total       used       free     shared    buffers     cached
Mem:           996        908         88          0        174        283
-/+ buffers/cache:        450        546
Swap:            0          0          0


Used versus available memory is a perennial topic. From the output above you can see:

1. Total memory is 996 MB, of which 908 MB is used.

2. Because buffers + cached memory is effectively available memory (the page cache, dentries and inodes can be reclaimed with echo 3 > /proc/sys/vm/drop_caches), the memory actually in use is 450 MB.

Note:

1. The calculation behind these figures is not covered here; see: http://www.redbooks.ibm.com/redpapers/pdfs/redp4285.pdf

2. For the method of forcing Linux to reclaim memory, refer to: Linux forced memory reclamation.
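
For reference, here is a small Python sketch (my own illustration, not part of the original post) of the same arithmetic free does for its "-/+ buffers/cache" line, reading the standard MemTotal, MemFree, Buffers and Cached keys from /proc/meminfo:

#!/usr/bin/python
# Sketch: reproduce free's "-/+ buffers/cache" used figure from /proc/meminfo.
# MemTotal, MemFree, Buffers and Cached are standard /proc/meminfo keys.
def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])   # values are reported in kB
    return info

m = meminfo()
real_used_kb = m["MemTotal"] - m["MemFree"] - m["Buffers"] - m["Cached"]
print("%d MB actually used (used - buffers - cached)" % (real_used_kb / 1024))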

In other words, 450 MB of memory is actually in use. How is that 450 MB allocated?

Second, RSS memory (resident set size)

The RSS column of the ps command and the RES column of top are this memory. Resident Set Size is the number of physical pages each process is actually using. Because Linux uses virtual memory, a process's code, libraries, heap and stack all consume address space, but memory that has been requested and never touched is not counted, because no physical page has really been allocated for it; bluntly, only pages that actually hold data in physical memory count. I use a small piece of code adapted from the Python psutil module to calculate it:

[root@91it ~]# cat vms.py
#!/usr/bin/python
import os

def get_vms(pid):
    # column 2 of /proc/<pid>/statm is the resident set size, in pages
    with open("/proc/%s/statm" % pid, "rb") as f:
        vms = f.readline().split()[1]
        return int(vms)

pids = [int(x) for x in os.listdir('/proc') if x.isdigit()]
vmss = [get_vms(pid) for pid in pids]
print sum(vmss) * 4
[root@91it ~]# python vms.py
386528
Note:

1. The second column of /proc/<pid>/statm is the RSS memory used, counted in pages, and the default page size on Linux is 4 KB. So the sum above is multiplied by 4, and the final result is 386528 KB.

2. The same total can be obtained by summing the VmRSS item in /proc/<pid>/status, since that item is given in KB directly.

[root@91it ~]# cat /proc/998/status
Name:   mingetty
State:  S (sleeping)
Tgid:   998
Pid:    998
PPid:   1
TracerPid:      0
Uid:    0       0       0       0
Gid:    0       0       0       0
Utrace: 0
FDSize: 64
Groups:
VmPeak:     4068 kB
VmSize:     4064 kB
VmLck:         0 kB
VmHWM:       556 kB
VmRSS:       ... kB
......

The sum can, of course, also be computed with a shell script:

$ cat rss.sh
#!/bin/bash
RSS=0
for PROC in `ls /proc/ | grep '^[0-9]'`
do
  if [ -f /proc/$PROC/statm ]; then
      TEP=`cat /proc/$PROC/statm | awk '{print ($2)}'`
      RSS=`expr $RSS + $TEP`
  fi
done
RSS=`expr $RSS \* 4`
echo $RSS "KB"
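
The same sum can also be done from VmRSS in Python. A small sketch of my own (not from the original article), assuming the standard /proc layout:

#!/usr/bin/python
# Sketch: sum the VmRSS line of every /proc/<pid>/status, an alternative to
# adding up the second column of /proc/<pid>/statm.
import os

total_kb = 0
for name in os.listdir("/proc"):
    if not name.isdigit():
        continue
    try:
        with open("/proc/%s/status" % name) as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    total_kb += int(line.split()[1])   # value is in kB
                    break
    except IOError:
        pass   # the process exited between listdir() and open()
print("%d KB" % total_kb)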
To learn more about RSS memory, consult the man proc manual or the kernel documentation; the following is a partial extract from man proc:

/proc/[pid]/statm
       Provides information about memory usage, measured in pages.
       The columns are:
           size       total program size
                      (same as VmSize in /proc/[pid]/status)
           resident   resident set size
                      (same as VmRSS in /proc/[pid]/status)
           share      shared pages (from shared mappings)
           text       text (code)
           lib        library (unused in Linux 2.6)
           data       data + stack
           dt         dirty pages (unused in Linux 2.6)


Third, slab memory

The slab is a pool the kernel maintains for performance: objects that need to be reused frequently are cached in slab pools, so they can consume a lot of memory. You can run the slabtop command to view them.

We can also work out slab memory consumption from the /proc/slabinfo file; the one-liner is:

# echo `cat /proc/slabinfo | awk 'BEGIN{sum=0} {sum=sum+$3*$4} END{print sum/1024/1024}'` MB
74.7215 MB
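
If you prefer Python, the same num_objs * objsize sum can be read from /proc/slabinfo directly. This sketch is mine, not part of the original post, and like the awk one-liner it may need root to read slabinfo:

#!/usr/bin/python
# Sketch: same calculation as the awk one-liner above, summing
# num_objs * objsize over every cache listed in /proc/slabinfo.
total_bytes = 0
with open("/proc/slabinfo") as f:
    for line in f:
        if line.startswith("slabinfo") or line.startswith("#"):
            continue                      # skip the version line and column header
        fields = line.split()
        total_bytes += int(fields[2]) * int(fields[3])   # <num_objs> * <objsize>
print("%.4f MB" % (total_bytes / 1024.0 / 1024.0))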


Fourth, pagetables memory

I have not researched this part of memory in detail, so I will simply quote the Taobao article: "struct page also has a certain size (one per page, 64 bytes); on 2.6.32 each page additionally has a page_cgroup (32 bytes), which means about 2.3% (96/4096) of memory is fixed for kernel use. The struct page array is allocated at boot according to the memory size; on the 2.6.18 kernel it is about 1.56%, and on the 2.6.32 kernel, because of cgroup, it rises to about 2.3%."

The actual consumption can be read from the PageTables item in /proc/meminfo; the one-liner is:

# echo `grep PageTables /proc/meminfo | awk '{print $2}'` KB
4476 KB
On this system that fixed overhead does not amount to much.
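
Reading the same value from Python is just a matter of picking the PageTables line out of /proc/meminfo (a trivial sketch of my own, not from the original article):

#!/usr/bin/python
# Sketch: print the PageTables entry from /proc/meminfo, equivalent to the
# grep | awk one-liner above.
with open("/proc/meminfo") as f:
    for line in f:
        if line.startswith("PageTables:"):
            print("%s KB" % line.split()[1])
            break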

Fifth, calculating the general ledger

Do the three items add up to roughly 450 MB? We can check by putting them together and running the following script:

$ cat cm.sh
#!/bin/bash
RSS=0
for PROC in `ls /proc/ | grep '^[0-9]'`
do
  if [ -f /proc/$PROC/statm ]; then
      TEP=`cat /proc/$PROC/statm | awk '{print ($2)}'`
      RSS=`expr $RSS + $TEP`
  fi
done
RSS=`expr $RSS \* 4`
PageTable=`grep PageTables /proc/meminfo | awk '{print $2}'`
SlabInfo=`cat /proc/slabinfo | awk 'BEGIN{sum=0} {sum=sum+$3*$4} END{print sum/1024/1024}'`
echo $RSS"KB", $PageTable"KB", $SlabInfo"MB"
printf "RSS+PageTable+SlabInfo=%sMB\n" `echo "$RSS/1024 + $PageTable/1024 + $SlabInfo" | bc`
free -m
$ ./cm.sh
382048KB, 4528KB, 74.8561MB
RSS+PageTable+SlabInfo=451.8561MB
             total       used       free     shared    buffers     cached
Mem:           996        842        154          0        106        376
-/+ buffers/cache:        359        637
Swap:            0          0          0
I forced a memory reclaim before this run, as shown above. The actual memory in use is now 359 MB, while the sum of our three items is about 451 MB, roughly 100 MB larger than the actual usage.

This extra memory comes from double counting when we add up RSS, because RSS includes the shared libraries and other shared modules each process maps. You can use the pmap command to see, for each process, exactly which libraries it maps and how much memory they occupy. Take the simplest case, the bash processes, as an example:

[root@91it ~]# pmap `pgrep bash`
1464:   -bash
0000000000400000 848K r-x-- /bin/bash
00000000006d3000 40K rw--- /bin/bash
00000000006dd000 20K rw--- [anon]
00000000008dc000 36K rw--- /bin/bash
00000000011a4000 396K rw--- [anon]
0000003ef9800000 128K r-x-- /lib64/ld-2.12.so
0000003ef9a1f000 4K r---- /lib64/ld-2.12.so
0000003ef9a20000 4K rw--- /lib64/ld-2.12.so
0000003ef9a21000 4K rw--- [anon]
0000003ef9c00000 1576K r-x-- /lib64/libc-2.12.so
0000003ef9d8a000 2048K ----- /lib64/libc-2.12.so
0000003ef9f8a000 16K r---- /lib64/libc-2.12.so
0000003ef9f8e000 4K rw--- /lib64/libc-2.12.so
0000003ef9f8f000 20K rw--- [anon]
0000003efa400000 8K r-x-- /lib64/libdl-2.12.so
0000003efa402000 2048K ----- /lib64/libdl-2.12.so
0000003efa602000 4K r---- /lib64/libdl-2.12.so
0000003efa603000 4K rw--- /lib64/libdl-2.12.so
0000003efb800000 116K r-x-- /lib64/libtinfo.so.5.7
0000003efb81d000 2048K ----- /lib64/libtinfo.so.5.7
0000003efba1d000 16K rw--- /lib64/libtinfo.so.5.7
00007f1a42fc8000 96836K r---- /usr/lib/locale/locale-archive
00007f1a48e59000 48K r-x-- /lib64/libnss_files-2.12.so
00007f1a48e65000 2048K ----- /lib64/libnss_files-2.12.so
00007f1a49065000 4K r---- /lib64/libnss_files-2.12.so
00007f1a49066000 4K rw--- /lib64/libnss_files-2.12.so
00007f1a49067000 12K rw--- [anon]
00007f1a4906b000 8K rw--- [anon]
00007f1a4906d000 28K r--s- /usr/lib64/gconv/gconv-modules.cache
00007f1a49074000 4K rw--- [anon]
00007fff48189000 84K rw--- [stack]
00007fff481ff000 4K r-x-- [anon]
ffffffffff600000 4K r-x-- [anon]
 total 108472K
2757:   -bash
0000000000400000 848K r-x-- /bin/bash
00000000006d3000 40K rw--- /bin/bash
00000000006dd000 20K rw--- [anon]
00000000008dc000 36K rw--- /bin/bash
0000000001385000 396K rw--- [anon]
0000003ef9800000 128K r-x-- /lib64/ld-2.12.so
0000003ef9a1f000 4K r---- /lib64/ld-2.12.so
0000003ef9a20000 4K rw--- /lib64/ld-2.12.so
0000003ef9a21000 4K rw--- [anon]
0000003ef9c00000 1576K r-x-- /lib64/libc-2.12.so
0000003ef9d8a000 2048K ----- /lib64/libc-2.12.so
0000003ef9f8a000 16K r---- /lib64/libc-2.12.so
0000003ef9f8e000 4K rw--- /lib64/libc-2.12.so
0000003ef9f8f000 20K rw--- [anon]
0000003efa400000 8K r-x-- /lib64/libdl-2.12.so
0000003efa402000 2048K ----- /lib64/libdl-2.12.so
0000003efa602000 4K r---- /lib64/libdl-2.12.so
0000003efa603000 4K rw--- /lib64/libdl-2.12.so
0000003efb800000 116K r-x-- /lib64/libtinfo.so.5.7
0000003efb81d000 2048K ----- /lib64/libtinfo.so.5.7
0000003efba1d000 16K rw--- /lib64/libtinfo.so.5.7
00007fda04cb1000 96836K r---- /usr/lib/locale/locale-archive
00007fda0ab42000 48K r-x-- /lib64/libnss_files-2.12.so
00007fda0ab4e000 2048K ----- /lib64/libnss_files-2.12.so
00007fda0ad4e000 4K r---- /lib64/libnss_files-2.12.so
00007fda0ad4f000 4K rw--- /lib64/libnss_files-2.12.so
00007fda0ad50000 12K rw--- [anon]
00007fda0ad54000 8K rw--- [anon]
00007fda0ad56000 28K r--s- /usr/lib64/gconv/gconv-modules.cache
00007fda0ad5d000 4K rw--- [anon]
00007fff0e9e0000 84K rw--- [stack]
00007fff0e9ff000 4K r-x-- [anon]
ffffffffff600000 4K r-x-- [anon]
 total 108472K
As you can see from the output above, processes 1464 and 2757 map many of the same .so files, and at the same memory addresses. Personally, I think the RSS portion could in theory be calculated precisely: traverse all the PIDs under /proc, run pmap on each of them, merge all the output and de-duplicate it, then sum the memory values in the second column (you could also consider starting from the /proc/<pid>/smaps file). A rough sketch of that procedure follows.
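
This sketch is my own illustration, assuming the RHEL 6 style pmap output shown above; /proc/<pid>/smaps with its Pss field would be the more precise source:

#!/usr/bin/python
# Sketch: run pmap for every PID, de-duplicate identical mapping lines
# (same address, size and backing file) and sum the size column. This only
# approximates "count shared pages once"; identical anonymous mappings in
# different processes get merged as well.
import os
import subprocess

seen = set()
total_kb = 0
for name in os.listdir("/proc"):
    if not name.isdigit():
        continue
    p = subprocess.Popen(["pmap", name], stdout=subprocess.PIPE)
    out, _ = p.communicate()
    for line in out.decode("utf-8", "replace").splitlines():
        fields = line.split()
        # mapping lines look like: 0000003ef9c00000 1576K r-x-- /lib64/libc-2.12.so
        if len(fields) < 3 or not fields[1].endswith("K"):
            continue
        if line in seen:                  # already counted for another process
            continue
        seen.add(line)
        total_kb += int(fields[1].rstrip("K"))
print("%d KB after de-duplication" % total_kb)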

Sixth, other cases

In the general-ledger calculation above we saw RSS memory + slab memory + pagetables memory > actual used memory. But that is not absolute; there are exceptions. In the Oracle scenario I mentioned at the beginning, roughly 70 GB of SGA (memory) space had been allocated in advance, and the sum of the three items was much smaller than the physical memory actually in use; even after subtracting the cache and buffer space, it was still much smaller than the memory in use. How do we explain that?

[root@irora04s ~]# free -m
             total       used       free     shared    buffers     cached
Mem:        129023     100024      28999          0        885      12595
-/+ buffers/cache:      86543      42480
Swap:        24575          0      24575
[root@irora04s ~]# sh /tmp/mem.sh
4339696KB, 66056KB, 745.805MB
RSS+PageTable+SlabInfo=5046.805MB
             total       used       free     shared    buffers     cached
Mem:        129023     100096      28926          0        885      12597
-/+ buffers/cache:      86614      42409
Swap:        24575          0      24575
My personal understanding is that this SGA memory is allocated in advance and most of it consists of empty pages: while it is unused, the address space is occupied, but there is no data at those memory addresses. So once the machine triggers kdump, the crash dump occupies roughly rss+pagetable+slabinfo of space, which is less than the rss+pagetable+slabinfo+buffers+cached of disk space.
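
A minimal sketch of that "allocated but untouched" effect (my own, using an anonymous mmap rather than Oracle's SysV shared memory, and a hypothetical 128 MB size): the mapping barely changes the process's VmRSS until the pages are actually written.

#!/usr/bin/python
# Sketch: reserve a large anonymous mapping and watch VmRSS; physical pages
# are only accounted for once they are written to.
import mmap

def vmrss_kb():
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])

print("before mapping: %d kB" % vmrss_kb())
buf = mmap.mmap(-1, 128 * 1024 * 1024)    # reserve 128 MB, nothing touched yet
print("after mapping:  %d kB" % vmrss_kb())
buf[:] = b"\0" * len(buf)                 # write every page -> pages get allocated
print("after writing:  %d kB" % vmrss_kb())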

Finally, I also recommend reading the nmon source code when you have time: the way nmon does its memory accounting makes it easier to understand where the memory goes.
