Ubuntu server memory overflow: out of memory

Source: Internet
Author: User
Tags: server, memory, linux

Environment:

Ubuntu Server 12.04 i686

Problem Description:

The server has 24 GB of memory, with roughly 20 GB free, yet the kernel keeps reporting the following and killing processes:

Jun  6 13:12:44 00098 kernel: [3112325.883069] Out of memory: Kill process 2249 (nginx) score 1 or sacrifice child
Jun  6 13:12:44 00098 kernel: [3112325.922795] Killed process 2831 (nginx) total-vm:21772kB, anon-rss:11048kB, file-rss:916kB
Jun  6 12:43:18 00098 kernel: [3110562.214498] snmpd invoked oom-killer: gfp_mask=0x840d0, order=0, oom_adj=0, oom_score_adj=0
Jun  6 12:43:18 00098 kernel: [3110562.214502] snmpd cpuset=/ mems_allowed=0
Jun  6 12:49:57 00098 kernel: [3110960.995962] Out of memory: Kill process 1858 (mysqld) score 1 or sacrifice child
Jun  6 12:49:57 00098 kernel: [3110961.032675] Killed process 1858 (mysqld) total-vm:140652kB, anon-rss:15492kB, file-rss:6100kB
Jun  6 12:49:57 00098 kernel: [3110961.103870] init: mysql main process (1858) killed by KILL signal
Jun  6 12:49:57 00098 kernel: [3110961.103899] init: mysql main process ended, respawning
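Before digging further, it helps to pull all the OOM events out of the logs in one go rather than scrolling through them. A minimal sketch, assuming a stock Ubuntu 12.04 layout where kernel messages land in the ring buffer and in /var/log/syslog (on RHEL-style systems the file is /var/log/messages):

# dmesg | grep -i "out of memory"
# grep -iE "oom-killer|out of memory" /var/log/syslog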

It was really frustrating. I had set up Cacti on this box. The system only had nginx on it, so I served Cacti through nginx, which meant installing php5-fpm for PHP support. Once everything was installed I could log in to Cacti, but after a few refreshes it went down. At first I assumed nginx itself had crashed and did not think much of it; a while later monitoring alerted that nginx was down again, and the nginx logs showed nothing unusual. Today MySQL went down as well, which really hurt: why did the processes keep dying? A careful look at the logs turned up the kernel messages above. The oom-killer was killing the processes, even though memory was nowhere near full and there was plenty left.

Some searching online turned up the explanation: on a 32-bit system, if low memory is exhausted, exactly this problem appears. And sure enough, checking low memory showed there was very little left!

Typically, the out-of-memory killer (oom-killer) kills processes even on servers with plenty of memory (6 GB+). In many cases people report, baffled, that there is still free memory, so why is the oom-killer killing processes? The symptom is the following message in the /var/log/messages log:

Out of memory: Killed process [PID] ([process name]).

In my own case, after upgrading various RHEL3 systems to RHEL4 under VMware, servers with 16 GB of memory still had processes killed by the oom-killer. Needless to say, it was very frustrating.

The cause of the problem is that low memory is exhausted. To quote Tom: "The kernel uses low memory to track all memory allocations, so a system with 16 GB of memory uses more than four times as much low memory as a system with 4 GB. This extra pressure exists from the moment you boot the system, because the kernel structures have to be sized to potentially track four times as many memory allocations."

In plain terms, the OOM killer is a protection mechanism: when Linux runs short of memory, it kills the less important processes so that the shortage does not turn into something worse, a bit like cutting off a wrist to save the arm.
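The "less important" part is decided by a per-process score that the kernel exposes under /proc, and on 2.6-era kernels you can also tell the oom-killer to leave a particular daemon alone. A quick sketch, using mysqld purely as an example (pidof -s picks one PID; newer kernels use oom_score_adj instead of oom_adj):

# cat /proc/$(pidof -s mysqld)/oom_score       # the higher the score, the more likely it is to be picked
# echo -17 > /proc/$(pidof -s mysqld)/oom_adj  # -17 tells the oom-killer never to kill this process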

A bit of background: 32-bit CPU architectures have addressing limits, and the Linux kernel divides physical memory into three zones:

# DMA: 0x00000000 - 0x00FFFFFF (0 - 16 MB)

# LowMem: 0x01000000 - 0x37FFFFFF (16 - 896 MB) - size: 880 MB

# HighMem: 0x38000000 - <hardware specific>

The lowmem zone (also called the NORMAL zone) is 880 MB and cannot be changed (except with the hugemem kernel). On heavily loaded systems, the OOM killer can be triggered by poor lowmem utilization: either lowfree is too small, or lowmem is fragmented and a request for a contiguous memory area cannot be satisfied.
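For the fragmentation case, /proc/buddyinfo is the quickest check: it lists, for each zone, how many free blocks of each size order are left, so a Normal (lowmem) row with counts only in the leftmost columns means there are free pages but no large contiguous chunks. A sketch of the two files worth looking at:

# cat /proc/buddyinfo              # free blocks per zone by order; little on the right = fragmented lowmem
# grep -A4 zone /proc/zoneinfo     # per-zone free pages and the min/low/high watermarks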

There are two ways to view the status of low memory and high memory:

# egrep 'High|Low' /proc/meminfo
HighTotal:     5111780 kB
HighFree:         1172 kB
LowTotal:       795688 kB
LowFree:         16788 kB

# free -lm
             total       used       free     shared    buffers     cached
Mem:          5769       5751         17          0          8       5267
Low:           777        760         16          0          0          0
High:         4991       4990          1          0          0          0
-/+ buffers/cache:        475       5293
Swap:         4773          0       4773

When low memory runs out, no matter how much high memory is left, the oom-killer starts killing processes to keep the system alive.
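Because the failure mode is LowFree trending toward zero while the headline free memory still looks healthy, it is worth keeping an eye on that one number over time instead of the total. A simple sketch:

# watch -n 60 'egrep "LowFree|HighFree" /proc/meminfo'    # refresh the low/high free counters every minute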

There are two ways to solve this problem:

1. If at all possible, upgrade to a 64-bit system.

This is the best solution, because all memory then becomes low memory. If you exhaust low memory in that situation, you really are out of memory.
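To confirm which situation you are actually in before deciding, check whether the running kernel is 32-bit or 64-bit:

# uname -m           # i686 means a 32-bit kernel, x86_64 means 64-bit
# getconf LONG_BIT   # prints 32 or 64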

2. If you are stuck on a 32-bit system, the best solution is to run the hugemem kernel.

This kernel splits low/high memory differently and, in most cases, provides enough low memory to map the high memory. In most cases this is a simple fix: install the hugemem kernel RPM package and reboot.
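The hugemem kernel is a Red Hat variant, so on RHEL 3/4 or CentOS 4 the rough shape of it, assuming the package is named kernel-hugemem, would be:

# rpm -q kernel-hugemem        # check whether it is already installed
# yum install kernel-hugemem   # then reboot into the new kernel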

If running the hugemem kernel is not an option, you can try setting /proc/sys/vm/lower_zone_protection to 250 or higher. This makes the kernel more willing to protect low memory and to prefer allocating from high memory instead. As far as I know, this option has been available since the 2.6.x kernels. You may need to experiment to find the value that best suits your environment. You can quickly check and change the setting as follows:

# cat /proc/sys/vm/lower_zone_protection

# echo "250" > /proc/sys/vm/lower_zone_protection

To make the setting take effect at boot, add it to /etc/sysctl.conf:

vm.lower_zone_protection = 250
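After editing /etc/sysctl.conf, the setting can also be applied immediately without a reboot (assuming the running kernel still exposes this knob):

# sysctl -p                          # reload /etc/sysctl.conf
# sysctl vm.lower_zone_protection    # verify the value now in effect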

As a last resort, you can disable the oom-killer entirely. This can cause the system to hang, so use it with care (at your own risk)!

To view the current oom-killer status:

# cat /proc/sys/vm/oom-kill

To disable/enable the oom-killer:

# echo "0" > /proc/sys/vm/oom-kill

# echo "1" > /proc/sys/vm/oom-kill

When a process would have been killed by the oom-killer but was not, the following is recorded in /var/log/messages:

"Would have oom-killed But/proc/sys/vm/oom-kill is disabled"
