OOM killer "Out of Memory: Killed process" solutions/summary


From: http://www.redhat.com/archives/taroon-list/2007-August/msg00006.html

Since this problem seems to pop up on different lists, this message has
been cross-posted to the general Red Hat discussion list, the RHEL3
(Taroon) list and the RHEL4 (Nahant) list. My apologies for not having
the time to post this summary sooner.

I would still be banging my head against this problem were it not for
the generous assistance of Tom Sightler <ttsig tuxyturvy com> and Brian
Long <brilong cisco com>.

In general, the out-of-memory killer (oom-killer) begins killing
processes, even on servers with large amounts (6 GB+) of RAM. In many
cases people report plenty of "free" RAM and are perplexed as to why
the oom-killer is whacking processes. Indications that this has happened
appear in /var/log/messages:
Out of Memory: Killed process [PID] [process name].
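
To check whether this has already happened on a given box, you can
search the log directly (the exact message text varies a bit between
kernel versions, so adjust the pattern as needed):

# grep -i "out of memory" /var/log/messages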

In my case I was upgrading various VMware servers from RHEL3/VMware
GSX to RHEL4/VMware Server. One of the virtual machines on a server
with 16 GB of RAM kept getting whacked by the oom-killer. Needless to
say, this was quite frustrating.

As it turns out, the problem was low memory exhaustion. Quoting Tom:
"The kernel uses low memory to track allocations of all memory, thus a
system with 16 GB of memory will use significantly more low memory than
a system with 4 GB, perhaps as much as 4 times. This extra pressure
happens from the moment you turn the system on, before you do anything
at all, because the kernel structures have to be sized for the
potential of tracking allocations in four times as much memory."
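
As a rough back-of-the-envelope sketch of why this happens (the ~64
bytes of per-page bookkeeping is an assumption; the real figure depends
on kernel version and configuration):

# 16 GB RAM / 4 KB page size    = 4,194,304 pages to track
# 4,194,304 pages * ~64 bytes  ~= 256 MB of low memory consumed
#                                 by page structures alone
Or, estimated from the running system:
# awk '/MemTotal/ { printf "~%d MB for page structs\n", $2/4*64/1024/1024 }' /proc/meminfo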

You can check the status of low & high memory a couple of ways:

# egrep 'High|Low' /proc/meminfo
HighTotal:     5111780 kB
HighFree:         1172 kB
LowTotal:       795688 kB
LowFree:         16788 kB

# free -lm
             total       used       free     shared    buffers     cached
Mem:          5769       5751         17          0          8       5267
Low:           777        760         16          0          0          0
High:         4991       4990          1          0          0          0
-/+ buffers/cache:        475       5293
Swap:         4773          0       4773
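
For anyone who wants an early warning before low memory runs out
entirely, a trivial watch script along these lines can log when LowFree
gets thin (a minimal sketch; the 50 MB threshold and 60-second interval
are arbitrary assumptions, tune both for your environment):

#!/bin/sh
# Log a syslog warning whenever LowFree drops below THRESHOLD_KB.
THRESHOLD_KB=51200
while true; do
    lowfree=`awk '/^LowFree:/ { print $2 }' /proc/meminfo`
    if [ "$lowfree" -lt "$THRESHOLD_KB" ]; then
        logger -p daemon.warning "low memory warning: LowFree=${lowfree} kB"
    fi
    sleep 60
done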

When low memory is exhausted, it doesn't matter how much high memory is
available; the oom-killer will begin whacking processes to keep the
server alive.

There are a couple of solutions to this problem:

If possible, upgrade to 64-bit Linux. This is the best solution because
*all* memory becomes low memory. If you run out of low memory in this
case, then you're *really* out of memory.
(On a 64-bit kernel all RAM is low memory and there is no separate high
memory zone; the low/high split only exists on 32-bit kernels.)
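
Before planning that upgrade, you can confirm the CPU is 64-bit
capable; on x86, the "lm" (long mode) flag in /proc/cpuinfo indicates
x86-64 support:

# grep -qw lm /proc/cpuinfo && echo "CPU is 64-bit capable"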

If limited to 32-bit Linux, the best solution is to run the hugemem
kernel. This kernel splits low/high memory differently, and in most
cases should provide enough low memory to map high memory. In most
cases this is an easy fix: simply install the hugemem kernel RPM and
reboot.
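
For reference, on RHEL the hugemem kernel ships as a separate package
(kernel-hugemem), so the fix looks roughly like this; the version
string below is a placeholder:

# rpm -ivh kernel-hugemem-<version>.i686.rpm
# reboot
# uname -r        (should now end in "hugemem")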

If running the 32-bit hugemem kernel isn't an option either, you can
try setting /proc/sys/vm/lower_zone_protection to a value of 250 or
more. This will cause the kernel to try to be more aggressive in
defending the low zone from allocating memory that could potentially be
allocated in the high memory zone. As far as I know, this option isn't
available until the 2.6.x kernel. Some experimentation to find the best
setting for your environment will probably be necessary. You can check
& set this value on the fly:
# cat /proc/sys/vm/lower_zone_protection
# echo "250" > /proc/sys/vm/lower_zone_protection

To set this option on boot, add the following to /etc/sysctl.conf:
vm.lower_zone_protection = 250
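
Settings added to /etc/sysctl.conf can also be loaded immediately,
without waiting for a reboot:

# sysctl -p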

As a last-ditch effort, you can disable the oom-killer. This option can
cause the server to hang, so use it with extreme caution (and at your
own risk)!
Check the status of the oom-killer:
# cat /proc/sys/vm/oom-kill

Turn the oom-killer off/on:
# echo "0" > /proc/sys/vm/oom-kill
# echo "1" > /proc/sys/vm/oom-kill

To make this change take effect at boot time, add the following to
/etc/sysctl.conf:
vm.oom-kill = 0

For processes that would have been killed, but weren't because the
oom-killer is disabled, you'll see the following message in
/var/log/messages:
"Would have oom-killed but /proc/sys/vm/oom-kill is disabled"

Sorry for being so long-winded. I hope this helps others who have
Struggled with this problem.

-Eric

--
Eric Sisler <esisler westminster lib co us>
Library Network Specialist
Westminster Public Library
Westminster, CO USA

Linux - don't fear the penguin.
Want to know what we use Linux for?
Visit http://wallace.westminster.lib.co.us/linux
