Hive Job OOM Problem

The error message is as follows:

Container [pid=26845,containerID=container_1419056923480_0212_02_000001] is running beyond virtual memory limits. Current usage: 262.8 MB of 2 GB physical memory used; 4.8 GB of 4.2 GB virtual memory used. Killing container.
Analysis: At first I assumed there was simply not enough memory, so I kept raising the virtual memory limit. That seemed to solve the problem, but in practice the job would still occasionally fail with this error.
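The 4.2 GB virtual-memory limit in the message comes from YARN's virtual-to-physical memory ratio: with the default ratio of 2.1, a 2 GB container is allowed 2.1 x 2 GB = 4.2 GB of virtual memory. A minimal sketch of the standard YARN properties involved, shown with their Hadoop 2.x default values (the exact values tuned on this cluster are an assumption):

<!-- yarn-site.xml: virtual-memory enforcement (Hadoop 2.x defaults) -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value>  <!-- vmem limit = 2.1 * container physical memory -->
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>true</value> <!-- when true, containers over the vmem limit are killed -->
</property>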
The cause of the problem is:
set yarn.nodemanager.resource.memory-mb=2048;
set yarn.app.mapreduce.am.command-opts=-Xmx2048m;
The two parameters are set to the same value, and that is the problem: the first parameter is the total memory the node can offer to YARN, while the second is only the heap size of the JVM running inside the container. The task as a whole needs memory beyond the JVM heap, so when the JVM takes up too much memory, the container can cross the yarn.nodemanager.resource.memory-mb threshold and be killed. The recommendation is to set the JVM heap (-Xmx) to about 0.8 of the actual container memory.
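As a quick sanity check, the heap sizes in the example configuration below follow directly from this 0.8 ratio:

0.8 * 1024 MB = 819.2 MB, rounded down to -Xmx819m
0.8 * 2048 MB = 1638.4 MB, rounded down to -Xmx1638m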
The map and reduce memory settings have the same pitfall. Example configuration:
mapred-site.xml
set mapreduce.map.memory.mb=1024;
set mapreduce.map.java.opts=-Xmx819m;
set mapreduce.reduce.memory.mb=2048;
set mapreduce.reduce.java.opts=-Xmx1638m;
yarn-site.xml
set yarn.nodemanager.resource.memory-mb=2048;
set yarn.app.mapreduce.am.command-opts=-Xmx1638m;
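The "set ...;" statements above are Hive session syntax; in the cluster configuration files the same values are written as XML properties. A minimal sketch, assuming standard Hadoop 2.x property names (note that in stock Hadoop, yarn.app.mapreduce.am.command-opts also belongs in mapred-site.xml):

<!-- mapred-site.xml: container sizes with JVM heaps at roughly 0.8 of each -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx819m</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1638m</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.command-opts</name>
  <value>-Xmx1638m</value>
</property>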
The following article explains the cause of the problem and the recommended configuration in detail:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.9.1/bk_installing_manually_book/content/rpm-chap1-11.html?texttosearch=queue#