Running the following command on a Hadoop 2.7.2 cluster:
spark-shell --master yarn --deploy-mode client
throws the following error:
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
The YARN Web UI shows the launched application's status, and its log reads:
Container [pid=28920,containerID=container_1389136889967_0001_01_000121] is running beyond virtual memory limits. Current
usage: 1.2 GB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.
This happens because virtual memory usage exceeded the configured limit; it can be worked around by adjusting the configuration.
YARN enforces a check on the ratio of virtual to physical memory usage. The issue is not that the VM lacks sufficient physical memory, but that virtual memory usage is higher than expected for the given amount of physical memory.
Note: this is common on CentOS/RHEL 6 due to its aggressive allocation of virtual memory.
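To see why the container in the log above was killed, the check can be sketched numerically. This is a simplified model of the NodeManager's enforcement, with the numbers taken from the log message:

```shell
# Hypothetical values matching the log: 1 GB physical limit,
# default vmem-pmem ratio of 2.1, ~2.2 GB virtual memory in use.
pmem_limit_mb=1024
vmem_ratio=2.1
vmem_used_mb=2252

# virtual memory limit = physical memory limit * ratio
vmem_limit_mb=$(awk "BEGIN { print $pmem_limit_mb * $vmem_ratio }")
echo "virtual memory limit: ${vmem_limit_mb} MB"

# the NodeManager kills the container when usage exceeds the limit
if awk "BEGIN { exit !($vmem_used_mb > $vmem_limit_mb) }"; then
    echo "container killed: beyond virtual memory limits"
fi
```

With a ratio of 2.1 the limit is about 2150 MB, so 2.2 GB of virtual memory trips the check even though only 1.2 GB of physical memory is in use.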
It can be resolved either by:
- disabling the virtual memory check by setting yarn.nodemanager.vmem-check-enabled to false; or
- increasing the VM:PM ratio by setting yarn.nodemanager.vmem-pmem-ratio to a higher value (the default is 2.1).
Add the following properties to yarn-site.xml:
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
  <description>Whether virtual memory limits will be enforced for containers</description>
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
  <description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
</property>
Then restart YARN.
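Assuming a standard Hadoop 2.x layout under $HADOOP_HOME, restarting the YARN daemons with the bundled scripts can look like:

```shell
# stop and restart the ResourceManager and NodeManagers
# so the new yarn-site.xml settings take effect
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh
```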
References:
http://blog.cloudera.com/blog/2014/04/apache-hadoop-yarn-avoiding-6-time-consuming-gotchas/
http://blog.chinaunix.net/uid-28311809-id-4383551.html
http://stackoverflow.com/questions/21005643/container-is-running-beyond-memory-limits
In short: running "spark-shell --master yarn --deploy-mode client" failed because the container exceeded its virtual memory limit.