Original reference: http://blog.javachen.com/2015/06/09/memory-in-spark-on-yarn.html?utm_source=tuicool
The file being processed is a few GB in size, so Spark's default memory settings do not work and need to be adjusted. Not having read the Spark source code, I could only search related blog posts to solve the problem.
Spark on YARN has two modes, yarn-client and yarn-cluster, distinguished by where the Spark application's driver runs. When you run a Spark job on YARN, each Spark executor runs as a YARN container, and Spark can run multiple tasks in the same container.
Spark memory is configured in the spark-env.sh file in the Spark configuration directory, which documents the memory settings for standalone, yarn-client, and yarn-cluster modes.
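For reference, a minimal spark-env.sh fragment for YARN mode might look like this; the variable names are standard Spark (1.x era) settings, but the values are assumptions chosen only for illustration:

```shell
# spark-env.sh — memory-related settings read by Spark on YARN
export SPARK_EXECUTOR_INSTANCES=4    # number of executors to request
export SPARK_EXECUTOR_CORES=2        # cores per executor
export SPARK_EXECUTOR_MEMORY=4G      # heap size per executor container
export SPARK_DRIVER_MEMORY=2G       # driver heap (yarn-client mode)
```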
YARN memory is configured in the yarn-site.xml file in the Hadoop configuration directory. The most commonly used parameters are:

- yarn.app.mapreduce.am.resource.mb: the maximum memory the ApplicationMaster can request; default 1536 MB
- yarn.nodemanager.resource.memory-mb: the maximum memory a NodeManager can allocate to containers; default 8192 MB
- yarn.scheduler.minimum-allocation-mb: the minimum resource a container can request when scheduling; default 1024 MB
- yarn.scheduler.maximum-allocation-mb: the maximum resource a container can request when scheduling; default 8192 MB
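These parameters interact with Spark's requests: when Spark asks for an executor, YARN adds the executor's memory overhead and rounds the total up to a multiple of the minimum allocation. A small sketch of that arithmetic, assuming a 10% overhead factor with a 384 MB floor and the capacity scheduler's default rounding behavior (both are assumptions here, and the exact overhead factor varies across Spark versions):

```shell
# Sketch: how an executor memory request becomes a container size.
executor_mem=4096                           # --executor-memory, in MB
overhead=$(( executor_mem * 10 / 100 ))     # assumed overhead factor (10%)
[ "$overhead" -lt 384 ] && overhead=384     # assumed 384 MB floor
request=$(( executor_mem + overhead ))      # total memory asked of YARN
min_alloc=2048                              # yarn.scheduler.minimum-allocation-mb
# Round the request up to a multiple of the minimum allocation
container=$(( (request + min_alloc - 1) / min_alloc * min_alloc ))
echo "$container"                           # prints 6144 for these numbers
```

If the rounded request exceeds yarn.scheduler.maximum-allocation-mb, YARN rejects it, which is a common cause of "Required executor memory is above the max threshold" errors.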
Note that the master node and each slave node need to be configured separately, and the values can be set according to each machine's resources. My configuration on the master node is:
<configuration>
  <!-- Site Specific YARN Configuration Properties -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>81920</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>81920</value>
  </property>
</configuration>
Configuring Spark on YARN cluster memory