Spark-shell startup error: Yarn application has already ended! It might have been killed or unable to launch application master


spark-shell does not support yarn-cluster mode, so it has to be started in yarn-client mode:

spark-shell --master yarn --deploy-mode client

The startup log contains the following error messages.

where "neither Spark.yarn.jars nor Spark.yarn.archive is set, falling back to uploading libraries under Spark_home", was just a warning to the official The explanations are as follows:

In short: if spark.yarn.jars and spark.yarn.archive are not configured, Spark packages all the jars under $SPARK_HOME/jars into a zip archive and uploads it so that every worker node can use it. Since this packaging and distribution happen automatically, it does not matter that the two parameters are unset.
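If you want to silence the warning and avoid re-uploading the jars on every submission, one common approach is to pre-stage the Spark jars on HDFS and point spark.yarn.archive at the archive. A rough sketch of that setup; the archive name spark-libs.jar and the HDFS path /spark are placeholders I chose, not anything mandated by Spark:

# Package the jars shipped with Spark and upload them to HDFS
jar cv0f spark-libs.jar -C $SPARK_HOME/jars/ .
hdfs dfs -mkdir -p /spark
hdfs dfs -put spark-libs.jar /spark/

# Then, in $SPARK_HOME/conf/spark-defaults.conf:
spark.yarn.archive hdfs:///spark/spark-libs.jar

Whether this is worth doing is a separate question; as the documentation says, leaving the setting unset simply falls back to the automatic upload.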

"Yarn application has already ended! It might has been killed or unable to launch application Master ", this is an exception, open the Mr Admin page, mine is http://192.168.128.130/8088,

The key detail (highlighted with a red box in the original screenshot) is that the container used about 2.2 GB of virtual memory, more than the 2.1 GB limit. By default a container gets 1 GB of physical memory, and yarn.nodemanager.vmem-pmem-ratio defaults to 2.1, so the virtual memory cap works out to 1 GB × 2.1 = 2.1 GB. Because that limit was exceeded, YARN killed the container; the job runs inside the container, so once the container is killed nothing else can run.

Solution

Add the following configuration to yarn-site.xml:

Either one of the two settings below is enough on its own.

1<!--The following is an additional configuration to solve the problem of Spark-shell running error in yarn client mode, and it is estimated that spark-summit will have this problem. 2 configurations Only one configuration can solve the problem, of course, the configuration is no problem--2<!--virtual memory settings are in effect, and if actual virtual memory is greater than the set value, spark may error when running in client mode."Yarn application has already ended! It might has been killed or unable to l"-3<property>4<name>yarn.nodemanager.vmem-check-enabled</name>5<value>false</value>6<description>whether virtual memory limits would be enforced forContainers</description>7</property>8<!--Configure the value of virtual memory/physical memory by default of 2.1, the physical memory should be 1g by default, so virtual memory is 2.1g-->9<property>Ten<name>yarn.nodemanager.vmem-pmem-ratio</name> One<value>4</value> A<description>ratio between virtual memory to physical memory when setting memory limits forContainers</description> -</property>
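Note that the virtual memory check is enforced by the NodeManagers, so the edited yarn-site.xml has to be present on every NodeManager host, not only the one running the ResourceManager. If the cluster has more than one node, copy the file out before restarting; a minimal sketch, with node1 and node2 as placeholder hostnames:

scp $HADOOP_HOME/etc/hadoop/yarn-site.xml node1:$HADOOP_HOME/etc/hadoop/
scp $HADOOP_HOME/etc/hadoop/yarn-site.xml node2:$HADOOP_HOME/etc/hadoop/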

After making the change, restart Hadoop and then start spark-shell again.
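Only the YARN daemons need to pick up the new yarn-site.xml, so a minimal restart sequence looks like this (assuming a standard Hadoop layout with HADOOP_HOME set; adjust to however you normally start your cluster):

# Restart YARN so the NodeManagers reload yarn-site.xml
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh

# Launch the shell again in yarn-client mode
spark-shell --master yarn --deploy-mode client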

