Spark SQL Run error (Container killed on request. Exit Code is 143)

Source: Internet
Author: User

Error Description:

A SQL query joining three tables fails at run time.

Running the query with Hive produces the following error:

Diagnostic Messages for this Task:
Container [pid=27756,containerID=container_1460459369308_5864_01_000570] is running beyond physical memory limits. Current usage: 4.2 GB of 4 GB physical memory used; 5.0 GB of 16.8 GB virtual memory used. Killing container.
Container killed on request. Exit Code is 143
Container exited with a Non-zero exit code 143

Running the same query with Spark produces the following error:

Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 369 in stage 1353.0 failed 4 times, most recent failure: Lost task 369.3 in stage 1353.0 (TID 212351, cnsz033139.app.paic.com.cn): ExecutorLostFailure (executor 689 exited caused by one of the running tasks) Reason: Container marked as failed: container_1460459369308_2154_01_000906 on host: cnsz033139.app.paic.com.cn. Exit status: 143. Diagnostics: Container killed on request. Exit Code is 143
Container exited with a Non-zero exit code 143
Killed by external signal

Error analysis

The Hive error is due to the container exceeding its physical memory limit, which causes YARN to kill the container.
Since the failure occurred during the reduce phase, the reduce-stage containers most likely do not have enough memory.

Solution

First, check the container memory configuration:

hive (default)> SET mapreduce.map.memory.mb;
mapreduce.map.memory.mb=4096
hive (default)> SET mapreduce.reduce.memory.mb;
mapreduce.reduce.memory.mb=4096
hive (default)> SET yarn.nodemanager.vmem-pmem-ratio;
yarn.nodemanager.vmem-pmem-ratio=4.2

Therefore, each map or reduce task is allocated 4 GB of physical memory, with a virtual memory limit of 4 × 4.2 = 16.8 GB.
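The virtual-memory ceiling is simply the physical container limit multiplied by yarn.nodemanager.vmem-pmem-ratio. A quick sanity check of the figures above (the awk one-liner is only for illustration):

```shell
# 4096 MB physical limit x ratio 4.2 = 17203.2 MB, i.e. the 16.8 GB
# virtual memory ceiling reported in the Hive diagnostic message.
awk 'BEGIN { printf "%.1f GB\n", 4096 * 4.2 / 1024 }'
# prints: 16.8 GB
```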

The amount of data processed by a single reduce task exceeds the 4 GB memory limit, so raise the reduce container memory: mapreduce.reduce.memory.mb=8192.
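A minimal sketch of applying this fix in a Hive session before rerunning the failing join. The 8192 MB value comes from the solution above; the matching java.opts heap size is an assumption, sized below the container limit as discussed in the reference quoted later:

```shell
# Raise the reduce container limit, then rerun the failing query.
hive -e "
SET mapreduce.reduce.memory.mb=8192;
-- JVM heap must fit inside the container limit (assumed ~80% here):
SET mapreduce.reduce.java.opts=-Xmx6553m;
"
```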

Reference:

http://stackoverflow.com/questions/29001702/why-yarn-java-heap-space-memory-error?answertab=oldest#tab-top

There are memory settings that can be set at the YARN container level and also at the mapper and reducer level. Memory is requested in increments of the YARN container size. Mapper and reducer tasks run inside a container.

mapreduce.map.memory.mb and mapreduce.reduce.memory.mb

The above parameters describe the upper memory limit for a MapReduce task; if the memory used by the task exceeds this limit, the corresponding container will be killed.

These parameters determine the maximum amount of memory that can be assigned to mapper and reducer tasks respectively. Let us look at an example: mappers are bound by an upper limit on memory, which is defined in the configuration parameter mapreduce.map.memory.mb.

However, if the value of yarn.scheduler.minimum-allocation-mb is greater than the value of mapreduce.map.memory.mb, then yarn.scheduler.minimum-allocation-mb is respected and containers of that size are given out.

This parameter needs to be set carefully; if set improperly, it can lead to bad performance or OutOfMemory errors.

mapreduce.reduce.java.opts and mapreduce.map.java.opts

These values need to be less than the upper bounds defined in mapreduce.map.memory.mb / mapreduce.reduce.memory.mb, since the JVM heap must fit within the memory allocation for the map/reduce task.
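A common rule of thumb (an assumption here, not stated in the source) is to size the JVM heap (-Xmx) at roughly 80% of the container limit, leaving headroom for JVM native memory and metaspace. A small sizing sketch:

```shell
# Hypothetical sizing helper: derive -Xmx as ~80% of the container limit
# so the JVM heap plus overhead stays under mapreduce.reduce.memory.mb.
container_mb=8192
heap_mb=$(( container_mb * 80 / 100 ))
echo "mapreduce.reduce.memory.mb=${container_mb}"
echo "mapreduce.reduce.java.opts=-Xmx${heap_mb}m"
# prints:
# mapreduce.reduce.memory.mb=8192
# mapreduce.reduce.java.opts=-Xmx6553m
```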

How to Plan and Configure YARN and MapReduce 2 in HDP 2.0
