ERROR: GC overhead limit exceeded (solution)

Source: Internet
Author: User
Tags: mapr, gc overhead limit exceeded, log4j
Premise: the hardware running the MR job is an i7 processor with 8 GB of memory, executing a join of a large table against a small table over 20 million rows of data. As shown in the CPU figure in the original post, CPU usage momentarily spikes to 99% and memory usage reaches 70%.

MR task exception in Eclipse
http://blog.csdn.net/xiaoshunzi111/article/details/52882234
I had a problem when running HiBench with hadoop-2.2.0; the error messages are listed below:

    14/03/07 13:54:53 INFO mapreduce.Job: map 19% reduce 0%
    14/03/07 13:54:54 INFO mapreduce.Job: map 21% reduce 0%
    14/03/07 14:00:26 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000020_0, Status: FAILED Error: GC overhead limit exceeded
    14/03/07 14:00:27 INFO mapreduce.Job: map 20% reduce 0%
    14/03/07 14:00:40 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000008_0, Status: FAILED Error: GC overhead limit exceeded
    14/03/07 14:00:41 INFO mapreduce.Job: map 19% reduce 0%
    14/03/07 14:00:59 INFO mapreduce.Job: map 20% reduce 0%
    14/03/07 14:00:59 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000015_0, Status: FAILED Error: GC overhead limit exceeded
    14/03/07 14:01:00 INFO mapreduce.Job: map 19% reduce 0%
    14/03/07 14:01:03 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000023_0, Status: FAILED Error: GC overhead limit exceeded
    14/03/07 14:01:11 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000026_0, Status: FAILED Error: GC overhead limit exceeded
    14/03/07 14:01:35 INFO mapreduce.Job: map 20% reduce 0%
    14/03/07 14:01:35 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000019_0, Status: FAILED Error: GC overhead limit exceeded
    14/03/07 14:01:36 INFO mapreduce.Job: map 19% reduce 0%
    14/03/07 14:01:43 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000007_0, Status: FAILED Error: GC overhead limit exceeded
    14/03/07 14:02:00 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000000_0, Status: FAILED Error: GC overhead limit exceeded
    14/03/07 14:02:01 INFO mapreduce.Job: map 18% reduce 0%
    14/03/07 14:02:23 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000021_0, Status: FAILED Error: Java heap space
    14/03/07 14:02:24 INFO mapreduce.Job: map 17% reduce 0%
    14/03/07 14:02:31 INFO mapreduce.Job: map 18% reduce 0%
    14/03/07 14:02:33 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000029_0, Status: FAILED Error: GC overhead limit exceeded
    14/03/07 14:02:34 INFO mapreduce.Job: map 17% reduce 0%
    14/03/07 14:02:38 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000010_0, Status: FAILED Error: GC overhead limit exceeded
    14/03/07 14:02:41 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000018_0, Status: FAILED Error: GC overhead limit exceeded
    14/03/07 14:02:43 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000014_0, Status: FAILED Error: GC overhead limit exceeded
    14/03/07 14:02:47 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000028_0, Status: FAILED Error: Java heap space
    14/03/07 14:02:50 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000002_0, Status: FAILED Error: GC overhead limit exceeded
    14/03/07 14:02:51 INFO mapreduce.Job: map 16% reduce 0%
    14/03/07 14:02:51 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000005_0, Status: FAILED Error: GC overhead limit exceeded
    14/03/07 14:02:52 INFO mapreduce.Job: map 15% reduce 0%
    14/03/07 14:02:55 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000006_0, Status: FAILED Error: GC overhead limit exceeded
    14/03/07 14:02:57 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000027_0, Status: FAILED Error: GC overhead limit exceeded
    14/03/07 14:02:58 INFO mapreduce.Job: map 14% reduce 0%
    14/03/07 14:03:04 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000009_0, Status: FAILED Error: GC overhead limit exceeded
    14/03/07 14:03:05 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000017_0, Status: FAILED Error: GC overhead limit exceeded
    14/03/07 14:03:05 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000022_0, Status: FAILED Error: GC overhead limit exceeded
    14/03/07 14:03:06 INFO mapreduce.Job: map 12% reduce 0%
    14/03/07 14:03:10 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000001_0, Status: FAILED Error: GC overhead limit exceeded
    14/03/07 14:03:11 INFO mapreduce.Job: map 13% reduce 0%
    14/03/07 14:03:11 INFO mapreduce.Job: Task Id: attempt_1394160253524_0010_m_000024_0, Status: FAILED

Then I added the parameter mapred.child.java.opts to mapred-site.xml:

    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx1024m</value>
    </property>

Then another error occurred, as below:

    14/03/07 11:21:51 INFO mapreduce.Job: map 0% reduce 0%
    14/03/07 11:21:59 INFO mapreduce.Job: Task Id: attempt_1394160253524_0003_m_000002_0, Status: FAILED
    Container [pid=5592,containerID=container_1394160253524_0003_01_000004] is running beyond virtual memory limits. Current usage: 112.6 MB of 1 GB physical memory used; 2.7 GB of 2.1 GB virtual memory used. Killing container.
    Dump of the process-tree for container_1394160253524_0003_01_000004:
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 5598 5592 5592 5592 (java) 563 2778632192 28520 /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000002_0 4
    |- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000002_0 4 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr
    Container killed on request. Exit code is 143
    14/03/07 11:22:02 INFO mapreduce.Job: Task Id: attempt_1394160253524_0003_m_000001_0, Status: FAILED
    Container [pid=5182,containerID=container_1394160253524_0003_01_000003] is running beyond virtual memory limits. Current usage: 118.5 MB of 1 GB physical memory used; 2.7 GB of 2.1 GB virtual memory used. Killing container.
    Dump of the process-tree for container_1394160253524_0003_01_000003:
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000001_0 3 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr
    |- 5187 5182 5182 5182 (java) 616 19 2783928320 30028 /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000001_0 3
    Container killed on request. Exit code is 143
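The second error is not a heap exhaustion: the child JVM was launched with -Xmx2048m while the map container was sized at 1 GB of physical memory, and YARN enforces a virtual-memory cap of physical memory × yarn.nodemanager.vmem-pmem-ratio (default 2.1, which is exactly the "2.7 GB of 2.1 GB virtual memory used" in the log). The original post stops before stating a fix; a commonly suggested one, sketched below with illustrative values that are not from the post, is to keep the map heap below the container size in mapred-site.xml and to relax or disable the virtual-memory check in yarn-site.xml:

```xml
<!-- mapred-site.xml: keep the map JVM heap inside the container
     (example values, not taken from the original post) -->
<property>
  <!-- container size for map tasks, in MB -->
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <!-- heap must stay below mapreduce.map.memory.mb -->
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1536m</value>
</property>

<!-- yarn-site.xml: relax the virtual-memory cap... -->
<property>
  <!-- default is 2.1 -->
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
<!-- ...or disable the virtual-memory check entirely -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```

Note that in hadoop-2.x, mapreduce.map.java.opts (and mapreduce.reduce.java.opts) supersede the older mapred.child.java.opts used above; either way, the -Xmx value must fit inside the container size or the NodeManager will kill the container with exit code 143, as in the log.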
