java.lang.OutOfMemoryError: Java heap space in Hadoop

Source: Internet
Author: User

Recently, with the rise of big data, Hadoop, the Java implementation of the MapReduce paradigm, has become a leader in the data field; HDFS, MapReduce, and Hive have all become buzzwords. A whole ecosystem of big-data projects has grown up around the Hadoop codebase itself, and it is also tied to cloud computing, which is no longer quite the hot topic it once was.

So, as a programmer, you have to keep learning new skills and knowledge to stay employable, which is genuinely hard work. That is how I started with Hadoop. Sure enough, things went wrong right away.

Following the setup instructions in "Hadoop Beginner's Guide," I found that Hadoop ships an installable .deb package, which saves a lot of tedious work: just double-click it in the Ubuntu Software Center to install.

After the installation completed, I tried to run the first sample program:

hadoop jar hadoop-examples-1.2.1.jar pi 4 1 (the book uses hadoop-examples-1.0.4.jar pi 4 1000; the jar name here matches the installed version). Unfortunately, the following error occurred:

$ sudo hadoop jar hadoop-examples-1.2.1.jar pi 4 1
Number of Maps  = 4
Samples per Map = 1
15/04/17 21:54:44 INFO util.NativeCodeLoader: Loaded the native-hadoop library
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Starting Job
15/04/17 21:54:44 INFO mapred.FileInputFormat: Total input paths to process : 4
15/04/17 21:54:44 INFO mapred.JobClient: Running job: job_local1032904958_0001
15/04/17 21:54:44 INFO mapred.LocalJobRunner: Waiting for map tasks
15/04/17 21:54:44 INFO mapred.LocalJobRunner: Starting task: attempt_local1032904958_0001_m_000000_0
15/04/17 21:54:44 INFO util.ProcessTree: setsid exited with exit code 0
15/04/17 21:54:44 INFO mapred.MapTask: Processing split: file:/usr/share/hadoop/PiEstimator_TMP_3_141592654/in/part2:0+118
15/04/17 21:54:44 INFO mapred.MapTask: numReduceTasks: 1
15/04/17 21:54:45 INFO mapred.MapTask: io.sort.mb = 100
    ... (the three remaining map tasks for part0, part1, and part3 start and log the same lines) ...
15/04/17 21:54:45 INFO mapred.LocalJobRunner: Map task executor complete.
15/04/17 21:54:45 WARN mapred.LocalJobRunner: job_local1032904958_0001
java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:354)
Caused by: java.lang.OutOfMemoryError: Java heap space
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:954)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:422)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
        at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:223)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:744)
15/04/17 21:54:45 INFO mapred.JobClient:  map 0% reduce 0%
15/04/17 21:54:45 INFO mapred.JobClient: Job complete: job_local1032904958_0001
15/04/17 21:54:45 INFO mapred.JobClient: Counters: 0
15/04/17 21:54:45 INFO mapred.JobClient: Job Failed: NA
java.io.IOException: Job failed!
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1357)
        at org.apache.hadoop.examples.PiEstimator.estimate(PiEstimator.java:297)
        at org.apache.hadoop.examples.PiEstimator.run(PiEstimator.java:342)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:351)
        ... (reflective invocation frames) ...
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
        at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:160)

Scaling the job down to a single map with a single sample (sudo hadoop jar hadoop-examples-1.2.1.jar pi 1 1, job_local406287877_0001) fails in exactly the same way: the same OutOfMemoryError in MapTask$MapOutputBuffer.<init>, followed by the same "Job failed!" stack trace.

I looked through countless posts, none of which helped. After much searching and debugging, I found the key: the memory allocation configured in /etc/hadoop/hadoop-env.sh is too small, which is what causes the out-of-memory error. The modified file is pasted below:
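The failure is easy to model: in Hadoop 1.x, the map task's MapOutputBuffer (the MapTask$MapOutputBuffer.&lt;init&gt; frame in the trace above) allocates an in-memory sort buffer of io.sort.mb megabytes up front, so a JVM whose heap is not larger than io.sort.mb (100 MB in the log) cannot even start the task. The sketch below is a simplified model of that arithmetic; the class and method names are illustrative, not Hadoop's own, and the real allocation is somewhat more involved.

```java
// Simplified model of why the map task dies at startup: the sort buffer
// must fit inside the task JVM's maximum heap, with room to spare.
public class SortBufferDemo {

    /** Bytes the in-memory sort buffer needs for a given io.sort.mb. */
    static long sortBufferBytes(int ioSortMb) {
        return (long) ioSortMb * 1024 * 1024;
    }

    /** True if a heap of maxHeapMb can hold the sort buffer at all. */
    static boolean heapCanHoldBuffer(int maxHeapMb, int ioSortMb) {
        return (long) maxHeapMb * 1024 * 1024 > sortBufferBytes(ioSortMb);
    }

    public static void main(String[] args) {
        int ioSortMb = 100; // the "io.sort.mb = 100" reported in the log
        // A small client heap can never hold the 100 MB buffer,
        // while the 200 MB heap set later via HADOOP_CLIENT_OPTS can:
        System.out.println("64 MB heap ok?  " + heapCanHoldBuffer(64, ioSortMb));
        System.out.println("200 MB heap ok? " + heapCanHoldBuffer(200, ioSortMb));
    }
}
```

This is also why the error disappears once the heap is raised above io.sort.mb (as done below via hadoop-env.sh); lowering io.sort.mb in the job configuration would, in principle, work too.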

# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME. All others are
# optional. When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0                        # <-- modified

export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}

# The maximum amount of heap to use, in MB. Default is 1000.
export HADOOP_HEAPSIZE=100                                    # <-- modified
#export HADOOP_NAMENODE_INIT_HEAPSIZE=""

# Extra Java runtime options. Empty by default.
export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true $HADOOP_CLIENT_OPTS"

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT $HADOOP_NAMENODE_OPTS"
HADOOP_JOBTRACKER_OPTS="-Dhadoop.security.logger=INFO,DRFAS -Dmapred.audit.logger=INFO,MRAUDIT -Dhadoop.mapreduce.jobsummary.logger=INFO,JSA $HADOOP_JOBTRACKER_OPTS"
HADOOP_TASKTRACKER_OPTS="-Dhadoop.security.logger=ERROR,console -Dmapred.audit.logger=ERROR,console $HADOOP_TASKTRACKER_OPTS"
HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,DRFAS $HADOOP_DATANODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT $HADOOP_SECONDARYNAMENODE_OPTS"

# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
export HADOOP_CLIENT_OPTS="-Xmx200m $HADOOP_CLIENT_OPTS"      # <-- modified
#HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"

# On secure datanodes, user to run the datanode as after dropping privileges
export HADOOP_SECURE_DN_USER=

# Where log files are stored. $HADOOP_HOME/logs by default.
export HADOOP_LOG_DIR=/var/log/hadoop/$USER

# Where log files are stored in the secure data environment.
export HADOOP_SECURE_DN_LOG_DIR=/var/log/hadoop/

# The directory where pid files are stored. /tmp by default.
export HADOOP_PID_DIR=/var/run/hadoop
export HADOOP_SECURE_DN_PID_DIR=/var/run/hadoop

# A string representing this instance of hadoop. $USER by default.
export HADOOP_IDENT_STRING=$USER

After making the changes, run source hadoop-env.sh to put them into effect immediately.
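To double-check that the edited variables actually reach the environment before re-running the job, you can source the file and echo the values. The sketch below works on a stand-alone excerpt containing just the two heap-related lines, so it can be run anywhere; on the real system you would source /etc/hadoop/hadoop-env.sh itself (the hadoop wrapper script also reads that file on each run).

```shell
# Stand-in for /etc/hadoop/hadoop-env.sh with the two heap-related changes
cat > hadoop-env-excerpt.sh <<'EOF'
export HADOOP_HEAPSIZE=100
export HADOOP_CLIENT_OPTS="-Xmx200m $HADOOP_CLIENT_OPTS"
EOF

# Load it into the current shell and verify the values took effect
. ./hadoop-env-excerpt.sh
echo "HADOOP_HEAPSIZE=$HADOOP_HEAPSIZE"
echo "HADOOP_CLIENT_OPTS=$HADOOP_CLIENT_OPTS"
```

If the echoed HADOOP_CLIENT_OPTS contains -Xmx200m, the client JVM (which is what runs the LocalJobRunner in the failing transcript above) will start with a 200 MB heap, enough to hold the 100 MB sort buffer.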
