Hadoop: java.lang.OutOfMemoryError: Unable to create new native thread

Source: Internet
Author: User

While running Hadoop programs recently, I encountered two problems:

 

1. OutOfMemoryError in Hadoop

Error: Unable to create new native thread
Error initializing attempt_2011100000003_0013_r_000000_0:
java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:614)
    at java.lang.UNIXProcess$1.run(UNIXProcess.java:157)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.lang.UNIXProcess.<init>(UNIXProcess.java:119)
    at java.lang.ProcessImpl.start(ProcessImpl.java:81)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:468)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
    at org.apache.hadoop.util.Shell.run(Shell.java:134)
    at org.apache.hadoop.fs.DF.getAvailable(DF.java:73)
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:329)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
    at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:750)
    at org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:1664)
    at org.apache.hadoop.mapred.TaskTracker.access$1200(TaskTracker.java:97)
    at org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:1629)

When you see this kind of error while running Hadoop jobs, there can be a number of possible causes. One common cause is that your MapReduce program creates more processes or threads than the OS allows per user; the default limit is often 1024 (you can check it by executing 'ulimit -u'). A typical example is opening a separate output stream or process per key in the reduce stage in order to control the output file name based on the key-value pair. To solve this problem, raise the maximum number of processes allowed by editing /etc/security/limits.conf. Simply add the following two lines to limits.conf to set 100000 as the maximum number of processes for the user hadoop:

hadoop soft nproc 100000
hadoop hard nproc 100000
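To check whether this limit is really the bottleneck, before and after the change, you can inspect the current shell limits and count the user's running threads. This is a minimal sketch assuming a Linux system; note that limits.conf changes only take effect for new login sessions of the hadoop user.

```shell
# Show the current per-user limit on processes/threads -- the limit
# that "unable to create new native thread" runs into:
ulimit -u

# The open-file limit is another common Hadoop bottleneck worth checking:
ulimit -n

# Count how many processes/threads the current user is running right now,
# to compare against the limit above (ps -eLf lists one line per thread):
ps -eLf | awk -v u="$(id -un)" '$1 == u' | wc -l
```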

Other useful resources about OOM in Hadoop can be found under the following titles:
"The Dark Side of Hadoop"; "NoSQL"; "Dealing with OutOfMemoryError in Hadoop".

2. Task Id: attempt_200912131946_0001_m_000000_0, Status: FAILED, Too many fetch-failures

The first phase after a reduce task starts is the shuffle, which fetches data from the map side. Each fetch can fail because of connect timeouts, read timeouts, checksum errors, and other causes. The reduce task keeps a counter for each map, recording how many times fetching that map's output has failed. When the failure count reaches a certain threshold, the reduce task notifies the JobTracker that fetching that map's output has failed too many times, and the following log is printed:

Failed to fetch map-output from attempt_201105261254_102769_m_001802_0 even after max_fetch_retries_per_map retries... reporting to the jobtracker
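Besides checking network connectivity between nodes, frequent fetch failures can sometimes be eased by tuning the shuffle-related settings in mapred-site.xml. The sketch below uses pre-YARN (Hadoop 0.20-era) property names, matching the TaskTracker/JobTracker vintage of the logs above; treat the values as illustrative starting points, not recommendations.

```xml
<!-- mapred-site.xml: shuffle tuning (assumption: classic MapReduce, pre-YARN names) -->
<configuration>
  <!-- Number of TaskTracker HTTP threads serving map output to reducers -->
  <property>
    <name>tasktracker.http.threads</name>
    <value>80</value>
  </property>
  <!-- Number of parallel fetch threads each reduce task uses during the shuffle -->
  <property>
    <name>mapred.reduce.parallel.copies</name>
    <value>10</value>
  </property>
</configuration>
```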

Blog: http://blog.csdn.net/liangliyin/article/details/6455713

I think it is a communication problem between machine nodes.

 

There are some good posts below:

http://www.360doc.com/content/11/0323/23/23378_104035203.shtml
