Hadoop Common Errors

Source: Internet
Author: User


1. DataXceiver error processing WRITE_BLOCK operation

ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: 192-168-11-58:50010:DataXceiver error processing WRITE_BLOCK operation  src: 
1) Increase the maximum number of files a process may open:

vi /etc/security/limits.conf

# End of file
*               -       nofile          1000000
*               -       nproc           1000000
2) Increase the number of data transfer threads:

vi hdfs-site.xml

<property>
    <name>dfs.datanode.max.transfer.threads</name>
    <value>8192</value>
    <description>
        Specifies the maximum number of threads to use for transferring data
        in and out of the DN.
    </description>
</property>
Copy the updated files to the other nodes and restart the DataNode service.
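After restarting, both settings can be verified on a DataNode host. A quick check might look like the following (looking up the DataNode PID via jps is an assumption about how the daemon was started; adjust to your environment):

```shell
# Check the open-file limit of the running DataNode process
# (assumes jps lists a process named "DataNode")
DN_PID=$(jps | awk '/DataNode/ {print $1}')
grep 'open files' /proc/$DN_PID/limits

# Confirm the transfer-thread setting HDFS actually sees
hdfs getconf -confKey dfs.datanode.max.transfer.threads
```

If the open-file limit still shows the old value, the DataNode was likely started from a session that did not pick up the new limits.conf; log out and back in (or restart the daemon from a fresh shell) before retesting.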

2. The JobHistoryServer cannot be started. The logs are as follows:

FATAL org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer: Error starting JobHistoryServer
org.apache.hadoop.yarn.YarnException: Error creating done directory: [hdfs://]
    at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.init(HistoryFileManager.java:424)
    at org.apache.hadoop.mapreduce.v2.hs.JobHistory.init(JobHistory.java:87)
    at org.apache.hadoop.yarn.service.CompositeService.init(CompositeService.java:58)
    at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.init(JobHistoryServer.java:87)
    at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.main(JobHistoryServer.java:145)
Caused by: java.net.NoRouteToHostException: No Route to Host from hadoop-62/ to hadoop-61:8020 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see: http://wiki.apache.org/hadoop/NoRouteToHost
Solution: disable the firewall on each node.
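The exact commands depend on the distribution; as an illustration (assuming CentOS/RHEL, which the hostnames in the log suggest but the original text does not state):

```shell
# CentOS/RHEL 6: stop the iptables service and keep it off across reboots
service iptables stop
chkconfig iptables off

# CentOS/RHEL 7 and later: stop and disable firewalld
systemctl stop firewalld
systemctl disable firewalld
```

In production, a less drastic alternative is to leave the firewall running and open only the Hadoop ports (e.g. 8020 for the NameNode RPC port shown in the stack trace) between cluster nodes.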

3. The YARN web UI (port 8088) shows only 16 GB of total cluster memory, while each node actually has 64 GB of physical memory.

In Hadoop 2.x and later, the total physical memory available to each NodeManager defaults to 8 GB (8192 MB). This default is hard-coded and must be overridden by adding the yarn.nodemanager.resource.memory-mb property to yarn-site.xml, set to the amount of physical memory you want YARN to use. Note: once set, the value cannot be changed while the service is running; after modifying the configuration file, you must restart the NodeManager. Also note that YARN assumes 8192 MB even if the machine has less physical memory than that, so this value should always be configured explicitly. The Apache community is working on making this parameter dynamically adjustable, which may arrive in a later release.
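For a 64 GB node, the property might be set as follows in yarn-site.xml (the 57344 MB value is an illustrative choice that leaves headroom for the OS and other daemons, not a figure from the original text):

```xml
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <!-- Example: 56 GB of the 64 GB node, reserving the rest for
         the OS, DataNode, and the NodeManager itself -->
    <value>57344</value>
</property>
```

After distributing the change to every node and restarting the NodeManagers, the total memory reported on port 8088 should reflect the new per-node value times the number of nodes.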
