java.io.IOException: Task process exit with nonzero status of 1. Many blog posts on the Internet say this error means there is not enough disk space.
In fact, I often run into this problem because the task throws org.apache.hadoop.mapred.Child: Error running child: java.lang.OutOfMemoryError: unable to create new native thread. This makes the task fail and its work directory gets deleted, so in the end its files can no longer be read or written. So this problem may also be caused by running out of memory, not just disk space.
Solution:
1. Increase the HADOOP_HEAPSIZE value in hadoop-env.sh.
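HADOOP_HEAPSIZE is a plain environment variable in hadoop-env.sh, given in MB; for example (2000 here is illustrative, not a recommendation; size it to your nodes):
export HADOOP_HEAPSIZE=2000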
2. Increase the value of mapred.child.java.opts in mapred-site.xml (default: -Xmx200m). For example:
<property><name>mapred.child.java.opts</name><value>-Xmx2048m</value></property>
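If you only need the bigger heap for one job rather than cluster-wide, the same property can be passed on the command line, assuming the job's driver goes through ToolRunner/GenericOptionsParser (the jar and class names below are placeholders):
hadoop jar myjob.jar MyJob -D mapred.child.java.opts=-Xmx2048m input output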
3. Decrease the values of mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum in mapred-site.xml. For example:
<property><name>mapred.tasktracker.map.tasks.maximum</name><value>15</value></property>
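The reduce side is configured the same way (again, 15 is just an example value):
<property><name>mapred.tasktracker.reduce.tasks.maximum</name><value>15</value></property>
When tuning steps 2 and 3 together, keep the arithmetic in mind: in the worst case a TaskTracker can use roughly (map slots + reduce slots) × child heap, so 15 + 15 slots at -Xmx2048m is about 60 GB; the slot counts and heap size must fit the node's physical memory together, otherwise you trade one OutOfMemoryError for another.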