I restarted the Hadoop cluster today, and when I used Eclipse to debug against the HDFS API, an error was reported:
[WARNING] java.lang.NullPointerException
    at org.conan.kafka.HdfsUtil.batchWrite(HdfsUtil.java:50)
    at org.conan.kafka.SingleTopicConsumer.run(SingleTopicConsumer.java:144)
    at java.lang.Thread.run(Thread.java:745)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Line 50 of HdfsUtil.java is:
os.write(buff, 0, buff.length);
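The rest of HdfsUtil.batchWrite is not shown in this post, so the following is only a minimal sketch of what such a method might look like (the class name, the tryCreate helper, and the uri/file parameters are all hypothetical). It illustrates how a permission failure can surface as this exact NullPointerException: if the AccessControlException thrown while opening the stream is swallowed, os stays null and the NPE appears at the write call.

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteSketch {

    // Tries to open an output stream; returns null if anything goes wrong.
    // Swallowing the permission exception here is what turns an HDFS
    // permission problem into a NullPointerException at the write call below.
    private static FSDataOutputStream tryCreate(FileSystem fs, Path path) {
        try {
            return fs.create(path, true); // overwrite if the file already exists
        } catch (IOException e) {
            e.printStackTrace();          // AccessControlException would show up here
            return null;
        }
    }

    public static void batchWrite(String uri, String file, byte[] buff) throws IOException {
        FileSystem fs = FileSystem.get(URI.create(uri), new Configuration());
        FSDataOutputStream os = tryCreate(fs, new Path(file));
        os.write(buff, 0, buff.length);   // NPE here when os is null
        os.close();
        fs.close();
    }
}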
The error was thrown during the write, and for a while I could not find the cause. Then I wondered whether it was a permission problem (my local user is not an administrator, while the Hadoop cluster runs as root), but the permission of the HDFS directory had already been changed to 777. To rule it out anyway, I ran hdfs dfs -chmod -R 777 /input and debugged again. It worked.
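For reference, the same chmod can also be issued through the HDFS Java API instead of the shell. This is just a sketch (the class name, NameNode URI, and /input path are placeholders); like the shell command, it has to run as a user that is allowed to change the permissions, e.g. the root user the cluster runs as.

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class HdfsChmodSketch {

    // Recursively applies the given permission, mirroring "hdfs dfs -chmod -R 777 /input".
    static void chmodRecursive(FileSystem fs, Path path, FsPermission perm) throws IOException {
        fs.setPermission(path, perm);
        if (fs.getFileStatus(path).isDirectory()) {
            for (FileStatus child : fs.listStatus(path)) {
                chmodRecursive(fs, child.getPath(), perm);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Replace the URI with your own NameNode address.
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), new Configuration());
        chmodRecursive(fs, new Path("/input"), new FsPermission((short) 0777));
        fs.close();
    }
}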
So the question is: after each restart of the Hadoop cluster (without formatting the NameNode), do the HDFS file (directory) permissions have to be set again?
Judging from this experience, it seems so.