HDFS Safemode
A cluster in safemode cannot accept any write operations, including creating directories, deleting files, modifying files, and uploading files.
For more information about safemode, see http://www.iteblog.com/archives/977. An HDFS cluster normally stays in safemode for a period of time after it starts up. If the cluster contains a large number of blocks with fewer replicas than their configured replication factor (the replication factor is not necessarily the value in the hdfs configuration file; the configuration file only supplies a default, and a client can specify the replication factor per file when it creates the file), or if a large number of nodes have failed, the cluster may remain in safemode for a long time.
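To illustrate the parenthetical point about per-file replication, here is a minimal sketch using the stock Hadoop shell; the file names and paths are only examples, not values from this post:

    hadoop fs -D dfs.replication=2 -put localfile.txt /tmp/localfile.txt   # upload with 2 replicas (example paths)
    hadoop fs -setrep 3 /tmp/localfile.txt                                 # change an existing file to 3 replicas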
In that case, you need to check whether the cluster nodes or the files themselves are faulty.
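As a rough example of how to check, the standard HDFS tools can report block health and dead nodes:

    hdfs fsck /              # reports missing, corrupt, and under-replicated blocks
    hdfs dfsadmin -report    # shows live/dead DataNodes and remaining capacity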
Sometimes an administrator forces the cluster into safemode; you can leave it with hdfs dfsadmin -safemode leave.
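For reference, the stock dfsadmin CLI provides the full set of safemode subcommands:

    hdfs dfsadmin -safemode get      # show whether the NameNode is currently in safemode
    hdfs dfsadmin -safemode enter    # force the cluster into safemode
    hdfs dfsadmin -safemode leave    # force the cluster out of safemode
    hdfs dfsadmin -safemode wait     # block until the NameNode leaves safemode on its own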
This exception is reported in the jobtracker log when the hadoop cluster starts. Please advise how to solve it.
This is probably because the node versions do not match: if you format the namenode too many times, the version information stored on each machine gets out of sync. I ran into this problem too and could not fix the error in place.
So my solution was to delete the hadoop environment on every machine and build a fresh one; redoing the setup steps worked fine.
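For reference, a sketch of how this kind of version mismatch shows up, plus a lighter-weight variant of the fix (wiping only the HDFS storage directories rather than the whole installation). The paths below are placeholders, not values from this thread; use whatever dfs.namenode.name.dir / dfs.datanode.data.dir point to on your cluster.

    # Compare the IDs in the VERSION files (namespaceID, plus clusterID on newer releases).
    # Repeated "namenode -format" runs give the NameNode a new ID while DataNodes keep
    # the old one, which is the classic mismatch symptom.
    cat /data/hadoop/dfs/name/current/VERSION    # on the NameNode (placeholder path)
    cat /data/hadoop/dfs/data/current/VERSION    # on each DataNode (placeholder path)

    # Clean rebuild: stop everything, wipe the storage directories on all nodes,
    # reformat the NameNode once, then start the cluster again.
    stop-all.sh
    rm -rf /data/hadoop/dfs/name/* /data/hadoop/dfs/data/*   # placeholder paths
    hdfs namenode -format
    start-all.sh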
An error occurred while creating a folder on hdfs.
Did you only run hadoop fs -mkdir input? Use hadoop fs -ls / to check whether the output is normal, and check whether /usr/bin/python3 is set in the environment variables.
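A small sketch of that check sequence; the directory name below is only an example, not a path from the question:

    hadoop fs -mkdir -p /user/hadoop/input   # -p also creates missing parent directories (Hadoop 2.x and later)
    hadoop fs -ls /                          # confirm HDFS is reachable and the listing looks normal
    hdfs dfsadmin -safemode get              # a failed mkdir is often just the cluster still in safemode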