Work Diary: Hadoop client configuration needs to be consistent with cluster nodes

Yesterday a large number of DataNodes went offline at once. The preliminary diagnosis was that dfs.datanode.max.transfer.threads was set too small, so hdfs-site.xml was adjusted on every DataNode and the cluster was restarted. To verify the fix, I ran a test job and looked at its configuration in JobHistory. Surprisingly, it still displayed the old value, meaning the job was still running with the old setting, even though every DataNode's configuration file had already been updated.
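For reference, the change was of this form; dfs.datanode.max.transfer.threads is the standard HDFS property (formerly dfs.datanode.max.xcievers), but the value 8192 below is only illustrative, not necessarily what we set:

```xml
<!-- hdfs-site.xml on each DataNode: cap on concurrent data-transfer
     threads (block senders/receivers). Too small a value makes DataNodes
     refuse transfers and can look like mass DataNode failure.
     8192 is an illustrative value; tune it to your workload. -->
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>8192</value>
</property>
```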
Then it occurred to me that our Hadoop jobs are submitted from a machine outside the cluster, and that machine's configuration files had not been updated. Could it be that the client machine's Hadoop configuration is the one read when a job is submitted? I updated the client's configuration and ran the test again, and the new job did pick up the newly configured value.
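This matches how Hadoop's org.apache.hadoop.conf.Configuration works: a new Configuration is populated from the *-site.xml files found on the classpath of the JVM that creates it, i.e., the submitting client's copies, and that snapshot travels with the job. A minimal sketch (the class name is mine; the API calls are standard hadoop-common):

```java
import org.apache.hadoop.conf.Configuration;

public class ShowClientConf {
    public static void main(String[] args) {
        // new Configuration() loads core-default.xml and core-site.xml from
        // the CLASSPATH of this JVM -- the submitting client, not the cluster.
        Configuration conf = new Configuration();
        // hdfs-site.xml is likewise taken from the client's classpath, if present.
        conf.addResource("hdfs-site.xml");
        System.out.println("dfs.datanode.max.transfer.threads = "
                + conf.get("dfs.datanode.max.transfer.threads",
                           "(unset here; server-side default applies)"));
    }
}
```

Running this on the client box before and after editing its hdfs-site.xml shows exactly the value that ends up recorded against a submitted job.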
From this it appears that the Hadoop configuration files are not read exclusively by daemons such as the NameNode and DataNode. Daemon-side settings like dfs.datanode.max.transfer.threads are loaded by each daemon from its own local files at startup, but the configuration snapshot attached to a job (the one JobHistory displays) is assembled on the client that submits it, which is why JobHistory kept showing the old value even after the DataNodes were updated. So if Hadoop jobs are scheduled from a machine outside the cluster, keep that machine's *-site.xml files consistent with the cluster's, or you will run into confusing discrepancies like this one.
PS: The above conclusion is speculation based on observed behavior; it should be verified later against the source code and documentation.
This article is from the Big Data Learning Quest blog; please keep this attribution when reposting: http://bigdata1024.blog.51cto.com/6098731/1889993