While testing Hadoop, I noticed on the NameNode's dfshealth.jsp management page that the Last Contact parameter often exceeds 3 while DataNodes are running. LC (Last Contact) indicates how many seconds have passed since the DataNode last sent a heartbeat packet to the NameNode. By default, a DataNode sends a heartbeat every 3 seconds, and we all know that the NameNode uses 10 minutes as the DataNode death timeout. So what causes the LC parameter on the JSP management page to exceed 3, sometimes even reaching more than 200? Does this affect the stable operation of the system?
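For reference, the 10-minute death timeout mentioned above is derived from the heartbeat settings: the NameNode computes a heartbeatExpireInterval of 2 × recheck interval + 10 × heartbeat interval. A minimal sketch of that arithmetic (the method name here is illustrative, not the actual NameNode field):

```java
public class HeartbeatTimeout {
    // Mirrors the NameNode's heartbeatExpireInterval computation:
    // a DataNode is declared dead only after 2 * recheck + 10 * heartbeat.
    static long expireIntervalMs(long heartbeatIntervalMs, long recheckIntervalMs) {
        return 2 * recheckIntervalMs + 10 * heartbeatIntervalMs;
    }

    public static void main(String[] args) {
        // Defaults: dfs.heartbeat.interval = 3 s, heartbeat.recheck.interval = 5 min
        long expire = expireIntervalMs(3_000L, 300_000L);
        System.out.println("dead-node timeout = " + expire / 1000 + " s");  // 630 s
    }
}
```

So an LC of 200 is annoying but nowhere near the point where the NameNode would mark the DataNode dead.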
In fact, I have been observing this phenomenon for a while. The causes of an increased LC value are as follows:
1. HDFS receives a large number of block deletion commands (see https://issues.apache.org/jira/browse/HDFS-611);
2. HDFS has a large number of blocks that need to be reported to the NameNode;
3. Time spent assembling the heartbeat packet data;
4. The network environment.
In the first two cases, the LC value generally does not exceed 100 and has no significant impact on performance; Hadoop has also improved this in versions after 0.22.0.
This parameter's value is set while the heartbeat packet is being assembled, via calls to the relevant interface methods. For details, see the following methods of the FSDatasetMBean interface:
```java
/**
 * Returns the total space (in bytes) used by dfs datanode
 * @return the total space used by dfs datanode
 * @throws IOException
 */
public long getDfsUsed() throws IOException;

/**
 * Returns total capacity (in bytes) of storage (used and unused)
 * @return total capacity of storage (used and unused)
 * @throws IOException
 */
public long getCapacity() throws IOException;

/**
 * Returns the amount of free storage space (in bytes)
 * @return the amount of free storage space
 * @throws IOException
 */
public long getRemaining() throws IOException;
```
Behind all three methods are actually the DF and DU classes, which periodically execute system commands through the runCommand method of the Shell class to obtain the df and du values of the current directory.
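To make concrete what the DU class does under the hood, here is a standalone sketch that shells out the way Shell#runCommand does; the class and method names are mine for illustration, not Hadoop's:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

// Illustrative stand-in for Hadoop's DU class: fork a child process
// via ProcessBuilder and parse the first line of its output.
public class DiskUsage {
    public static long duKilobytes(String dir) throws IOException, InterruptedException {
        Process p = new ProcessBuilder("du", "-sk", dir).start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line = r.readLine();          // e.g. "123456\t/data/dfs"
            p.waitFor();
            return Long.parseLong(line.split("\\s+")[0]);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("du -sk /tmp = " + duKilobytes("/tmp") + " KB");
    }
}
```

Every call forks a child process, which is exactly where the trouble described below comes from.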
However, something interesting happened during execution. I have 13 partitions with more than 140,000 blocks in total, and the average execution time of df and du exceeds two seconds. The dramatic part is that executing the command for a single partition directory sometimes takes more than 180 seconds (measured in the Shell#runCommand method, from instantiating the ProcessBuilder to process.start() returning).
Is it because the number of blocks in the partition directory is too large that it runs slowly? No: running the same df and du commands directly in Linux finishes in milliseconds. The problem is clearly in ProcessBuilder, which the JVM uses to fork child processes through the Linux kernel; the child process fully inherits the parent process's memory mappings, and jstack shows that most JVM threads are in the WAITING state at that moment. Testing confirmed that this does indeed cause DFSClient write timeouts or stream-close errors (as discussed in the next article: for a long-running DFSClient, closing the stream on 0.21-trunk still carries risks). Finally, I moved from a 32-bit to a 64-bit JVM, but the problem persisted.
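A simple way to reproduce the measurement described above is to time ProcessBuilder.start() directly. This is an illustrative probe, not Hadoop code; on an idle JVM it returns in milliseconds, while on a JVM with a large heap the underlying fork can become dramatically slower:

```java
// Probe the fork cost: how long does ProcessBuilder.start() itself take?
public class ForkCost {
    static long measureStartMs(String... cmd) throws Exception {
        long t0 = System.nanoTime();
        Process p = new ProcessBuilder(cmd).start();
        long ms = (System.nanoTime() - t0) / 1_000_000;  // time to fork only
        p.waitFor();                                     // reap the child
        return ms;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("ProcessBuilder.start() took "
                + measureStartMs("df", "-k", "/") + " ms");
    }
}
```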
In the end, we had to crack open HDFS and rework the DU and DF classes, along with our own IOStat and UpTime classes: the commands are executed in Linux outside the JVM and their output written to a temporary file, which Hadoop then reads. The LC issue no longer occurs.
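The workaround can be sketched as follows. The file name and format are assumptions for illustration, and the cron-driven shell script that runs df/du in Linux and writes the file is not shown; the point is that the DataNode side only reads a file and never forks:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative replacement for the DU/DF classes: read a byte count
// that an external script refreshes periodically, instead of forking.
public class CachedDfsUsed {
    public static long readCachedUsed(Path cacheFile) throws IOException {
        // The file is assumed to contain a single number: bytes used.
        return Long.parseLong(Files.readAllLines(cacheFile).get(0).trim());
    }

    public static void main(String[] args) throws Exception {
        Path f = Files.createTempFile("dfsused", ".txt");
        Files.write(f, "1073741824".getBytes());   // pretend the script wrote 1 GB
        System.out.println(readCachedUsed(f) + " bytes used");
        Files.delete(f);
    }
}
```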