A Killer Shell Fork That Has a Major Impact on Hadoop HDFS Performance


While testing Hadoop, I noticed on the NameNode's dfshealth.jsp management page that the Last Contact (LC) parameter of a DataNode often exceeded 3 while it was running. LC indicates how many seconds ago the DataNode last sent a heartbeat packet to the NameNode, and by default a DataNode sends one every 3 seconds. We all know that the NameNode by default treats a DataNode as dead only after 10 minutes without a heartbeat. So what causes the LC value on the JSP management page to exceed 3, sometimes even reaching more than 200? And does this affect the stable operation of the system?
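For reference, the 3-second heartbeat and the roughly 10-minute death timeout both come from configuration defaults; the timeout is computed as 2 × recheck-interval + 10 × heartbeat-interval = 2 × 300 s + 10 × 3 s = 630 s ≈ 10.5 minutes. The property names below are from recent Hadoop releases and differ slightly in 0.20/0.21-era versions:

```xml
<!-- hdfs-site.xml (defaults shown; names vary slightly across versions) -->
<property>
  <name>dfs.heartbeat.interval</name>
  <value>3</value> <!-- seconds between DataNode heartbeats -->
</property>
<property>
  <name>dfs.namenode.heartbeat.recheck-interval</name>
  <value>300000</value> <!-- milliseconds; NameNode recheck period -->
</property>
<!-- dead-node timeout = 2 * recheck-interval + 10 * heartbeat.interval
                       = 2 * 300s + 10 * 3s = 630s, about 10.5 minutes -->
```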

As a matter of fact, I had been observing this phenomenon for a while. The causes of an increased LC value are as follows:

1. The DataNode receives a large number of block DeletionCommands (see https://issues.apache.org/jira/browse/HDFS-611);
2. The DataNode has a large number of blocks to report to the NameNode;
3. Organizing the heartbeat packet data takes too long;
4. The network environment.

In the first two cases, the LC value generally does not exceed 100, which has no significant impact on performance, and Hadoop versions later than 0.22.0 have also improved on them.
The third cause is the key one: while organizing the heartbeat packet, the DataNode calls related methods of the FSDatasetMBean interface. For details, refer to these methods:

  /**
   * Returns the total space (in bytes) used by dfs datanode
   * @return the total space used by dfs datanode
   * @throws IOException
   */
  public long getDfsUsed() throws IOException;

  /**
   * Returns total capacity (in bytes) of storage (used and unused)
   * @return total capacity of storage (used and unused)
   * @throws IOException
   */
  public long getCapacity() throws IOException;

  /**
   * Returns the amount of free storage space (in bytes)
   * @return the amount of free storage space
   * @throws IOException
   */
  public long getRemaining() throws IOException;

These three methods are in fact backed by the DF and DU classes, which periodically execute system commands through the Shell class's runCommand method to obtain the df and du values of the current directory.
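As an illustration (this is not Hadoop's actual code, and the class name is my own), the DF/DU pattern boils down to forking a system command with ProcessBuilder, as Shell#runCommand does, and parsing its output:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Minimal sketch of the DF/DU pattern: fork a system command with
// ProcessBuilder and return its standard output. Illustrative only.
public class ShellDiskUsage {

    // Runs the given command and returns its full standard output.
    static String runCommand(String... cmd) throws Exception {
        Process p = new ProcessBuilder(cmd).start(); // the fork happens here
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                out.append(line).append('\n');
            }
        }
        int rc = p.waitFor();
        if (rc != 0) {
            throw new RuntimeException("command exited with " + rc);
        }
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // e.g. capacity/usage of the current directory, as DF would do
        System.out.print(runCommand("df", "-k", "."));
    }
}
```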

However, something interesting happened during execution. I have 13 partitions with more than 140,000 blocks in total, and the average execution time of df and du exceeded two seconds. Even more dramatically, for some partition directories a single du or df command took more than 180 seconds to execute (measured in the Shell#runCommand method, from instantiating the ProcessBuilder to process.start() returning).

Is it because the number of blocks in the partition directory is too large that the commands run slowly? No: running the same df and du commands directly in Linux finishes in milliseconds. The problem is evidently caused by ProcessBuilder, which the JVM uses to fork a child process through the Linux kernel. The child process of course fully inherits all the memory handles of the parent process, and jstack shows that most of the JVM's threads are in the WAITING state at that moment.

Testing confirmed that this does indeed cause DFSClient write timeouts or stream-close errors (as mentioned in the next article, for a long-running DFSClient, closing the stream in 0.21-trunk still carries risks). Finally, I moved from a 32-bit to a 64-bit JVM, but the problem remained.
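To see where the time goes, one can time just the fork itself (from creating the ProcessBuilder to start() returning) separately from the command's own runtime. A rough, illustrative measurement; the class name is my own, and the numbers vary with heap size and OS settings:

```java
import java.util.concurrent.TimeUnit;

// Rough measurement of fork cost: time spent in ProcessBuilder#start(),
// which is where the 180+ second stalls showed up on a large heap.
public class ForkCost {

    // Returns the fork latency of start() in milliseconds.
    static long forkMillis(String... cmd) throws Exception {
        long t0 = System.nanoTime();
        Process p = new ProcessBuilder(cmd).start(); // fork + exec here
        long forked = System.nanoTime();
        p.waitFor(); // command runtime is deliberately excluded
        return TimeUnit.NANOSECONDS.toMillis(forked - t0);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("fork took " + forkMillis("df", "-k", ".") + " ms");
    }
}
```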

In the end, we had to crack open HDFS and rewrite the DU and DF classes (along with our own IOStat and UpTime classes) so that the commands are executed in Linux outside the JVM, with their output written to a temporary file that Hadoop then reads. The LC issue no longer occurs.
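The workaround can be sketched as a reader for a pre-generated file: an external job periodically runs something like `df -k /data1 > /tmp/df-data1.out`, and the heartbeat path only parses that cached file, paying no fork cost. This is a hypothetical sketch, not the actual patch; the class and file names are my own:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

// Hypothetical sketch of the workaround: instead of forking df inside the
// JVM, an external job periodically writes "df -k <dir>" output to a temp
// file, and the DataNode side only parses that file.
public class CachedDf {

    // Parses the "Available" column (4th field) from cached `df -k` output.
    static long availableKb(Path cachedOutput) throws IOException {
        List<String> lines = Files.readAllLines(cachedOutput);
        // Skip the header line; df -k prints one data line per filesystem.
        String[] fields = lines.get(1).trim().split("\\s+");
        return Long.parseLong(fields[3]);
    }

    public static void main(String[] args) throws IOException {
        // File produced out-of-band, e.g. by: df -k /data1 > /tmp/df-data1.out
        System.out.println(availableKb(Paths.get("/tmp/df-data1.out")) + " KB free");
    }
}
```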
