The mysterious killer: the shell

If you want to study the HDFS source code, you can read Cai Bin's JavaEye blog.

Forgive the phrase "mysterious killer": this problem really hurt me, and it took a great deal of effort to hunt it down.

Recently, while testing Hadoop, I noticed on the NameNode's dfshealth.jsp management page that the Last Contact (LC) value for running DataNodes often exceeded 3. LC indicates how many seconds have passed since a DataNode last sent a heartbeat packet to the NameNode; by default, a DataNode sends one every 3 seconds. We all know that, by default, the NameNode only declares a DataNode dead after about 10 minutes of silence. So what causes the LC value on the JSP page to exceed 3, sometimes even reaching more than 200? And does this affect the stable operation of our system?
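Incidentally, the 10-minute death timeout is not configured directly; the NameNode derives it from two settings. Here is a minimal sketch of that arithmetic, based on my reading of the 0.20-era code and assuming the default values of heartbeat.recheck.interval (5 minutes) and dfs.heartbeat.interval (3 seconds):

    // Sketch of how FSNamesystem derives the DataNode expiry interval
    // (my reading of the 0.20-era code; the values below are the defaults).
    public class HeartbeatExpiry {
        public static void main(String[] args) {
            long heartbeatIntervalMs = 3 * 1000L;     // dfs.heartbeat.interval = 3s
            long recheckIntervalMs = 5 * 60 * 1000L;  // heartbeat.recheck.interval = 5min
            // A DataNode is declared dead only after this much silence:
            long expireMs = 2 * recheckIntervalMs + 10 * heartbeatIntervalMs;
            System.out.println(expireMs / 60000.0 + " minutes");  // 10.5 minutes
        }
    }

That works out to 10.5 minutes, which is where the familiar "10 minutes or so" comes from.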
As a matter of fact, I had been observing this phenomenon for a while. The causes of a rising LC value are as follows:

1. HDFS receives a large number of block deletion commands (see https://issues.apache.org/jira/browse/hdfs-611);
2. HDFS has a large number of blocks that need to be reported to the NameNode;
3. Organizing the heartbeat packet data;
4. The network environment.

In the first two cases, the LC value generally does not exceed 100, which has no significant impact on performance; Hadoop has also improved this in versions after 0.22.0.

The values in the heartbeat packet come from the DataNode calling the relevant methods of the FSDatasetInterface interface while assembling the packet. For details, refer to these methods in the FSDatasetMBean interface (Java code):

    /**
     * Returns the total space (in bytes) used by DFS datanode
     * @return the total space used by DFS datanode
     * @throws IOException
     */
    public long getDfsUsed() throws IOException;

    /**
     * Returns total capacity (in bytes) of storage (used and unused)
     * @return total capacity of storage (used and unused)
     * @throws IOException
     */
    public long getCapacity() throws IOException;

    /**
     * Returns the amount of free storage space (in bytes)
     * @return the amount of free storage space
     * @throws IOException
     */
    public long getRemaining() throws IOException;

Behind all three methods are actually the DF and DU classes, which from time to time execute system commands through the Shell class's runCommand method to obtain the df and du values of the current directory.
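To make the call chain concrete, here is a stripped-down sketch of what DU-style collection boils down to. This is not the actual Hadoop source (the real org.apache.hadoop.fs.DU extends Shell and caches its value between refresh intervals); it shows only the essential fork-and-parse:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    // Stripped-down sketch: fork a child via ProcessBuilder and parse "du -sk <dir>".
    public class DuSketch {
        public static long usedKb(String dir) throws IOException, InterruptedException {
            ProcessBuilder pb = new ProcessBuilder("du", "-sk", dir);
            Process p = pb.start();                 // the expensive fork happens here
            BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
            try {
                String line = r.readLine();         // e.g. "123456\t/data1"
                p.waitFor();
                return Long.parseLong(line.split("\\s+")[0]);
            } finally {
                r.close();
            }
        }
    }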
However, something interesting happened during execution. I have 13 partitions with more than 140,000 blocks in total. The average execution time of df and du exceeded two seconds, and in the dramatic cases it took more than 180 seconds to run the command for a single partition directory (measured inside Shell#runCommand, from instantiating the ProcessBuilder to process.start() returning).
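To reproduce that measurement, a simple illustrative approach is to time the start() call separately from the command's own runtime (the partition path /data1 below is hypothetical):

    import java.io.IOException;

    // Time the fork/exec itself, separately from the command's own runtime.
    public class ForkTimer {
        public static void main(String[] args) throws IOException, InterruptedException {
            long t0 = System.nanoTime();
            // "/data1" is a hypothetical partition mount point
            Process p = new ProcessBuilder("df", "-k", "/data1").start();
            long startMs = (System.nanoTime() - t0) / 1000000;
            p.waitFor();
            System.out.println("ProcessBuilder.start() took " + startMs + " ms");
        }
    }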

Is it that the commands run slowly because the partition directories contain too many blocks? No: running the same df and du commands directly in Linux finishes in milliseconds. The problem clearly lies with ProcessBuilder, which the JVM uses to fork a child process through the Linux kernel; the child process of course inherits the parent process's entire memory image. jstack shows that most JVM threads are in the waiting state while this happens. Testing confirmed that this really does affect the DFSClient, producing write timeouts or stream-close errors (as mentioned in the next article: for a long-running DFSClient, take care to close streams properly; the 0.21-trunk streaming code is still prone to this hidden risk). Finally, I moved from 32-bit to 64-bit, but the problem remained.
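To see why the parent JVM's size matters, here is an illustrative experiment (entirely my own sketch, not from the original investigation): run the same tiny fork with and without a few gigabytes of touched heap, e.g. java -Xmx4g ForkDemo big versus java -Xmx64m ForkDemo:

    // Illustrative demo: fork cost grows with the parent JVM's resident memory,
    // because fork() must duplicate the parent's page tables even with copy-on-write.
    public class ForkDemo {
        public static void main(String[] args) throws Exception {
            byte[][] ballast = null;
            if (args.length > 0) {                  // "big": touch ~3 GB of heap
                ballast = new byte[3 * 1024][];
                for (int i = 0; i < ballast.length; i++) {
                    ballast[i] = new byte[1024 * 1024];
                    ballast[i][0] = 1;              // make sure the pages are resident
                }
            }
            long t0 = System.nanoTime();
            new ProcessBuilder("true").start().waitFor();
            System.out.println("fork+exec of 'true': "
                    + (System.nanoTime() - t0) / 1000000 + " ms, big heap: "
                    + (ballast != null));
        }
    }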

In the end, we had to open up HDFS again and rework the DU and DF (plus iostat and uptime) collection: run the commands from the Linux system itself, write the results to temporary files, and have Hadoop simply read those files. The LC problem no longer occurs. If any friend has met the same issue and has a better solution, you can contact me at dongtalk@gmail.com.
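Here is a minimal sketch of that workaround, assuming a hypothetical cron job writes the output of "du -sk" for each partition to a file such as /tmp/du-data1.out (the path and file format are my own illustration, not the original patch):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    // Sketch of the workaround: instead of forking du/df inside the DataNode JVM,
    // read a value that an external cron job has already written to a temp file.
    // The path and file format here are illustrative, not the original patch.
    public class PrecomputedDu {
        public static long usedKb(String tmpFile) throws IOException {
            BufferedReader r = new BufferedReader(new FileReader(tmpFile));
            try {
                // expected content: the output of "du -sk <dir>", e.g. "123456  /data1"
                return Long.parseLong(r.readLine().trim().split("\\s+")[0]);
            } finally {
                r.close();
            }
        }

        public static void main(String[] args) throws IOException {
            System.out.println(usedKb("/tmp/du-data1.out") + " KB used");
        }
    }

The external script only needs to run du -sk on each partition on a schedule and write the result atomically (write to a temporary name, then rename), so the DataNode never blocks on a fork.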
