JMX monitoring for Hadoop and HBase Clusters


When it comes to monitoring Hadoop and HBase clusters, most of us know and use third-party tools such as Cacti, Ganglia, and Zabbix, or Zenoss for deeper setups. These tools are genuinely useful and can play a major role, but after working with them for a while I always felt the monitoring granularity was too coarse and not detailed enough. They are, after all, external monitors; even Hadoop's built-in Ganglia integration never felt sufficient.


In fact, Hadoop itself has monitoring interfaces, and the various vendor releases add custom ones of their own, but few people seem to know about them. They are poorly documented, hard to locate in the source code, and so remain something of a hidden feature. The EasyHadoop management interface I wrote uses this interface to provide detailed status monitoring for the whole cluster, and I am still extending it; the next step is monitoring the heap usage of each Java process, which matters a great deal when tuning overall cluster performance.


The interface itself is very simple, yet detailed and convenient: it is JMX.


Anyone who uses Hadoop's HTTP monitoring ports knows them: namenode 50070, jobtracker 50030, datanode 50075, and tasktracker 50060. When you open these ports you are automatically shown a monitoring page such as dfshealth.jsp or jobtracker.jsp. JMX access is just as simple: replace the page name with jmx.

For example, change

http://your_namenode:50070/dfshealth.jsp

to

http://your_namenode:50070/jmx

and you can obtain all kinds of system information. HBase information can also be obtained using this method.


The returned values are all in JSON format, which is easy to process yourself. The information is very detailed: memory status, memory pool status, Java heap information, operating system details, and version and JVM version information are all there.
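As a concrete illustration, here is a minimal Python 3 sketch of extracting heap usage from a /jmx response. The sample payload below is trimmed and fabricated for illustration; its shape follows the standard java.lang:type=Memory MBean (HeapMemoryUsage is a composite of init/used/committed/max), and real responses carry many more beans.

```python
import json

# Trimmed, fabricated example of a /jmx response; the field names follow
# the standard java.lang:type=Memory MBean's HeapMemoryUsage attribute.
sample = '''
{
  "beans": [
    {
      "name": "java.lang:type=Memory",
      "HeapMemoryUsage": {"committed": 1048576, "init": 524288,
                          "max": 2097152, "used": 786432}
    }
  ]
}
'''

def heap_usage(jmx_json):
    """Return (used, max) heap bytes from a /jmx JSON payload."""
    data = json.loads(jmx_json)
    for bean in data.get('beans', []):
        if bean.get('name') == 'java.lang:type=Memory':
            heap = bean['HeapMemoryUsage']
            return heap['used'], heap['max']
    return None

used, maximum = heap_usage(sample)
print(used, maximum)  # 786432 2097152
```

In practice you would feed this function the body fetched from http://your_namenode:50070/jmx instead of the sample string.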


However, the JSON returned by this interface is often very large, and fetching the entire string just to monitor a single item is wasteful. Here lies another hidden feature, which can only be found in the Hadoop source code, under src/core/org/apache/hadoop/jmx.


The doGet method of the public class JMXJsonServlet.java:

public void doGet(HttpServletRequest request, HttpServletResponse response) {
  try {
    // Do the authorization
    if (!HttpServer.hasAdministratorAccess(getServletContext(), request,
        response)) {
      return;
    }
    response.setContentType("application/json; charset=utf8");
    PrintWriter writer = response.getWriter();
    JsonFactory jsonFactory = new JsonFactory();
    JsonGenerator jg = jsonFactory.createJsonGenerator(writer);
    jg.useDefaultPrettyPrinter();
    jg.writeStartObject();
    if (mBeanServer == null) {
      jg.writeStringField("result", "ERROR");
      jg.writeStringField("message", "No MBeanServer could be found");
      jg.close();
      return;
    }
    String qry = request.getParameter("qry");
    if (qry == null) {
      qry = "*:*";
    }
    listBeans(jg, new ObjectName(qry));
    jg.close();
  } catch (IOException e) {
    LOG.error("Caught an exception while processing JMX request", e);
    response.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
  } catch (MalformedObjectNameException e) {
    LOG.error("Caught an exception while processing JMX request", e);
    response.setStatus(HttpServletResponse.SC_BAD_REQUEST);
  }
}


From the source code we can see that the JSON retrieval can carry HTTP authorization, and that there is a parameter named qry. Its value is the name corresponding to each "name" key of the beans in the long JSON. That is, you can request

http://your_tasktracker:50060/jmx?qry=java.lang:type=GarbageCollector,name=PS MarkSweep

to obtain the JVM garbage-collection status. Easy, isn't it?
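One practical wrinkle is that bean names can contain spaces (as in "PS MarkSweep"), which should be percent-encoded in a URL. A small Python 3 helper, sketched here as an illustration (the function name and its safe-character choices are my own, not from the article):

```python
from urllib.parse import quote

def jmx_url(host, port, qry=None):
    """Build a /jmx URL; quote() escapes spaces in names like "PS MarkSweep"
    while leaving the ObjectName punctuation (= , : *) readable."""
    base = 'http://%s:%d/jmx' % (host, port)
    if qry:
        base += '?qry=' + quote(qry, safe='=,:*')
    return base

print(jmx_url('your_tasktracker', 50060,
              'java.lang:type=GarbageCollector,name=PS MarkSweep'))
# http://your_tasktracker:50060/jmx?qry=java.lang:type=GarbageCollector,name=PS%20MarkSweep
```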


The EasyHadoop Agent obtains real-time HDFS and MapReduce status through this JMX interface. The processing code is as follows:


import urllib  # Python 2

class EasyHadoopHandler:
    def GetJmx(self, host, port, qry):
        url = 'http://' + host + ':' + port + '/jmx?qry=' + qry
        jmx = urllib.urlopen(url)
        json = jmx.read().replace('\n', '')
        jmx.close()
        return json


The Central initiates a JMX query request; the Agent fetches the relevant monitoring information and returns the JSON to the Central, which then plots the data with JavaScript and presents it to the user in real time.
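On the Central side, the returned JSON still has to be reduced to the numbers you want to plot. A Python 3 sketch of that step, using a fabricated agent reply whose field names follow the standard GarbageCollectorMXBean attributes (CollectionCount, CollectionTime); the helper name is my own:

```python
import json

# Fabricated example of an agent reply to the GarbageCollector query;
# CollectionCount / CollectionTime are the standard GarbageCollectorMXBean
# attributes (time is in milliseconds).
agent_reply = ('{"beans":[{"name":'
               '"java.lang:type=GarbageCollector,name=PS MarkSweep",'
               '"CollectionCount":12,"CollectionTime":340}]}')

def gc_stats(reply):
    """Extract (collection count, total collection time ms) from the reply."""
    bean = json.loads(reply)['beans'][0]
    return bean['CollectionCount'], bean['CollectionTime']

count, millis = gc_stats(agent_reply)
print(count, millis)  # 12 340
```

A pair like this, sampled periodically, is exactly the kind of time-series point the front end can graph.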


Different releases also have different monitoring interfaces. Cloudera's release, for example, additionally provides an interface called metrics, as distinct from metrics2.


In this way, Hadoop monitoring becomes much more detailed than with Cacti or Ganglia alone. HBase can be monitored the same way through its web ports, such as 60010.
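Pulling the ports mentioned in this article together, a small Python 3 sketch that enumerates the /jmx endpoints for each daemon on a host (the port table reflects the defaults named above; adjust it for your deployment):

```python
# Default web UI ports for each daemon, as listed in this article;
# every one of them serves the same /jmx endpoint.
DEFAULT_PORTS = {
    'namenode': 50070,
    'jobtracker': 50030,
    'datanode': 50075,
    'tasktracker': 50060,
    'hbase-master': 60010,
}

def jmx_endpoints(host):
    """Map each daemon name to its /jmx URL on the given host."""
    return {d: 'http://%s:%d/jmx' % (host, p)
            for d, p in DEFAULT_PORTS.items()}

for daemon, url in sorted(jmx_endpoints('node1').items()):
    print(daemon, url)
```

Iterating over this map is a convenient starting point for a poller that sweeps a whole cluster.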



This article was originally posted on the "practice test truth" blog; the author declines reproduction.
