Use Ganglia to monitor Hadoop and HBase Clusters


1. Introduction to Ganglia

Ganglia is an open-source monitoring project that originated at UC Berkeley and is designed to scale to thousands of nodes. Each machine runs a gmond daemon that collects metric data (such as CPU load and memory usage) from the operating system and the host itself and sends it on. Hosts that receive the metric data can display it and pass a simplified form of it further up the hierarchy. This hierarchical structure is what lets Ganglia scale so well. gmond imposes very little system load, which makes it practical to run on every machine in the cluster without affecting user performance.

1.1 Ganglia components

The Ganglia monitoring suite consists of three main parts: gmond, gmetad, and the web front-end, generally called ganglia-web.

gmond: a daemon that runs on every node to be monitored. It collects monitoring statistics and sends and receives them on a multicast or unicast channel. If a node is a sender (mute = no), it collects basic metrics such as system load (load_one) and CPU usage, and it can also send custom metrics added through C/Python modules. If a node is a receiver (deaf = no), it aggregates all the metrics sent by other hosts and keeps them in an in-memory buffer.

gmetad: also a daemon. It periodically polls the gmonds, pulls their data, and stores the metrics in the RRD storage engine. It can query multiple clusters and aggregate their metrics, and it serves as the data source for the web front-end.

ganglia-web: as the name suggests, it should be installed on the machine running gmetad so that it can read the RRD files. Clusters are logical groupings of hosts and metric data, such as database servers, web servers, production, test, and QA. Because the groups are completely separate, you need to run a separate gmond instance for each cluster.

Generally, each cluster needs one receiving gmond, and each site needs one gmetad.

The Ganglia workflow is shown in Figure 1:

Figure 1: Ganglia workflow

On the left is the gmond process running on each node. Its configuration is determined solely by the /etc/gmond.conf file on that node, so gmond must be installed and configured on every monitored node.

In the upper-right corner is the central machine that carries more responsibility (usually one of the cluster nodes, though it does not have to be). The gmetad process runs on this machine, collects information from every node, and stores it with RRDtool. Its configuration is determined solely by /etc/gmetad.conf.

The bottom-right corner represents the web front-end. When you browse the site, PHP scripts fetch data from the RRDtool database and dynamically generate the various charts.

1.2 Ganglia operating modes (unicast and multicast)

Ganglia's data collection can work in unicast or multicast mode (both are sketched below); multicast is the default.

Unicast: each node sends the monitoring data it collects to one or more specific machines. Unicast traffic can cross network segments.

Multicast: each node sends the monitoring data it collects to all machines on the same network segment, and likewise collects the monitoring data they send. Because the data is sent as multicast packets, all nodes must be on the same network segment; however, different transmission channels can be defined within that segment.
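For reference, the two modes differ only in the channel configuration in gmond.conf. A minimal sketch of both follows; 239.2.11.71 is Ganglia's default multicast group, and 10.171.29.191 stands in for a designated unicast collector:

# Multicast (default): every node sends to and receives from the group address.
udp_send_channel {
  mcast_join = 239.2.11.71
  port = 8649
  ttl = 1
}
udp_recv_channel {
  mcast_join = 239.2.11.71
  bind = 239.2.11.71
  port = 8649
}

# Unicast: every node sends to one designated collector.
udp_send_channel {
  host = 10.171.29.191
  port = 8649
}
# Only the collector needs a receive channel:
udp_recv_channel {
  port = 8649
}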

2. Install Ganglia

1. Topology description

Three hosts:

10.171.29.191 master
10.171.94.155 slave1
10.251.0.197 slave3

The master runs gmetad and the web front-end; all three machines run gmond.
Perform the following steps as the root user:
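If these hostnames are not resolvable through DNS, it is worth adding them to /etc/hosts on every machine first, since Ganglia uses the resolved names to label hosts and name their RRD files. A sketch of the entries, matching the topology above:

10.171.29.191 master
10.171.94.155 slave1
10.251.0.197  slave3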

2. Install gmetad and ganglia-web on the master

yum install ganglia-web.x86_64
yum install ganglia-gmetad.x86_64

3. Install gmond on all three machines

yum install ganglia-gmond.x86_64

4. Configure /etc/ganglia/gmond.conf on all three machines, modifying the following content:

udp_send_channel {
  # bind_hostname = yes # Highly recommended, soon to be default.
                        # This option tells gmond to use a source address
                        # that resolves to the machine's hostname. Without
                        # this, the metrics may appear to come from any
                        # interface and the DNS names associated with
                        # those IPs will be used to create the RRDs.
  mcast_join = 10.171.29.191
  port = 8649
  ttl = 1
}
/* You can specify as many udp_recv_channels as you like as well. */
udp_recv_channel {
  # mcast_join = 239.2.11.71
  port = 8649
  # bind = 239.2.11.71
}

That is, change the default multicast address to the master's address, and comment out the two multicast IP addresses in udp_recv_channel.
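Since the same file is needed on all three machines, here is a small sketch for copying it out from the master, assuming passwordless SSH as root (the daemons themselves are started in step 8):

for h in slave1 slave3; do
  scp /etc/ganglia/gmond.conf root@$h:/etc/ganglia/
done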

5. Modify /etc/ganglia/gmetad.conf on the master.

Modify the data_source line:

data_source "my cluster" 10.171.29.191
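For reference, data_source takes a cluster name, an optional polling interval in seconds, and one or more gmond addresses to poll, where later addresses act as failover sources. A hypothetical entry polling every 15 seconds with a backup gmond on slave1:

data_source "my cluster" 15 10.171.29.191:8649 10.171.94.155:8649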

6. ln -s /usr/share/ganglia /var/www/ganglia

If this causes problems, copy the contents of /usr/share/ganglia directly into /var/www/ganglia instead.

7. Modify /etc/httpd/conf.d/ganglia.conf:

#
# Ganglia monitoring system php web frontend
#

Alias /ganglia /usr/share/ganglia

<Location /ganglia>
  Order deny,allow
  Allow from all
  Allow from 127.0.0.1
  Allow from ::1
  # Allow from .example.com
</Location>

That is, change the default "Deny from all" to "Allow from all".
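Note that Order/Allow is Apache 2.2 syntax. If your httpd is version 2.4 or later, the equivalent stanza uses Require instead:

<Location /ganglia>
  Require all granted
</Location>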

8. Start

service gmetad start
service gmond start
/usr/sbin/apachectl start
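To have the daemons come back after a reboot on a SysV-init system such as CentOS 6 (an assumption based on the yum and service commands used here), you can also enable them at boot:

chkconfig gmetad on
chkconfig gmond on
chkconfig httpd on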

9. Access the page in a browser:

http://<master-ip>/ganglia (here, http://10.171.29.191/ganglia)

Notes:
1. The information collected by gmetad is stored in /var/lib/ganglia/rrds/

2. Run the following command to check whether data is being transmitted:

tcpdump port 8649
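Besides watching the UDP traffic, you can query gmond's XML dump over its TCP port (8649 by default) to confirm that all three hosts have registered; this assumes netcat is installed:

nc 10.171.29.191 8649 | grep '<HOST'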

3. Configure Hadoop and HBase

1. Configure Hadoop

hadoop-metrics2.properties:

# syntax: [prefix].[source|sink|jmx].[instance].[options]
# See package.html for org.apache.hadoop.metrics2 for details

*.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink

# namenode.sink.file.filename=namenode-metrics.out
# datanode.sink.file.filename=datanode-metrics.out
# jobtracker.sink.file.filename=jobtracker-metrics.out
# tasktracker.sink.file.filename=tasktracker-metrics.out
# maptask.sink.file.filename=maptask-metrics.out
# reducetask.sink.file.filename=reducetask-metrics.out

# Below are for sending metrics to Ganglia
#
# for Ganglia 3.0 support
# *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink30
#
# for Ganglia 3.1 support
*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31

*.sink.ganglia.period=10

# default for supportsparse is false
*.sink.ganglia.supportsparse=true

*.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both
*.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40

namenode.sink.ganglia.servers=10.171.29.191:8649
datanode.sink.ganglia.servers=10.171.29.191:8649
jobtracker.sink.ganglia.servers=10.171.29.191:8649
tasktracker.sink.ganglia.servers=10.171.29.191:8649
maptask.sink.ganglia.servers=10.171.29.191:8649
reducetask.sink.ganglia.servers=10.171.29.191:8649
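Once the Hadoop daemons are restarted with this file in place, their metric groups should appear in the Ganglia web UI under each host. A quick shell check against gmond's XML dump (assuming netcat; exact metric names vary by Hadoop version):

nc 10.171.29.191 8649 | grep -i 'jvm\|dfs\|rpc'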

2. Configure HBase

hadoop-metrics.properties:

# See http://wiki.apache.org/hadoop/GangliaMetrics
# Make sure you know whether you are using ganglia 3.0 or 3.1.
# If 3.1, you will have to patch your hadoop instance with HADOOP-4675
# And, yes, this file is named hadoop-metrics.properties rather than
# hbase-metrics.properties because we're leveraging the hadoop metrics
# package and hadoop-metrics.properties is a hardcoded name, at least
# for the moment.
#
# See also http://hadoop.apache.org/hbase/docs/current/metrics.html
# GMETADHOST_IP is the hostname (or) IP address of the server on which the ganglia
# meta daemon (gmetad) service is running

# Configuration of the "hbase" context for NullContextWithUpdateThread
# NullContextWithUpdateThread is a null context which has a thread calling
# periodically when monitoring is started. This keeps the data sampled
# correctly.
hbase.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
hbase.period=10

# Configuration of the "hbase" context for file
# hbase.class=org.apache.hadoop.hbase.metrics.file.TimeStampingFileContext
# hbase.fileName=/tmp/metrics_hbase.log

# HBase-specific configuration to reset long-running stats (e.g. compactions)
# If this variable is left out, then the default is no expiration.
hbase.extendedperiod = 3600

# Configuration of the "hbase" context for ganglia
# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
# hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext
hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
hbase.period=10
hbase.servers=10.171.29.191:8649

# Configuration of the "jvm" context for null
jvm.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
jvm.period=10

# Configuration of the "jvm" context for file
# jvm.class=org.apache.hadoop.hbase.metrics.file.TimeStampingFileContext
# jvm.fileName=/tmp/metrics_jvm.log

# Configuration of the "jvm" context for ganglia
# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
# jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
jvm.period=10
jvm.servers=10.171.29.191:8649

# Configuration of the "rpc" context for null
rpc.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
rpc.period=10

# Configuration of the "rpc" context for file
# rpc.class=org.apache.hadoop.hbase.metrics.file.TimeStampingFileContext
# rpc.fileName=/tmp/metrics_rpc.log

# Configuration of the "rpc" context for ganglia
# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
# rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext
rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
rpc.period=10
rpc.servers=10.171.29.191:8649

# Configuration of the "rest" context for ganglia
# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
# rest.class=org.apache.hadoop.metrics.ganglia.GangliaContext
rest.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
rest.period=10
rest.servers=10.171.29.191:8649

Finally, restart Hadoop and HBase.
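A minimal sketch of the restart, assuming a classic tarball installation with the control scripts on the PATH (adjust to your own service scripts if different):

stop-hbase.sh
stop-all.sh    # stops the HDFS and MapReduce daemons
start-all.sh
start-hbase.sh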
