Ganglia collects data from all clients on the same network segment; moreover, different transmission channels can be defined within the same network segment.
2. Environment
Platform: Ubuntu 12.04
Hadoop: hadoop-1.0.4
HBase: hbase-0.94.5
Topology:
Figure 2: Hadoop and HBase topology
Software installation: apt-get
3. Installation and Deployment (Unicast)
3.1 Deployment Method
Monitoring node (gmond):
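The exact package names below are an assumption based on the stock Ubuntu 12.04 repositories (the source only says apt-get); a minimal install sketch:

    # On every monitored node: the gmond agent
    sudo apt-get install ganglia-monitor
    # On the master node only: the gmetad collector and the web UI
    sudo apt-get install gmetad ganglia-webfrontend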
Deploying Ganglia to monitor Hadoop and HBase
Performance problems often occur during Hadoop operations and maintenance (O&M). However, they cannot be analyzed simply through web pages and logs; many metrics are required. Ganglia is one of the more practical monitoring tools.
Many people have shared a lot about deploying Ganglia…
rpc.detailed-metrics.reportDiagnosticInfo_num_ops: number of times task error messages are reported to the parent process
rpc.detailed-metrics.startBlockRecovery_avg_time: average time to start block recovery
rpc.detailed-metrics.startBlockRecovery_num_ops: number of times block recovery is started
rpc.detailed-metrics.statusUpdate_avg_time: average time for a child process to report its progress to the parent process
rpc.detailed-metrics.st…
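These rpc.detailed-metrics records only reach Ganglia if Hadoop's metrics system is pointed at gmond. A minimal sketch for Hadoop 1.0.4 (conf/hadoop-metrics.properties and GangliaContext31 are standard in Hadoop 1.x; the target address is an assumption taken from the unicast setup later in this article):

    # conf/hadoop-metrics.properties: send each metrics context to Ganglia 3.1+
    dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
    dfs.period=10
    dfs.servers=192.168.10.128:8649
    mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
    mapred.period=10
    mapred.servers=192.168.10.128:8649
    # the rpc context carries the detailed-metrics records listed above
    rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
    rpc.period=10
    rpc.servers=192.168.10.128:8649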
Use Ganglia to monitor Hadoop and HBase clusters
Introductory content from: http://www.uml.org.cn/sjjm/201305171.asp
1. Introduction to Ganglia
Ganglia is an open-source monitoring project initiated by UC Berkeley, designed to measure thousands of nodes. Each monitored computer runs a gmond daemon that collects metrics such as CPU, memory, disk, and network usage and sends them on.
1. Configure gmond.conf:
vim /etc/ganglia/gmond.conf
The udp_send_channel and udp_recv_channel blocks originally use the multicast address 239.2.11.71 on port 8649. Modify them by commenting out the multicast lines (/* ... */) so that gmond talks to the master node in unicast mode (see the sketch below).
2. Configure gmetad.conf:
vim /etc/ganglia/gmetad.conf
data_source "my cluster" localhost
is modified to:
data_source "my cluster" 192.168.10.128:8649
3. Restart the services:
/etc/init.d/ganglia-monitor restart
/etc/init.d/gmetad restart
/etc/init.d/apache2 restart
If apache2 cannot be restarted, edit vim /etc/apache2/apache2…
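Pieced together from the fragments above, the unicast channel blocks in /etc/ganglia/gmond.conf would look roughly like this; the master address 192.168.10.128 is taken from the gmetad data_source line, and the exact layout is an assumption, not the article's verbatim file:

    udp_send_channel {
      /* mcast_join = 239.2.11.71 */   /* multicast commented out */
      host = 192.168.10.128            /* send metrics to the master node instead */
      port = 8649
    }
    udp_recv_channel {
      /* mcast_join = 239.2.11.71 */
      port = 8649
    }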
Ganglia is open-source monitoring software for servers and clusters. It can graph CPU load, memory, network, disk, and other metrics for a server or cluster over the last hour, day, week, month, or year. The power of Ganglia is that the Ganglia server is able to collect data from all clients on the same network segment.
View the HDFS file system:

    $ hadoop fs -ls /

The hadoop fs -ls / command lists the HDFS file management system, which looks much like a Linux file system directory. The results shown above indicate that the Hadoop standalone installation was successful. So far, we have not made any changes to the…
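A few more hadoop fs commands follow the same pattern; the directory and file names here are hypothetical, for illustration only:

    # Create a directory in HDFS
    $ hadoop fs -mkdir /user/test
    # Copy a local file into HDFS
    $ hadoop fs -put ./notes.txt /user/test/
    # Print the file back out of HDFS
    $ hadoop fs -cat /user/test/notes.txt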
Hadoop has always been a technology I have wanted to learn. Since our project team was recently building an e-commerce mall, I began to study Hadoop; although we eventually concluded that Hadoop is not suitable for our project, I will continue to study it without rushing. The basic Hadoop tutorial…
To do well, you must first sharpen your tools.
This article builds a Hadoop standalone and pseudo-distributed development environment from scratch. It is illustrated with figures and covers:
1. The basic software required for Hadoop development;
2. Installing each piece of software;
3. Configuring Hadoop standalone mode and running the WordCount example (a sketch of the commands follows this list).
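A hedged sketch of that last step, assuming the Hadoop 1.x directory layout and examples jar (the input/output names are arbitrary):

    # From the Hadoop installation directory, in standalone mode
    $ mkdir input
    $ cp conf/*.xml input                                  # some sample text to count
    $ bin/hadoop jar hadoop-examples-*.jar wordcount input output
    $ cat output/part-r-00000                              # view the word counts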
Reprinted from http://blessht.iteye.com/blog/2095675
1. Download the Hadoop source code. The source code of each Hadoop member project can simply be pulled from SVN. Note that only the contents of the trunk directory on SVN should be checked out, for example http://svn.apache.org/repos/asf/hadoop/common/trunk rather than http://svn.apache.org/repos/asf/hadoop/common. The reason is that http://svn.apache…
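A minimal sketch of that checkout (the local target directory name is an assumption):

    # Check out only the trunk directory, not the project root
    $ svn checkout http://svn.apache.org/repos/asf/hadoop/common/trunk hadoop-common-trunk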
…-02-06 17:41 /user/test_hive
You can see that the folder created belongs to the httpfs user. From the background, upload a text file test.txt to the /user/abc directory with the content:
Hello world!
Access it with HttpFS:

    [… hadoop-httpfs]# curl -i -X GET "http://xmseapp03:14000/webhdfs/v1/user/abc/test.txt?op=OPEN&user.name=httpfs"
    HTTP/1.1 200 OK
    Server: Apache-Coyote/1.1
    Set-Cookie: hadoop.auth="u=httpfs&p=httpfs&t=simple&e=1423574166943&s=jtxqijusblvb…
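The upload step itself could also go through HttpFS. This is a sketch based on the standard WebHDFS REST API, reusing the host and user from the excerpt (data=true collapses the usual two-step create, which HttpFS supports):

    # Create /user/abc/test.txt in HDFS from the local file test.txt
    $ curl -i -X PUT -T test.txt \
        -H "Content-Type: application/octet-stream" \
        "http://xmseapp03:14000/webhdfs/v1/user/abc/test.txt?op=CREATE&user.name=httpfs&data=true"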
Basic Hadoop tutorial
This document uses the basic environment configuration of the K-Master server as an example to demonstrate user configuration, sudo permission configuration, network configuration, firewall shutdown, and JDK installation. Follow the same steps to complete the basic environment configuration of the KVMSlave1 ~ KVMSlave3 servers.
Development environment
Hardware environment: four CentOS 6.5 servers
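As a hedged sketch of the sudo and firewall steps named above, using the stock CentOS 6 commands (the hadoop user name is an assumption):

    # Run as root on each node
    useradd hadoop && passwd hadoop               # create the working user
    echo 'hadoop ALL=(ALL) ALL' >> /etc/sudoers   # grant sudo; visudo is the safer way
    service iptables stop                         # stop the firewall now
    chkconfig iptables off                        # keep it off across reboots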
Excerpt from: http://www.powerxing.com/install-hadoop-cluster/
This tutorial describes how to configure a Hadoop cluster and assumes the reader has already mastered Hadoop's single-machine pseudo-distributed configuration; otherwise, check out the Hadoop installation…
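Cluster configuration starts with the nodes being able to resolve one another. A minimal sketch (the host names and addresses are invented for illustration):

    # /etc/hosts on every node of the cluster
    192.168.1.121 Master
    192.168.1.122 Slave1
    192.168.1.123 Slave2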
We are going to install our Hadoop lab environment on a single computer (virtual machine). If you have not yet installed a virtual machine, please check out the VMware Workstation Pro 12 installation tutorial. If you have not installed a Linux operating system in the virtual machine, please follow the tutorial for installing Ubuntu or CentOS under VMware.
The installation mode…