Hypertable on HDFS (Hadoop) Installation
Hadoop HDFS Installation Guide
4.2 Hypertable on HDFS
Create working directory
$ hadoop fs -mkdir /hypertable
$ hadoop fs -chmod 777 /hypertable
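To confirm the directory was created with the expected permissions (assuming the hadoop client is on your PATH), you can list the filesystem root:

$ hadoop fs -ls /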
Installing the Java Runtime environment
yum install java-1.7.0-openjdk
yum localinstall http://ftp.cuhk.edu.hk/pub/packages/apache.org/hadoop/common/hadoop-1.1.2/hadoop-1.1.2-1.x86_64.rpm
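As a quick sanity check (exact versions depend on the packages you installed), verify that both commands resolve from the shell:

$ java -version
$ hadoop version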
Fix a bug in the jrun script
cp /opt/hypertable/current/bin/jrun /opt/hypertable/current/bin/jrun.old
vim /opt/hypertable/current/bin/jrun

# HT_JAR=`ls -1 /opt/hypertable/doug/current/lib/java/*.jar | grep "hypertable-[^-]*.jar" | awk 'BEGIN {FS="/"} {print $NF}'`
HT_JAR=`ls -1 /opt/hypertable/current/lib/java/*.jar | grep "hypertable-[^-]*.jar" | awk 'BEGIN {FS="/"} {print $NF}'`

export JAVA_HOME=/usr
export HADOOP_HOME=/usr
export HYPERTABLE_HOME=/opt/hypertable/current
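To confirm the corrected path actually resolves to a jar, you can run the same pipeline by hand; it should print a single hypertable-*.jar filename:

$ ls -1 /opt/hypertable/current/lib/java/*.jar | grep "hypertable-[^-]*.jar" | awk 'BEGIN {FS="/"} {print $NF}'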
Hypertable.cfg
# cat conf/hypertable.cfg
#
# hypertable.cfg
#

# HDFS broker
#HdfsBroker.Hadoop.ConfDir=/etc/hadoop/conf
HdfsBroker.Hadoop.ConfDir=/etc/hadoop

# Ceph broker
CephBroker.MonAddr=192.168.6.25:6789

# Local broker
DfsBroker.Local.Root=fs/local

# DFS broker - for clients
DfsBroker.Port=38030

# Hyperspace
Hyperspace.Replica.Host=localhost
Hyperspace.Replica.Port=38040
Hyperspace.Replica.Dir=hyperspace

# Hypertable.Master
#Hypertable.Master.Host=localhost
Hypertable.Master.Port=38050

# Hypertable.RangeServer
Hypertable.RangeServer.Port=38060

Hyperspace.KeepAlive.Interval=30000
Hyperspace.Lease.Interval=1000000
Hyperspace.GracePeriod=200000

# ThriftBroker
ThriftBroker.Port=38080
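Since HdfsBroker.Hadoop.ConfDir points the broker at the Hadoop client configuration, it is worth checking that the directory named above actually contains the *-site.xml files:

$ ls /etc/hadoop/*.xml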
/etc/hadoop/hdfs-site.xml
# cat /etc/hadoop/hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/var/hadoop/name1</value>
    <description></description>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/var/hadoop/hdfs/data1</value>
    <description></description>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
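hdfs-site.xml only covers the storage paths and replication factor; the DFS broker also needs to know which NameNode to contact, which in Hadoop 1.x comes from fs.default.name in core-site.xml. A minimal sketch follows; the hdfs://namenode.example.com:9000 address is taken from the broker log shown further down and should be replaced with your own NameNode URI:

# cat /etc/hadoop/core-site.xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:9000</value>
  </property>
</configuration>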
Start the DFS broker
# /opt/hypertable/current/bin/set-hadoop-distro.sh cdh4
Hypertable successfully configured for Hadoop CDH4

# /opt/hypertable/current/bin/start-dfsbroker.sh hadoop
DFS broker: available file descriptors: 1024
Started DFS Broker (hadoop)
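If the broker came up cleanly it should be listening on the DfsBroker.Port configured earlier (38030); one quick check, assuming net-tools is installed, is:

# netstat -lnt | grep 38030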
View Startup log
# tail /opt/hypertable/current/log/DfsBroker.hadoop.log
log4j:WARN No appenders could be found for logger (org.apache.hadoop.conf.Configuration).
log4j:WARN Please initialize the log4j system properly.
HdfsBroker.dfs.client.read.shortcircuit=false
HdfsBroker.dfs.replication=2
HdfsBroker.Server.fs.default.name=hdfs://namenode.example.com:9000
Apr, 2013 6:43:18 PM org.hypertable.AsyncComm.IOHandler DeliverEvent
INFO: [/192.168.6.25:53556 ; Tue Apr 18:43:18 HKT 2013] Connection Established
Apr, 2013 6:43:18 PM org.hypertable.DfsBroker.hadoop.ConnectionHandler handle
INFO: [/192.168.6.25:53556 ; Tue Apr 18:43:18 HKT 2013] Disconnect - COMM BROKEN CONNECTION
Closing all open handles from /192.168.6.25:53556
Closed 0 input streams and 0 output streams for client connection /192.168.6.25:53556