Open Source Cloud Computing Technology Series (VI): Hypertable on Hadoop HDFS


We use VirtualBox to set up an Ubuntu Server 9.04 virtual machine as the base environment.

hadoop@hadoop:~$ sudo apt-get install g++ cmake libboost-dev liblog4cpp5-dev git-core cronolog libgoogle-perftools-dev libevent-dev zlib1g-dev libexpat1-dev libdb4.6++-dev libncurses-dev libreadline5-dev

hadoop@hadoop:~/build/hypertable$ sudo apt-get install ant autoconf automake libtool bison flex pkg-config php5 ruby-dev libhttp-access2-ruby libbit-vector-perl

hadoop@hadoop:~/build/hypertable$ sudo ln -f -s /bin/bash /bin/sh
[sudo] password for hadoop:
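On Ubuntu, /bin/sh points to dash by default, and some of the build scripts used below expect bash; the symlink above works around that. An optional check that the change took effect (not part of the original walkthrough):

# /bin/sh should now resolve to bash
ls -l /bin/sh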

hadoop@hadoop:~$ tar xvzf hyperic-sigar-1.6.3.tar.gz

hadoop@hadoop:~$ sudo cp hyperic-sigar-1.6.3/sigar-bin/include/*.h /usr/local/include/
[sudo] password for hadoop:
hadoop@hadoop:~$ sudo cp hyperic-sigar-1.6.3/sigar-bin/lib/libsigar-x86-linux.so /usr/local/lib/

hadoop@hadoop:~$ sudo ldconfig
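To confirm that the SIGAR library was registered with the dynamic linker, an optional check (not from the original transcript):

# The library should appear in the linker cache after ldconfig
ldconfig -p | grep sigar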


hadoop@hadoop:~$ wget http://hypertable.org/pub/thrift.tgz
--2009-08-17 21:12:14--  http://hypertable.org/pub/thrift.tgz
Resolving hypertable.org... 72.51.43.91
Connecting to hypertable.org|72.51.43.91|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1144224 (1.1M) [application/x-gzip]
Saving to: `thrift.tgz'

100%[======================================>] 1,144,224   20.9K/s   in 44s

2009-08-17 21:13:00 (25.3 KB/s) - `thrift.tgz' saved [1144224/1144224]

hadoop@hadoop:~$ tar xvzf thrift.tgz

hadoop@hadoop:~$ cd thrift/
hadoop@hadoop:~/thrift$ ls
aclocal       config.guess  contrib       lib          NEWS              ylwrap
aclocal.m4    config.h      CONTRIBUTORS  LICENSE      NOTICE
bootstrap.sh  config.hin    depcomp       ltmain.sh    print_version.sh
CHANGES       config.sub    DISCLAIMER    Makefile.am  README
cleanup.sh    configure     doc           Makefile.in  test
compiler      configure.ac  install-sh    missing      tutorial
hadoop@hadoop:~/thrift$ ./bootstrap.sh
configure.ac: warning: missing AC_PROG_AWK wanted by: test/py/Makefile:80
configure.ac: warning: missing AC_PROG_RANLIB wanted by: test/py/Makefile:151
configure.ac:44: installing `./config.guess'
configure.ac:44: installing `./config.sub'
configure.ac:26: installing `./install-sh'
configure.ac:26: installing `./missing'
compiler/cpp/Makefile.am: installing `./depcomp'
configure.ac: installing `./ylwrap'

hadoop@hadoop:~/thrift$ ./configure

hadoop@hadoop:~/thrift$ make -j 3

hadoop@hadoop:~/thrift$ sudo make install

hadoop@hadoop:~/thrift$ sudo ldconfig
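To confirm that the Thrift compiler installed correctly, an optional check (not part of the original transcript) is to ask it for its version:

# Should print the version of the Thrift compiler that was just built and installed
thrift -version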

hadoop@hadoop:~$ ls
hypertable-0.9.2.6-alpha-src.tar.gz  jdk-6u15-linux-i586.bin
hadoop@hadoop:~$ chmod +x jdk-6u15-linux-i586.bin
hadoop@hadoop:~$ pwd
/home/hadoop

hadoop@hadoop:~$ ./jdk-6u15-linux-i586.bin

hadoop@hadoop:~$ ls
hypertable-0.9.2.6-alpha-src.tar.gz  jdk1.6.0_15  jdk-6u15-linux-i586.bin
hadoop@hadoop:~$ tar xvzf hypertable-0.9.2.6-alpha-src.tar.gz
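The self-extracting JDK installer unpacks into ~/jdk1.6.0_15. An optional check (not from the original transcript) that the bundled JVM runs:

# Should report Java version 1.6.0_15
~/jdk1.6.0_15/bin/java -version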

Create an install directory (optional)

mkdir ~/hypertable

Create a build directory

mkdir -p ~/build/hypertable

hadoop@hadoop:~$ cd ~/build/hypertable
hadoop@hadoop:~/build/hypertable$
hadoop@hadoop:~/build/hypertable$ cmake ~/hypertable
hypertable/hypertable-0.9.2.5-alpha-src.tar.gz
hypertable-0.9.2.5-alpha/
hadoop@hadoop:~/build/hypertable$ cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=~/hypertable ~/hypertable-0.9.2.5-alpha

hadoop@hadoop:~/build/hypertable$ make -j 3

This step takes a while and pushes the machine to full load; go get a cup of coffee and come back for the results.

hadoop@hadoop:~/build/hypertable$ make install
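Because CMAKE_INSTALL_PREFIX was set to ~/hypertable, the install step should populate a versioned directory under it. An optional check (not part of the original transcript):

# Expect a version directory such as ~/hypertable/0.9.2.5 containing bin/, conf/, lib/, ...
ls ~/hypertable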

Next verify that the installation was successful:

hadoop@hadoop:~/build/hypertable$ make alltests
Scanning dependencies of target runtestservers

Starting test servers
Shutdown hyperspace complete
ERROR: DfsBroker not running, database not cleaned
Shutdown thrift broker complete
Shutdown hypertable master complete
Shutdown range server complete
DFS broker: available file descriptors: 1024
Successfully started DFSBroker (local)
Successfully started Hyperspace
Successfully started Hypertable.Master
Successfully started Hypertable.RangeServer
Successfully started ThriftBroker
Built target runtestservers

Scanning dependencies of target alltests
Running tests...
Start processing tests
Test project /home/hadoop/build/hypertable
 1/60 Testing Common-exception                passed
 2/60 Testing Common-logging                  passed
 3/60 Testing Common-serialization            passed
 4/60 Testing Common-scopeguard               passed
 5/60 Testing Common-inetaddr                 passed
 6/60 Testing Common-pagearena                passed
 7/60 Testing Common-properties               passed
 8/60 Testing Common-bloomfilter              passed
 9/60 Testing Common-hash                     passed
10/60 Testing HyperComm                       passed
11/60 Testing HyperComm-datagram              passed
12/60 Testing HyperComm-timeout               passed
13/60 Testing HyperComm-timer                 passed
14/60 Testing HyperComm-reverse-request       passed
15/60 Testing BerkeleyDbFilesystem            passed
16/60 Testing FileBlockCache                  passed
17/60 Testing TableIdCache                    passed
18/60 Testing CellStoreScanner                passed
19/60 Testing CellStoreScanner-delete         passed
20/60 Testing Schema                          passed
21/60 Testing LocationCache                   passed
22/60 Testing LoadDataSource                  passed
23/60 Testing LoadDataEscape                  passed
24/60 Testing BlockCompressor-bmz             passed
25/60 Testing BlockCompressor-lzo             passed
26/60 Testing BlockCompressor-none            passed
27/60 Testing BlockCompressor-quicklz         passed
28/60 Testing BlockCompressor-zlib            passed
29/60 Testing CommitLog                       passed
30/60 Testing MetaLog-RangeServer             passed
31/60 Testing Client-large-block              passed
32/60 Testing Client-periodic-flush           passed
33/60 Testing HyperDfsBroker                  passed
34/60 Testing Hyperspace                      passed
35/60 Testing Hypertable-shell                passed
36/60 Testing Hypertable-shell-ldi-stdin      passed
37/60 Testing RangeServer                     passed
38/60 Testing ThriftClient-cpp                passed
39/60 Testing ThriftClient-perl               passed
40/60 Testing ThriftClient-java               passed
41/60 Testing Client-random-write-read        passed
42/60 Testing RangeServer-commit-log-gc       passed
43/60 Testing RangeServer-load-exception      passed
44/60 Testing RangeServer-metadata-split      passed
45/60 Testing RangeServer-maintenance-thread  passed
46/60 Testing RangeServer-row-overflow        passed
47/60 Testing RangeServer-rowkey-ag-imbalanc  passed
48/60 Testing RangeServer-split-recovery-1    passed
49/60 Testing RangeServer-split-recovery-2    passed
50/60 Testing RangeServer-split-recovery-3    passed
51/60 Testing RangeServer-split-recovery-4    passed
52/60 Testing RangeServer-split-recovery-5    passed
53/60 Testing RangeServer-split-recovery-6    passed
54/60 Testing RangeServer-split-recovery-7    passed
55/60 Testing RangeServer-split-recovery-8    passed
56/60 Testing RangeServer-split-merge-loop10  passed
57/60 Testing RangeServer-bloomfilter-rows    passed
58/60 Testing RangeServer-bloomfilter-rows-c  passed
59/60 Testing RangeServer-scanlimit           passed
60/60 Testing Client-no-log-sync              passed

100% tests passed, 0 tests failed out of 60
Built target alltests
hadoop@hadoop:~/build/hypertable$

At this point, the standalone version of Hypertable is working.

Next, let's run Hypertable on top of Hadoop HDFS.

hadoop@hadoop:~/hadoop-0.20.0/conf$ more hadoop-env.sh
# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.  Required.
export JAVA_HOME=~/jdk1.6.0_15

# Extra Java CLASSPATH elements.  Optional.
# export HADOOP_CLASSPATH=

# The maximum amount of heap to use, in MB. Default is 1000.
export HADOOP_HEAPSIZE=100

hadoop@hadoop:~/hadoop-0.20.0/conf$ vi core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hadoop-0.20.0/tmp/dir/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>

<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
<description>The name of the default file system.  A URI whose
scheme and authority determine the FileSystem implementation.  The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class.  The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>

</configuration>

hadoop@hadoop:~/hadoop-0.20.0/conf$ vi mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
<description>The host and port that the MapReduce job tracker runs
at.  If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>

</configuration>

hadoop@hadoop:~/hadoop-0.20.0/conf$ vi hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>

</configuration>

hadoop@hadoop:~$ more .bash_profile
export JAVA_HOME=~/jdk1.6.0_15
export HADOOP_HOME=~/hadoop-0.20.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
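After editing .bash_profile, load it into the current shell. An optional check (not from the original transcript) that the hadoop command now resolves:

# Reload the profile and confirm the hadoop binary is on PATH
source ~/.bash_profile
which hadoop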

hadoop@hadoop:~$ hadoop namenode -format
09/08/18 21:10:25 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.0
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20 -r 763504; compiled by 'ndaley' on Thu Apr  9 05:18:40 UTC 2009
************************************************************/
09/08/18 21:10:26 INFO namenode.FSNamesystem: fsOwner=hadoop,hadoop,adm,dialout,cdrom,plugdev,lpadmin,sambashare,admin
09/08/18 21:10:26 INFO namenode.FSNamesystem: supergroup=supergroup
09/08/18 21:10:26 INFO namenode.FSNamesystem: isPermissionEnabled=true
09/08/18 21:10:26 INFO common.Storage: Image file of size saved in 0 seconds.
09/08/18 21:10:26 INFO common.Storage: Storage directory /home/hadoop/hadoop-0.20.0/tmp/dir/hadoop-hadoop/dfs/name has been successfully formatted.
09/08/18 21:10:26 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop/127.0.1.1
************************************************************/
hadoop@hadoop:~$

hadoop@hadoop:~$ start-all.sh
starting namenode, logging to /home/hadoop/hadoop-0.20.0/bin/../logs/hadoop-hadoop-namenode-hadoop.out
localhost: starting datanode, logging to /home/hadoop/hadoop-0.20.0/bin/../logs/hadoop-hadoop-datanode-hadoop.out
localhost: starting secondarynamenode, logging to /home/hadoop/hadoop-0.20.0/bin/../logs/hadoop-hadoop-secondarynamenode-hadoop.out
starting jobtracker, logging to /home/hadoop/hadoop-0.20.0/bin/../logs/hadoop-hadoop-jobtracker-hadoop.out
localhost: starting tasktracker, logging to /home/hadoop/hadoop-0.20.0/bin/../logs/hadoop-hadoop-tasktracker-hadoop.out

hadoop@hadoop:~$ jps
12959 JobTracker
12760 DataNode
12657 NameNode
13069 TaskTracker
13149 Jps
12876 SecondaryNameNode

OK, the Hadoop 0.20.0 configuration is complete.
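Before wiring up Hypertable, it is worth making sure HDFS itself accepts writes. A small optional smoke test (not part of the original transcript; the /smoketest path is just an arbitrary name for this check):

# Create and list a throwaway directory in HDFS, then remove it
hadoop fs -mkdir /smoketest
hadoop fs -ls /
hadoop fs -rmr /smoketest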

Now let's integrate Hypertable with Hadoop HDFS.

hadoop@hadoop:~/hypertable/0.9.2.5/bin$ more ~/.bash_profile
export JAVA_HOME=~/jdk1.6.0_15
export HADOOP_HOME=~/hadoop-0.20.0
export HYPERTABLE_HOME=~/hypertable/0.9.2.5/
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HYPERTABLE_HOME/bin

hadoop@hadoop:~/hypertable/0.9.2.5/conf$ ls
hypertable.cfg  METADATA.xml
hadoop@hadoop:~/hypertable/0.9.2.5/conf$ more hypertable.cfg
#
# hypertable.cfg
#

# Global properties
Hypertable.Request.Timeout=180000

# HDFS Broker
HdfsBroker.Port=38030
HdfsBroker.fs.default.name=hdfs://localhost:9000
HdfsBroker.Workers=20

# Local Broker
DfsBroker.Local.Port=38030
DfsBroker.Local.Root=fs/local

# DFS Broker - for clients
DfsBroker.Host=localhost
DfsBroker.Port=38030

# Hyperspace
Hyperspace.Master.Host=localhost
Hyperspace.Master.Port=38040
Hyperspace.Master.Dir=hyperspace
Hyperspace.Master.Workers=20

# Hypertable.Master
Hypertable.Master.Host=localhost
Hypertable.Master.Port=38050
Hypertable.Master.Workers=20

# Hypertable.RangeServer
Hypertable.RangeServer.Port=38060

Hyperspace.KeepAlive.Interval=30000
Hyperspace.Lease.Interval=1000000
Hyperspace.GracePeriod=200000

# ThriftBroker
ThriftBroker.Port=38080
hadoop@hadoop:~/hypertable/0.9.2.5/conf$
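The key line for the HDFS integration is HdfsBroker.fs.default.name, which must match the fs.default.name configured in Hadoop's core-site.xml. An optional sanity check (not part of the original walkthrough, and assuming the paths used above):

# Both greps should show hdfs://localhost:9000
grep fs.default.name ~/hadoop-0.20.0/conf/core-site.xml
grep fs.default.name ~/hypertable/0.9.2.5/conf/hypertable.cfg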

Now start Hypertable on top of Hadoop:

hadoop@hadoop:~/hypertable/0.9.2.5/bin$ start-all-servers.sh hadoop
DFS broker: available file descriptors: 1024
Successfully started DFSBroker (hadoop)
Successfully started Hyperspace
Successfully started Hypertable.Master
Successfully started Hypertable.RangeServer
Successfully started ThriftBroker

hadoop@hadoop:~/hypertable/0.9.2.5/log$ hadoop fs -ls /
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2009-08-18 21:25 /home
drwxr-xr-x   - hadoop supergroup          0 2009-08-18 21:28 /hypertable
hadoop@hadoop:~/hypertable/0.9.2.5/log$ hadoop fs -ls /hypertable
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2009-08-18 21:28 /hypertable/server
drwxr-xr-x   - hadoop supergroup          0 2009-08-18 21:28 /hypertable/tables

At this point, Hypertable running on Hadoop HDFS has started successfully. Next up: a taste of HQL.
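A minimal sketch of what that first HQL session could look like, assuming the servers above are still running. The table name demo and column family info are made up for illustration; the statements are typed at the interactive hypertable shell shipped in $HYPERTABLE_HOME/bin:

hadoop@hadoop:~$ hypertable
hypertable> CREATE TABLE demo (info);
hypertable> INSERT INTO demo VALUES ('row1', 'info', 'hello hypertable on hdfs');
hypertable> SELECT * FROM demo;
hypertable> quit

If everything is wired up correctly, the SELECT should return the single cell just inserted, and the table data will live under /hypertable/tables in HDFS rather than on the local filesystem.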
