[HBase] Detailed Explanation of the Fully Distributed Installation Process


HBase version: 0.90.5

Hadoop version: 0.20.2

OS Version: CentOS

Installation Method: completely distributed (one master and three regionservers)

1) Decompress the HBase installation file

[hadoop@node01 ~]$ tar -zxvf hbase-0.90.5.tar.gz

The HBase main directory structure after decompression is as follows:

[hadoop@node01 hbase-0.90.5]$ ls -l
total 3636
drwxr-xr-x. 3 hadoop root    4096 Dec  8  2011 bin
-rw-r--r--. 1 hadoop root  217043 Dec  8  2011 CHANGES.txt
drwxr-xr-x. 2 hadoop root    4096 Dec  8  2011 conf
drwxr-xr-x. 4 hadoop root    4096 Dec  8  2011 docs
-rwxr-xr-x. 1 hadoop root 2425490 Dec  8  2011 hbase-0.90.5.jar
-rwxr-xr-x. 1 hadoop root  997956 Dec  8  2011 hbase-0.90.5-tests.jar
drwxr-xr-x. 5 hadoop root    4096 Dec  8  2011 hbase-webapps
drwxr-xr-x. 3 hadoop root    4096 Apr 12 lib
-rw-r--r--. 1 hadoop root   11358 Dec  8  2011 LICENSE.txt
-rw-r--r--. 1 hadoop root     803 Dec  8  2011 NOTICE.txt
-rw-r--r--. 1 hadoop root   31073 Dec  8  2011 pom.xml
-rw-r--r--. 1 hadoop root    1358 Dec  8  2011 README.txt
drwxr-xr-x. 8 hadoop root    4096 Dec  8  2011 src

2) Configure hbase-env.sh

[hadoop@node01 conf]$ vi hbase-env.sh

# The java implementation to use. Java 1.6 required.
export JAVA_HOME=/usr/java/jdk1.6.0_38

# Extra Java CLASSPATH elements. Optional.
export HBASE_CLASSPATH=/home/hadoop/hadoop-0.20.2/conf
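The jps output later in this walkthrough shows an HQuorumPeer process on every node, which means HBase is managing its own ZooKeeper quorum. That is the default behaviour in 0.90.x, but it does no harm to state it explicitly while hbase-env.sh is open (a minimal sketch; the value true is an assumption that you want the HBase-managed quorum rather than a standalone ZooKeeper):

# Tell HBase whether it should manage its own instance of ZooKeeper or not.
export HBASE_MANAGES_ZK=true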

3) Configure hbase-site.xml

[hadoop@node01 conf]$ vi hbase-site.xml

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://node01:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node01,node02,node03,node04</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/var/zookeeper</value>
  </property>
</configuration>
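Two points worth checking before moving on: hbase.rootdir must use exactly the same namenode URI as fs.default.name in Hadoop's core-site.xml, and the ZooKeeper data directory must exist and be writable by the hadoop user on every quorum node. A sketch of both checks (paths assume the layout used throughout this article):

[hadoop@node01 conf]$ grep -A1 fs.default.name /home/hadoop/hadoop-0.20.2/conf/core-site.xml
# On each of node01-node04, as root:
[root@node01 ~]# mkdir -p /var/zookeeper && chown hadoop:hadoop /var/zookeeper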

4) Configure regionservers

[hadoop@node01 conf]$ vi regionservers

node02
node03
node04
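Note that node01 is left out of regionservers on purpose: it runs only the HMaster (plus a ZooKeeper peer). HBase is also picky about hostname resolution, so every machine should resolve node01 through node04 to the same addresses; a sketch of the /etc/hosts entries this implies (the IP addresses below are placeholders, not taken from the original article):

192.168.1.101   node01
192.168.1.102   node02
192.168.1.103   node03
192.168.1.104   node04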

5) Replace the Jar package

[hadoop@node01 lib]$ mv hadoop-core-0.20-append-r1056497.jar hadoop-core-0.20-append-r1056497.sav
[hadoop@node01 lib]$ cp ../../hadoop-0.20.2/hadoop-0.20.2-core.jar .

[hadoop@node01 lib]$ ls
activation-1.1.jar          commons-net-1.4.1.jar                 jasper-compiler-5.5.23.jar  jetty-util-6.1.26.jar
asm-3.1.jar                 core-3.1.1.jar                        jasper-runtime-5.5.23.jar   jruby-complete-1.6.0.jar
avro-1.3.3.jar              guava-r06.jar                         jaxb-api-2.1.jar            jsp-2.1-6.1.14.jar
commons-cli-1.2.jar         hadoop-0.20.2-core.jar                jaxb-impl-2.1.12.jar        jsp-api-2.1-6.1.14.jar
commons-codec-1.4.jar       hadoop-core-0.20-append-r1056497.sav  jersey-core-1.4.jar         jsr311-api-1.1.1.jar
commons-el-1.0.jar          jackson-core-asl-1.5.5.jar            jersey-json-1.4.jar         log4j-1.2.16.jar
commons-httpclient-3.1.jar  jackson-jaxrs-1.5.5.jar               jersey-server-1.4.jar       protobuf-java-2.3.0.jar
commons-lang-2.5.jar        jackson-mapper-asl-1.4.2.jar          jettison-1.1.jar            ruby
commons-logging-1.1.1.jar   jackson-xc-1.5.5.jar                  jetty-6.1.26.jar            servlet-api-2.5-6.1.14.jar
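The jar swap is needed because HBase 0.90.5 ships with an append-branch Hadoop client jar, while the cluster here runs stock 0.20.2; the Hadoop classes bundled with HBase have to match the cluster they talk to, or the RPC versions will not agree. A quick check that the versions now line up:

[hadoop@node01 lib]$ /home/hadoop/hadoop-0.20.2/bin/hadoop version
[hadoop@node01 lib]$ ls hadoop-*          # only hadoop-0.20.2-core.jar should still end in .jar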

6) Copy the HBase directory to the other three nodes

[hadoop@node01 ~]$ scp -r ./hbase-0.90.5 node02:/home/hadoop
[hadoop@node01 ~]$ scp -r ./hbase-0.90.5 node03:/home/hadoop
[hadoop@node01 ~]$ scp -r ./hbase-0.90.5 node04:/home/hadoop
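A quick way to confirm the copy landed where expected on all three regionserver nodes (plain ssh and ls, nothing HBase-specific):

[hadoop@node01 ~]$ for n in node02 node03 node04; do ssh $n ls /home/hadoop/hbase-0.90.5/conf/hbase-site.xml; done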

7) Add environment variables related to HBase (all nodes)

[hadoop@node01 conf]$ su - root
Password:
[root@node01 ~]# vi /etc/profile

export HBASE_HOME=/home/hadoop/hbase-0.90.5
export PATH=$PATH:$HBASE_HOME/bin
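After saving /etc/profile, drop back to the hadoop user and re-read the file so the new variables take effect in the current shell; a minimal check:

[root@node01 ~]# exit
[hadoop@node01 conf]$ source /etc/profile
[hadoop@node01 conf]$ echo $HBASE_HOME        # should print /home/hadoop/hbase-0.90.5
[hadoop@node01 conf]$ which start-hbase.sh    # should resolve to $HBASE_HOME/bin/start-hbase.sh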

8) Start Hadoop and create the HBase main directory

[hadoop@node01 ~]$ $HADOOP_INSTALL/bin/start-all.sh

starting namenode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-namenode-node01.out
node02: starting datanode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-node02.out
node04: starting datanode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-node04.out
node03: starting datanode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-node03.out
hadoop@node01's password:
node01: starting secondarynamenode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-secondarynamenode-node01.out
starting jobtracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-jobtracker-node01.out
node04: starting tasktracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-node04.out
node02: starting tasktracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-node02.out
node03: starting tasktracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-node03.out
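The password prompt in the middle of this output shows that node01 cannot yet ssh to itself without a password; both start-all.sh and start-hbase.sh rely on passwordless ssh from the master to every node (including itself). A sketch of setting this up for the hadoop user, using standard OpenSSH tooling (not part of the original article):

[hadoop@node01 ~]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
[hadoop@node01 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hadoop@node01 ~]$ chmod 600 ~/.ssh/authorized_keys
[hadoop@node01 ~]$ for n in node02 node03 node04; do ssh-copy-id hadoop@$n; done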

[hadoop@node01 ~]$ jps
5332 Jps
5030 NameNode
5259 JobTracker
5185 SecondaryNameNode

[hadoop@node02 ~]$ jps
4603 Jps
4528 TaskTracker
4460 DataNode

[hadoop@node01 ~]$ hadoop fs -mkdir hbase
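One thing to keep in mind: a relative path such as hbase ends up under the user's HDFS home directory (/user/hadoop/hbase), while hbase.rootdir above points at /hbase; HBase will normally create its root directory itself on first startup, so the mkdir is mostly a convenience. Two plain hadoop fs commands show both locations once the cluster is running:

[hadoop@node01 ~]$ hadoop fs -ls /        # /hbase appears here after HBase has started
[hadoop@node01 ~]$ hadoop fs -ls          # the directory created above lives under /user/hadoop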

9) Start HBase

[hadoop@node01 conf]$ start-hbase.sh
hadoop@node01's password:
node03: starting zookeeper, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-zookeeper-node03.out
node04: starting zookeeper, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-zookeeper-node04.out
node02: starting zookeeper, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-zookeeper-node02.out
node01: starting zookeeper, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-zookeeper-node01.out
starting master, logging to /home/hadoop/hbase-0.90.5/logs/hbase-hadoop-master-node01.out
node03: starting regionserver, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-regionserver-node03.out
node02: starting regionserver, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-regionserver-node02.out
node04: starting regionserver, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-regionserver-node04.out

[hadoop@node01 conf]$ jps
7437 HQuorumPeer
7495 HMaster
5030 NameNode
5259 JobTracker
5185 SecondaryNameNode
7597 Jps

[hadoop@node02 ~]$ jps
5965 HRegionServer
4528 TaskTracker
4460 DataNode
5892 HQuorumPeer
6074 Jps
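Besides jps, the master's web UI is a convenient way to confirm that all three regionservers checked in; in 0.90.x it listens on port 60010 by default (each regionserver serves its own UI on port 60030). For example, from any machine that can reach node01:

# Open http://node01:60010/ in a browser, or just confirm the port answers:
curl -s -o /dev/null http://node01:60010/ && echo "master web UI is up"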

10) Test: create a table in HBase

[hadoop@node01 logs]$ hbase shell
HBase Shell; enter 'help' for list of supported commands.
Type "exit" to leave the HBase Shell
Version 0.90.5, r1212209, Fri Dec 9 05:40:36 UTC 2011

hbase(main):001:0> status
3 servers, 0 dead, 0.6667 average load

hbase(main):002:0> create 'testtable', 'colfam1'
0 row(s) in 1.4820 seconds

hbase(main):003:0> list 'testtable'
TABLE
testtable
1 row(s) in 0.0290 seconds

hbase(main):004:0> put 'testtable', 'myrow-1', 'colfam1:q1', 'value-1'
0 row(s) in 0.1980 seconds

hbase(main):005:0> put 'testtable', 'myrow-2', 'colfam1:q2', 'value-2'
0 row(s) in 0.0140 seconds

hbase(main):006:0> put 'testtable', 'myrow-2', 'colfam1:q3', 'value-3'
0 row(s) in 0.0070 seconds

hbase(main):007:0> scan 'testtable'
ROW          COLUMN+CELL
 myrow-1     column=colfam1:q1, timestamp=1365829054040, value=value-1
 myrow-2     column=colfam1:q2, timestamp=1365829061470, value=value-2
 myrow-2     column=colfam1:q3, timestamp=1365829066386, value=value-3
2 row(s) in 0.0690 seconds

hbase(main):008:0> get 'testtable', 'myrow-1'
COLUMN       CELL
 colfam1:q1  timestamp=1365829054040, value=value-1
1 row(s) in 0.0330 seconds

hbase(main):009:0> delete 'testtable', 'myrow-2', 'colfam1:q2'
0 row(s) in 0.0220 seconds

hbase(main):010:0> scan 'testtable'
ROW          COLUMN+CELL
 myrow-1     column=colfam1:q1, timestamp=1365829054040, value=value-1
 myrow-2     column=colfam1:q3, timestamp=1365829066386, value=value-3
2 row(s) in 0.0330 seconds

hbase(main):011:0> exit
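If you want to clean up the test table afterwards, remember that a table has to be disabled before it can be dropped; for example, in the same shell:

hbase(main):012:0> disable 'testtable'
hbase(main):013:0> drop 'testtable'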

11) Stop HBase

[hadoop@node01 logs]$ stop-hbase.sh
stopping hbase..........
hadoop@node01's password:
node02: stopping zookeeper.
node03: stopping zookeeper.
node04: stopping zookeeper.
node01: stopping zookeeper.

[hadoop@node01 logs]$ jps
5030 NameNode
5259 JobTracker
5185 SecondaryNameNode
7952 Jps

[hadoop@node02 logs]$ jps
6351 Jps
4528 TaskTracker
4460 DataNode
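If the whole stack is being shut down, stop Hadoop only after HBase has come down cleanly, since HBase persists its data to HDFS; the matching shutdown command is simply:

[hadoop@node01 ~]$ $HADOOP_INSTALL/bin/stop-all.sh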
