Fully Distributed HBase Installation


HBase is a distributed, column-oriented open source database based on Hadoop. It uses Google's BigTable as its prototype.

It offers high availability, high performance, column-oriented storage, scalability, and real-time reads and writes.

A fully distributed HBase installation is built on top of a fully distributed Hadoop installation.

The HBase version and the Hadoop version must be compatible with each other. Avoid the very latest release; choose a stable version instead.

I use Hadoop-0.20.2 and HBase-0.90.5 here.

The following operations are performed on the Hadoop NameNode (master) host. After configuration is complete on the master node, the installation is copied to each slave node.

1. Download and install HBase

1) I put hbase-0.90.5.tar.gz in the /home/coder/ directory.

2) Decompress hbase-0.90.5.tar.gz to /home/coder/hbase-0.90.5/:

[coder@h1 ~]$ tar -zxvf hbase-0.90.5.tar.gz

2. Configure the hbase-env.sh File

The file is in the hbase-0.90.5/conf/ directory.

1) Configure the JDK installation directory:

# The java implementation to use. Java 1.6 required.
export JAVA_HOME=/usr/java/jdk1.6.0_37

2) Point the HBase classpath at the Hadoop configuration directory:

# Extra Java CLASSPATH elements. Optional.
export HBASE_CLASSPATH=/home/coder/hadoop-0.20.2/conf

3) Let HBase manage (start and stop) its own ZooKeeper instance:

# Tell HBase whether it should manage its own instance of ZooKeeper or not.
export HBASE_MANAGES_ZK=true
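Taken together, the hbase-env.sh additions from this walkthrough amount to roughly the following three lines (the JDK and Hadoop paths are specific to this example setup and should be adjusted to your own environment):

export JAVA_HOME=/usr/java/jdk1.6.0_37
export HBASE_CLASSPATH=/home/coder/hadoop-0.20.2/conf
export HBASE_MANAGES_ZK=true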

3. Configure the hbase-site.xml File

The file is in the hbase-0.90.5/conf/ directory; configure its contents as follows:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Copyright 2010 The Apache Software Foundation.
  Licensed under the Apache License, Version 2.0 (the "License");
  see the NOTICE file distributed with this work for additional information.
-->
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://h1:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>h1:60000</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>h2,h3</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/coder/hbase-0.90.5/zookeeper</value>
  </property>
</configuration>

4. Configure the regionservers File

The file is in the hbase-0.90.5/conf/ directory. I have configured the two slave nodes h2 and h3 as region servers, one host per line; the contents of the file are as follows:

h2
h3

5. Replace HBase's hadoop core jar package

Go to the hbase-0.90.5/lib/ directory.

1) Back up the original hadoop-core-0.20-append-r1056497.jar, or delete it directly:

[coder@h1 lib]$ mv hadoop-core-0.20-append-r1056497.jar hadoop-core-0.20-append-r1056497.jar.bak

2) Copy hadoop-0.20.2-core.jar from the Hadoop installation directory to the hbase-0.90.5/lib/ directory:

[coder@h1 lib]$ cp /home/coder/hadoop-0.20.2/hadoop-0.20.2-core.jar /home/coder/hbase-0.90.5/lib/
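To double-check that the swap took effect, you can list the Hadoop JARs under the HBase lib directory (an optional, illustrative check); the listing should show hadoop-0.20.2-core.jar and, if you kept it, the renamed .bak file:

[coder@h1 lib]$ ls /home/coder/hbase-0.90.5/lib/ | grep hadoop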

6. This completes the minimal HBase configuration. Now copy the configured HBase directory to the other nodes by executing the following commands:

[coder@h1 ~]$ scp -r hbase-0.90.5 h2:/home/coder/
[coder@h1 ~]$ scp -r hbase-0.90.5 h3:/home/coder/

7. Start Hadoop and then start HBase.

1) Start Hadoop: enter the Hadoop installation directory and execute:

[coder@h1 hadoop-0.20.2]$ bin/start-all.sh

2) Start HBase: enter the HBase installation directory and run the following command:

[coder@h1 hbase-0.90.5]$ bin/start-hbase.sh

3) Check whether the startup was successful:

Master node h1:

[coder@h1 hbase-0.90.5]$ jps
2167 NameNode
2777 Jps
2300 SecondaryNameNode
2657 HMaster
2376 JobTracker
[coder@h1 hbase-0.90.5]$

Slave node h2:

[coder@h2 ~]$ jps
2051 DataNode
2342 HRegionServer
2279 HQuorumPeer
2105 TaskTracker
2905 Jps
[coder@h2 ~]$

Slave node h3:

[coder@h3 ~]$ jps
2353 HRegionServer
2116 TaskTracker
2933 Jps
2292 HQuorumPeer
2062 DataNode
[coder@h3 ~]$
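Besides jps, you can also confirm that HBase has created its root directory in HDFS, matching the hbase.rootdir setting above (an optional check, run from the Hadoop installation directory):

[coder@h1 hadoop-0.20.2]$ bin/hadoop fs -ls /hbase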

8. HBase shell mode

1) Enter shell mode

[coder@h1 hbase-0.90.5]$ bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.90.5, r1212209, Fri Dec 9 05:40:36 UTC 2011

hbase(main):001:0>

2) Use the status command to view the HBase running status (a fuller example shell session is sketched after this list):

hbase(main):001:0> status
2 servers, 0 dead, 1.0000 average load
hbase(main):002:0>

3) Exit shell

hbase(main):002:0> exit
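Beyond status, the shell also supports basic table operations, which are a quick way to exercise the whole cluster. The following is an illustrative session; the table name 'test' and column family 'cf' are arbitrary examples, not part of the original walkthrough:

hbase(main):001:0> create 'test', 'cf'
hbase(main):002:0> put 'test', 'row1', 'cf:a', 'value1'
hbase(main):003:0> scan 'test'
hbase(main):004:0> disable 'test'
hbase(main):005:0> drop 'test'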

9. HBase user interface

1) The master page can be accessed at http://192.168.0.129:60010/master.jsp, where 192.168.0.129 is the IP address of my master node (a quick command-line check is sketched after this list).

2) The ZooKeeper page can be opened through the ZooKeeper link listed among the attributes on the master page.

3) The user table page can also be accessed through the corresponding link on the master page.

4) Each region server's page can likewise be accessed through the links in the Region Servers section of the master page.
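If you prefer the command line, you can also confirm that the master web UI is reachable with a simple HTTP request (an optional check that assumes curl is installed on the node):

[coder@h1 ~]$ curl -I http://192.168.0.129:60010/master.jsp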

10. Stop HBase

Run

[coder@h1 hbase-0.90.5]$ bin/stop-hbase.sh
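If you are shutting down the whole cluster, stop HBase first (as above) and then stop Hadoop from its own installation directory, for example:

[coder@h1 hadoop-0.20.2]$ bin/stop-all.sh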
