HBase Distributed Installation

The following describes how to install HBase in fully distributed mode.

1. Environment: Hadoop 0.20.2 + ZooKeeper 3.3.3 + HBase 0.90.3.
2. Download hbase-0.90.3 and extract it to /usr/local/hbase.
3. HBase depends on ZooKeeper, so make sure ZooKeeper runs properly before configuring HBase. HBase can use ZooKeeper in two ways: it can connect to an existing, independently managed ZooKeeper service, or it can manage the ZooKeeper service itself. Here we let HBase manage ZooKeeper, which means fewer commands to run.
4. Add the following properties to hbase/conf/hbase-site.xml:
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop1.ahau.edu.cn,hadoop2.ahau.edu.cn,hadoop3.ahau.edu.cn,hadoop4.ahau.edu.cn</value>
    <description>Comma separated list of servers in the ZooKeeper quorum.
      For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
      By default this is set to localhost for local and pseudo-distributed modes
      of operation. For a fully-distributed setup, this should be set to a full
      list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
      this is the list of servers which we will start/stop ZooKeeper on.
    </description>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/grid/zookeeper/</value>
    <description>Property from ZooKeeper's config zoo.cfg.
      The directory where the snapshot is stored.
    </description>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop1.ahau.edu.cn:9100/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
hbase.zookeeper.quorum lists all ZooKeeper nodes; hbase.zookeeper.property.dataDir specifies ZooKeeper's data directory; hbase.rootdir specifies the HDFS path where HBase stores its data; and hbase.cluster.distributed set to true indicates a fully distributed deployment.
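Because the HDFS address in hbase.rootdir must match the address the NameNode actually listens on (a mismatch is exactly what the notes at the end of this article describe), it can save time to compare the two before starting anything. A minimal sketch, assuming Hadoop's configuration lives in /home/grid/hadoop/conf, HBase's in /usr/local/hbase/conf, and the usual one-element-per-line XML layout:

  # The host:port in hbase.rootdir should match fs.default.name in core-site.xml
  grep -A 1 "fs.default.name" /home/grid/hadoop/conf/core-site.xml
  grep -A 1 "hbase.rootdir" /usr/local/hbase/conf/hbase-site.xml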
5. Modify hbase/conf/hbase-env.sh:
  export JAVA_HOME=/usr/local/jdk1.6.0_25          # Java directory
  export HBASE_CLASSPATH=/home/grid/hadoop/conf    # Hadoop configuration directory
  export HBASE_OPTS="-ea -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
  export HBASE_MANAGES_ZK=true                     # whether HBase manages ZooKeeper
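A quick sanity check of the paths referenced above can save a confusing startup failure later; this is just a sketch using the example values from this article, so substitute your own paths:

  # Verify that the directories named in hbase-env.sh actually exist on this machine
  test -d /usr/local/jdk1.6.0_25                || echo "JAVA_HOME directory is missing"
  test -d /home/grid/hadoop/conf                || echo "HBASE_CLASSPATH directory is missing"
  test -f /home/grid/hadoop/conf/hdfs-site.xml  || echo "hdfs-site.xml not found in hadoop/conf"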
6. Copy the ZooKeeper configuration file zoo.cfg to the directory specified by HBASE_CLASSPATH.
7. Copy hadoop/conf/hdfs-site.xml to the hbase/conf directory.
8. Add all nodes to hbase/conf/regionservers, one host name per line.
9. Reverse DNS resolution is required for all host names; otherwise an error is reported when HBase starts.
10. Check whether HBase can start and stop ZooKeeper:
  hbase$ bin/hbase-daemon.sh start zookeeper
  hbase$ bin/hbase-daemon.sh stop zookeeper
11. Synchronize the hbase directory to all node servers (a combined sketch of the copy steps 6, 7 and 11 follows after this list).
12. Start Hadoop first, then HBase; when shutting down, stop HBase first, then Hadoop.
13. Start Hadoop, then start HBase:
  hadoop$ bin/start-all.sh
  hbase$ bin/start-hbase.sh
14. Problems may occur after startup; the specific errors are written to the logs. Some common problems and their solutions are collected in the FAQ at the end of this article.
15. Open http://hadoop1.ahau.edu.cn:60010/master.jsp to view the HBase status in the web UI.
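Steps 6, 7 and 11 are all plain file copies, so they can be scripted against the regionservers file. A rough sketch, assuming ZooKeeper is installed under /usr/local/zookeeper (an assumption, the article does not give its path), HBase under /usr/local/hbase, Hadoop under /home/grid/hadoop, and passwordless SSH between the nodes (which the start scripts need anyway):

  # Step 6: copy zoo.cfg into the directory that HBASE_CLASSPATH points to (hadoop/conf here)
  cp /usr/local/zookeeper/conf/zoo.cfg /home/grid/hadoop/conf/

  # Step 7: copy Hadoop's hdfs-site.xml into HBase's conf directory
  cp /home/grid/hadoop/conf/hdfs-site.xml /usr/local/hbase/conf/

  # Step 11: push the whole HBase directory to every node listed in regionservers
  for host in $(cat /usr/local/hbase/conf/regionservers); do
    rsync -a /usr/local/hbase/ "$host":/usr/local/hbase/
  done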
16. As prompted by the HBase wiki, add dfs.support.append to hadoop/conf/hdfs-site.xml and hbase/conf/hdfs-site.xml:
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
17. If that is not enough, there are further tips on the official HBase website and via Google.
18. The dfs.support.append feature HBase needs is not supported by the official Hadoop 0.20.2 release; you need to compile the append branch of Hadoop yourself.
19. For compiling and installing it, you can refer to this blog post: http://blog.csdn.net/lansine2005/article/details/6595294
FAQ summary:
1. With the official Hadoop 0.20.2, the following error is reported when HBase starts:
  FATAL master.HMaster: Unhandled exception. Starting shutdown.
  org.apache.hadoop.ipc.RPC$VersionMismatch: Protocol org.apache.hadoop.hdfs.protocol.ClientProtocol version mismatch. (client = 42, server = 41)
The cause is that the Hadoop jar bundled in hbase/lib does not match the installed Hadoop. Copying hadoop-0.20.2-core.jar from the Hadoop directory into hbase/lib solves it.
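A sketch of that fix, assuming Hadoop lives in /home/grid/hadoop and HBase in /usr/local/hbase (adjust to your layout); the exact name of the Hadoop jar bundled with HBase 0.90.3 may differ, hence the wildcard:

  # Remove the Hadoop core jar bundled with HBase, then copy in the cluster's own jar;
  # repeat on every node (or re-run the rsync from step 11).
  rm -f /usr/local/hbase/lib/hadoop-*core*.jar
  cp /home/grid/hadoop/hadoop-0.20.2-core.jar /usr/local/hbase/lib/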
2. When starting HBase, the log reports:
  No valid quorum servers found in zoo.cfg
zoo.cfg must be copied to the hadoop/conf directory (the directory pointed to by HBASE_CLASSPATH); otherwise this error is reported.
3. HBase reports the error:
  Could not find my address
This error can occur in two cases: (1) reverse DNS resolution is not set up for the host, or (2) zoo.cfg was not copied, so HBase cannot read the ZooKeeper configuration.
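To check the reverse-resolution side quickly, the sketch below walks the regionservers file and resolves every host forward and then backward through the system resolver (getent also consults /etc/hosts); the names on both ends should agree. The paths and tooling here are assumptions, not from the article:

  # For each regionserver: host name -> IP -> name the IP resolves back to
  for h in $(cat /usr/local/hbase/conf/regionservers); do
    ip=$(getent hosts "$h" | awk '{print $1; exit}')
    back=$(getent hosts "$ip" | awk '{print $2; exit}')
    echo "$h -> $ip -> $back"
  done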
Follow-up notes:
1. Some guides suggest adding the following to hbase-env.sh (testing showed it makes no difference):
  export HBASE_HOME=/usr/local/hbase         # HBase home path
  export PATH=$PATH:/usr/local/hbase/bin     # HBase bin path
  export HADOOP_HOME=/usr/local/hadoop       # Hadoop home path
2. Changed the port in hbase.rootdir in hbase-site.xml from hdfs://xukangde-01:9100 to hdfs://xukangde-01:9000 so that it matches the port HDFS actually uses (testing showed this does matter).
3. Added the host names to the masters and slaves files in the hadoop/conf folder on each worker machine (testing showed this makes no difference).
