HBase Distributed Installation


Required installation package:

Installation Package for hbase-0.94.1

Installation Process:

1. Configure hbase-env.sh

export JAVA_HOME=/usr/java/jdk1.6.0_37
export HBASE_MANAGES_ZK=true

2. Configure hbase-site.xml

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>dd1,dd2,dd3</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/zookeeper</value>
  </property>
</configuration>
  • hbase.rootdir sets the directory HBase uses on HDFS. The host name is the host running the HDFS namenode.
  • hbase.cluster.distributed set to true indicates a fully distributed HBase cluster.
  • hbase.zookeeper.quorum lists the ZooKeeper hosts. An odd number of hosts is recommended.
  • hbase.zookeeper.property.dataDir sets the data path for ZooKeeper.

3. Configure regionservers

dd1
dd2
dd3
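
After editing the configuration, hbase-env.sh, hbase-site.xml, and regionservers must be identical on every node. A minimal sketch for pushing the conf directory out from the master, assuming HBase is installed at /home/hadoop/hbase-0.94.1 on all nodes and passwordless SSH is set up (paths and user name are illustrative):

# Copy the HBase conf directory from the master to each regionserver node
for host in dd1 dd2 dd3; do
    scp -r /home/hadoop/hbase-0.94.1/conf hadoop@$host:/home/hadoop/hbase-0.94.1/
done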

4. Add the following property to Hadoop's hdfs-site.xml

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>

This parameter limits the number of send and receive tasks a datanode can execute simultaneously. The default is 256, and it is usually not set in hadoop-default.xml. That limit is somewhat too small: under high load, DFSClient reports a "could not read from stream" exception when putting data.

A Hadoop HDFS datanode has an upper bound on the number of files it will serve at any one time. The upper bound parameter is called xcievers (yes, this is misspelled).

Not having this configuration in place makes for strange-looking failures. Eventually you will see a complaint in the datanode logs about the xcievers limit being exceeded, but on the run-up to that you will see complaints about missing blocks. For example:

10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...

5. Delete hadoop-core-1.0.X.jar and commons-collections-3.2.1.jar from hbase/lib, then copy the hadoop-core jar and commons-collections-3.2.1.jar from the Hadoop installation directory into hbase/lib. This keeps the Hadoop jars used by HBase consistent with the cluster's Hadoop version and avoids version-incompatibility problems; the commands are sketched below.
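
A minimal sketch of the jar swap, assuming Hadoop 1.0.4 at /home/hadoop/hadoop-1.0.4 and HBase at /home/hadoop/hbase-0.94.1 (paths and versions are illustrative; match your own installation):

# Remove the Hadoop jars bundled with HBase
rm /home/hadoop/hbase-0.94.1/lib/hadoop-core-1.0.*.jar
rm /home/hadoop/hbase-0.94.1/lib/commons-collections-3.2.1.jar

# Copy the matching jars from the Hadoop installation into hbase/lib
cp /home/hadoop/hadoop-1.0.4/hadoop-core-1.0.4.jar /home/hadoop/hbase-0.94.1/lib/
cp /home/hadoop/hadoop-1.0.4/lib/commons-collections-3.2.1.jar /home/hadoop/hbase-0.94.1/lib/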

6. Check whether /etc/hosts is bound to 127.0.0.1.

The following description is from the official website.

Before we proceed, make sure you are good on the below loopback prerequisite.

Loopback IP

HBase expects the loopback IP address to be 127.0.0.1. Ubuntu and some other distributions, for example, will default to 127.0.1.1 and this will cause problems for you.

/etc/hosts should look something like this:

            127.0.0.1 localhost
            127.0.0.1 ubuntu.ubuntu-domain ubuntu

HBase defaults to the loopback address 127.0.0.1, so /etc/hosts should look like the above, with the 127.0.0.1 localhost line present. Alternatively, instead of adding that line, you can simply comment out the 127.0.1.1 line; either way solves the problem.
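
A quick way to verify the mapping is sketched below; getent is available on most Linux distributions:

# Show which address the local hostname resolves to; it should print 127.0.0.1
getent hosts $(hostname)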

Error record: Case 1:

22:11:46,018 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.

22:11:46,024 WARN org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused

Solution:

Modify /etc/hosts

Comment out the 127.0.0.1 loopback line (shown commented out below):

#127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.130.170 master
192.168.130.168 dd1
192.168.130.162 dd2
192.168.130.248 dd3
192.168.130.164 dd4

7. Exceptions caused by clock skew between servers: Time difference of 39431466ms > max allowed of 30000ms

Solution:
1) Solution 1
Add the following configuration to hbase-site.xml

<property>
  <name>hbase.master.maxclockskew</name>
  <value>180000</value>
  <description>Time difference of regionserver from master</description>
</property>

2) Solution 2
Adjust the time on each node so that the skew is within 30 s:

clock --set --date="10/29/2011 18:46:50"
clock --hctosys
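
Manually set clocks drift apart again over time; syncing every node against an NTP server is a more durable fix. A minimal sketch, assuming ntpdate is installed and the nodes can reach a public time server (the server name is illustrative):

# Run on each node (master, dd1, dd2, dd3) to sync the system clock over NTP
ntpdate pool.ntp.org
# Write the corrected system time back to the hardware clock
clock --systohc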
