HBase installation and configuration with an independent ZooKeeper, plus shell testing

Preface

Prerequisites. If you already have a working Hadoop cluster, you basically do not need to change anything:

1. Java Environment

2. Hadoop (HBase stores its data on HDFS)

3. ZooKeeper (I use an independent ZooKeeper here because one was already installed; you can also let HBase manage ZooKeeper itself, see the earlier article http://blog.csdn.net/smile0198/article/details/17659537)

4. SSH and NTP time synchronization

5. System tuning; this can also be done after installation: raise the limits on open files and processes (ulimit/nproc), see the sketch after this list

6. Raise the upper limit on the number of files an HDFS DataNode serves at the same time: dfs.datanode.max.xcievers (also shown in the sketch below)
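
A minimal sketch of items 5 and 6 (the values are common recommendations, not taken from this cluster, and the hadoop user name is an assumption; adjust both for your own setup). In /etc/security/limits.conf, raise the open-file and process limits for the user that runs Hadoop/HBase:

# example limits for the (assumed) hadoop user
hadoop  -  nofile  32768
hadoop  -  nproc   32000

And in Hadoop's conf/hdfs-site.xml, raise the DataNode transceiver limit, then restart the DataNodes:

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>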


One, distributed installation and configuration

1. Download and unpack the package. I use hbase-0.94.6; download it from the official website and extract it directly into the installation directory, for example:
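
A quick download-and-unpack sketch (the mirror URL and the installation directory are assumptions; use your own):

# assumed mirror URL and target directory
wget http://archive.apache.org/dist/hbase/hbase-0.94.6/hbase-0.94.6.tar.gz
tar -xzf hbase-0.94.6.tar.gz -C /usr/local/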

2. Configure conf/hbase-env.sh. Add one line so that HBase does not manage ZooKeeper itself:

export HBASE_MANAGES_ZK=false

The default is true; set it to true if you want HBase to manage ZooKeeper for you.
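
conf/hbase-env.sh is also where JAVA_HOME is set, so the relevant part of the file ends up looking roughly like this (the JDK path is only an assumption; point it at your own installation):

export JAVA_HOME=/usr/local/jdk1.6.0_45   # assumed path, use your own JDK
export HBASE_MANAGES_ZK=false             # use the independent ZooKeeper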
3. Configure conf/hbase-site.xml. This step is the core:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9099/hbase</value>
    <description>The directory shared by region servers.</description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>haier002,haier003,haier004</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/opt/zookeeper-3.4.5/dataDir</value>
  </property>
</configuration>

Parameter descriptions:
(1) hbase.rootdir: the HDFS entry point; the address and port must match your Hadoop configuration (fs.default.name in core-site.xml), and the same value is used on all nodes.
(2) hbase.cluster.distributed: true means fully distributed mode.
(3) hbase.zookeeper.property.clientPort: the ZooKeeper client port.
(4) hbase.zookeeper.quorum: the ZooKeeper nodes.
(5) hbase.zookeeper.property.dataDir: where ZooKeeper keeps its data files; the default is under /tmp, which is lost on reboot.
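
For reference, a matching core-site.xml on the Hadoop side would contain something like the following (the host name master and port 9099 are simply the values assumed by the hbase.rootdir above):

<!-- assumed to match hbase.rootdir above -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:9099</value>
</property>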
4. Configure conf/regionservers. This file is the equivalent of Hadoop's slaves file:
slave1
slave2
slave3
5. scp to the other machines. Copy the hbase-0.94.6 folder to the other machines:
scp -r hbase-0.94.6 hadoop@slave1:/usr/local/
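
With several region servers, a small loop saves some typing (a sketch that assumes the host names from the regionservers file above and the hadoop user):

# assumed hosts from conf/regionservers and the hadoop user
for host in slave1 slave2 slave3; do
  scp -r hbase-0.94.6 hadoop@${host}:/usr/local/
done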

6. Run. Enter the bin directory of the installation directory on the master:
./start-hbase.sh
And it's done, haha. Check with jps:
15675 NameNode
18205 HMaster
1264 asmain
15840 JobTracker
875 asmain
19017 Jps
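
On the region server nodes, jps should show HRegionServer alongside the usual Hadoop daemons, and the machines running the independent ZooKeeper additionally show QuorumPeerMain; roughly like this (the PIDs are only illustrative):

23101 HRegionServer
22876 DataNode
22610 QuorumPeerMain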

Wait a moment, then the web UI is available at: master:60010
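
When you are done, the cluster can be shut down from the same bin directory:

./stop-hbase.sh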


Second, testing with the HBase shell

1. Connect to the shell. Go into the bin directory and run the command:
./hbase shell
Enter help to see the available commands. Note: to delete characters in this shell you need to hold down Ctrl (Ctrl+Backspace).

2. Examples of common commands. Create a table and add some data:
create 'testajl', 'cf'
put 'testajl', 'row1', 'cf:a', 'value1'
put 'testajl', 'row2', 'cf:b', 'value2'
put 'testajl', 'row3', 'cf:c', 'value3'
View the data:
scan 'testajl'
ROW                                      COLUMN+CELL
 row1                                    column=cf:a, timestamp=1388327667793, value=value1
 row2                                    column=cf:b, timestamp=1388327866650, value=value2
 row3                                    column=cf:c, timestamp=1388327785678, value=value3
3 row(s) in 0.0630 seconds
Fetch a single row:
get 'testajl', 'row1'
COLUMN                                   CELL                                                                                                               
 cf:a                                    timestamp=1388327667793, value=value1                                                                              
1 row(s) in 0.0290 seconds
Delete the table. First disable it:
disable 'testajl'
Then drop it:
drop 'testajl'
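
As a quick extra check, list and exists will confirm the table is really gone:

list
exists 'testajl'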

OK, that's enough to get started; now you can go play with it. Good luck!

