Environment:
Operating System: Ubuntu 12.10, 64-bit
JDK: Sun JDK 1.6, 64-bit
Hadoop: Apache Hadoop 1.0.2
HBase: Apache HBase 0.92
Prerequisites: enable append support in Apache Hadoop. The property defaults to false and must be set to true.
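The append setting lives in Hadoop's hdfs-site.xml. A minimal sketch of the fragment, assuming the Hadoop 1.x property name dfs.support.append:

```xml
<!-- hdfs-site.xml: enable sync/append support, which HBase's
     write-ahead log relies on to avoid data loss -->
<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
```

Restart HDFS after changing this so the datanodes pick it up.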
1) Download HBase
Copy the tarball to /data/soft on each server.
Extract it:
root@master:/data/soft# tar zxvf hbase-0.92.0.tar.gz
Create a symbolic link:
root@master:/data/soft# ln -s hbase-0.92.0 hbase
2) Configure HBase
This assumes Hadoop is already installed; the steps below are performed on the namenode.
1. Modify conf/hbase-env.sh to add JDK support:
export JAVA_HOME=/usr/local/jdk
export HBASE_MANAGES_ZK=true
export HBASE_LOG_DIR=/data/logs/hbase
2. Modify conf/hbase-site.xml:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://master:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.master</name>
  <value>master:60000</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>slave-001,slave-002,slave-003</value>
  <description>Comma separated list of servers in the ZooKeeper quorum.
  For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
  By default this is set to localhost for local and pseudo-distributed
  modes of operation. For a fully-distributed setup, this should be set
  to a full list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set
  in hbase-env.sh, this is the list of servers which we will start/stop
  ZooKeeper on.</description>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/data/work/zookeeper</value>
  <description>Property from ZooKeeper's config zoo.cfg. The directory
  where the snapshot is stored.</description>
</property>
hbase.rootdir sets the directory HBase uses on HDFS; the host name is the host running the HDFS namenode.
hbase.cluster.distributed set to true indicates a fully distributed HBase cluster.
hbase.master sets the HBase master host name and port.
hbase.zookeeper.quorum sets the ZooKeeper hosts; an odd number of servers is recommended.
3. Modify conf/hdfs-site.xml under the Hadoop directory:
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
4. Modify conf/regionservers
Add all datanodes to this file, one per line; it plays the same role as the slaves file in Hadoop.
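For the three-slave layout used in hbase-site.xml above, conf/regionservers would contain one hostname per line (the hostnames are the same illustrative ones as in the ZooKeeper quorum example):

```text
slave-001
slave-002
slave-003
```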
5. Copy HBase to all nodes
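One simple way to do this is a loop over scp; the node list and paths below are assumptions matching the earlier examples. As a precaution the loop only prints each command (a dry run); remove the echo to actually copy:

```shell
# Hypothetical regionserver hosts; replace with your own.
NODES="slave-001 slave-002 slave-003"

for node in $NODES; do
  # Dry run: print the copy command. Drop 'echo' to execute it.
  echo scp -r /data/soft/hbase-0.92.0 "$node:/data/soft/"
done
```

Remember to recreate the hbase symbolic link on each node afterwards, since only the hbase-0.92.0 directory itself is copied.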
6. Start HBase
$ bin/start-hbase.sh
7. HBase web interface
http://master:60010/
8) Test
1). Log in to the HBase shell:
./bin/hbase shell
2). Create a table and insert 3 records:
hbase(main):003:0> create 'test', 'cf'
0 row(s) in 1.2200 seconds
hbase(main):003:0> list
TABLE
test
1 row(s) in 0.0550 seconds
hbase(main):004:0> put 'test', 'row1', 'cf:a', 'value1'
0 row(s) in 0.0560 seconds
hbase(main):005:0> put 'test', 'row2', 'cf:b', 'value2'
0 row(s) in 0.0370 seconds
hbase(main):006:0> put 'test', 'row3', 'cf:c', 'value3'
0 row(s) in 0.0450 seconds
3). View the inserted data:
hbase(main):007:0> scan 'test'
ROW                   COLUMN+CELL
 row1                 column=cf:a, timestamp=1288380727188, value=value1
 row2                 column=cf:b, timestamp=1288380738440, value=value2
 row3                 column=cf:c, timestamp=1288380747365, value=value3
3 row(s) in 0.0590 seconds
4). Read a single record:
hbase(main):008:0> get 'test', 'row1'
COLUMN                CELL
 cf:a                 timestamp=1288380727188, value=value1
1 row(s) in 0.0400 seconds
5). Disable and drop the table:
hbase(main):012:0> disable 'test'
0 row(s) in 1.0930 seconds
hbase(main):013:0> drop 'test'
0 row(s) in 0.0770 seconds
6). Exit:
hbase(main):014:0> exit