Installation environment: CentOS 6.0 + JDK 1.6.0_29 + Hadoop 1.0.0 + HBase 0.90.4
CentOS 6.0 + JDK 1.6.0_29 + Hadoop 1.0.0 are assumed to be installed already.
1. Download hbase-0.90.4.tar.gz from the official website and extract the installation package to a suitable directory (for example, /opt):
cd /opt
tar zxvf hbase-0.90.4.tar.gz
chown -R hadoop:hadoop /opt/hbase-0.90.4
2. Set environment variables:
vim ~/.bashrc
export HBASE_HOME=/opt/hbase-0.90.4  # set according to your HBase installation directory
PATH=$PATH:$HBASE_HOME/bin
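These exports can also be added from a script. Below is a small sketch that keeps the operation idempotent, so re-running the setup does not duplicate lines in ~/.bashrc; `add_line` is a helper name of my own, not part of HBase or Hadoop:

```shell
# Hedged sketch: append each export to ~/.bashrc only when it is not
# already present, so repeated runs stay idempotent.
add_line() {
  line="$1"; file="$2"
  # -x matches whole lines, -F treats the pattern literally
  grep -qxF "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}
add_line 'export HBASE_HOME=/opt/hbase-0.90.4' ~/.bashrc
add_line 'export PATH=$PATH:$HBASE_HOME/bin' ~/.bashrc
```

Remember to `source ~/.bashrc` (or open a new shell) afterwards so the variables take effect.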
3. Configure HBase:
In $HBASE_HOME/conf, set JAVA_HOME in hbase-env.sh according to your JDK installation, for example:
# The java implementation to use.  Java 1.6 required.
export JAVA_HOME=/usr/local/JDK/jdk1.6.0_29
Then, still in $HBASE_HOME/conf, edit hbase-site.xml and make sure that the host and port of hbase.rootdir match those of fs.default.name in core-site.xml under $HADOOP_HOME/conf. Add the following content:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>localhost:60000</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
</configuration>
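One way to sanity-check the host/port consistency described above is to pull the authority ("host:port") out of both values and compare them. This is only a sketch: `extract_authority` is a hypothetical helper of mine, the default paths are examples, and the sed patterns assume simple one-line `<value>` entries:

```shell
# Sketch: compare the host:port in hbase.rootdir with fs.default.name.
extract_authority() {
  # turn e.g. hdfs://localhost:9000/hbase into localhost:9000
  echo "$1" | sed -e 's|^[a-z]*://||' -e 's|/.*$||'
}

HADOOP_HOME="${HADOOP_HOME:-/usr/local/hadoop}"   # example default path
HBASE_HOME="${HBASE_HOME:-/opt/hbase-0.90.4}"
core_site="$HADOOP_HOME/conf/core-site.xml"
hbase_site="$HBASE_HOME/conf/hbase-site.xml"

if [ -f "$core_site" ] && [ -f "$hbase_site" ]; then
  fs_name=$(grep -A1 'fs.default.name' "$core_site" \
            | sed -n 's|.*<value>\(.*\)</value>.*|\1|p')
  root_dir=$(grep -A1 'hbase.rootdir' "$hbase_site" \
             | sed -n 's|.*<value>\(.*\)</value>.*|\1|p')
  if [ "$(extract_authority "$fs_name")" = "$(extract_authority "$root_dir")" ]; then
    echo "hbase.rootdir matches fs.default.name"
  else
    echo "MISMATCH: $fs_name vs $root_dir"
  fi
fi
```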
4. Start Hadoop, then start HBase:
$ start-all.sh  # start Hadoop
$ jps  # check the Hadoop startup status; confirm that NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker are all running
31557 DataNode
31432 NameNode
31902 TaskTracker
31777 JobTracker
689 Jps
31683 SecondaryNameNode
$ start-hbase.sh  # start HBase after Hadoop is fully up
$ jps  # check the HBase startup status; confirm that HQuorumPeer, HMaster, and HRegionServer are all running
31557 DataNode
806 HQuorumPeer
31432 NameNode
853 HMaster
31902 TaskTracker
950 HRegionServer
1110 Jps
31777 JobTracker
31683 SecondaryNameNode
$ hbase  # list the available hbase commands
Usage: hbase <command>
where <command> is one of:
  shell            run the HBase shell
  zkcli            run the ZooKeeper shell
  master           run an HBase HMaster node
  regionserver     run an HBase HRegionServer node
  zookeeper        run a Zookeeper server
  rest             run an HBase REST server
  thrift           run an HBase Thrift server
  avro             run an HBase Avro server
  migrate          upgrade an hbase.rootdir
  hbck             run the hbase 'fsck' tool
  classpath        dump hbase CLASSPATH
 or
  CLASSNAME        run the class named CLASSNAME
Most commands print help when invoked w/o parameters.
$ hbase shell  # start the HBase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.90.4, r1150278, Sun Jul 24 15:53:29 PDT 2011
hbase(main):001:0>
HBase may fail to start here because of a Hadoop/HBase jar version conflict (I ran into this when setting up hbase-0.90.4 on hadoop-0.20.203.0; on hadoop-1.0.0 I did not test first and simply applied the following steps directly). In that case, copy hadoop-core-1.0.0.jar from $HADOOP_HOME and commons-configuration-1.6.jar from $HADOOP_HOME/lib into $HBASE_HOME/lib, and delete hadoop-core-0.20-append-r1056497.jar from $HBASE_HOME/lib to avoid version conflicts and incompatibility.
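The jar swap just described can be sketched as a short script. `swap_jar` is a helper name of my own, the default paths are examples, and the jar file names come straight from the text, so verify them against your actual distributions before deleting anything:

```shell
# Sketch of the jar swap described above (file names from the text; check
# the versions shipped with your own Hadoop before running).
swap_jar() {
  src="$1"; libdir="$2"; stale="$3"
  if [ -f "$src" ]; then cp "$src" "$libdir/"; fi          # copy the matching jar in
  if [ -f "$libdir/$stale" ]; then rm "$libdir/$stale"; fi  # drop the conflicting jar
}

HADOOP_HOME="${HADOOP_HOME:-/usr/local/hadoop}"   # example default path
HBASE_HOME="${HBASE_HOME:-/opt/hbase-0.90.4}"
swap_jar "$HADOOP_HOME/hadoop-core-1.0.0.jar" "$HBASE_HOME/lib" \
         "hadoop-core-0.20-append-r1056497.jar"
swap_jar "$HADOOP_HOME/lib/commons-configuration-1.6.jar" "$HBASE_HOME/lib" ""
```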
5. Practice with the HBase shell:
hbase(main):001:0> create 'test', 'data'  # create a table named 'test' with a single column family named 'data'
0 row(s) in 2.0960 seconds
hbase(main):002:0> list  # list all tables in the user space to verify the table was created
TABLE
test
1 row(s) in 0.0220 seconds
# insert three values into different rows and columns of the column family 'data'
hbase(main):003:0> put 'test', 'row1', 'data:1', 'value1'
0 row(s) in 0.2970 seconds
hbase(main):004:0> put 'test', 'row2', 'data:2', 'value2'
0 row(s) in 0.0120 seconds
hbase(main):005:0> put 'test', 'row3', 'data:3', 'value3'
0 row(s) in 0.0180 seconds
hbase(main):006:0> scan 'test'  # check the result of the inserts
ROW    COLUMN+CELL
row1   column=data:1, timestamp=1330923873719, value=value1
row2   column=data:2, timestamp=1330923891483, value=value2
row3   column=data:3, timestamp=1330923902702, value=value3
3 row(s) in 0.0590 seconds
hbase(main):007:0> disable 'test'  # disable the table 'test'
0 row(s) in 2.0610 seconds
hbase(main):008:0> drop 'test'  # delete the table 'test'
0 row(s) in 1.2120 seconds
hbase(main):009:0> list  # confirm that the table 'test' is gone
TABLE
0 row(s) in 0.0180 seconds
hbase(main):010:0> quit  # exit the HBase shell
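The interactive session above can also be run as a batch, since the HBase shell reads commands from standard input. A sketch (`make_batch` is a hypothetical helper of mine; the pipe into `hbase shell` is left commented out because it needs a live cluster):

```shell
# Sketch: emit the same shell commands as a batch that could be piped
# into `hbase shell` on a running cluster.
make_batch() {
  printf '%s\n' \
    "create 'test', 'data'" \
    "put 'test', 'row1', 'data:1', 'value1'" \
    "scan 'test'" \
    "disable 'test'" \
    "drop 'test'"
}
# make_batch | hbase shell   # run against a live cluster
make_batch
```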
6. Stop HBase:
$ stop-hbase.sh
stopping hbase......
localhost: stopping zookeeper.
7. Check the HDFS root directory; you will find that an /hbase directory has been created:
$ hadoop fs -ls /
Found 4 items
drwxr-xr-x   - hadoop supergroup          0            /hbase  # directory created by HBase
drwxr-xr-x   - hadoop supergroup          0 2012-02-24 /home
drwxr-xr-x   - hadoop supergroup          0 2012-03-04 20:44 /tmp
drwxr-xr-x   - hadoop supergroup          0 2012-03-04 20:47 /user
If the HRegionServer fails to start when HBase starts, edit the regionservers file under $HBASE_HOME/conf and change its content to the hostname your Hadoop node actually runs as, so that it stays consistent with the Hadoop configuration.
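A sketch of that fix, guarded so it only writes when the conf directory exists. Using the output of `hostname` is an assumption on my part; use whatever name your Hadoop configuration (e.g. its slaves file) actually uses:

```shell
# Sketch: point conf/regionservers at this machine's hostname.
# `hostname` output is an assumption; keep it consistent with Hadoop's config.
HBASE_HOME="${HBASE_HOME:-/opt/hbase-0.90.4}"   # example default path
if [ -d "$HBASE_HOME/conf" ]; then
  hostname > "$HBASE_HOME/conf/regionservers"
  cat "$HBASE_HOME/conf/regionservers"
fi
```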