HBase Pseudo-Distributed Installation and Error Analysis


Installation environment: CentOS 6.0 + JDK 1.6.0_29 + Hadoop 1.0.0 + HBase 0.90.4
CentOS 6.0 + JDK 1.6.0_29 + Hadoop 1.0.0 are assumed to be installed already.

1. Download hbase-0.90.4.tar.gz from the official website and extract the installation package to a suitable directory (for example, /opt):

    cd /opt
    tar zxvf hbase-0.90.4.tar.gz
    chown -R hadoop:hadoop /opt/hbase-0.90.4
2. Set environment variables:

    vim ~/.bashrc
    export HBASE_HOME=/opt/hbase-0.90.4   # adjust to your HBase installation directory
    PATH=$PATH:$HBASE_HOME/bin

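Edits to ~/.bashrc only affect newly opened shells. A minimal sketch to apply and sanity-check the variables in the current session (the install path is the one assumed in this article):

```shell
# Apply the variables to the current shell and verify them.
# /opt/hbase-0.90.4 is the install path assumed in this article.
export HBASE_HOME=/opt/hbase-0.90.4
PATH="$PATH:$HBASE_HOME/bin"

echo "$HBASE_HOME"
# Check that the bin directory really made it onto PATH.
case ":$PATH:" in
  *":$HBASE_HOME/bin:"*) echo "PATH ok" ;;
  *)                     echo "PATH is missing $HBASE_HOME/bin" ;;
esac
```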
3. Configure HBase:
In the $HBASE_HOME/conf directory, set JAVA_HOME in hbase-env.sh according to your JDK installation, as shown below:

    # The java implementation to use.  Java 1.6 required.
    export JAVA_HOME=/usr/local/jdk/jdk1.6.0_29
Then edit hbase-site.xml in $HBASE_HOME/conf. Make sure the host and port in its hbase.rootdir property match those of fs.default.name in core-site.xml under $HADOOP_HOME/conf. Add the following content:

    <configuration>
      <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:9000/hbase</value>
      </property>
      <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
      </property>
      <property>
        <name>hbase.master</name>
        <value>localhost:60000</value>
      </property>
      <property>
        <name>hbase.zookeeper.quorum</name>
        <value>localhost</value>
      </property>
    </configuration>
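One way to confirm the two files agree is to print both properties side by side. This is only a sketch: the $HADOOP_HOME default below is an assumption, and the grep output will vary with how the XML is formatted.

```shell
# Print the NameNode address from both configs; the host:port must match,
# or HBase cannot find its root directory on HDFS.
# The default paths below are this article's assumed install locations.
HADOOP_CONF="${HADOOP_HOME:-/opt/hadoop-1.0.0}/conf"
HBASE_CONF="${HBASE_HOME:-/opt/hbase-0.90.4}/conf"

grep -A 1 'fs.default.name' "$HADOOP_CONF/core-site.xml" 2>/dev/null || true
grep -A 1 'hbase.rootdir'   "$HBASE_CONF/hbase-site.xml" 2>/dev/null || true
# Both should report hdfs://localhost:9000.
```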

4. Start Hadoop, then start HBase:

    $ start-all.sh    # start Hadoop
    $ jps             # check that NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker are all running
    31557 DataNode
    31432 NameNode
    31902 TaskTracker
    31777 JobTracker
    689 Jps
    31683 SecondaryNameNode
    $ start-hbase.sh  # start HBase only after Hadoop is fully up
    $ jps             # check that HQuorumPeer, HMaster, and HRegionServer are all running
    31557 DataNode
    806 HQuorumPeer
    31432 NameNode
    853 HMaster
    31902 TaskTracker
    950 HRegionServer
    1110 Jps
    31777 JobTracker
    31683 SecondaryNameNode
    $ hbase           # view the available hbase commands
    Usage: hbase <command>
    where <command> is one of:
      shell            run the HBase shell
      zkcli            run the ZooKeeper shell
      master           run an HBase HMaster node
      regionserver     run an HBase HRegionServer node
      zookeeper        run a ZooKeeper server
      rest             run an HBase REST server
      thrift           run an HBase Thrift server
      avro             run an HBase Avro server
      migrate          upgrade an hbase.rootdir
      hbck             run the hbase 'fsck' tool
      classpath        dump hbase CLASSPATH
     or
      CLASSNAME        run the class named CLASSNAME
    Most commands print help when invoked w/o parameters.
    $ hbase shell     # start the HBase shell
    HBase Shell; enter 'help<RETURN>' for list of supported commands.
    Type "exit<RETURN>" to leave the HBase Shell
    Version 0.90.4, r1150278, Sun Jul 24 15:53:29 PDT 2011
    hbase(main):001:0>


HBase startup may fail with an error due to a version conflict (I hit this running HBase 0.90.4 on Hadoop 0.20.203.0; on Hadoop 1.0.0 I did not test without the fix and applied the following steps directly). In that case, copy hadoop-core-1.0.0.jar from the $HADOOP_HOME directory and commons-configuration-1.6.jar from $HADOOP_HOME/lib into $HBASE_HOME/lib, and delete hadoop-core-0.20-append-r1056497.jar from $HBASE_HOME/lib to avoid version conflicts and incompatibility.
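The jar swap can be scripted roughly as follows. This is a sketch under this article's assumed install paths; the `-f` guard makes it a no-op if the jar is not where expected.

```shell
# Make HBase use the same Hadoop jar that the cluster actually runs,
# instead of the 0.20-append jar it ships with.
# Default paths are this article's assumed install locations.
HADOOP_HOME="${HADOOP_HOME:-/opt/hadoop-1.0.0}"
HBASE_HOME="${HBASE_HOME:-/opt/hbase-0.90.4}"

if [ -f "$HADOOP_HOME/hadoop-core-1.0.0.jar" ]; then
  cp "$HADOOP_HOME/hadoop-core-1.0.0.jar"             "$HBASE_HOME/lib/"
  cp "$HADOOP_HOME/lib/commons-configuration-1.6.jar" "$HBASE_HOME/lib/"
  rm -f "$HBASE_HOME/lib/hadoop-core-0.20-append-r1056497.jar"
  echo "jars swapped; restart HBase"
else
  echo "hadoop-core-1.0.0.jar not found under $HADOOP_HOME; nothing done"
fi
```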

5. Practice with the HBase shell:

    hbase(main):001:0> create 'test', 'data'   # create a table named 'test' with a single column family named 'data'
    0 row(s) in 2.0960 seconds
    hbase(main):002:0> list                    # list all tables in the user space to verify the table was created
    TABLE
    test
    1 row(s) in 0.0220 seconds
    # insert three values into different rows and columns of the column family 'data'
    hbase(main):003:0> put 'test', 'row1', 'data:1', 'value1'
    0 row(s) in 0.2970 seconds
    hbase(main):004:0> put 'test', 'row2', 'data:2', 'value2'
    0 row(s) in 0.0120 seconds
    hbase(main):005:0> put 'test', 'row3', 'data:3', 'value3'
    0 row(s) in 0.0180 seconds
    hbase(main):006:0> scan 'test'             # verify the inserted data
    ROW      COLUMN+CELL
    row1     column=data:1, timestamp=1330923873719, value=value1
    row2     column=data:2, timestamp=1330923891483, value=value2
    row3     column=data:3, timestamp=1330923902702, value=value3
    3 row(s) in 0.0590 seconds
    hbase(main):007:0> disable 'test'          # disable the table 'test'
    0 row(s) in 2.0610 seconds
    hbase(main):008:0> drop 'test'             # drop the table 'test'
    0 row(s) in 1.2120 seconds
    hbase(main):009:0> list                    # confirm that 'test' has been deleted
    TABLE
    0 row(s) in 0.0180 seconds
    hbase(main):010:0> quit                    # leave the HBase shell

6. Stop HBase:

    $ stop-hbase.sh
    stopping hbase......
    localhost: stopping zookeeper.

7. Check the HDFS directory listing. You will find that an hbase directory has been created under the root directory.

    $ hadoop fs -ls /
    Found 4 items
    drwxr-xr-x   - hadoop supergroup  0                  /hbase   # directory created by HBase
    drwxr-xr-x   - hadoop supergroup  0 2012-02-24       /home
    drwxr-xr-x   - hadoop supergroup  0 2012-03-04 20:44 /tmp
    drwxr-xr-x   - hadoop supergroup  0 2012-03-04 20:47 /user

If HRegionServer fails to start when HBase comes up, edit the regionservers file under $HBASE_HOME/conf and set its content to the hostname your Hadoop node runs as, so that it is consistent with the Hadoop configuration.
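A sketch of that fix (it backs up the old file first; the conf path is this article's assumed layout):

```shell
# Overwrite conf/regionservers with this machine's hostname so it
# matches the Hadoop configuration. The file holds one hostname per line.
HBASE_HOME="${HBASE_HOME:-/opt/hbase-0.90.4}"
RS_FILE="$HBASE_HOME/conf/regionservers"

if [ -d "$HBASE_HOME/conf" ]; then
  cp "$RS_FILE" "$RS_FILE.bak" 2>/dev/null || true   # keep a backup
  hostname > "$RS_FILE"
  cat "$RS_FILE"
else
  echo "$HBASE_HOME/conf not found; adjust HBASE_HOME first"
fi
```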
