For more information about HBase, see http://wiki.apache.org/hadoop/hbase and http://en.wikipedia.org/wiki/hbase. This article describes how to install and configure HBase in standalone mode on Ubuntu 10.04. The articles found on the Internet are either vague or unreliable, so I am recording the installation and configuration process here as a step-by-step illustrated tutorial, in case I forget it later.
Installing HBase mainly involves configuring the Java environment and the Hadoop and HBase configuration files.
1. Install and configure the Java environment. The OpenJDK used by default on Ubuntu 10.04 is not supported by some applications (such as AbiCloud), so to save trouble later, install Sun Java. Open a terminal and run the following commands:
(1) Installation
sudo add-apt-repository "deb http://archive.canonical.com/ lucid partner"
sudo apt-get update
sudo apt-get install sun-java6-jre sun-java6-plugin sun-java6-fonts
sudo apt-get install sun-java6-jdk sun-java6-plugin sun-java6-fonts
Check whether the installation succeeded (for example, by running `java -version`).
(2) Set the default Java interpreter.

sudo update-alternatives --config java
The following figure is displayed:
Enter the number you want to select.
(3) Edit the Java environment variables.

sudo gedit /etc/environment
Add the following two lines to the environment file that opens:

CLASSPATH=.:/usr/lib/jvm/java-6-sun/lib
JAVA_HOME=/usr/lib/jvm/java-6-sun

Save and exit. The Java environment is now configured.
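As a quick sanity check of the two lines above, here is a hedged sketch that stages them in a temporary file standing in for /etc/environment (the JVM path is the one assumed throughout this article):

```shell
# Sketch: stage the two variables in a copy of /etc/environment.
# /usr/lib/jvm/java-6-sun is the install path assumed by this tutorial.
env_file=$(mktemp)
cat >> "$env_file" <<'EOF'
CLASSPATH=.:/usr/lib/jvm/java-6-sun/lib
JAVA_HOME=/usr/lib/jvm/java-6-sun
EOF
# Both entries should point at the Sun JVM directory.
grep -c 'java-6-sun' "$env_file"   # prints 2
```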
2. Install and configure Hadoop. Although I am installing HBase in standalone mode, Hadoop is a distributed system and uses SSH for communication, so SSH must still be set up.
(1) Install ssh
sudo apt-get install ssh
(2) Set up passwordless login
$ ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""
$ cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
After this, logging in no longer requires a password; only the very first connection asks you to confirm the host key.
$ ssh localhost
$ exit
$ ssh localhost
$ exit
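To illustrate what the key copy above actually does, here is a hedged sketch using a temporary directory in place of ~/.ssh (the key material is a placeholder string, not a real key):

```shell
# Sketch: mimic `cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys` in a temp dir.
sshdir=$(mktemp -d)
echo 'ssh-rsa AAAA...placeholder user@host' > "$sshdir/id_rsa.pub"
cp "$sshdir/id_rsa.pub" "$sshdir/authorized_keys"
# sshd (with StrictModes, the default) rejects keys whose files are
# group/world writable, so the permission bits matter as well.
chmod 700 "$sshdir" && chmod 600 "$sshdir/authorized_keys"
cmp -s "$sshdir/id_rsa.pub" "$sshdir/authorized_keys" && echo "key installed"
```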
(3) Download Hadoop, decompress it, and change into the Hadoop directory.
(4) Modify hadoop-env.sh. On my machine the decompressed path is /home/viki/hadoop-0.20.2. Enter the decompressed folder and edit the file (root permission may be required):
cd hadoop-0.20.2
gedit conf/hadoop-env.sh
Add the following Java environment setting:
export JAVA_HOME=/usr/lib/jvm/java-6-sun-1.6.0.22
(5) Set up the XML files. You need to edit three files under the conf folder: core-site.xml, hdfs-site.xml, and mapred-site.xml.
gedit conf/core-site.xml
Copy the following content to the file:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/hadoop/hadoop-${user.name}</value>
  </property>
</configuration>
Save and exit, then continue by modifying the next file, hdfs-site.xml:
gedit conf/hdfs-site.xml
Copy the following content to the file
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
Save and exit. Modify the last file, mapred-site.xml:

gedit conf/mapred-site.xml
Copy the following content to the file
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
After completing the above steps, all the files have been modified and the Hadoop standalone test environment has been set up. The following describes how to start the Hadoop service.
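Before starting the daemons, it is worth confirming that the edited XML files still parse. A hedged sketch using python3's standard-library parser (the heredoc here is an inline stand-in for conf/core-site.xml, conf/hdfs-site.xml, etc.):

```shell
# Sketch: check that a Hadoop-style XML config file is well-formed.
conf=$(mktemp)
cat > "$conf" <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF
# A stray or unclosed tag would make this command fail with a ParseError.
python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1]); print("well-formed")' "$conf"
```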
3. Format the NameNode, start all Hadoop services, and view the service status.
(1) Format the NameNode
bin/hadoop namenode -format
The following figure is displayed:
(2) Start all Hadoop services:

bin/start-all.sh

The following figure is displayed:
(3) View the service status.
Management page: http://localhost:50030/jobtracker.jsp
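Besides the management page, the `jps` command lists the running Java daemons. As a hedged sketch, here is a check run over a captured sample of its output (the PIDs are illustrative; the five process names are what Hadoop 0.20.2's start-all.sh launches in pseudo-distributed mode):

```shell
# Sketch: verify that the expected Hadoop daemons appear in (sample) jps output.
jps_out='2101 NameNode
2215 DataNode
2330 SecondaryNameNode
2412 JobTracker
2509 TaskTracker'
for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
  echo "$jps_out" | grep -q "$d" && echo "$d is running"
done
```

If one of the five names is missing from your real `jps` output, check the corresponding log file under the Hadoop logs directory before moving on to HBase.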
Address: http://www.cnblogs.com/ventlam/archive/2010/11/24/hadoop.html